Has anyone got this issue solved?
@Eventyret Facing the same issue. My Postgres database is hosted on AWS and I'm not using Docker at all. Yesterday the data was fine, but the next day the whole database had vanished: only the tables are there, like a skeleton of the data, but no data.
@Eventyret I'm facing the same issue. I deployed Strapi inside a GKE cluster with one pod in my deployment and a Postgres instance running inside the cluster. Even with the settings that @twofingerrightclick mentioned, I have the same problem: every time a new pod comes up, the tables in my psql database get deleted.
Hi, Eventyret.
Can you elaborate? Why is the data removed even when the files are mounted into the container?
Also, why do you say that SQLite is only for development? Why is it not good for production on a small, simple site?
Thanks!
If a file is mounted to the host system, it's stored on the host and not just in the container.
SQLite in a dev container will cause problems, as its file is normally not mounted; stop the container or redeploy it and the database is deleted.
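For anyone on SQLite, a minimal sketch of what that looks like in practice (the /data path and volume name are illustrative, not Strapi defaults): point config/database.ts at a path that is bind-mounted from the host, e.g. a container started with -v strapi-data:/data, so the file outlives the container:

// config/database.ts - minimal sketch; assumes /data is a mounted volume
export default ({ env }) => ({
  connection: {
    client: 'sqlite',
    connection: {
      // keep the DB file on the mounted path, not inside the container layer
      filename: env('DATABASE_FILENAME', '/data/data.db'),
    },
    useNullAsDefault: true,
  },
});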
Same issue!
this is scary
why delete?!
just add to or update the existing data structure,
or only delete when it has been explicitly permitted
did it help?
has anyone managed to solve this problem???
We have the same problem.
Our Postgres database is hosted on DigitalOcean (Docker is used).
Well, I have done some analysis of Strapi, and my theory is that the database just stores records and has nothing to do with the components/fields we create; those are generated as custom code stored on the local machine. So when the system restarts, if there is no volume, the data is wiped. I restored a backup with no luck; I'm happy that at least I have my records, but I cannot tell you how many times I've had to rebuild things in the Content-Type Builder. This is a major flaw in the software design.
The limitation is that you will not be able to use development mode in Docker. Changes to the content-type structure instead have to be made on a local machine, almost like a code change/PR, then you push the updated files in a new Docker image, deploy, and so on.
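To make that concrete (a rough sketch; the article content type is invented for illustration): what the Content-Type Builder creates is a file like src/api/article/content-types/article/schema.json, and it is this file, not the database, that defines the structure Strapi syncs against on startup:

{
  "kind": "collectionType",
  "collectionName": "articles",
  "info": {
    "singularName": "article",
    "pluralName": "articles",
    "displayName": "Article"
  },
  "options": { "draftAndPublish": true },
  "attributes": {
    "title": { "type": "string", "required": true },
    "body": { "type": "richtext" }
  }
}

An image built without these files has, from Strapi's point of view, no such content types, which matches the table drops described in this thread.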
Anyway, my production site is down now, so I will be spending the weekend rebuilding it.
OK, so just a quick update:
It looks like what happens is this: Strapi uses strapi_base_folder/src/api/<content>.
<content> is typically one of your Content Manager items, the ones you see as lost after the server restarts.
So basically, when you create your content types, Strapi creates a folder with a controller, routes, the DB schema, etc., and each time the server starts it picks up the schema from there and tries to recreate tables, or delete ones whose schema no longer exists.
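For reference, the generated layout looks roughly like this (assuming a content type named article; file extensions depend on whether the project uses JS or TS):

src/api/article/
  content-types/article/schema.json   <- the schema Strapi reads at startup
  controllers/article.js
  routes/article.js
  services/article.js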
In the scenarios where folks find tables/data deleted, it must be that the content files weren't committed; if another system is used to run Strapi, it misses those tables and recreates/drops the content schema.
If you haven't taken a backup, I'd say your data is lost, but you can recreate the tables if you can access those src/api files; that still won't bring your data back.
Good luck. Sadly, these things could be better controlled by Strapi than by scanning and recreating/deleting schemas every time.
hope it helps
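One defensive sketch that follows from the above (not a Strapi feature; scripts/check-schemas.ts and schema-manifest.json are invented names for illustration): before starting Strapi, verify the build still contains every schema file the last deploy had, and abort if any are missing, so a badly built image can't trigger the drops:

// scripts/check-schemas.ts - illustrative pre-start guard, not an official Strapi feature
// Assumes a committed schema-manifest.json listing every expected schema.json path.
import { existsSync, readFileSync } from 'fs';

const manifest: string[] = JSON.parse(readFileSync('schema-manifest.json', 'utf8'));
const missing = manifest.filter((p) => !existsSync(p));

if (missing.length > 0) {
  console.error('Refusing to start: schema files missing from this build:', missing);
  process.exit(1); // let the orchestrator keep the previous pod/container running
}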
Hello, I need your help. In Strapi version 4.19.0, every night at 00:00 UTC, all the content on our server is being deleted.
The database server is separate, and there is a constant connection to it. The build is done through Docker with the production environment. Yet everything gets deleted: the schema remains, but there is no content in the database.
- Environment: production
- Version: 4.19.0 (node v18.19.1)
- Edition: Community
- Database: postgres
Envs:
NODE_ENV: production
APP_ENVIRONMENT: production
DATABASE_CLIENT: postgres
DATABASE_HOST: host
DATABASE_PORT: 6432
DATABASE_NAME: bd_name
DATABASE_USERNAME: bd_user_name
DATABASE_PASSWORD: bd_pass
Configuration file databases.ts
import path from 'path';

export default ({ env }) => {
  const client = env('DATABASE_CLIENT', 'sqlite');

  const connections = {
    mysql: {
      connection: {
        connectionString: env('DATABASE_URL'),
        host: env('DATABASE_HOST', 'localhost'),
        port: env.int('DATABASE_PORT', 3306),
        database: env('DATABASE_NAME', 'strapi'),
        user: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD', 'strapi'),
        ssl: env.bool('DATABASE_SSL', false) && {
          key: env('DATABASE_SSL_KEY', undefined),
          cert: env('DATABASE_SSL_CERT', undefined),
          ca: env('DATABASE_SSL_CA', undefined),
          capath: env('DATABASE_SSL_CAPATH', undefined),
          cipher: env('DATABASE_SSL_CIPHER', undefined),
          rejectUnauthorized: env.bool('DATABASE_SSL_REJECT_UNAUTHORIZED', true),
        },
      },
      pool: { min: env.int('DATABASE_POOL_MIN', 2), max: env.int('DATABASE_POOL_MAX', 10) },
    },
    mysql2: {
      connection: {
        host: env('DATABASE_HOST', 'localhost'),
        port: env.int('DATABASE_PORT', 3306),
        database: env('DATABASE_NAME', 'strapi'),
        user: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD', 'strapi'),
        ssl: env.bool('DATABASE_SSL', false) && {
          key: env('DATABASE_SSL_KEY', undefined),
          cert: env('DATABASE_SSL_CERT', undefined),
          ca: env('DATABASE_SSL_CA', undefined),
          capath: env('DATABASE_SSL_CAPATH', undefined),
          cipher: env('DATABASE_SSL_CIPHER', undefined),
          rejectUnauthorized: env.bool('DATABASE_SSL_REJECT_UNAUTHORIZED', true),
        },
      },
      pool: { min: env.int('DATABASE_POOL_MIN', 2), max: env.int('DATABASE_POOL_MAX', 10) },
    },
    postgres: {
      connection: {
        connectionString: env('DATABASE_URL'),
        host: env('DATABASE_HOST', 'localhost'),
        port: env.int('DATABASE_PORT', 5432),
        database: env('DATABASE_NAME', 'strapi'),
        user: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD', 'strapi'),
        ssl: env.bool('DATABASE_SSL', false) && {
          key: env('DATABASE_SSL_KEY', undefined),
          cert: env('DATABASE_SSL_CERT', undefined),
          ca: env('DATABASE_SSL_CA', undefined),
          capath: env('DATABASE_SSL_CAPATH', undefined),
          cipher: env('DATABASE_SSL_CIPHER', undefined),
          rejectUnauthorized: env.bool('DATABASE_SSL_REJECT_UNAUTHORIZED', true),
        },
        schema: env('DATABASE_SCHEMA', 'public'),
      },
      pool: { min: env.int('DATABASE_POOL_MIN', 2), max: env.int('DATABASE_POOL_MAX', 10) },
    },
    sqlite: {
      connection: {
        filename: path.join(__dirname, '..', '..', env('DATABASE_FILENAME', '.tmp/data.db')),
      },
      useNullAsDefault: true,
    },
  };

  return {
    connection: {
      client,
      ...connections[client],
      acquireConnectionTimeout: env.int('DATABASE_CONNECTION_TIMEOUT', 60000),
    },
  };
};
However, the strange thing is that deploying to production with the same settings, only connecting to a different database address, does not delete data. What could be the problem, and where should I investigate? Please advise.
After enabling DB logs:
2024-02-16 00:00:29 UTC [972367-16] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."components_entities_text_content"
2024-02-16 00:00:29 UTC [972367-17] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."components_content_oembed"
2024-02-16 00:00:29 UTC [972367-18] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."author_list"
2024-02-16 00:00:29 UTC [972367-19] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."components_blocks_block_article_image"
2024-02-16 00:00:29 UTC [972367-20] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."components_blocks_block_article_image_components"
2024-02-16 00:00:29 UTC [972367-21] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."components_blocks_block_with_quote"
2024-02-16 00:00:29 UTC [972367-22] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."components_blocks_block_with_quote_components"
2024-02-16 00:00:29 UTC [972367-23] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."components_entities_image_article_component"
2024-02-16 00:00:30 UTC [972367-24] 127.0.0.1(33716) bd@bd: statement: drop table if exists "public"."components_entities_image_article_component_components"
This appears to still be an unresolved issue. Are there any solutions to "lock" Strapi so it can't make DB changes like this?
We've been running into this same issue on Strapi 4.22.1 with a Postgres DB in the "production" environment. We just had a table truncate itself, and we lost hundreds of records.
No one on the team has made any schema changes since last week (schema changes are usually when we've noticed data getting dropped/lost, which still isn't good, but is something we had started getting used to).
We're sort of at a loss as to why this occurred and what we can do to prevent it. At this point we just had a meeting about pivoting away from Strapi completely. We can't go into a true production environment when data just vanishes.
We're having the exact same issue. Not necessarily every night at a specific time; data just vanishes at random. We had hundreds of records get lost just a couple of hours ago, when no one had been working on it.
Have you come up with some sort of solution?