Data gets deleted when Strapi container restarts

Has anyone managed to solve this issue?

@Eventyret Facing the same issue. My PostgreSQL database is hosted on AWS and I'm not using Docker at all. Yesterday the data was fine, but the next day the whole database had vanished: only the tables are there, just a skeleton without the data.

@Eventyret I'm facing the same issue. I deployed Strapi inside a GKE cluster, with one pod in my deployment and a Postgres instance running inside the cluster. Even after configuring the settings that @twofingerrightclick mentioned, I have the same problem: whenever a new pod comes up, the tables in my PostgreSQL database get deleted.

Hi, Eventyret.

Can you elaborate? Why is the data removed even though the files are mounted into the container?

Also, why do you say that SQLite is only for development? Why is it not suitable for production on a small, simple site?

Thanks!

If a file is mounted from the host system, it's stored on the host and not just inside the container.
SQLite in a dev container will cause problems because its database file is normally not mounted, meaning that when you stop or redeploy the container, the file is deleted along with it.
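As a sketch of what "mounted" means here: a named volume (or bind mount) keeps the SQLite file on the host, so it survives container restarts and redeploys. The service name, image, and paths below are illustrative examples, not taken from anyone's actual setup:

```yaml
# docker-compose.yml (example; image name and mount path are hypothetical)
services:
  strapi:
    image: my-strapi:latest
    environment:
      DATABASE_CLIENT: sqlite
      DATABASE_FILENAME: .tmp/data.db
    volumes:
      # Persist the SQLite database file outside the container,
      # so stopping or recreating the container does not delete it.
      - strapi-data:/opt/app/.tmp

volumes:
  strapi-data:
```

Without such a volume, the `.tmp/data.db` file lives only in the container's writable layer and disappears with it.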


Same issue!
This is scary.
Why delete at all?
Just add to or update the existing data structure,
or only delete when it has been explicitly permitted.


Did it help?

Has anyone managed to solve this problem?
We have the same problem.
Our PostgreSQL database is hosted on DigitalOcean (Docker is used).

Well, I have done some analysis of Strapi, and my theory is that the database just stores records and has nothing to do with the components/fields we create; those are instead generated as code stored on the local machine. So when the system restarts, if there is no volume, that generated code is gone and the data is wiped. I restored a backup without any luck. I am happy that at least I have my records, but I can't tell you how many times I've had to rebuild my content types in the Content-Type Builder. This is a major flaw in the software design.

The limitation is that you will not be able to use development mode inside Docker. Instead, changes to the content-type structure need to be made on a local machine, almost like a code change/PR; then you push the latest file updates into a new Docker image, deploy, and so on.

Anyway, now my production site is down, so I will be spending the weekend rebuilding it :upside_down_face:

OK, so just a quick update.
It looks like what happens is this: Strapi uses strapi_base_folder/src/api/<content-type>.
That folder holds your Content Manager items, which are what you see as lost after the server restarts.
Basically, when you create a content type, Strapi creates a folder with controllers, routes, the DB schema, etc., and each time the server starts it reads the schemas from there and recreates tables, or deletes tables whose schema files no longer exist.
In the scenarios where folks find tables/data deleted, it must be that those content files weren't committed; if another system was used to run Strapi, it would miss those tables and recreate the content schema from scratch.
If you haven't taken a backup, I'm afraid your data is lost, but you can recreate the tables if you can still access those src files; they still won't contain your data, though.
Good luck. These things could be better controlled by Strapi than scanning and recreating/deleting schemas on every start, sadly.
Hope it helps.
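For context, the schema files referred to above live under `src/api`. A hypothetical "article" content type, for example, would look roughly like this (names are illustrative):

```
src/api/article/
├── content-types/
│   └── article/
│       └── schema.json    # table definition Strapi syncs against the DB on startup
├── controllers/
├── routes/
└── services/
```

If these files are missing from the image that starts up (e.g. because they were never committed), Strapi no longer knows about the corresponding tables, which matches the drop/recreate behaviour described above.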

Hello, I need your help. In Strapi version 4.19.0, every night at 00:00 UTC, all the content on our server gets deleted.

The database server is separate, and there is a constant connection to it. The build is done through Docker with the production environment. Everything gets deleted anyway: the schema remains, but there is no content in the database.


│ Environment │ production
│ Version │ 4.19.0 (node v18.19.1)
│ Edition │ Community
│ Database │ postgres

Envs:
NODE_ENV: production
APP_ENVIRONMENT: production
DATABASE_CLIENT: postgres
DATABASE_HOST: host
DATABASE_PORT: 6432
DATABASE_NAME: bd_name
DATABASE_USERNAME: bd_user_name
DATABASE_PASSWORD: bd_pass

Configuration file database.ts:

import path from 'path';

export default ({ env }) => {
  const client = env('DATABASE_CLIENT', 'sqlite');

  const connections = {
    mysql: {
      connection: {
        connectionString: env('DATABASE_URL'),
        host: env('DATABASE_HOST', 'localhost'),
        port: env.int('DATABASE_PORT', 3306),
        database: env('DATABASE_NAME', 'strapi'),
        user: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD', 'strapi'),
        ssl: env.bool('DATABASE_SSL', false) && {
          key: env('DATABASE_SSL_KEY', undefined),
          cert: env('DATABASE_SSL_CERT', undefined),
          ca: env('DATABASE_SSL_CA', undefined),
          capath: env('DATABASE_SSL_CAPATH', undefined),
          cipher: env('DATABASE_SSL_CIPHER', undefined),
          rejectUnauthorized: env.bool(
            'DATABASE_SSL_REJECT_UNAUTHORIZED',
            true
          ),
        },
      },
      pool: { min: env.int('DATABASE_POOL_MIN', 2), max: env.int('DATABASE_POOL_MAX', 10) },
    },
    mysql2: {
      connection: {
        host: env('DATABASE_HOST', 'localhost'),
        port: env.int('DATABASE_PORT', 3306),
        database: env('DATABASE_NAME', 'strapi'),
        user: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD', 'strapi'),
        ssl: env.bool('DATABASE_SSL', false) && {
          key: env('DATABASE_SSL_KEY', undefined),
          cert: env('DATABASE_SSL_CERT', undefined),
          ca: env('DATABASE_SSL_CA', undefined),
          capath: env('DATABASE_SSL_CAPATH', undefined),
          cipher: env('DATABASE_SSL_CIPHER', undefined),
          rejectUnauthorized: env.bool(
            'DATABASE_SSL_REJECT_UNAUTHORIZED',
            true
          ),
        },
      },
      pool: { min: env.int('DATABASE_POOL_MIN', 2), max: env.int('DATABASE_POOL_MAX', 10) },
    },
    postgres: {
      connection: {
        connectionString: env('DATABASE_URL'),
        host: env('DATABASE_HOST', 'localhost'),
        port: env.int('DATABASE_PORT', 5432),
        database: env('DATABASE_NAME', 'strapi'),
        user: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD', 'strapi'),
        ssl: env.bool('DATABASE_SSL', false) && {
          key: env('DATABASE_SSL_KEY', undefined),
          cert: env('DATABASE_SSL_CERT', undefined),
          ca: env('DATABASE_SSL_CA', undefined),
          capath: env('DATABASE_SSL_CAPATH', undefined),
          cipher: env('DATABASE_SSL_CIPHER', undefined),
          rejectUnauthorized: env.bool(
            'DATABASE_SSL_REJECT_UNAUTHORIZED',
            true
          ),
        },
        schema: env('DATABASE_SCHEMA', 'public'),
      },
      pool: { min: env.int('DATABASE_POOL_MIN', 2), max: env.int('DATABASE_POOL_MAX', 10) },
    },
    sqlite: {
      connection: {
        filename: path.join(
          __dirname,
          '..',
          '..',
          env('DATABASE_FILENAME', '.tmp/data.db')
        ),
      },
      useNullAsDefault: true,
    },
  };

  return {
    connection: {
      client,
      ...connections[client],
      acquireConnectionTimeout: env.int('DATABASE_CONNECTION_TIMEOUT', 60000),
    },
  };
};

However, the strange part is that deploying to production with the same settings, only connecting to a different database address, does not delete data. What could be the problem, and where should I investigate? Please advise.
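One way to confirm which client is issuing the deletes is to enable statement logging on the Postgres server. This is plain Postgres administration, not Strapi-specific, and requires superuser access:

```sql
-- Log every statement the server executes (reload config to apply):
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

-- Optionally include the client's application name in each log line,
-- which helps identify which process issues the DROP statements:
ALTER SYSTEM SET log_line_prefix = '%m [%p] %r %u@%d app=%a ';
SELECT pg_reload_conf();
```

With this enabled, the server log records every statement together with the client address and user, as in the output below.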

After enabling DB logs, I see:

2024-02-16 00:00:29 UTC [972367-16] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."components_entities_text_content"
2024-02-16 00:00:29 UTC [972367-17] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."components_content_oembed"
2024-02-16 00:00:29 UTC [972367-18] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."author_list"
2024-02-16 00:00:29 UTC [972367-19] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."components_blocks_block_article_image"
2024-02-16 00:00:29 UTC [972367-20] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."components_blocks_block_article_image_components"
2024-02-16 00:00:29 UTC [972367-21] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."components_blocks_block_with_quote"
2024-02-16 00:00:29 UTC [972367-22] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."components_blocks_block_with_quote_components"
2024-02-16 00:00:29 UTC [972367-23] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."components_entities_image_article_component"
2024-02-16 00:00:30 UTC [972367-24] 127.0.0.1(33716) bd@bd:  statement: drop table if exists "public"."components_entities_image_article_component_components"