Data gets deleted when the Strapi container restarts

System Information
  • Strapi Version: 4.1.7
  • Operating System: Kubernetes; Docker image: strapi/base, which uses Debian 9
  • Database: PostgreSQL and SQLite (the problem is present in both)
  • Node Version: v14.16.0
  • NPM Version: 6.14.11
  • Yarn Version: 1.22.5

I created a Strapi project using

npx create-strapi-app@latest project-name --quickstart

I configured the DB connection and deployed it to Kubernetes.
At first I used PostgreSQL. The problem was that whenever a pod restarts, the database records for the content types get deleted, which was weird.

Then I thought maybe there was a problem with Postgres, so I tried SQLite and mounted the DB file on a persistent volume. Still the same problem.

However, I noticed that when I create a content type, Strapi adds some files under src/api/<content-name>/.
Since those files are not on the persistent volume, they are deleted on a restart.
At this point I am confused. Should I mount another volume there? Is there a step I am missing?

Hi @Furkan_Aksoy and welcome to the Strapi Community.

If you are using a Docker container and SQLite then you will lose your data.
The reason is that the database file is inside the container, so when you destroy the container the file gets destroyed too.
The same goes for other files: unless you mounted a volume or bind-mounted them, they get removed as well.
The best thing to do is use your local machine to BUILD the project, then build and deploy a Docker image.
If it’s for production, use Postgres or MySQL etc. to store data persistently. SQLite is meant to be used for local development and testing only.
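For context, the quickstart sets SQLite up roughly like the sketch below (details vary by version; the env helper and default path are the common convention). The key point is that .tmp/data.db lives inside the app directory, i.e. inside the container, so it disappears together with the container:

export default ({ env }) => ({
  connection: {
    client: 'sqlite',
    connection: {
      // stored inside the app directory, i.e. inside the container filesystem
      filename: env('DATABASE_FILENAME', '.tmp/data.db'),
    },
    useNullAsDefault: true,
  },
});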


It is an internal project. I set up the whole CI/CD pipeline. It is like LOCAL on steroids, so that we can work on it together with other team members.

I mounted the directories and it works now, but I am not sure whether what I did was correct.

I’m using PostgreSQL and my data still gets deleted.


Are you sure you are actually using it? :thinking: For example, have you set it to only use PostgreSQL in production, and is it actually running in production, etc.?

I’m facing the same issue. All my data gets deleted. Some code in Strapi sends an SQL query like

delete from __TABLE__ where published_at is null

and it does this for all the data models.
Annoying :slight_smile:

Same thing here: I’m using PostgreSQL and my data still gets deleted in production.


Strapi removes content that, for some unknown reason, it does not like. It looks fantastic: if you insert a table you need into the database and that table is not registered in Strapi, then the system deletes it.

I’m guessing this is because Strapi has a health check on startup where it checks the tables.
Tables that are not generated by Strapi should not be in the same database, as Strapi can’t control them.

There must still be some way to control this. Data cannot just be deleted without the user’s permission.


Again, make sure you are actually running it with Postgres or another external database.
For example, you might have Postgres configured for production while Strapi is actually running in development mode, which will cause this issue.
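A common way this happens: the database client is picked from an environment variable, and if that variable never reaches the container, Strapi quietly falls back to the SQLite default. A rough sketch (the variable names here are just the usual convention, not necessarily what your project uses):

export default ({ env }) => {
  // if DATABASE_CLIENT is not set inside the container, this quietly becomes 'sqlite'
  const client = env('DATABASE_CLIENT', 'sqlite');

  if (client === 'postgres') {
    return {
      connection: {
        client: 'postgres',
        connection: {
          host: env('DATABASE_HOST'),
          port: env.int('DATABASE_PORT', 5432),
          database: env('DATABASE_NAME'),
          user: env('DATABASE_USERNAME'),
          password: env('DATABASE_PASSWORD'),
        },
      },
    };
  }

  // ephemeral file inside the container, so it looks like data loss on every restart
  return {
    connection: {
      client: 'sqlite',
      connection: { filename: '.tmp/data.db' },
      useNullAsDefault: true,
    },
  };
};

So before blaming Postgres, it is worth checking which client and database the running container actually resolves.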

This just happened to me. I was building a custom route endpoint + service and suddenly I lost all the data in two tables that had relationships and the draft system enabled. But the worst part was that I also lost all the permissions of the authenticated role. I don’t know exactly what happened, except that suddenly my endpoint was returning a 403, and that’s when I found out the data had been removed.

And for the record, I’m using MySQL 5.7

Could you share your
config/development/database.js
config/production/database.js
config/database.js

For whichever of those you have, please :slight_smile: Also, where did you deploy it to?
And what is your NODE_ENV set to?

I’m running strapi in AWS via ECS (service) + ECR + RDS (postgres).
I run my container via the following commands:

export NODE_ENV="production"
strapi start

I’m doing a test where I moved the DB from prod to QA, and my container is run via the above commands. What I’m noticing is that prior to the container starting, my table spotlight_articles is present and has data. However, after the container is up and running, that table is dropped.

config/plugins.ts

export default ({ env }) => ({
  'users-permissions': {
    enabled: true,
    config: {
      jwtSecret: env('JWT_SECRET'),
      jwt: {
        expiresIn: '7d',
      },
    },
  },
  upload: {
    config: {
      provider: 'aws-s3',
      providerOptions: {
        region: env('AWS_REGION'),
        params: {
          Bucket: env('AWS_BUCKET'),
        }
      },
      actionOptions: {
        upload: {
          ACL: env('AWS_S3_ACL')
        },
        uploadStream: {
          ACL: env('AWS_S3_ACL')
        },
        delete: {},
      },
    },
  },
});

How is your database.ts set up?
Also, do you have different env setups for your Strapi environments?

@Eventyret
I am running my Strapi production on a k8s cluster, backed by RDS PostgreSQL.
I lost all my data (currently restoring from a one-day-old backup).

My NODE_ENV is production
I only have config/database.ts

export default ({ env }) => ({
  connection: {
    client: 'postgres',
    connection: {
      host: env('DATABASE_HOST', '--'),
      port: env.int('DATABASE_PORT', 5432),
      database: env('DATABASE_NAME', '--'),
      user: env('DATABASE_USERNAME', 'strapi'),
      password: env('DATABASE_PASSWORD', '--'),
      ssl: env.bool('DATABASE_SSL', false),
    },
  },
});

Please provide a solution; this is basically useless if I cannot rely on the data staying intact.

I also have /dist in my .gitignore.

Below is my Dockerfile that gets executed and creates the dist folder.

FROM node:16-alpine as build

# Installing libvips-dev for sharp Compatibility
RUN apk update && apk add build-base gcc autoconf automake zlib-dev libpng-dev vips-dev && rm -rf /var/cache/apk/* > /dev/null 2>&1

ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}

WORKDIR /opt/
COPY ./package.json ./yarn.lock ./
ENV PATH /opt/node_modules/.bin:$PATH
COPY ./providers ./providers
RUN yarn config set network-timeout 600000 -g && yarn install

WORKDIR /opt/app
COPY ./ .
RUN yarn build

FROM node:16-alpine

RUN apk add vips-dev
RUN rm -rf /var/cache/apk/*

ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}

WORKDIR /opt/app
COPY --from=build /opt/node_modules ./node_modules
ENV PATH /opt/node_modules/.bin:$PATH
COPY --from=build /opt/app ./

EXPOSE 1337
CMD ["yarn", "start"]

Welcome to the Strapi Community Forums @Rohithzr :birthday:
Does this happen on every container restart, or has it only happened once?

Best practice is always to use environment-specific configs so things don’t accidentally get overwritten if you run it locally, etc.
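For example, a sketch of that convention using the config/env/<environment>/ layout (the variable names are placeholders): a config/env/production/database.ts that only knows about Postgres keeps the production connection separate from whatever the base config uses locally.

config/env/production/database.ts

export default ({ env }) => ({
  connection: {
    client: 'postgres',
    connection: {
      host: env('DATABASE_HOST'),
      port: env.int('DATABASE_PORT', 5432),
      database: env('DATABASE_NAME'),
      user: env('DATABASE_USERNAME'),
      password: env('DATABASE_PASSWORD'),
      ssl: env.bool('DATABASE_SSL', false),
    },
  },
});

This file is only loaded when NODE_ENV=production, so local runs never pick up the production credentials.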

Hello

I have made changes to database.ts so that the database name changes based on NODE_ENV. I had a discussion on Discord and found this insight, among others.
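Roughly what that change looks like (a sketch with placeholder names, not the exact code):

export default ({ env }) => {
  // pick a separate database per environment so a wrong NODE_ENV
  // cannot point the app at the production data
  const database =
    env('NODE_ENV', 'development') === 'production'
      ? env('DATABASE_NAME', 'strapi_production')
      : env('DATABASE_NAME_DEV', 'strapi_development');

  return {
    connection: {
      client: 'postgres',
      connection: {
        host: env('DATABASE_HOST'),
        port: env.int('DATABASE_PORT', 5432),
        database,
        user: env('DATABASE_USERNAME'),
        password: env('DATABASE_PASSWORD'),
        ssl: env.bool('DATABASE_SSL', false),
      },
    },
  };
};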

However, I am facing another issue now where my header is not reaching the middleware, but only on the production server. I am creating another question on the forum :smiley:

Thanks

Hello @Rohithzr, I am also facing the same issue and I am doing it the same way. Can you please elaborate on how to resolve this issue?

Hello @handclap, have you resolved that issue?