Heroku - Memory Leak

Hey guys! I am almost ready to publish my app but have now stumbled upon a huge memory issue with Strapi on Heroku. My app essentially allows users to upload multiple images and videos at once. I am using the Cloudinary plugin for this.

I did an E2E test today, going through the app like a regular user would, and was super shocked when I saw the memory spike in Heroku. I know this is very hard to debug, but maybe someone can help me here, since it is critical to my business and I am running against a wall right now.

a) I thought about first attaching a debugger to the Heroku Node instance and trying to see if I can spot the issue. Could you help me figure out how to do this? My current idea is sketched below.

b) There seems to be a memory leak issue with multiple file uploads, but right now this is just plain guessing…

Would be so happy if someone could help me here! Thanks
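For (a), the closest I have found so far is Node's built-in inspector module, which can open a debug port from inside the running process. This is just a sketch of the idea (the ENABLE_INSPECTOR variable is something I made up, and the debug port would still need to be tunnelled from the dyno to my machine somehow):

    // e.g. in config/functions/bootstrap.js, guarded by an env var so the
    // inspector only opens when explicitly enabled
    const inspector = require('inspector')

    if (process.env.ENABLE_INSPECTOR === 'true') {
        // open the V8 inspector on port 9229; Chrome DevTools or VS Code
        // can attach once the port is reachable from my machine
        inspector.open(9229, '0.0.0.0')
        strapi.log.info(`Inspector listening at ${inspector.url()}`)
    }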

So I implemented a simple realtime memory logger:

    const numeral = require('numeral')

    setInterval(() => {
        const { rss, heapTotal } = process.memoryUsage()
        strapi.log.info(
            `💾 Node Memory Usage | rss: ${numeral(rss).format('0.0 ib')} heapTotal: ${numeral(heapTotal).format('0.0 ib')}`
        )
    }, 5000)

I found out that the memory leak really does seem to be related to the image upload, since I saw a spike right after uploading a couple of images: memory jumped from an average of 130MB to 430MB. Even after the upload had finished, the memory usage did not go back down, which suggests there really might be a leak here.

Could anyone look into this and confirm whether that is the case?

Yes, the memory leak is caused by the upload feature. It's a known issue and I think the Strapi team is working on a fix. As alexandrebodin mentioned in the GitHub discussion, they have this issue in their backlog.


Thanks for the clarification. Great to know where the issue is coming from at least. Seems like I will have to think about a workaround for now.

Is there a way for me to override the Upload.js service from the plugin? I saw in this issue comment that a workaround for now could be to replace fs.readFile() with fs.createReadStream().

I created the Upload.js file at extensions/upload/services/Upload.js and tried to run it without any modifications, but even that unfortunately caused various errors.

I really need to find a workaround as soon as possible :slightly_frowning_face:

Yeah, take a look at the Upload.js file, as it includes other files that you should also copy to the extensions folder.

Ok, I finally managed to override the plugin's Upload.js service. Here are the files I needed to copy into the extensions folder:

extensions/upload/services/Upload.js
extensions/upload/services/image-manipulation.js
extensions/upload/utils

Now I am trying to follow vinod77's suggestion, but since I have little experience with buffers and streams I cannot manage to get it working. Could someone here maybe help me solve this last puzzle?

Here is vinod77’s suggestion:

The default Strapi code uses fs.readFile(), which loads the whole file into a Buffer and leads to the memory leak… So I wrote my own code using fs.createReadStream(), which streams the file all the way through…

:arrow_forward: Link to the Upload.js service
:arrow_forward: Link to the cloudinary upload provider
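To make sure I understand the suggestion correctly, the difference should be roughly this (just a sketch to illustrate the idea, not the actual Strapi service code; the provider call and the provider's writable stream are stand-ins here):

    const fs = require('fs')

    // what the default service effectively does: the whole file is read into
    // a Buffer, so a 100MB upload means at least 100MB of extra memory
    function uploadBuffered (filePath, provider) {
        fs.readFile(filePath, (err, buffer) => {
            if (err) throw err
            provider.upload({ buffer })
        })
    }

    // the suggested approach: open a read stream and pipe it into whatever
    // writable stream the provider exposes, so only small chunks are in memory
    function uploadStreamed (filePath, providerWritableStream) {
        fs.createReadStream(filePath).pipe(providerWritableStream)
    }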

We are facing almost exactly the same issue, also using the Strapi Cloudinary provider. Since we cannot roll out to production until this bug is fixed, I would be more than grateful if someone could help here!

Thanks guys!

Hey, any update, peter_hellies?

Just hit the same problem on Heroku. I am uploading a few files to an S3 bucket and getting the error. I can only imagine what would happen if, say, 300 users tried to upload files at once… does anyone have a workaround?

Were you able to find a solution? I tried an approach using streams: I created a new API route, used streams, and uploaded directly to S3 with the AWS SDK that ships with Strapi. For example, a 40MB file is streamed in parts of 5MB (the AWS default). What I found is that this only "delays" the eventual memory leak. Say two users each upload a 40MB file: instead of 80MB sitting directly in buffers, each user only uses about 5MB at a time (the 5MB AWS part buffer). But over time the memory still increases and is never returned to the server.
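For reference, the core of my streaming route looked roughly like this (a simplified sketch; the bucket name and key handling are placeholders, and it relies on the aws-sdk v2 managed upload, which splits the body into 5MB parts by default):

    const fs = require('fs')
    const AWS = require('aws-sdk')

    const s3 = new AWS.S3() // credentials and region come from the environment

    // stream a local temp file to S3 instead of buffering it whole;
    // partSize/queueSize keep only a few 5MB chunks in memory at once
    function streamToS3 (filePath, key) {
        return s3.upload(
            { Bucket: 'my-bucket', Key: key, Body: fs.createReadStream(filePath) },
            { partSize: 5 * 1024 * 1024, queueSize: 1 }
        ).promise()
    }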

Are there any solutions for this? The problem is super frustrating. I will have to rebuild my backend from scratch if there is no fix, as my app is heavily dependent on image uploads…

Hey Pedro, sorry to hear that. Yes, I can imagine it being very frustrating. My business model basically relies on image/video uploads as well.

The only thing that worked for me was painfully going into the upload provider code and modifying it like I mentioned in the comments above (essentially replacing fs.readFile() with fs.createReadStream()), which seems to have solved the issue for now. It was quite a hassle and took a few days, but that was mainly due to my lack of Node experience. I wish you all the best!
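In case it helps, the heart of the change on the provider side looked roughly like this (a simplified sketch rather than the exact provider code; it assumes the Cloudinary credentials are configured elsewhere, e.g. via CLOUDINARY_URL):

    const fs = require('fs')
    const cloudinary = require('cloudinary').v2

    // pipe the file into Cloudinary's upload_stream instead of handing over a
    // Buffer, so the whole file never has to sit in memory at once
    function uploadToCloudinary (filePath, publicId) {
        return new Promise((resolve, reject) => {
            const uploadStream = cloudinary.uploader.upload_stream(
                { public_id: publicId, resource_type: 'auto' },
                (err, result) => (err ? reject(err) : resolve(result))
            )
            fs.createReadStream(filePath).pipe(uploadStream)
        })
    }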


Thanks a lot for the tip. I am trying it right now and will let you know how it goes on my side.

I have also tried switching my OS image to Alpine and using jemalloc.
It seems that the library that does the image manipulation (sharp) has a known issue where the default glibc memory allocator fragments badly under load, and jemalloc seems to help with that.
Some reference here:
https://sharp.pixelplumbing.com/install#linux-memory-allocator

In my case it did improve memory usage, but the server was still restarting with multiple uploads anyway… Now I will try your fix.
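If it helps anyone else, sharp also exposes a couple of settings in code that its docs mention for reducing memory pressure; this is just a sketch based on the sharp documentation, not something taken from Strapi's image-manipulation code:

    const sharp = require('sharp')

    // disable sharp's internal cache of decoded images and limit libvips
    // concurrency; both lower peak memory at the cost of some throughput
    sharp.cache(false)
    sharp.concurrency(1)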

Oh super interesting. Good luck :slight_smile: