How many requests per minute can Strapi handle?

[details="System Information"]

  • Strapi Version: v14.17.1
  • Operating System: Windows 10
  • Database: Default one Strapi uses
  • Node Version: v14.17.1
  • NPM Version: 6.14.13

[/details]

I wanted to know how many GET requests Strapi can handle if deployed on a server with 1 vCPU, 1024 MB of memory, and 1000 GB of bandwidth.
And what would be the recommended config, given that my app only needs to GET from the server and never needs to POST anything?

Thanks in advance.



That is a very subjective question; requests per second is a hard metric to calculate properly. The first thing to note is that you are below our recommended RAM specs (see this doc).

Requests per second largely depends on the size of each request, whether you are using pagination, and whether you have a lot of relations that need to be populated. It also depends heavily on the database infrastructure sitting behind Strapi. Ideally your database should be able to handle at least moderate connection pooling, and depending on the load you expect (after testing your application) you may need to scale the Strapi backend, either vertically using something like PM2 clusters or horizontally, which would depend on where you are hosting.
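For context, here is a minimal sketch of what a connection pool looks like in a Strapi v3 `config/database.js` using the bookshelf connector against MySQL/MariaDB; the hosts, credentials, and pool sizes are placeholders, not recommendations:

```js
// config/database.js (Strapi v3, bookshelf/knex connector)
// Placeholder values throughout; tune pool.min/max to what your
// database and expected concurrency can actually sustain.
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'mysql',
        host: env('DATABASE_HOST', '127.0.0.1'),
        port: env.int('DATABASE_PORT', 3306),
        database: env('DATABASE_NAME', 'strapi'),
        username: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD', ''),
      },
      options: {
        // knex connection pool, passed through by Strapi
        pool: { min: 2, max: 10 },
      },
    },
  },
});
```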

Functionally in the past I have deployed a personal application that has the following (also see my diagram below):

  • 3x Strapi “Nodes” (4 core, 6GB of RAM), each running a PM2 cluster of 4 instances (so a total of 12 Strapi backend “nodes”; see the PM2 sketch after this list)
  • 5x MariaDB Galera member nodes (2x write, 2x read, 1x arbitrator; each node is 8 core 16GB of RAM)
  • 2x (active/passive) ProxySQL nodes to handle read-write splits and some light read caching (2 core, 2GB of RAM)
  • 3x (active/active) Nginx edge nodes; the specs on these are not specific as they handle my entire environment's traffic. Ideally these are only used to round-robin the traffic to each of the Strapi virtual machines
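Here is a rough sketch of that PM2 cluster setup on each Strapi node, assuming a Strapi v3 project started through a small `server.js` entry point; the names and paths are placeholders:

```js
// server.js: minimal entry point so PM2 can run Strapi in cluster mode
const strapi = require('strapi');
strapi().start();
```

```js
// ecosystem.config.js: 4 instances per machine, matching the 4 cores
module.exports = {
  apps: [
    {
      name: 'strapi',
      cwd: '/srv/strapi-app', // placeholder project path
      script: 'server.js',
      instances: 4,
      exec_mode: 'cluster',
      env: { NODE_ENV: 'production' },
    },
  ],
};
```

Each node then starts its cluster with `pm2 start ecosystem.config.js`.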

I was handling about 4 to 8 million requests per week, or roughly 6.6 to 13.2 RPS (4 to 8 million requests spread over the ~604,800 seconds in a week), although it's rare that I would ever spike that high. I could probably have hit much more with better tuning of the database, as that was the primary bottleneck.


Thanks @DMehaffy for your quick response.
As you have explained, it looks like you had a big user base, but my app doesn't have that many users.
Also, I have only a single JSON object in a single collection entry.

  • One collection named “collection1”.
    • Only a single entry inside “collection1”.
      • Only a single JSON field named “jsonData” inside the entry. This JSON data is about 9k characters long.
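For reference, this is roughly how the app reads that data (just a sketch: the exact route depends on the pluralized API name Strapi generated for the collection, and the Public role needs the “find” permission enabled for unauthenticated reads):

```js
// Sketch of the single read; "/collection1s" assumes Strapi's default
// pluralized route name, and the host name is a placeholder.
const fetch = require('node-fetch');

async function loadJsonData() {
  const res = await fetch('https://my-strapi-host.example.com/collection1s');
  const entries = await res.json();
  // There is only ever one entry; it carries the ~9k-character JSON field.
  return entries[0].jsonData;
}

loadJsonData().then(console.log).catch(console.error);
```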

And the data only resets about 3-5 times per day, which means 3-5 POST/PUT requests per day.
GET requests happen about 5 times per day per user.
Keeping the costs low is also one of my main concerns.

Let's say we use 2 CPUs, 4 GB of RAM, and 80 GB of storage on Linode; will that be enough for my application to run smoothly?
The app doesn't deal with large amounts of data, nor does it request or post any of it online. It's a static app, with one of its parts using this Strapi implementation.
Also, I'm new to all this server and back-end related work.
Sorry if I'm asking any dumb questions.

Thanks.

Yes, for such a small use case that should be more than enough. If your data endpoints are public (what I'm about to suggest doesn't work yet with authenticated GET requests), you can use one of the community middleware packages, GitHub - patrixr/strapi-middleware-cache: A cache middleware for https://strapi.io, which adds a Redis-based caching system.

Using the cache middleware means your GET requests don't hit the database after the first one (until a configured expiry time), and the response is extremely fast since Redis is an in-memory database.
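As a rough sketch of what enabling it looks like in a Strapi v3 `config/middleware.js` (the option names here are from memory, so check the plugin's README for the exact keys and defaults):

```js
// config/middleware.js: enable strapi-middleware-cache backed by Redis.
// Option names are approximate; verify against the plugin's README.
module.exports = {
  settings: {
    cache: {
      enabled: true,
      type: 'redis',           // use Redis instead of the in-memory store
      maxAge: 3600000,         // cached responses expire after 1 hour
      models: ['collection1'], // only cache the content type being read
      redisConfig: {
        host: '127.0.0.1',
        port: 6379,
      },
    },
  },
};
```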

I regularly used it in my own case to cut down on requests to the database and saw a massive performance improvement for my large user base.