How to avoid HTTP requests for Strapi data when running my own server as a single process

I have not yet looked into doing this, but judging from this:

const strapi = require('strapi');

strapi(/* {...} */).start();

It should be possible to basically combine Strapi with another server under one Node.js process. For example, I would like to have a Next.js and a Strapi server running as one process (so I have fewer processes to manage and can avoid any extra HTTP requests; especially since, if I do run them separately, they'd be running on the same server in my case anyway, and it just seems silly).

Assuming I get to that point, how could I, e.g. from a Next.js callback function that gets props from the backend, fetch data from Strapi without issuing an HTTP request? Given I'd be running in a single process, is there some module I can require to get Strapi data from?

Thank you

That is exactly what Strapi is not designed to do. What you are describing is basically a "coupled" CMS. Strapi is designed as a headless CMS, meaning it's designed to be run on its own, and "frontends" (also known as "channels") connect to it over a common API (in this case REST or GraphQL) via HTTP requests.


That is common, yes, but the benefit is that they don't need to be, and you can scale the backend (Strapi) without needing to scale the frontend. In many cases (Strapi included) we have our Strapi instances running in AWS while our frontends are deployed on another platform entirely (say, Vercel).


Here is another example of what a decoupled CMS looks like compared to a headless one:


Thank you @DMehaffy :slight_smile:

You saved me time exploring a dead end :sweat_smile:

When you say you have your "Strapi instances running in AWS", do you mean EC2 instances? Re auto-scaling solutions like Lambda: I haven't considered them for my solo project, to avoid skyrocketing costs in case of misconfiguration (e.g. a DDoS attack somehow bypassing Cloudflare).

Anyway, thanks for replying!

I appreciate the flexibility of keeping them decoupled more now, though. Your illustrations helped. More than scalability, I like the opportunity to swap in a different UI solution (as in your last diagram above).

Re keeping the UI solution on e.g. Vercel, though, I don't see the point if the UI to render is still waiting for data from Strapi hosted on e.g. an EC2 instance. Even if Vercel hosts the frontend on "edge nodes" closest to the user's location, that edge node still needs to communicate with an EC2 instance and wait for its response. I doubt it's worth the extra expense and the fragmentation of where processes run, in "startup" cases.


EC2, yes. Lambda will not work with Strapi due to the nature of how Strapi functions and its cold-boot time. Many have tried over the years to get it to work without success, and it's not an environment we are likely ever to support.

Generally yes, but the user never sees that delay in the many cases where you are using SSG or SSR, in which the server requests the information from EC2 at build time, while still supporting other UIs that aren't SSR/SSG.
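To make the build-time fetching concrete, here's a minimal sketch of a Next.js page pulling content from Strapi's REST API during static generation, so the visitor never waits on the Strapi round-trip. The `STRAPI_URL` env var and the `articles` collection type are my assumptions, not from this thread:

```javascript
// Sketch: Next.js SSG page fetching from Strapi at build time.
// Assumptions: STRAPI_URL env var, and an "articles" collection type.
const STRAPI_URL = process.env.STRAPI_URL || "http://localhost:1337";

// Strapi v4 exposes collection types under /api/<pluralName>
function articlesUrl(base = STRAPI_URL) {
  return `${base}/api/articles?populate=*`;
}

// In a real pages/index.js you would `export` this function.
async function getStaticProps() {
  const res = await fetch(articlesUrl());
  const { data } = await res.json();
  return {
    props: { articles: data },
    revalidate: 60, // ISR: re-fetch from Strapi at most once a minute
  };
}
```

The HTTP request still happens, but only at build (or revalidation) time on the server, not per visitor.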

Really depends on the use case. I've also seen setups where you don't host Strapi at all: you just run it locally to update content, then trigger a rebuild and push that to your static hosting (frontend).


Thanks @DMehaffy, I appreciate the time you took to reply. Interesting approaches you mention :thinking: There's definitely a lot more flexibility in keeping those APIs available.

Cheers :beers:

@justincalleja @DMehaffy I have gotten Strapi v4 (v4.1.12) to work in an AWS Lambda.

The hardest part to get working was removing the dependencies that are not used after the build.

It cannot make any changes to the project's files, but it can make changes to the database.

What's the average request response time? I'd imagine the cold-boot startup basically means every request takes between 1 and 10 seconds to actually respond.