Strapi, Nuxt and Traditional Static Hosting (i.e. not Heroku etc.)

Looking to get Strapi working on opalstack.com, a traditional web host that is close to a VPS (formerly Webfaction). In other words, I'm not using Netlify, Heroku, etc.

The aim is then to allow editors to create content in Strapi that updates a Nuxt static app. Since the Nuxt /dist is built offline on my PC and then uploaded to a simple static app on Opalstack, can Strapi-created content be automatically injected into the HTML/JS on the server? It seems improbable. The point is to keep the SEO benefits of the Nuxt static approach. Most Strapi docs seem to involve Heroku/Netlify-type solutions where the site is rebuilt automagically with every change. I think.

I'm also struggling to get Strapi working on Opalstack; a Node problem, I think (I've never used Node on a production server). I've created a Node app which contains App.js with "Hello World from NodeJS", a stop/start script, and that's about it.

Then I ran npx create-strapi-app appname --quickstart, which creates a subfolder with Strapi, but after that I'm stumped. In the browser I only ever see "Hello World…". I've tried updating the .env and config/server.js with my server's IP address and port (changed from 1337), and npm run develop, etc.
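For reference, in Strapi v3 the host and port live in config/server.js and can be driven from .env. A minimal sketch, assuming the quickstart layout (the values shown are placeholders, not my actual settings):

```js
// config/server.js (Strapi v3) -- minimal sketch; the host/port values
// are examples and should match whatever the host assigns to the Node app
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
});
```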

I've tried all sorts of URLs with /admin or /dashboard, and /appname/admin and /appname/dashboard, etc. I get either an nginx error page or a 502 (when I remove App.js). I'm obviously not running the Node app properly.

Does anyone have experience of the above, i.e. Nuxt/Strapi on traditional hosting rather than fancy platforms?

Stuart

Progress.

I can see the admin now. All I needed was to make .env available and change the port to the custom port Opalstack created for my Node installation. I'd swear I did that yesterday…
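For anyone following along, the .env ended up looking roughly like this (the values are placeholders; the port is whatever Opalstack assigned to the Node installation):

```
HOST=0.0.0.0
PORT=15185
```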

Now battling with CORS, making requests to my server from my local installation:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://strapi.mydomain.co.uk:15185/Homepage/0. (Reason: CORS request did not succeed)
Apparently "CORS request did not succeed" can be caused by failures other than an actual CORS issue. Sigh.
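For the record, in Strapi v3 the CORS whitelist lives in config/middleware.js. A sketch, where the origins are examples for my setup (list every host the front end will be served from):

```js
// config/middleware.js (Strapi v3) -- sketch; the origins are examples,
// replace them with the hosts your front end is actually served from
module.exports = {
  settings: {
    cors: {
      enabled: true,
      origin: ['http://localhost:3000', 'https://www.mydomain.co.uk'],
    },
  },
};
```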

More progress. I just needed to remove the port number from the request URL. Working.

BUT.
I don't see how Strapi (or possibly any headless CMS) can work with regard to SEO.
I've generated my Nuxt static app and uploaded the /dist, thus getting an HTML file for every page. But the Strapi content is only requested once the page is downloaded and run as an SPA. The pre-built HTML files just have blank sections where the text should be.

That appears to mean that Google's crawlers will find no content on the pages.

What am I missing?

It's hard to say without seeing your code, but when you build your Nuxt application you need to change the target in your Nuxt config. It should then prefetch the data from Strapi at build time and embed it in the generated files.

It can do this in several ways: either it crawls your site through your internal links, or you provide information in the Nuxt config about which URLs to include. Or both.
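A sketch of what that can look like in a Nuxt 2 config — assuming axios and a Strapi v3 collection reachable at strapi.mydomain.co.uk (both assumptions for this setup):

```js
// nuxt.config.js -- sketch; the Strapi URL and the "homepages"
// collection name are assumptions, adjust to your own content types
import axios from 'axios'

export default {
  target: 'static',
  generate: {
    // tell `nuxt generate` which dynamic routes to pre-render
    async routes() {
      const { data } = await axios.get('https://strapi.mydomain.co.uk/homepages')
      return data.map((entry) => `/homepage/${entry.id}`)
    },
  },
}
```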

Thanks Krislunde. I'm asking much the same on the Nuxt Discord headless thread. I learned Nuxt from Max on Academind, but that was a late-2018 course. Going over the Deployment section again: SPA is rubbish for SEO; static (run generate with target: 'static') is good for SEO and is what I'm using, but as described there is a gaping hole in the pre-built HTML where the Strapi content should be. Perhaps I need Universal mode, with a Node.js app on my server. I think that would run my Nuxt/Strapi code to fetch the Strapi words and images and populate the HTML, so crawlers get the full webpage, words and all.

Alternatively, crawlers can, in theory, run the JS and see what a client would see. But I'm sceptical about that, and it's hard to test, particularly as there are other search engines (DuckDuckGo etc.).

A bit more. I got my Nuxt app running on my Opalstack Node app, hopefully meaning Universal mode. It went really smoothly. BUT: when I look at the website, the Network tab shows the call to Strapi to fetch the content. I had hoped that running my site on the server would mean all API calls would run before the HTML was sent to the client. Not so.
Meaning that, IMHO, there is still the risk that search crawlers will not see the content.
So I've added the site to Google Search Console and used View Crawled Page. That should show whether the Strapi content is 'live' at crawl time. It will take a day or so.

The results are in. I'm not the most expert NodeJS/deployment person, so be patient. I've deployed my Nuxt app to a 'traditional' hosting account (Opalstack), i.e. not using trendy stuff like Netlify/Heroku, in what I think is Universal mode, i.e. running on Node. Settings: Nuxt v2.14.7, Environment: production, Rendering: Server-Side, Target: static. I've run npm run build and npm run generate, and I use pm2 to start the app. The website works.
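In case it helps anyone, the pm2 side can be captured in an ecosystem file rather than a raw command line. A sketch, where the app name and port are assumptions:

```js
// ecosystem.config.js -- pm2 sketch; the name and PORT are assumptions,
// PORT should be the custom port Opalstack assigned to the Node app
module.exports = {
  apps: [
    {
      name: 'nuxt-app',
      script: './node_modules/nuxt/bin/nuxt.js',
      args: 'start',
      env: {
        HOST: '0.0.0.0',
        PORT: 3000,
        NODE_ENV: 'production',
      },
    },
  ],
};
```

Started with `pm2 start ecosystem.config.js`.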
The Nuxt app uses @nuxtjs/strapi to include a line of text built from my Strapi Node app, also on Opalstack. The client fetches this text: the GET request can be seen in the browser's Network tab. JSON is returned from Strapi, and the Vue/Nuxt JS then updates the DOM.
BUT: monitoring the webpage in Google Search Console's View Crawled Page shows a big heap of nothing where the Strapi-fetched text should be.
This means my current setup is a massive fail for SEO. All that "awesome" content will be invisible to search engines.
Where am I going wrong? Or is Nuxt + Strapi headless dead in the water? Does any headless CMS called from any front-end framework really work for SEO?

Or: what do I need to do to make prefetching of the Strapi content work?
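From what I understand, the usual answer is to fetch the Strapi data in asyncData (or the fetch() hook) rather than in mounted(): asyncData runs on the server during SSR and during nuxt generate, so its result is serialized into the pre-built HTML. A sketch using the @nuxtjs/strapi module, where 'homepage' is an assumed content type:

```js
// pages/index.vue <script> section -- sketch; "homepage" is an assumed
// Strapi content type, fetched via the @nuxtjs/strapi module's $strapi
export default {
  async asyncData({ $strapi }) {
    // asyncData runs during `nuxt generate`/SSR, so the fetched text is
    // baked into the HTML that crawlers see -- unlike mounted(), which
    // only runs in the browser after the page has loaded
    const homepage = await $strapi.find('homepage')
    return { homepage }
  },
}
```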

Update. I found a Google Search video just sitting in my YouTube recommendations (handy). It explained how Google first indexes the HTML version of a page and then, some time later, renders the page using JS (I didn't know that). And look what I found today: Search Console's View Crawled Page now states: "This text is from the CMS Strapi". This is great. Yesterday there was a blank space where I expected content. A positive result.