I am looking to create a website that crawlers can index properly. What is the best way to go about building a site like that? Is it possible with shadow-cljs and these libraries that interact with Gatsby or Next.js? I think I would like to use Reagent, but I do not know if I can.
How would you create a site that renders well for crawlers?
I see that both gatsby-cljs and next-cljs have been archived. I would be interested in hearing Thomas’ perspective on where his thinking has gone since his interest in Gatsby and Next.js.
Or is the answer obvious: is shadow-grove your (Thomas?) new focus, one that more or less subsumes any interest in Gatsby and Next.js?
I archived those since they were meant to demonstrate how you could potentially use CLJS with those platforms. They were written a long time ago and I haven’t looked at either platform since. Given that these are JS platforms, I doubt those examples even still work. I didn’t want to delete the repos, but if you intend to build something using these platforms you are probably better off starting from scratch.
shadow-grove is in no way comparable to those platforms. Server-side rendering is not supported and as of now is a non-goal, so I do not intend to work on anything in that area anytime soon.
If you ask me how to create a static site, I would tell you to create a normal dynamic website rendering HTML via CLJ (Clojure, not ClojureScript). I would use hiccup, but any other CLJ lib is fine. Anything goes. Use any CLJ server lib you want; ring+hiccup is fine. To generate the static part, you just curl every URL you need, store the .html where needed, and publish it to a server. Much more manual than the JS platforms, but also infinitely less complex.
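To make the ring+hiccup approach concrete, here is a minimal sketch. It assumes hiccup and the ring-jetty adapter are on the classpath; the `pages` map, `render-page`, and `export!` are hypothetical names invented for illustration, not anything from a real project. The export step simply writes each rendered page to disk in-process, which plays the same role as curling every URL.

```clojure
(ns site.core
  (:require [clojure.java.io :as io]
            [hiccup.page :refer [html5]]
            [ring.adapter.jetty :refer [run-jetty]]))

;; Hypothetical page data; in a real site this would come from
;; your own routes, database, or markdown files.
(def pages
  {"/index.html" {:title "Home"  :body "Welcome!"}
   "/about.html" {:title "About" :body "About this site."}})

(defn render-page
  "Render one page map to a full HTML document string."
  [{:keys [title body]}]
  (html5
    [:head [:title title]]
    [:body [:h1 title] [:p body]]))

(defn handler
  "Plain ring handler serving the same HTML dynamically."
  [{:keys [uri]}]
  (if-let [page (get pages (if (= uri "/") "/index.html" uri))]
    {:status 200
     :headers {"Content-Type" "text/html"}
     :body (render-page page)}
    {:status 404 :headers {} :body "Not found"}))

(defn export!
  "The 'curl every URL' step done in-process: write every
  rendered page under out-dir, ready to publish to any server."
  [out-dir]
  (doseq [[path page] pages]
    (let [f (io/file (str out-dir path))]
      (io/make-parents f)
      (spit f (render-page page)))))

(defn -main []
  (run-jetty handler {:port 3000}))
```

Since crawlers just see the generated .html files, there is no client-side JS involved at all unless you choose to add some.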
Rum might be worth a look. It’s a minimal React wrapper that supports server-side rendering, so it will tick the box for crawling purposes.
From the docs:
Rum is a client/server library for HTML UI. In ClojureScript, it works as React wrapper, in Clojure, it is a static HTML generator.
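A minimal sketch of what that looks like on the JVM side, assuming rum is on the classpath (the `greeting` component is an invented example): the same `rum/defc` component that would render via React in ClojureScript can be turned into a plain HTML string in Clojure with `rum/render-static-markup`.

```clojure
(ns site.rum-demo
  (:require [rum.core :as rum]))

;; Defined once; in CLJS this renders through React,
;; in CLJ it renders straight to an HTML string.
(rum/defc greeting [name]
  [:div.greeting
   [:h1 "Hello, " name]])

;; On the JVM: crawler-friendly static markup, no React runtime needed.
(println (rum/render-static-markup (greeting "world")))
```

That string can be served from any handler or written to disk, exactly like the curl-the-URLs approach above.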