Have Clojure UIs Taken the Wrong Path? Part 1

The point is that you only need stuff like React because you’ve moved so much business logic into the client and made a fairly fat client. If you avoid that temptation, you can still have interactive apps, but with the logic on the backend (where you have your live REPL already).

sigh. I guess getting back into the habit of reflexively refreshing the browser is a given, then. A blast from the pre-Clojure days…

Thanks for the nice write-up! A question with your “press the heart button” example, though. What about when someone else hearts one of those? In that case do you just need to leave HTMX and use a web socket? That feels against the grain of an HTML-driven workflow, but maybe that is an inherently non-HATEOAS situation. Generally speaking, what can HATEOAS or HTMX offer in apps centered on modern multi-user, shared-state situations? Maybe I’m missing something here, but does HTMX inherently not apply to such cases?

Similarly common problem: how can you deal with user sessions with an htmx approach? There is probably a way, but suppose permissions, preferences, and recommendations are all coming into play and affecting the user experience. That is pretty common, and I would love for someone to say how an HTML-driven approach (htmx, HATEOAS) can handle it.

A question with your “press the heart button” example, though. What about when someone else hearts one of those? In that case do you just need to leave HTMX and use a web socket?

I didn’t cover this in the article, but htmx actually supports websockets: see the htmx websockets extension docs. The Biff starter app uses them for an extremely basic chatroom feature. htmx handles the client-side connection stuff. On the backend you set up a websocket handler, and then you can send snippets of HTML to the client, e.g. when other users write messages.
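As a rough sketch of what that backend handler can look like, assuming http-kit for the websocket side and hiccup for rendering (the `clients` atom and `message-snippet` names are purely illustrative; this is not how Biff actually structures it):

```clojure
(ns chat.ws
  (:require [org.httpkit.server :as http]
            [hiccup2.core :as h]))

;; All currently connected chat clients.
(defonce clients (atom #{}))

(defn message-snippet
  "Render one chat message as an HTML fragment. The hx-swap-oob
   attribute tells htmx's websocket extension to append it to the
   element with id chat-messages on each connected client."
  [author text]
  (str (h/html [:div {:id "chat-messages" :hx-swap-oob "beforeend"}
                [:p [:strong author] ": " text]])))

(defn chat-handler
  "Ring handler that upgrades the request to a websocket (http-kit API)."
  [request]
  (http/with-channel request ch
    (swap! clients conj ch)
    (http/on-close ch (fn [_status] (swap! clients disj ch)))
    (http/on-receive ch
      (fn [raw]
        ;; Broadcast the rendered snippet to everyone, sender included.
        (doseq [c @clients]
          (http/send! c (message-snippet "anon" raw)))))))
```

The key point is that the payload going over the wire is just HTML; the client needs no message-handling code beyond htmx’s extension.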

Isn’t the biggest thing React gives us the virtual DOM, which enables hot reloading? Dev without that seems less joyful.

You can set things up so the browser refreshes automatically, e.g. with weavejester/ring-refresh on GitHub, which I believe is based on polling (send GET requests to a file until the modified time changes, then refresh). Or you could set up a websocket connection with htmx, set up a file watcher on the backend, then send a refresh command whenever a file changes. That’s been on my todo list for a while; it’s taken so long because I honestly have been happy enough with manually refreshing :wink:.
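The file-watcher half of that could be as simple as polling modification times; a naive sketch (all names hypothetical):

```clojure
(ns dev.reload
  (:require [clojure.java.io :as io]))

(defn newest-mtime
  "Most recent last-modified timestamp (ms) of any file under dir."
  [dir]
  (->> (file-seq (io/file dir))
       (filter (fn [f] (.isFile ^java.io.File f)))
       (map (fn [f] (.lastModified ^java.io.File f)))
       (reduce max 0)))

(defn watch!
  "Poll dir every interval-ms and call on-change! when anything changes.
   on-change! might broadcast a refresh message over the htmx websocket
   connection, or whatever mechanism triggers the browser reload."
  [dir interval-ms on-change!]
  (future
    (loop [last-seen (newest-mtime dir)]
      (Thread/sleep interval-ms)
      (let [now (newest-mtime dir)]
        (when (> now last-seen)
          (on-change!))
        (recur now)))))
```

A production version would likely use `java.nio.file.WatchService` instead of polling, but the polling version is easier to reason about.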

(Of course that’ll do a full page refresh, but I think that fits well enough with htmx: the whole point is to avoid having much client-side state anyway.)

Similarly common problem: how can you deal with user sessions with an htmx approach? There is probably a way, but suppose permissions, preferences, and recommendations are all coming into play and affecting the user experience. That is pretty common, and I would love for someone to say how an HTML-driven approach (htmx, HATEOAS) can handle it.

Plain old cookie-based sessions work fine. Put the user ID/session ID in the cookie, then on the backend use it to look up user permissions, preferences, etc. in the database/cache.
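A minimal sketch with Ring’s standard session middleware (the `find-user` lookup is a stand-in for a real database or cache query):

```clojure
(ns app.session
  (:require [ring.middleware.session :refer [wrap-session]]))

(defn find-user
  "Hypothetical lookup; in a real app this hits the database/cache."
  [user-id]
  {:id user-id :role :admin :theme :dark})

(defn handler
  "Render per-user content if a session exists, else redirect to login."
  [request]
  (if-let [user-id (get-in request [:session :user-id])]
    (let [user (find-user user-id)]
      {:status  200
       :headers {"Content-Type" "text/html"}
       :body    (str "<p>Theme: " (name (:theme user)) "</p>")})
    {:status 302 :headers {"Location" "/login"} :body ""}))

;; wrap-session stores the session server-side and puts only an
;; opaque session cookie in the browser.
(def app (wrap-session handler))
```

Everything permission- or preference-dependent happens server-side before the HTML is rendered, so htmx never needs to know sessions exist.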

As an aside: I honestly am not sure I fully understand the arguments around hypermedia; I basically just think in terms of thin client vs thick client. htmx gives you a few extra tools to make a thin client approach more practical. But there are cases where a thick client is what you need (e.g. highly interactive apps like google docs/google maps, or apps where you need offline support), and those are the main cases where htmx isn’t a great fit.


I’m putting this link here because I keep losing it. It links two good videos about HTMX.

Definitely. I work on a web app where I have to refresh the browser for UI changes. I end up prototyping a lot in CLJS/React before moving things back to the app.

I honestly am not sure I fully understand the arguments around hypermedia; I basically just think in terms of thin client vs thick client.

I think there is a better framing than thick vs thin clients.

It’s an interesting irony that in order to develop richer clients, ClojureScript SPAs purchased “sophistication” at the price of data orientation and simplicity.

I’m grateful, of course, for the front end ClojureScript frameworks that we do have. But I’m skeptical that price was worth paying. Or that it was necessary. I would argue that greater sophistication and power can be achieved at a much lower price by returning to approaches that are simple and data oriented. One of which is hypermedia.

Hypermedia has a data-oriented approach in that the server ships all the information and controls to the client. Rather than extending that approach, the RPC architecture won the decade. You don’t just ship data. You ship a big glob of JavaScript.

Hypermedia is also simpler. The React+ RPC architecture requires heavy coupling between a custom client and your backend. This becomes extremely painful where there are multiple backend services, since your frontend can be coupled to different versions of the backend. In fact, one of the main causes of the “distributed monolith” anti-pattern is precisely that so many services end up coupled through a frontend SPA.

Our backend ships pure data. The JS etc. is served from somewhere else and asks our backend for pure data. Yes, it’s RPC, but you’re conflating backend and frontend and assuming that they are inherently coupled: ours were built by, and are maintained by, two separate teams.

The typical backend today is already dependent on multiple other backend services: various payment gateways, search services, analytics/logging/monitoring, image analysis, geolocation… What you’re criticizing in React+ RPC frontend apps is what backend apps already look like.

Our backend services only have to be concerned with pure data: we don’t have to think about presentation, we don’t have to think about “controls” or UI or any such considerations. The frontend presentation can change dramatically and independently of the backend.


When I say hypermedia is a data-oriented approach, I’m applying the predicate “data-oriented” to the system as a whole, not its parts individually (the server application, the custom JavaScript application, the browser).

With respect to the question of coupling, I think we can be fairly specific.

A and B are coupled if, for a given change Δ, changing A requires changing B.

Let’s apply this definition to a React SPA and a Clojure backend.

An example could be to add a brand new field and show it to a user (our Δ). Our SPA and backend are coupled if both must change in order to implement this change. Displaying the new field on the frontend requires creating the field in the database and exposing it through an API.

Or again, let’s say users can already send chats to each other, but they want the ability to archive those chats (Δ). In order to implement this, the React SPA will have to create a control (say, a button that fires an event that sends a DELETE request to the server). The server will have to implement that DELETE API.

But a hypermedia app will typically not need to update the client application. I don’t need to submit a PR to Chrome or Firefox to add a button to a page. Nor would I need to submit such a PR if I am extending the browser as a hypermedia client using HTMX, Unpoly, or Phoenix Liveview.
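For instance, the archive control from the chat example above could be shipped from the server as plain hiccup; a sketch assuming htmx’s `hx-delete`/`hx-target`/`hx-swap` attributes (the routes and field names are illustrative):

```clojure
(ns app.chats
  (:require [hiccup2.core :as h]))

(defn chat-row
  "One chat with its archive control. The control itself (hx-delete)
   is shipped by the server; clicking it sends the DELETE request and
   swaps the whole row out of the DOM."
  [{:keys [id title]}]
  [:li {:id (str "chat-" id)}
   title
   [:button {:hx-delete (str "/chats/" id)
             :hx-target (str "#chat-" id)
             :hx-swap   "outerHTML"}
    "Archive"]])

(defn chat-list
  "Render the full list; hiccup flattens the seq of rows."
  [chats]
  (str (h/html [:ul (map chat-row chats)])))
```

Adding the archive feature means changing only this server-side render function plus the DELETE route; no client application ships.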

RPC tends towards low cohesion and high coupling.

These are toy examples, of course. In real applications, coupling becomes way more painful. It’s easy to get into Distributed Merge Hell. It’s very common to use an API to support both the frontend and backend services, only to accidentally expose sensitive information to end users.

If possible I prefer to build a page with HTMX. But some of our pages, like our video editor, are so interactive and use so many browser APIs that they need JavaScript/ClojureScript.

However, I find tight coupling between the frontend and backend fine; I even prefer it. The problems start if separate teams build the frontend and backend, so that they need to coordinate for each additional piece of data required by the frontend. Another source of complexity is if the same API serves different apps, like your SPA, your mobile apps, your official API, etc. Such a multipurpose API becomes a mess pretty quickly due to all the different requirements of the API consumers. Therefore I prefer to tailor-make an API for one use case only. This is the approach we use for our SaaS, which does tight coupling on purpose.


You cherry-picked your example to match your story, and you sort of keep doing that.

Any time you are told that “it’s a matter of tradeoffs”, alarm bells should go off. It’s a good indicator that the framing of the question is fundamentally misguided.

You said this in your follow-up post, which is where I stopped reading. Discussing trade-offs is exactly what you are supposed to be doing. So, let me highlight yours.

Hypermedia is a serialization format, nothing else. There is still the exact same “send a request, get a response back” RPC going on.

Instead of sending EDN, JSON, or whatever other data format might exist, you are sending a string. A string that is then interpreted by the client code and further “instrumented”. The client code handles it; the browser doesn’t do this magic on its own. Yes, there is still code required. Not to mention that the serialization format for the request part is also sort of limited to what your chosen code provides. Good luck sending EDN-type data without extra server-side work, e.g. translating keywords, sets, etc.
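For what it’s worth, that extra server-side work is usually a small coercion layer; a hedged sketch of per-field coercion from form-param strings back into EDN values (the `field-spec` map is purely illustrative):

```clojure
(ns app.coerce
  (:require [clojure.edn :as edn]))

;; Form submissions arrive as strings; a hypothetical spec map says
;; what each field should become on the server side.
(def field-spec
  {:tags   (fn [s] (set (edn/read-string s))) ; "[:a :b]" -> #{:a :b}
   :status keyword                            ; "active"  -> :active
   :count  (fn [s] (Long/parseLong s))})      ; "3"       -> 3

(defn coerce-params
  "Apply per-field coercions to a map of string form params.
   Fields without a coercion are passed through unchanged."
  [spec params]
  (reduce-kv (fn [m k v]
               (assoc m k (if-let [f (spec k)] (f v) v)))
             {}
             params))
```

Whether writing and maintaining this layer is cheaper than shipping a richer wire format is exactly the trade-off being discussed.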

A touted benefit is that you do not have to write this code, which is true until it isn’t. HTMX will often become insufficient if you actually start to do things “modern frontends” want to do. Then you might add something like hyperscript, or extend HTMX by other means, i.e. write client-side code. You might get pretty far by just slapping library after library into the mix to actually get what you need. That might be a fine trade-off to make, but know that you are trading.

You can flip your “coupling” argument the other way completely. In a data-oriented approach, adding a field is a non-breaking change, as Rich explained so wonderfully. As such the backend can do it without affecting the client. The client can then be updated to make use of this field. This is actually very decoupled, as it enables different teams to work on stuff explicitly WITHOUT getting into your “merge hell”, just like Sean described earlier. You can actually decouple this even further by using things like GraphQL/EQL over REST, but that’s a different topic.

Instead you are merging the frontend directly to your backend. Which quite honestly is coupled far more tightly. Your frontend devs can no longer make changes in their own “world”, as they’ll be touching backend code together with your backend devs. Often that is the same person, so that might seem like the ultimate benefit, but it potentially isn’t when working in larger teams. I have been a solo dev for over 25 years, so no clue what goes on in teams, but this is what I have heard from others, and again what Sean described. But even as a solo dev I see benefits of this decoupling.

No one here ever argued that you shouldn’t be doing “hypermedia”. It is a perfectly fine approach, just not as universal as you make it out to be. And your warning about “trade-off discussions” makes you sound like a religious fanatic, not a developer.


Discussing trade-offs is exactly what you are supposed to be doing.

Fully understanding the problem is prior to evaluating the tradeoffs of the solution.

Proposing a discussion of tradeoffs when one has already jumped to pet solutions without genuinely understanding the problem to be solved is not a good way of thinking about software.

I stopped reading…

This explains your interpretation of the argument, and I’d assume caused you to entirely overlook the recommendation I’m making.

The “all in” framing requires an extensive commitment either to a simple approach that may not cover your most sophisticated use cases or a complex approach that raises the costs of doing the most simple things.

The “opt in” approach tries instead to limit complexity, opting into it only in those parts of the application that require it.

When tradeoffs become prominent, it’s a good sign that you should be asking whether there’s some prior factor that increases the cost of your options.

I have been a solo dev for over 25 years, so no clue what goes on in teams

Having worked solo, on small teams, and on modestly large teams in large organizations, I’ve found the costs of underlying complexity go up non-linearly with the size of the team.

And your warning about “trade-off discussions” makes you sound like a religious fanatic, not a developer.

I’m not sure what to make of this. The warning about tradeoffs is explicitly couched in terms of options theory. Is the idea that I’m a Black-Scholes fundamentalist?


Indeed, but you can save others valuable time by being upfront about the tradeoffs, letting them decide whether they want to dig in or not.

Overall I think we agree on most things. I even made many of the same points in my blog post series. I specifically described an “opt in” approach to development.

I’m sorry, that was unnecessary. I think you are overselling hypermedia/HTMX a bit, but that doesn’t make it invalid.


I appreciate that. I should have been clearer that I don’t think hypermedia is the only alternative to React+ type libraries, only that it is powerful, under-appreciated, and plays well with other more sophisticated approaches. Not being clear on that detracted from the series, and obscured the intent and purpose from readers.

I also could be clearer that what I mean by “extended hypermedia” is pretty expansive: follow the constraints of hypermedia as far as is practical, and relax them where it makes sense to do so.

I read the “Lost Arts of CLJS Frontend” after starting my series. There are a lot of points of similarity, but also some thought-provoking differences. I’ve not fully internalized it, but I plan to. Shadow Graft definitely caught my attention, and I would like to understand it better with some hands-on use.

I agree that there is a point with HTMX in particular, and hypermedia in general, where you can only get so far. I’ve been thinking about this example from HTMX’s docs:

        <button class="btn btn-danger"
                onClick="let editing = document.querySelector('.editing')
                         if(editing) {
                           Swal.fire({title: 'Already Editing',
                                      showCancelButton: true,
                                      confirmButtonText: 'Yep, Edit This Row!',
                                      text:'Hey!  You are already editing a row!  Do you want to cancel that edit and continue?'})
                           .then((result) => {
                                if(result.isConfirmed) {
                                   htmx.trigger(editing, 'cancel')
                                   htmx.trigger(this, 'edit')
                                }
                           })
                         } else {
                            htmx.trigger(this, 'edit')
                         }">
          Edit
        </button>
I’m personally not a big fan of anything after the onClick, or using hyperscript, or Alpine. (Not to say they aren’t fine for others.) The first thing I thought of when I saw Shadow Graft is that it feels like a good tool to use here.

Generally I would prefer generic components on the frontend, though of course this does not always make sense. One big challenge I have faced myself when working on front-end scripts is how to keep generic behavior separate from behavior specific to the domain.


Yep, that’s definitely taking things too far.

Arguably this example takes it too far in the other direction, by writing it entirely without helper libraries, using pure DOM interop.

I’ve been meaning to write a couple more posts about practical examples for when shadow-graft might be useful. So I made this reusable “edi-table” “component” as a quick example.

It’s generic in the sense that it doesn’t care what the columns are. It just looks for an edit button and, based on config, loads the edit HTML from the server. Since I didn’t want to set up an actual server, this just loads the same HTML for each row. Saving is also not actually implemented. Also, since there is no server, I wrote actual HTML. :roll_eyes:

Not gonna go into more detail for now, but I hope it shows that this is all doable in a generic, not-react way.


This was posted here two and a half years ago on this forum:

So “Clojure UIs” already include a framework which works with HTMX, together with all the other React-based frameworks referenced by the author, as well as template-based (Hiccup, Enlive, Enfocus), DOM-manipulation-based (dommy, jayq) and many others. Based on this alone, I don’t believe “Clojure UIs” have taken any wrong turn as such.

While I certainly have various gripes with React and don’t use it any more as a result, I don’t see anything “wrong” with a web page retrieving data from a server and rendering on a client.

I admit HTMX provides some really nice features, but fundamentally it only covers a very narrow sliver of the functionality which a modern-day web developer should be able to use. So to do anything useful beyond client-server communication, you have no choice but to employ one or more additional JS libraries, whether you write them yourself or use existing ones.

Now the crux of the article(s) is really that HTMX/Hypermedia (herein just HTMX) with server-side rendering is a much better solution to building web-based applications than RPC-style REST returning data which is rendered client-side. The problem with this is that it doesn’t take into account the extremely limited scope HTMX has compared with RPC-style REST.

With HTMX, the client makes a request to the server. Since the response can only be HTML, the client can only be a web browser. So that rules out mobile apps entirely from using HTMX.

If you want to provide any styling in the response, you either have to use inline styles, or the client must already have those styles loaded. If the client has to have the styles loaded, then you already have tight coupling between the client and server.

If you want to provide any JavaScript functionality in the response, you have exactly the same options and the same tight coupling.

So in most cases if you use HTMX, you will have to write both the client and server.

With an RPC-style API, the client makes a request. The client can be anything you want it to be and doesn’t have to be a browser. If it is a browser, it can choose what it wants to do with the data. Most important of all, it can validate the data, so that if an API does change, it can take an appropriate course of action. If the data is valid, it can choose to render it, styling it in any way the particular client app sees fit, and providing any behaviour which that app already has loaded. If not, it can take an alternative course of action.

And the beauty of this approach is that not only can you provide the same RPC-style API to mobile apps, but you can in fact provide it to different web-based apps as well, even server-based apps, and let each app determine how they want to render the data, if at all. With HTMX you are entirely at the mercy of the server as to how the response is rendered.

Based on just a comparison between HTMX and RPC, I would argue that RPC-style APIs provide much looser coupling than HTMX and are much more consumable by a wider variety of clients than HTMX.

Another problem is that HTMX assumes that all client state can and should be managed entirely by the server, effectively eliminating client-state from the front-end. Much as we as functional developers want to try to eliminate state, this actually greatly limits the functionality of the client app and makes it entirely dependent on the server for any state changes, which can very quickly result in a much slower app. A client should have the option of maintaining its own state, and updating server state as and when it needs to, if at all.

It can also mean moving a lot of the work which a browser can and should perform onto the server, thereby creating an additional load on the server. This seems like a complete waste especially when it is something the client can easily handle and is better placed to handle.

If you have multiple regions on a page which are updated by HTMX, using just HTMX itself there doesn’t appear to be any easy way for these to communicate with each other, or to provide coordination between the different regions. They are after all just HTML/DOM elements, so this also represents a huge loss in functionality which you wouldn’t have if you were dealing with data manipulated by JavaScript.

What about a client which consumes from multiple APIs, any number of which are external to the origin serving the client? Again, RPC/REST wins hands down here, since at this time the overwhelming majority of APIs will be RPC/REST, usually serving JSON. This fact alone should be the primary driver for the design of the client app, since you will also be very hard-pressed to find an HTMX-based API which can provide you with what you are after.

These are just a few things I came up with when I first looked at the book at https://hypermedia.systems/ when the above developer first posted his framework. Definitely some good ideas, but as I’ve hopefully shown, it is very limited in scope and has a lot of shortcomings.


HTMX should be the go to for web development except in the following scenarios:

  • Complex interdependencies between components on the page (rare). In this scenario an update in one part of the page triggers updates in other places, necessitating close to a whole-page reload.
  • Shared codebase with React native mobile apps as the author mentions.
  • Putting a UI onto already existing api endpoints. If they already exist may as well use them.

95% of web apps don’t match any of these three conditions, and for them HTMX (especially SimpleUI) is hands-down better. Page load times are just so much faster: you’ve got 20 or 30 KB of data to load. The architecture is much simpler. Contrary to what the author asserts, you are not restricted to a very narrow sliver of functionality at all; you still have the whole web platform, and you can always call into JS when you really need to.

There’s an interesting parallel between HTMX and Clojure. 95% of developers have never tried Clojure so they don’t know what they’re missing out on. Same for HTMX.

Try the tutorial at https://simpleui.io


I think there’s a scale of both power and complexity:

  1. Static HTML files (hard coded)
  2. Generated static HTML files (dynamically generated at site-publishing time, but static after that point)
  3. Dynamically generated server-side HTML files (dynamically generated between page navigation, thus can dynamically generate content on user action that trigger a navigation (aka hypermedia links), not just at site-publishing time, requires server round-trip)
  4. Dynamically generated in-browser (dynamically generated whenever you want: timer based, on each user click, mouse-move, scroll, page-load, render, user selection, hover, etc.) (doesn’t need server round-trip)
  5. Dynamically generated in-browser, with server sourced context (dynamically generated whenever you want in-browser, but will also communicate with server to know what to generate based on the user event, or what content to display back)

Now some people don’t need the power of #5, so they can choose something that has less power but is also simpler. Others are so comfortable and familiar with the tools that enable #5 that they find no issue using them even when they don’t need the power. You can also argue that the same tooling that can do #5 is also simple to leverage for simpler use-cases. Others still find that using tools made for #5 when you don’t need that level of power is overly complex, maybe because they are not familiar with the tools, or because they are but still feel it’s too much for what they are using it for.

ClojureScript is made for #4 and #5, and so is React. It hasn’t “taken the wrong path”; it fills the void in power, since Clojure itself can never reach #4 and #5.

What I think has happened though, is a confusion, maybe to newcomers, that in order to make websites or webapps, ClojureScript must be used, when Clojure can suffice if you don’t need the power offered by #4 and #5.

Now #3 is interesting, because it offers a balance. What is even more interesting is that people have found it not powerful enough, so HTMX was born as a #3.5, if you want. Standard hypermedia is not powerful enough, but a little bit of extra addition to it makes it sufficient for a lot of use-cases that would have had to go for #4 or #5 before. The risk, of course, is that you are always chasing those “extras”, waiting for the next HTMX version or an alternative that can do just a little bit more. Whereas with the #4 and #5 approaches, you have no limit except what a browser can do.


ClojureScript is made for #4 and #5, and so is React.

It is possible to use React (and indeed ClojureScript) for static site generation (#3 on your scale). As Clojure developers, there’s rather little need for this, as we already have an expressive, well-supported language that can be used for all manner of site generation purposes.

However, if you’re looking at the situation from a plain ECMAScript perspective, React will seem like an attractive option. It gives you a ‘templating language’ of sorts (JSX), a wide variety of existing third-party UI components and potentially useful features like CSS-in-JavaScript, and it fits in well with a functional programming coding style. This kind of SSR (Server-Side Rendering), as it’s known in the React world, is rarely used without any JS being sent to the client, but you do see it from time to time.

Clojure itself can never reach #4 and #5.

cries in Java applets :stuck_out_tongue:

Seriously though, we may see a revival of languages such as Clojure for execution on the client. WebAssembly (WASM) has become very capable recently, and a lot of systems programming languages can already be compiled for and run seamlessly on WASM.