Building a reactive multiuser Web app


Hi everyone! I’m trying to understand how to set up a reactive Web app that can sync data between multiple users in real time, with central database persistence. Imagine a user sees a list of items and modifies one of them; all the other users viewing the same data get an instant update.

Here’s my thinking on this so far…

  1. Users subscribe to specified data on their respective client app list-of-items-subscription
  2. Server sends a specific database query result to these clients ["item1" "item2" ...]
  3. Clients store the data in local state
  4. Clients select a specific part of the local state to display
  5. A user updates a field on the UI, which fires an event item-description-changed
  6. Event handler on client both…
    … changes the app local state
    … sends a message to the server to update the central database
  7. Server gets the update message and both…
    … commits the change to the database
    … sends a notification to all concerned data publishers
  8. Publishers send the subscribed clients only the change that occurred (description of item 1 is now “ModifiedItem”)
  9. Subscription on each client handles the incoming changes by updating the local app state
  10. All clients’ UI changes reactively to show the updated local state and displays the new list ["ModifiedItem" "Item2" ...]

And on and on… I could be completely wrong on this, so please feel free to correct me.
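For what it’s worth, the pub/sub part of those steps can be sketched in a few lines of plain Clojure. Everything here is a toy stand-in: atoms play the role of the database and subscription registry, and plain callbacks play the role of WebSocket pushes.

```clojure
;; Toy pub/sub registry: topic -> set of client callbacks.
;; In a real app each callback would push over a WebSocket instead.
(def subscriptions (atom {}))

;; Step 1: a client subscribes to a topic.
(defn subscribe! [topic client-fn]
  (swap! subscriptions update topic (fnil conj #{}) client-fn))

;; Step 8: send subscribed clients only the change that occurred.
(defn publish! [topic change]
  (doseq [client-fn (get @subscriptions topic)]
    (client-fn change)))

;; Stand-in for the central database.
(def db (atom {:items ["Item1" "Item2"]}))

;; Step 7: commit the change, then notify the publishers.
(defn handle-update! [topic path new-value]
  (swap! db assoc-in path new-value)
  (publish! topic {:path path :value new-value}))
```

A client that subscribed with `(subscribe! :list-of-items callback)` would then receive `{:path [:items 0] :value "ModifiedItem"}` after `(handle-update! :list-of-items [:items 0] "ModifiedItem")`, and could merge that diff into its local app state.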

Is there an existing framework for this kind of process? Otherwise, which libraries would be absolute musts for this? Client-side UI is not the main difficulty. Re-Frame, Hoplon, Om Next, etc. explain well how to handle this part. The interaction between client and server is my main issue, especially handling publication/subscription.

Thanks in advance!



I used Sente to great success; it uses WebSockets for a bidirectional connection between server and client.
IIRC it also falls back to AJAX if websockets are not available, which might be the case if you need to support older browsers / clients.



Not sure if it really helps, but when I last wrote a webapp with users, I had to figure out some things that weren’t really clear to me from the start.

I tried to summarize the core aspects of it in an example project that:

  • Uses buddy for authentication
  • Protects some parts of the back end with a user session, and leaves others open
  • Applies (different) middleware to some routes

The example is not authoritative: I’m not sure that this is how things are supposed to be done.
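The “protect some routes, not others” idea is really just middleware composition. Here’s a dependency-free ring-style sketch with hypothetical handler names; buddy’s real `wrap-authentication` takes an auth backend and does considerably more than this.

```clojure
;; Ring-style middleware: wrap a handler so it requires a user session.
;; (Toy version; buddy provides real session/token backends.)
(defn wrap-require-session [handler]
  (fn [request]
    (if (get-in request [:session :user])
      (handler request)
      {:status 401 :body "unauthorized"})))

;; A route left open to everyone.
(def public-handler
  (fn [_request] {:status 200 :body "public"}))

;; A route protected by the session check.
(def private-handler
  (wrap-require-session
   (fn [_request] {:status 200 :body "secret"})))
```

The nice property is that protection is decided per route simply by which handlers you wrap, which matches the “some routes have (different) middleware” point above.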

Hope that is useful for you.



Thank you for the recommendation about Sente. I’ve read about it quite a bit already and I figured that I would need something like this.

However, even if it allows me to communicate from server to clients, I’m still unsure about the subscription process. I make it sound really simple in my process breakdown to “send a notification to all concerned data publishers[, which] send the subscribed clients only the change that occurred”, but in reality, I don’t really know how to do this…


Thank you too for the advice about user-session-protected parts and the recommendation of Buddy. I’ll have a look at it. It’s nice to have some examples that go beyond the basics, so I can get a feel for what it would look like in real life!

For those of you who’ve read the steps I wrote, does this seem like a proper way of doing things? This is brand new territory for me, so I’d expect to be wrong in some aspects… What might be the pitfalls?


I’ve just read something about Datomic that I found interesting:

This is thanks to a great feature of Datomic’s, which is that every connected peer is aware of changes to the database in real time.

Does that mean that with Datomic, I could just store the change in the database (skipping 7.b), and the publishers would simply listen to its content and send the data when it changes? What do you think? Did I understand this correctly?
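If I’ve read the docs correctly, the usual mechanism for this is Datomic’s transaction report queue (`datomic.api/tx-report-queue`): a peer blocks on the queue and receives a report for every committed transaction, which it can then forward to interested clients. Below is a dependency-free sketch of that listening loop, with a plain `LinkedBlockingQueue` standing in for the real report queue.

```clojure
(import '(java.util.concurrent LinkedBlockingQueue TimeUnit))

;; Stand-in for (datomic.api/tx-report-queue conn).
(def tx-queue (LinkedBlockingQueue.))

;; Reports the listener has forwarded (stand-in for WebSocket pushes).
(def seen (atom []))

;; The listening loop a publisher would run: block on the queue and
;; forward each transaction report. The 1-second poll timeout is only
;; so this toy loop terminates when idle; a real loop would block forever.
(def listener
  (future
    (loop []
      (when-let [report (.poll tx-queue 1 TimeUnit/SECONDS)]
        (swap! seen conj report)
        (recur)))))

;; Committing a transaction elsewhere makes a report appear here:
(.put tx-queue {:tx-data [[:patient :height :cm 180]]})
```

So yes: with this pattern the write path only has to transact, and the publisher side reacts to the report queue, which is essentially your step 7.b moved out of the request handler.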


Look at this example project using pneumatic-tubes and Datomic.

I’m modeling an application for work off of it. It follows the idea of event sourcing plus a log-based architecture. All things that happen in the system are commands and events, but the value-add of the pneumatic-tubes library is that it uses the same re-frame commands and events on the backend. It sort of takes the re-frame lifecycle and mirrors it on the server. Specifically, what you are going to be looking for is how the server-side application polls the Datomic transaction log.

Let me know if I need to clarify anything or if you have any other questions.



Wow, thanks a lot for this link! I’ll try the example out, but the way you describe it is really appealing! Direct integration with a widely used framework like Re-Frame seems like a very intuitive way of doing things!


Yeah, it’s pretty slick. Also, to answer your question about Datomic peers: I think of a peer as a business concern. For example, a web server can be a peer while, at the same time, an analytics cluster also behaves as a peer. The advantage here is that all peers are sent transactions, which keeps them up to date, yet queries are executed against a single peer and have zero effect on the other peers. So if you need to run some super-intense analytics queries that take a lot of resources (memory, threads, machines, etc.), the web server is COMPLETELY UNAFFECTED, because all requests going to the web server hit only the web server peer. In this model, the number of reads against one peer doesn’t affect the performance characteristics of any other peer. This is huge.


If you aren’t concerned about immediate consistency, PouchDB with a central CouchDB server and real-time replication can handle this really easily.



I did a write-up on how my team structures such apps here; hope that helps.


Thanks @mjmeintjes for the recommendation! I had heard a little bit about CouchDB and its client-side equivalent, PouchDB, listening to the talk “From 0 to prototype using ClojureScript, re-frame and friends” by Martin Clausen, and it does seem pretty interesting. Do you have some kind of example I could refer to as to how to use them?

@Yogthos Wow, that’s a great example of what I was trying to figure out. From what I understand, each client keeps its own state as a re-frame app-db, but when the server updates the central state (database or not), it also issues the same notification to all connected clients at the same time via websocket. There doesn’t seem to be any kind of publication/subscription mechanism needed then, right?


Yup, the server has all the information it needs to propagate messages between clients. The key to this approach is making sure that all the business logic runs server-side, and the clients are only responsible for the presentation layer.
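That “server owns the truth, clients only render” flow needs no per-topic bookkeeping at all: the server keeps one state, and every connected client gets every notification. A toy sketch, with callbacks standing in for WebSocket sends:

```clojure
;; The single authoritative state, held server-side.
(def server-state (atom {:items ["Item1" "Item2"]}))

;; Connected clients; in reality these would be WebSocket channels.
(def connected-clients (atom #{}))

(defn connect! [client-fn]
  (swap! connected-clients conj client-fn)
  ;; Send the full current state on connect.
  (client-fn @server-state))

;; Business logic runs server-side, then every client is notified.
(defn handle-event! [path value]
  (swap! server-state assoc-in path value)
  (doseq [client-fn @connected-clients]
    (client-fn @server-state)))
```

Each client just resets its local app-db (or the relevant part of it) from whatever the server sends, so the presentation layer stays entirely dumb.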


Your article is very interesting, thank you! We happen to work in very similar spaces (in Canada no less). I was wondering if you’ve had any “lessons learned” you’d apply to the architecture you describe in the article, after working with it for a while? Any system interactions that were challenging with that architecture? Any unforeseen benefits/limitations?


We’ve been running an app using more or less the same architecture as described for about a year in production now. Overall, it works pretty well, and we haven’t run into any major issues with it.

I’d say the most challenging aspect of the architecture is ensuring UI responsiveness. The problem is mitigated by the fact that the roundtrips happen only when the user switches between fields. The client can optimistically set the value of the field the user just edited, and in cases where there are no related fields, this happens transparently to the user.

However, things get a bit trickier when you have fields that get updated via business rules. For example, consider the case where you have auto conversion between inch and cm for height. If a user updated height in cm, then you have to ensure that the height in inch field has the recalculated value before the user is allowed to focus it. Our approach to this problem is to explicitly declare the inputs and outputs for the business rules. For example, a rule might look something like this:

{:id      :height-to-inch
 :type    :action
 :inputs  [[:patient :height :cm]]
 :outputs [[:patient :height :inch]]
 :fn      (fn [ctx [height-cm] [height-inch]]
            [(when height-cm (/ height-cm 2.54))])}

This allows us to know all the related fields that need to be locked when a user is focused on a specific field. The other advantage is that it avoids the need to keep the entire document in memory on the server.

The client locks all related fields once the user focuses a field. Once the user moves off the field, the client sends the server the updated field as well as the current values for the related fields when they exist. The server will run the business rules on the given path/value pairs, and notify all the clients of the results.
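A toy interpreter for rules in that shape might look like the following. The names are guessed from the example; the real engine surely does more (batching, locking, conflict handling).

```clojure
;; The height-conversion rule from the post (1 inch = 2.54 cm).
(def height-rule
  {:id      :height-to-inch
   :inputs  [[:patient :height :cm]]
   :outputs [[:patient :height :inch]]
   :fn      (fn [ctx [height-cm] [height-inch]]
              [(when height-cm (/ height-cm 2.54))])})

;; Run one rule against a state map: read the declared inputs,
;; call the rule fn, write the results back to the declared outputs.
(defn run-rule [state {:keys [inputs outputs] rule-fn :fn}]
  (let [in-vals  (mapv #(get-in state %) inputs)
        out-vals (mapv #(get-in state %) outputs)
        results  (rule-fn {} in-vals out-vals)]
    (reduce (fn [s [path v]] (assoc-in s path v))
            state
            (map vector outputs results))))
```

Running `(run-rule {:patient {:height {:cm 180}}} height-rule)` yields a state whose `[:patient :height :inch]` is roughly 70.87. Because inputs and outputs are plain data, the server can also answer “which fields must be locked when this one is focused?” just by scanning the rule declarations.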

Since the websocket will block while handling requests, you want to avoid doing IO in that thread in order to keep roundtrip times fast. We’re using core.async to serialize the data to Postgres in the background, and we notify the clients optimistically once the results have been calculated.
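Setting the core.async details aside, the “reply optimistically, persist in the background” pattern can be sketched with an agent standing in for the background writer (an atom stands in for Postgres; all names here are hypothetical):

```clojure
;; Stand-in for the Postgres table.
(def persisted (atom []))

;; The agent's thread does the slow IO off the socket thread.
(def writer (agent nil))

(defn persist-async! [change]
  (send-off writer
            (fn [_]
              (Thread/sleep 10)           ; pretend this is a DB write
              (swap! persisted conj change)
              nil)))

;; The websocket handler: queue the write, return the new state
;; immediately so clients can be notified without waiting on IO.
(defn handle-ws-event [state change]
  (persist-async! change)
  (assoc-in state (:path change) (:value change)))
```

The handler returns as soon as the in-memory state is updated, while the write lands in the “database” shortly after on the agent’s thread, which is the same shape as the core.async version described above.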

Another aspect the article doesn’t talk about is UI rules. The business rules, as the one above, focus on changes in data. The UI rules track UI state such as whether a field fails validation, if a section should be expanded or collapsed, and so on. These rules are triggered by changes in the data, but generate state for UI elements as opposed to updating the document. An example would be a rule that toggles whether a section is expanded or collapsed based on whether the patient is a smoker.

Finally, if you wish to do horizontal scaling, you need to introduce a queue that the servers subscribe to. When the update happens, the server that received the request will push changes to the queue, and then all servers will receive the change and push it back to their clients.


Thank you for the insight/reflection! It’s interesting because we have a different architecture/implementation yet experience very similar consequences/hardships; I guess that’s just the irreducible complexity of event-based/reactive systems?


Thanks, that’s interesting to hear. Are you able to share any details regarding your approach?


Absolutely. We are in the healthcare space, in cardiology reporting (early stage). Our front end is TypeScript/React/Redux, with an API in Scala and a rules engine in Clojure (Clara Rules). We are hoping to do some refactoring to reduce the use of Scala and move more responsibility into Clojure. Our application is similar to yours: event-driven and responsive over a websocket for results. The general model of the backend is (Event, State) -> Actions, where Actions are concrete operations on State to transition to the next State. Actions are pushed to persistence on the API through Akka journaling and also back to the client through the websocket. Both the client and the API perform the resulting actions on their state to stay synchronized. The client obviously adds extra UI state to the Redux application state, but mostly works by responding to Actions coming back over the websocket.

We’ve run into very similar issues to the ones you describe, particularly the need to use on-blur guards to gate action dispatch from the client to the backend; without them we saw performance problems as well as poor UX (input stuttering, etc.). We’ve also found that we’ve overused Clara Rules in a paradigm it doesn’t quite excel in: manipulating and reasoning about state over successive rule applications over “time”, where some state (e.g., BMI) is a function of other state, say height and weight, and other observations may cause any of these to change, potentially many times, over the course of logical rule propagation. I’m very interested in your path/rule state reduction process, and how it looks/operates over many, many, possibly competing rules.

Our data “model” is entirely hierarchical, in a tree-like structure. This structure has been causing more and more headaches over time (particularly in combination with the (Event, State) -> Actions model), and so we’ve been thinking of revisiting the architecture. Happening to discover your post was quite a nice surprise!


That does sound quite similar indeed. Our approach for reconciling rules is to group them by the related paths. Since each rule has to describe its inputs and outputs explicitly, we’re able to build a DAG of rule executions for each path that triggers a rule. We supply the initial state to the DAG and it runs transactionally returning a new state that gets persisted.
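Since each rule declares its :inputs and :outputs, the execution order falls out of a topological sort over those paths. A naive sketch with hypothetical rule maps (assumes the rules form a DAG, i.e. no cycles):

```clojure
;; Two toy rules: convert height to meters, then compute BMI from it.
(def rules
  [{:id :bmi  :inputs [[:height :m] [:weight :kg]] :outputs [[:bmi]]}
   {:id :to-m :inputs [[:height :cm]]              :outputs [[:height :m]]}])

;; Rule a depends on rule b when one of b's outputs feeds a's inputs.
(defn depends-on? [a b]
  (boolean (some (set (:outputs b)) (:inputs a))))

;; Naive topological sort: repeatedly pick a rule that depends on no
;; other remaining rule. Would loop forever on a cycle, so a real
;; engine must validate the rule graph first.
(defn rule-order [rules]
  (loop [remaining (set rules) order []]
    (if (empty? remaining)
      order
      (let [ready (first (filter (fn [r]
                                   (not-any? #(and (not= r %)
                                                   (depends-on? r %))
                                             remaining))
                                 remaining))]
        (recur (disj remaining ready) (conj order ready))))))
```

Here `(rule-order rules)` puts `:to-m` before `:bmi`, which is the “DAG of rule executions” idea: once the order is fixed, the whole batch can be run transactionally against the initial state.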