Electric Clojure – a signals DSL for fullstack web UI

Re. caching - You’re right to be concerned about application performance, and you’re right that caching is a way to speed up SPAs that fetch a lot of server data, especially repeat requests for similar data. Full control over data sync is important for getting acceptable performance at scale, and for customizing it component by component where necessary. Have I understood the question properly?

We don’t have all the answers yet. The reactive network model is different from the RPC request/response model, with completely different performance characteristics and knobs to tune. Electric Clojure’s network updates are fine-grained streams, not RPC, so the client does not “fetch” anything; the server intelligently streams instead. There are no JSON payloads: the network is fine-grained and streams individual scope values at the tightest possible update granularity. The server understands what the client already has and will not re-push values that have not changed.
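A rough analogy in Python (the names here are hypothetical, this is not Electric’s actual wire protocol): the server keeps a shadow copy of what each client session already holds, and each push sends only the scope values that differ from that copy.

```python
# Toy sketch of "never resend what the client already has".
# Assumption: the server keeps a per-session shadow of the client's state.
class SessionStream:
    def __init__(self):
        self.client_view = {}  # server's record of what the client holds

    def push(self, scope_values):
        """Return only the key/value pairs that actually changed."""
        delta = {k: v for k, v in scope_values.items()
                 if self.client_view.get(k) != v}
        self.client_view.update(delta)
        return delta

s = SessionStream()
s.push({"user": "alice", "count": 1})  # first push sends everything
s.push({"user": "alice", "count": 2})  # later pushes send only the delta
```

The second `push` transmits only `{"count": 2}`; the unchanged `"user"` value never crosses the wire again.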

For example, if a SQL query reruns, Electric for loops will diff the collection and stream only the individual deltas to the client (row added, row removed). Values that the client already has are never resent; they are transparently cached. Much of this benefit comes from the fine-grained control that a reactive language makes possible. All of this already works.
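A toy illustration of the diffing idea in Python (Electric does this at the language level; `diff_rows` is a hypothetical stand-in that assumes rows are keyed by an `id` field):

```python
# Hypothetical sketch of collection diffing, not Electric's implementation.
def diff_rows(prev, curr):
    """Compare two query results keyed by id; return only the deltas."""
    prev_by_id = {row["id"]: row for row in prev}
    curr_by_id = {row["id"]: row for row in curr}
    deltas = []
    for rid, row in curr_by_id.items():
        if rid not in prev_by_id:
            deltas.append(("add", row))
        elif row != prev_by_id[rid]:
            deltas.append(("change", row))
    for rid, row in prev_by_id.items():
        if rid not in curr_by_id:
            deltas.append(("remove", row))
    return deltas

prev = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
curr = [{"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
# Only two deltas cross the wire; unchanged row 2 is never resent.
print(diff_rows(prev, curr))
```

Re-running the query produces two deltas (add row 3, remove row 1) instead of re-shipping the whole result set.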

Caching is already pervasive and inherent to the reactive model. In continuous-time dataflow programming, virtually everything is memoized: each intermediate s-expr result is transparently memoized, enabling what we call “work-skipping”. During reactive updates we skip every computation whose inputs haven’t changed, and network values that haven’t changed never need to be resent.
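Work-skipping can be sketched in a few lines of Python, assuming a pull-based toy model (Electric is push-based and continuous-time, so this demonstrates only the skip itself, not the real engine; the `Cell`/`Input` names are made up for the sketch):

```python
# Toy work-skipping cell: recompute only when sampled inputs change.
class Input:
    def __init__(self, v): self.value = v
    def set(self, v): self.value = v
    def sample(self): return self.value

class Cell:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self.last_args, self.value = None, None
        self.computations = 0  # instrumentation only

    def sample(self):
        args = tuple(i.sample() for i in self.inputs)
        if args != self.last_args:          # inputs changed?
            self.last_args = args
            self.value = self.fn(*args)     # recompute
            self.computations += 1
        return self.value                   # otherwise reuse the memo

a, b = Input(1), Input(2)
total = Cell(lambda x, y: x + y, a, b)
total.sample()
total.sample()                   # inputs unchanged: computation skipped
assert total.computations == 1
b.set(5)
total.sample()                   # input changed: recompute exactly once
assert total.computations == 2
```

Every intermediate expression in the DAG behaves like `total` here: unchanged inputs mean zero recomputation and nothing to send.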

I think fine granularity is a huge win: it is far too hard to manually orchestrate thousands of individual point updates, which is why 2010-era systems send the same huge JSON payloads over the network again and again, leading to massive waste. Reactive programming solves that!

Advanced optimizations are also enabled by the compiler. The DAG contains everything there is to know about the flow of data through the system, so the network planner can choose to send certain things further in advance than it does today. There is a large body of compilers research on producing hardware-optimal machine code, and much of that work can apply here. Stay tuned!

It’s hard for me to say more without a concrete performance problem to look at. We’re going to learn a lot in coming months.
