Okay, I see. The library tests I was looking for are in the source files. However, I'll still maintain that the Electric components are hard to test.
I can see a lot of benefits to Electric: Missionary supersedes core.async, and Electric provides both a VM and a declarative language for data flow on top of it. I'm trying to understand the limitations.
Correct me if I’m wrong. Here is my current understanding of some of the library’s weaknesses:
There is no frontend caching as yet (or maybe, as @xificurC said, pull it down once and save it somewhere?). I haven't seen examples of using client-side stores, only streaming examples.
It's great for one-off SPAs, but integration may be an issue (I haven't seen any examples connecting to a router or integrating with another framework).
The fetching strategies are completely controlled by the VM. Caching is pretty important, so the half dozen (or more) strategies related to data synchronisation are (strategy C is sketched in code after this list):
A. fetch from remote
B. fetch from cache
C. fetch from cache, but also fetch from remote and update asynchronously (stale-while-revalidate)
D. listen for an asynchronous event and update
E. fetch only updated data
F. fetch from remote but do not update the cache
G. mutate data, updating the remote first, then the cache
H. mutate data, updating the cache first, then the remote
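For concreteness, here's a minimal hand-rolled sketch of strategy C (stale-while-revalidate) in ClojureScript; `fetch-remote!` and `on-update` are hypothetical callbacks, and none of this is Electric API:

```clojure
;; Hand-rolled sketch of strategy C (stale-while-revalidate).
;; `fetch-remote!` and `on-update` are hypothetical; this is not Electric API.
(defonce !cache (atom {}))

(defn fetch-with-revalidate!
  "Serve the cached value for `k` immediately (if any), then fetch fresh
  data and push it to the UI when it arrives."
  [k fetch-remote! on-update]
  (when-some [cached (get @!cache k)]
    (on-update cached))                      ; B: serve the stale value right away
  (-> (fetch-remote! k)                      ; A: fetch from remote (js/Promise)
      (.then (fn [fresh]
               (swap! !cache assoc k fresh)  ; refresh the cache
               (on-update fresh)))))         ; asynchronously update the view
```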
My understanding is that each strategy would have to be customised on a component-by-component basis (which is not good for maintenance).
I'm excited to try this, but I worry that a lack of optimistic updates on the client may result in an undesirable experience, i.e. laggy components. I tried one of y'all's demos a few months ago and iirc there might have been some lag on one of them. But that's the only super obvious contention I can think of, not yet being familiar with most of it.
IMO this idea is as pivotal as, say, React. Big-idea stuff.
One thing though: I'm a "fat client" fan. I like the idea of pushing as much responsibility to the client as possible. I think I'm partially motivated by a future where clients can go mostly serverless and pure P2P. And in some situations you might be able to mitigate some scaling issues by pushing work to the client. I get that's not a goal here, but in general I think this will be most exciting for folks who are big fans of thin-client, fat-server architectures. I guess a middle ground here might be to 'browserify' the e/server ns, perhaps running it in a webworker. Then folks could have the benefits of unified logic without having to buy into a thin-client framework permanently. That being said, I don't think a fat-client option is necessary for Electric's adoption and success.
Anyway, wrt the optimistic updates thing: if that were in place I think I'd have a hard time not using this, especially for internal apps. So I'd be curious whether that's on the roadmap.
Edit: actually, I think it was an htmx demo where I experienced the lagginess. But still.
Thanks! As far as I'm aware, nobody has reported a real-world lag problem (though we've barely started work on improving the network planner, so expect it to get a lot faster in the coming months).
Optimistic updates – form controls (inputs and such) are already optimistic. Do you expect server-side relational queries to update optimistically on the client? Or are you looking for a web-after-tomorrow architecture where queries run both client-side and server-side? Or something else?
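For reference, here's roughly what "inputs are already optimistic" means in practice: a minimal sketch in Electric v2 style, where `save!` is a hypothetical persistence function.

```clojure
;; Rough sketch (Electric v2 style). The DOM input echoes keystrokes locally
;; and immediately -- that's the optimistic part -- while each change is
;; persisted server-side in the background. `save!` is a placeholder.
(e/defn OptimisticInput []
  (e/client
    (dom/input
      (dom/on "input"
        (e/fn [e]
          (let [v (.. e -target -value)]      ; value is already on screen
            (e/server (save! v))))))))        ; persist without blocking typing
```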
This is awesome! Thank you for making Electric public.
Two questions:
What approach do you take for routing with Electric apps?
Could an approach like this in theory be used with P2P web apps (e.g. ones using, say, WebRTC)? In this case the client would still be the client, and I guess the server would be the peer/peers?
You need to get the data model/synchronisation right first, i.e. the "web-after-tomorrow" architecture @dustingetz hinted at, otherwise you're going to be in a whole lot of pain, even with a compiled network.
What approach do you take for routing with Electric apps? We have a goog.history integration, a user-contributed reitit integration, and an experimental composable "tree router" which implements IAtom for storing component state in the URL and supports nesting & recursion. See the Datomic browser demo.
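Not our actual integration, but the general shape of the goog.history approach is roughly this; it's a sketch, with `Home` and `Todos` as placeholder Electric components (v2 calling convention):

```clojure
;; Sketch of the general goog.history shape, not the actual integration.
;; `Home` and `Todos` are placeholder Electric components (v2 style).
(ns app.router
  (:require [goog.events :as gevents]
            [hyperfiddle.electric :as e]
            [hyperfiddle.electric-dom2 :as dom])
  (:import [goog History]
           [goog.history EventType]))

(defonce !route (atom ""))

(defonce history
  (doto (History.)
    (gevents/listen EventType/NAVIGATE
                    (fn [e] (reset! !route (.-token e))))
    (.setEnabled true)))

(e/defn Router []
  (e/client
    (let [route (e/watch !route)]             ; recomputes when the URL changes
      (case route
        ""      (Home.)
        "todos" (Todos.)
        (dom/div (dom/text "not found"))))))
```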
Is Electric theoretically suitable for P2P web apps, say over WebRTC? Yes, theoretically; it's not currently supported, and we would want to understand the concrete use case before saying more.
It was more an idea than an actual need. I messed around with WebRTC at a game jam a few years ago for a P2P game and was wondering whether you could compile the network code for P2P and what that would even look like. Whether you would need to, or even want to, is another matter.
As @zcaudate1 pointed out, it's probably easier with some sort of data sync across nodes, although in our experience with games it was much easier, for latency reasons, to elect one of the peer nodes as the "leader" and send a very small subset of the data (closer to an event stream) to and from each peer, rather than keeping all the state in sync.
So I guess I was thinking of a context where the e/server part of the code runs on the leader peer, so for the leader that would be local, and for the follower peers that would be over the network.
Anyway, in practice client/server is much simpler and more practical when it comes to real-world apps. So there's no real concrete need/use case outside of weird hobby projects.
Cool! "Compiling the network" here basically means automatically translating a program's lexical/dynamic scope access patterns into a publisher/subscriber protocol. The trick is collapsing out request waterfalls, making it dynamic, and chunking things into the optimal granularity; this is where the network planner comes in.
Re. the leader pattern: agreed. I'd imagine something like Cloudflare Durable Objects to store shared real-time state. (Not P2P, but not fully centralized either; the state lives at the edge.) Haven't looked hard at that yet, but it looks interesting.
Fwiw, the e/client and e/server markers are arbitrary and configurable; support for more than N=2 sites is on the roadmap. The 3-site case will be immediately valuable for reaching microservices/APIs under user control, to which we can install an Electric "sidecar", giving Electric apps efficient zero-api access to a whitelisted set of functions in the microservice.
I'd highly recommend http://gun.eco for any P2P stuff, though it may not be fast enough for the types of games you are doing. It does a LOT of heavy lifting and should look quite familiar to Datomic users.
I had seriously considered gundb whilst building out statstrade but ended up going with postgres/sqlite instead, as relational queries and standard infrastructure were more important than the syncing aspect in this case. Syncing is really hard when there are permissions involved. v1 used Hasura, but that was dropped in favour of raw SQL for speed.
It’d be great to understand a bit more about the underlying primitives. Also, are there plans to add caching?
Re. caching - you're right to be concerned about application performance, and you're right that caching is a way to speed up SPAs that fetch a lot of server data, especially repeat requests for similar data. You're saying that full control over data sync is important for getting acceptable performance at scale, customizing it component by component if necessary. Have I understood the question properly?
We don't have all the answers yet. The reactive network model is different from the RPC request/response model and has completely different performance characteristics and knobs to tune. Electric Clojure's network updates are fine-grained streams, not RPC, and as such it does not "fetch" anything (the server intelligently streams instead). There are no JSON payloads; the network is fine-grained and streams individual scope values at the tightest possible update granularity. The server understands what the client already has and will not re-push values that have not changed.
For example, if a SQL query reruns, Electric for loops will diff the collection and stream only the individual deltas to the client (row added, row removed). Values the client already has are never resent; they are transparently cached. Much of this benefit comes from the fine-grained control that a reactive language makes possible. All of this already works.
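A sketch of that shape in Electric v2 style, where `!conn` and `todo-records` stand in for your own data layer:

```clojure
;; Sketch (Electric v2 style). When `db` changes the query reruns, e/for-by
;; diffs the result by :db/id, and only per-row deltas cross the wire.
;; `!conn` and `todo-records` stand in for your own data layer.
(e/defn TodoList []
  (e/server
    (let [db (e/watch !conn)]                 ; rerun the query on every change
      (e/client
        (dom/ul
          (e/server
            (e/for-by :db/id [{:keys [todo/text]} (todo-records db)]
              (e/client
                (dom/li (dom/text text))))))))))
```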
Caching is already pervasive and inherent to the reactive model. In continuous-time dataflow programming, virtually everything is memoized: each intermediate s-expr result is transparently cached, enabling what we call "work-skipping". During reactive updates we skip every computation whose inputs haven't changed, and network values that haven't changed don't need to be resent.
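A toy illustration of work-skipping, assuming hypothetical `!a` and `!b` atoms and a `slow-transform` function:

```clojure
;; Toy illustration of work-skipping (sketch; `slow-transform`, `!a` and
;; `!b` are hypothetical). Updates to !b leave (slow-transform a) untouched:
;; its input didn't change, so it is never recomputed or resent.
(e/defn WorkSkipping []
  (e/client
    (let [a (e/watch !a)
          b (e/watch !b)]
      (dom/text (str (slow-transform a) " / " b)))))
```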
I think fine granularity is a huge win: it is far too hard to manually orchestrate thousands of different point updates, which is why 2010-era systems send the same huge JSON payloads over the network again and again, leading to massive waste. Reactive programming solves that!
Advanced optimizations are also enabled by the compiler. The DAG contains everything there is to know about the flow of data through the system, so the network planner can choose to send certain things further in advance than it does today. There is a large body of compilers research on producing hardware-optimal machine code, and much of that work can apply here. Stay tuned!
It’s hard for me to say more without a concrete performance problem to look at. We’re going to learn a lot in coming months.
Oh wow. By that description, Electric is much more thought out than I had first anticipated. I've been complaining for a while now that no one in the Clojure community is doing compiler stuff, so it's fantastic to know that there are teams doing this type of work.
I guess my only question mark is that you're putting a lot of pressure on the server to store changes for each client - fantastic for back-office apps, but I'm not so sure about user-facing apps.
Having said that, are you using a relational model on the UI as well? I can't imagine doing that sort of fine-grained stuff without some sort of reconciliation model on the client side.
Also… the tab-sharing stuff that @John_Newman has done might be useful for limiting the number of client connections on the browser side.
I'm hopeful that compilers and ChatGPT can solve some of the technical pain points with statstrade in the future (right now it's other things). We have more than 150 different tables (most are pretty useless, but at least 30 or so are pretty important) and it's an absolute pain to generate interfaces: on one hand it's super repetitive, but also not, because every table is different and there could be about 10 or so views on the same table depending on function. It's super frustrating.
I'm pretty sure this isn't something exclusive to us, and a lot of platforms are in worse situations, so best of luck to you and your team.
Well, I didn't implement cross-tab "shared workers" (that would be a good addition), but my cljs-thread lib does have a service worker implementation you could extend to use as a proxy, multiplexing multiple clients in any tab through the service worker.
e/for-by is an Electric for loop, and it indeed diffs the query result (the list of todo records). The :db/id is the "react key" used to stabilize the DOM, so that over time, as the collection changes, the same DOM nodes can be reused. When there is a client/server transfer inside the body of an Electric for loop (which here there is, down-stack in the TodoItem Electric function), the deltas (insert/remove/update/move) are sent over the wire, resulting in fine-grained network traffic and fine-grained DOM writes.
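Roughly this shape, as a sketch in v2 style with `query-todos` standing in for the actual query:

```clojure
;; Sketch of the shape described above (Electric v2 style); `query-todos`
;; is a placeholder for the actual query.
(e/defn TodoItem [record]
  (e/client
    (dom/div
      ;; one value crosses the wire here, and only when it changes
      (dom/text (e/server (:todo/description record))))))

(e/defn TodoList [db]
  (e/server
    (e/for-by :db/id [record (query-todos db)] ; :db/id stabilizes the DOM rows
      (TodoItem. record))))
```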
To be clear, with Electric you bring your own data layer, we manage reactive network sync only. (Unless your database, say Datascript, was coded in Electric … )