React diffs the current virtual DOM against its predecessor and figures out which parts of the actual DOM need to be updated, delivering good performance. Why do ClojureScript React wrappers take additional measures to optimise this further? An example is Reagent atoms, which seem to selectively watch parts of the state. Is it because the performance gain of the virtual DOM was found inadequate in real-world applications?
Don’t be misled: the virtual DOM can be fast with optimizations, but that doesn’t mean the virtual DOM itself is fast. Diffing the tree is O(n) in the number of elements, and with 10^3 or more elements in a page it can be quite slow if you have a lot of components to diff. Creating 10^3 elements over and over again isn’t cheap either. That’s why we still need optimizations to reduce rendering and diffing.
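To make the O(n) point concrete, here’s a minimal sketch in plain JavaScript (not React’s actual algorithm — the node shape and patch types are made up for illustration): the diff visits every node of both trees once, so the cost grows with tree size even when almost nothing changed.

```javascript
// Naive virtual DOM diff: walks both trees in one pass and collects
// patches. Linear in the number of nodes -- cheap per node, but not
// free when the page holds thousands of elements.
function diff(oldNode, newNode, path = []) {
  const patches = [];
  if (oldNode === undefined) {
    patches.push({ type: "CREATE", path, node: newNode });
  } else if (newNode === undefined) {
    patches.push({ type: "REMOVE", path });
  } else if (oldNode.tag !== newNode.tag || oldNode.text !== newNode.text) {
    patches.push({ type: "REPLACE", path, node: newNode });
  } else {
    const oldKids = oldNode.children || [];
    const newKids = newNode.children || [];
    const len = Math.max(oldKids.length, newKids.length);
    for (let i = 0; i < len; i++) {
      patches.push(...diff(oldKids[i], newKids[i], path.concat(i)));
    }
  }
  return patches;
}

const before = { tag: "ul", children: [{ tag: "li", text: "one" }, { tag: "li", text: "two" }] };
const after  = { tag: "ul", children: [{ tag: "li", text: "one" }, { tag: "li", text: "2" }] };
console.log(diff(before, after)); // one REPLACE patch at path [1]
```

Even though only one list item changed, every node was visited — which is why wrappers try to avoid even entering the diff for subtrees whose inputs didn’t change.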
Thanks, that makes sense!
It seems to me that reagent’s optimizations are more about its own state management than any perceived slowness in react itself.
Consider the common case where a single atom contains all program state. Without some kind of optimization, you’d end up running the render function for every single component whenever any part of the program state changes. Setting aside whether react’s reconciliation algorithm is fast, you’ve already done a lot of pointless work to produce the new virtual dom before you even get to reconciliation.
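The idea of skipping that pointless work can be sketched in plain JavaScript (hedged: this is the general pattern, not Reagent’s implementation — `createStore`, `subscribe`, and `swap` are illustrative names): each component registers a selector over the single state map, and its render only runs when its selected slice actually changed.

```javascript
// Each subscriber watches one slice of the single state map via a
// selector; a render fires only when that slice's identity changes.
function createStore(initial) {
  let state = initial;
  const subs = [];
  return {
    subscribe(selector, render) {
      subs.push({ selector, render, last: selector(state) });
    },
    swap(fn) {
      state = fn(state);
      for (const s of subs) {
        const next = s.selector(state);
        if (next !== s.last) { // cheap identity check, as with immutable values
          s.last = next;
          s.render(next);
        }
      }
    },
  };
}

const store = createStore({ user: { name: "Ada" }, counter: 0 });
let userRenders = 0;
store.subscribe((s) => s.user, () => userRenders++);

store.swap((s) => ({ ...s, counter: s.counter + 1 })); // user slice untouched
store.swap((s) => ({ ...s, user: { name: "Grace" } })); // user slice changed
console.log(userRenders); // 1
```

The counter update never reached the user component — no render function ran, so no virtual DOM was produced and nothing was handed to reconciliation.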
Reagent also batches component updates until the next animation frame, which effectively debounces state changes. Again, it seems to me this is more about reagent’s state mechanism than react itself.
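The batching idea looks roughly like this in plain JavaScript (a sketch of the general technique, not Reagent’s code; the frame queue is simulated so it runs outside a browser): repeated state changes within one tick coalesce into a single render on the next frame.

```javascript
// Coalesce many render requests into one per animation frame.
function makeScheduler(render, requestFrame) {
  let scheduled = false;
  return function scheduleRender() {
    if (scheduled) return; // already queued for this frame
    scheduled = true;
    requestFrame(() => {
      scheduled = false;
      render();
    });
  };
}

// Simulated frame queue standing in for requestAnimationFrame.
const frameQueue = [];
const fakeRAF = (cb) => frameQueue.push(cb);

let renders = 0;
const schedule = makeScheduler(() => renders++, fakeRAF);
schedule();
schedule();
schedule(); // three state changes before the frame fires...
frameQueue.splice(0).forEach((cb) => cb()); // ...the frame fires once
console.log(renders); // 1
```

In a browser you’d pass `requestAnimationFrame` itself as `requestFrame`; the effect is that a burst of atom swaps costs one render, not one per swap.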
I think it was also very in-vogue for ClojureScript React wrappers to implement async rendering as a way to differentiate. It’s a performance enhancement that isn’t really felt unless you’re building a certain class of application, although as @jmlsf said, reagent’s state management system did make it more attractive.
Luckily, async rendering is coming in React.js core so hopefully we can start removing bits of it from our wrappers.
Reagent atoms effectively provide cursors, an idea originally copied from Om, so that you can write a component that depends on a specific piece of data that can be part of a much larger program state – the component only needs the cursor to that data, and therefore does not need to know about the entire program state. This makes components more reusable. It wasn’t about performance.
Caveat: this is all based on my memory of early work on Reagent, because I’m the one that originally created Reagent cursors based on Om, and I believe that was just folded directly into Reagent’s atoms. Maybe someone more familiar with Reagent’s history since my involvement can confirm or deny that!
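The cursor idea can be sketched in plain JavaScript (illustrative only — `cursor`, `deref`, and `reset` are names borrowed loosely from the Clojure world, and the helpers are minimal stand-ins for `get-in`/`assoc-in`): a component reads and writes through one path into the larger state and never sees the rest.

```javascript
// A cursor into one path of a larger immutable-ish state map.
function cursor(stateRef, path) {
  const getIn = (m, p) => p.reduce((acc, k) => acc && acc[k], m);
  const assocIn = (m, p, v) =>
    p.length === 0
      ? v
      : { ...m, [p[0]]: assocIn(m[p[0]] || {}, p.slice(1), v) };
  return {
    deref: () => getIn(stateRef.value, path),
    reset: (v) => { stateRef.value = assocIn(stateRef.value, path, v); },
  };
}

const appState = {
  value: { ui: { theme: "dark" }, docs: { 1: { title: "Hi" } } },
};
const themeCursor = cursor(appState, ["ui", "theme"]);

console.log(themeCursor.deref()); // "dark"
themeCursor.reset("light"); // writes back through the path
console.log(appState.value.ui.theme); // "light"
console.log(appState.value.docs[1].title); // untouched: "Hi"
```

A theme-picker component written against `themeCursor` knows nothing about `docs`, which is exactly the reusability point made above.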
I’m glad that there is some disagreement on the topic, it shows that it’s not immediately obvious.
It seems that cursors are a recurring theme for all React wrappers, which in turn seem to have a big overlap with the idea of lenses. I’m a bit surprised that a common lens lib hasn’t emerged that could be shared by all frameworks. Also, I wonder whether Specter has a place here as another way to transform large and deeply nested maps.
Expanding on what @jmlsf touched on, sooner or later more granular data dependencies are always required to keep functional views performant. Many approaches predating the single state atom never run into this, because state is already split up (e.g. into Flux stores) according to dependency.
Single state systems like Vuex or the modern Angular introduce granular dependency-tracking via different flavours of observables, which works but also puts a barrier between you and the underlying values and often leads to hierarchical modelling with many redundancies. Cursors or lenses are another way of re-introducing granularity, usually with interfaces much closer to values, but still…
Another approach is to declare data dependencies via queries that can be resolved by something like DataScript. The immutable nature of a DataScript value together with the queries contain enough information to provide components with pure values, while still being able to check for change efficiently.
With additional techniques (e.g. the Rete algorithm, or incrementalized query engines) we’ll hopefully get to a best of both worlds one day!
A few libraries came out trying to generalize the lens thing. You tend to only need it over a “one true atom” though, so most libs that use that pattern (reagent/om) have a default implementation. Perhaps if someone blogged some examples on how the idiom could be used more generally, that’d help adoption.
I like how posh uses the DataScript transaction log (and information on the queries themselves) to decide whether it needs to re-render the component, but I’m a bit unclear on whether there is a way to do the same with rum+DataScript (I would expect it to be possible, since those two libs are both written by Nikita).
I’m considering using that idiom in a desktop (Swing) app, so it would be useful if that existed as a separate lib. It shouldn’t be too hard to extract/reimplement…
That can indeed be done by manually calling `rum/mount` inside of a DataScript connection listener, after checking the new transaction for relevant novelty. I don’t know what niceties posh offers, but it is then also possible to query the tx-data using the DataScript query engine, or query across both the novelty and the current db snapshot, etc…
In theory this gives you capabilities similar to a rule engine (e.g. clara-rules), although simply matching on attributes might not be efficient enough for more complex rules.
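The listener-plus-novelty-check pattern can be sketched in plain JavaScript (hedged: no real DataScript here — the datom shape `[entity, attribute, value]` mirrors DataScript’s tx-data, but the attribute names and `makeListener` are made up): the listener only re-mounts when the transaction touched an attribute the component cares about.

```javascript
// A transaction listener that filters reported datoms by watched
// attributes before triggering a re-mount. This is the simple
// attribute-matching case; complex rules would need a real query
// engine or something Rete-like.
function makeListener(watchedAttrs, remount) {
  return function onTransaction(txData) {
    // txData: array of [entity, attribute, value] datoms
    const relevant = txData.some(([, attr]) => watchedAttrs.has(attr));
    if (relevant) remount();
  };
}

let mounts = 0;
const listener = makeListener(
  new Set([":todo/title", ":todo/done"]),
  () => mounts++
);

listener([[1, ":user/name", "Ada"]]);         // irrelevant novelty, no re-mount
listener([[2, ":todo/title", "Write docs"]]); // relevant, re-mount once
console.log(mounts); // 1
```

In the real setup, `remount` would be the manual `rum/mount` call mentioned above, and the check could instead run a full query across the novelty and the current db snapshot.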
Personally, I’d actually like to see the opposite happen. Reagent only uses a small part of React, and that could be implemented natively in ClojureScript. There is actually an issue open with some discussion around that https://github.com/reagent-project/reagent/issues/271
Ideally, Reagent would have a protocol around the parts that are currently leveraging React, and you could swap in alternative implementations if you wanted to.
I don’t think leaving React is a good idea: you lose all the interop and tooling, which are constantly improving. React does a very good job at VDOM diffing and has a huge ecosystem of components, libraries, and tools available; by moving away you lose all of it.
I’m advocating making React pluggable, as opposed to leaving it. I can see value in having a native ClojureScript VDOM optimized for Reagent. This would also allow plugging in libraries like Preact or Vue.js.
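A pluggable backend could look something like this in plain JavaScript (purely hypothetical — this is not Reagent’s actual protocol, and every name here is invented for the sketch): the wrapper talks only to a small renderer interface, and anything implementing that interface can be swapped in.

```javascript
// The wrapper depends on a tiny renderer interface rather than on
// React directly; any backend implementing `mount` can be plugged in.
function makeApp(renderer) {
  return {
    render(view, container) {
      renderer.mount(view, container);
    },
  };
}

// A toy backend that records mounts instead of touching a DOM,
// just to show the shape an alternative implementation would take.
const loggingBackend = {
  name: "logger",
  mount(view, container) {
    container.log.push(`mounted ${view.tag}`);
  },
};

const container = { log: [] };
makeApp(loggingBackend).render({ tag: "div" }, container);
console.log(container.log); // ["mounted div"]
```

A React backend, a Preact backend, or a native CLJS VDOM would each just be another object satisfying the same interface.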
I agree with @wilkerlucio that the value of React is in the ecosystem, not in the library itself. There also isn’t much you can optimize with regard to ClojureScript. Sure, the API could be a bit friendlier, but everything is very easily wrapped, so it’s not a problem at all. It feels dirty, but re-implementing everything just to get a cleaner API is hardly worth it.
Preact you can switch in for React today since it’s API-compatible, but you’ll miss out on some of the latest React additions as well as the coming async stuff. React definitely has the most momentum as far as a single library is concerned.
Vue.js is growing as well, but it’s much more of a full framework, with an underlying setup that isn’t quite CLJS-friendly. I don’t see that ever becoming a big thing in CLJS circles.
As I said, you could still use React if it were pluggable; there really wouldn’t be any downside that I can see in that regard.
At the same time, React is a big library that does a lot of things that aren’t needed by Reagent. Personally, I think there is value in developing a native ClojureScript ecosystem that isn’t fundamentally dependent on React. This would allow the ClojureScript ecosystem to move at its own pace, and not be dependent on the direction React takes.
I think that Elm demonstrates this approach quite well.
I agree with @Yogthos. We use rum, which does server-side rendering on the JVM (so no React at all), and we use a single React component. We would be happy to switch to a pure CLJS VDOM implementation if it provided real gains. At the same time, React doesn’t bother us at all for the moment.