When is immutability fast?

A group of us have been digging into the latest React changes, and we ended up having a bunch of questions about how best to optimize React renders now using Hooks and immutable data.

I decided to write up some of the things I’ve learned so far in a new blog post, When is immutability fast?

The TL;DR: If you can assume that two object references being different implies the data is different, it’s real fast!

The post also digs into some surface-level optimizations of React components, and some of the thinking around how to locate and optimize hot paths.

4 Likes

My understanding is that a lot of speed has been gained in the past from such UI frameworks solely based upon the observation that if two object references are equal, then they are “deep equal”, without having to do the “deep” check. If those two objects are deeply nested object descriptions, you know that none of it has changed, and can make use of that knowledge when deciding what needs to be re-rendered.

If two object references are not equal, then those immutable values might be “deep equal”, or they might not. You do not know until you check.

Even if the object references are different, it is possible that o1 and o2 are big deeply-nested Clojure maps, and the only thing that changed from o1 to o2 is the value of key :foo, whereas the values of :bar and :baz (which are also themselves deeply nested immutable maps) have not been changed in any way. So the values of (:bar o1) and (:bar o2) will have reference equality, and are thus guaranteed to also be deep equal, and need not be updated. Similarly for (:baz o1) and (:baz o2). (:foo o1) and (:foo o2) will not be reference equal, and thus may or may not be deep equal, so you have to keep going deeper to find out which parts of them are equal, or which have changed.
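For concreteness, here is a tiny REPL sketch of that situation (the o1/o2 names are just illustrative; structural sharing is what carries the unchanged subtrees across):

(def o1 {:foo {:a 1 :b 2}
         :bar {:deeply {:nested "map"}}
         :baz {:also {:nested "map"}}})

;; only the value under :foo changes; :bar and :baz are carried over as-is
(def o2 (assoc-in o1 [:foo :a] 42))

(identical? (:bar o1) (:bar o2)) ;=> true, a fast yes: guaranteed deep equal
(identical? (:baz o1) (:baz o2)) ;=> true
(identical? (:foo o1) (:foo o2)) ;=> false, you have to look deeper to decide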

If the UI component for o1 and o2 is such that you have to re-render all of it whenever anything changed, then yeah, the fact that parts of it can quickly be determined to be deep equal doesn’t help you. But if o1 and o2 are such that re-rendering some parts, and leaving other parts un-re-rendered, saves lots of time, then you are still getting a big performance gain.

2 Likes

Haha, props to @alexmiller for the fast yes/fast no term, not me :slight_smile:

2 Likes

Interesting read! I have a question. Why can’t we keep a global content hash for our immutable data structure? Are the suitable hash functions too slow?

1 Like

A small lesson I’ve learned from testing these kinds of performance issues is to always test advanced-compiled ClojureScript code – performance tests in development can be very misleading. (This would go hand-in-hand with not perf-testing development builds of React.)

It would be nice if we could set up some benchmarks to get a sense for the real-world implications. I don’t currently have a good intuition for how long the React render/diff/commit phases take for variously-sized components.

1 Like

I agree. I would like to set up a number of benchmarks and test cases for React and various wrapper libraries, including hx. I want to understand better how certain decisions might affect people’s applications.

Unfortunately I am not very experienced with creating benchmarks, so it’s been slow to turn that from a want to a real thing. :smiley:

Regarding this question “Why can’t we keep a global content hash for our immutable data structure? Are the suitable hash functions too slow?”

Clojure immutable collections do keep a content hash, lazily: it is not always calculated, but if you do call (hash x) on a collection x, it is calculated, and then a cached copy of the value is saved in the Java data structure representing the immutable collection (so yes, that bit of the Java object is mutable, and updated if you call (hash x), but that does not change the value equality of the Clojure collection in any way). The hash values of sub-collections of a larger collection are also cached, independently.
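You can observe the caching at the REPL with something like this (timings are machine-dependent and only illustrative):

(def big-vec (vec (range 1000000)))

(time (hash big-vec)) ; first call walks the collection to compute the hash
(time (hash big-vec)) ; second call just returns the cached value, much faster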

One return question I have for you: why do you ask about keeping a global content hash? What use case do you have in mind for one? Note that these hash values are different than the object references.

For object references, if they are equal (very fast pointer equality check), then the collections are guaranteed deep equal. Otherwise, they may or may not be deep equal. You have to traverse to find out.

For collection hashes, if they are equal, then you do not know whether the objects are deep equal or not. Maybe they are, or maybe their contents are such that they happen to have the same hash value. If the hash values are not equal, you are guaranteed that at least somewhere in the collection, something is different. It could still be that large subcollections are deep equal, and you could find that out by doing reference equality comparisons between those large subcollections. Again, my understanding is that if you give up at the top level and say “oh, they are not deep equal”, you might be giving up on large performance gains in UI updating if you don’t at least look for some large subcollections that are reference equal, and thus deep equal.

Don’t think I have an exact answer.

  • I wasn’t aware that Clojure data structures cached the hash value. Which sort of solves my question.
  • I’ve been trying to compare Clojure’s data structures and the IPFS’ merkle tree, and I’m not quite sure how similar and/or different the approaches are.

I’d appreciate any insight here! I feel like both represent immutability, but for different purposes. Also, I’m wondering what a cryptographic hash means compared to just a hash. More questions in every direction!

A crypto hash is designed to be slow to compute and very hard to crack, i.e., you shouldn’t be able to find any pattern in the hash generated from the input, or any way to reverse it.

A normal hash is designed to be fast to compute and to avoid collisions as much as possible, but not to the extent that crypto hashes do.

1 Like

(Quick, dirty) testing seems to imply that content-addressing hash functions such as SHA-256 are about a factor of 100 slower than the Clojure hash function:

;; Built-in Clojure hash functions
(time (hash "stuff"))
"Elapsed time: 0.039033 msecs"

;; using [digest "1.4.8"]
(time (digest/md5 "stuff"))
"Elapsed time: 1.415241 msecs"

;; I think Git and IPFS use sha256 for addressing
(time (digest/sha-256 "stuff"))
"Elapsed time: 1.884376 msecs"

(time (digest/sha-512 "stuff"))
"Elapsed time: 2.658791 msecs"

When using a hash function designed to be computationally intensive, the difference is even bigger: a factor of 10,000 when comparing the built-in hash function with the first example from the buddy-hashers package.

;; from [buddy/buddy-hashers "1.3.0"] 
(require '[buddy.hashers :as crypto])

(time (crypto/derive "stuff"))
;; => "bcrypt+sha512$af4ba316e65dcaad729ac85075592860$12$6a7bcb6911c772c73f64f8a7553e39070bdbd6337f5213c1"
"Elapsed time: 423.114087 msecs"

Note that derive from the last example creates a different result on each call: it generates a random salt and hashes salt + password (“stuff”). This protects against rainbow table attacks. Crypto is interesting!
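To make that concrete: the salt is embedded in the derived string, which is how verification still works even though every derive call returns a different value. A small sketch (assuming buddy.hashers/check, which verifies a plaintext against a previously derived hash):

(def h1 (crypto/derive "stuff"))
(def h2 (crypto/derive "stuff"))

(= h1 h2)                 ;=> false, different random salts
(crypto/check "stuff" h1) ;=> true, the salt is read back out of h1
(crypto/check "stuff" h2) ;=> true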


My current understanding:

  • Crypto hashes are designed to avoid collisions, so that you’re not able to produce two inputs that give the same hash, even if you wanted to.
  • Clojure’s built-in hash may produce collisions, but the hash table implementations handle those collisions themselves (a quick sketch below).
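An illustration of that second point on the JVM (the "Aa"/"BB" pair is the classic Java hashCode collision, and Clojure’s string hash is, as far as I know, derived from it):

(= (hash "Aa") (hash "BB"))  ;=> true, a hash collision
(get {"Aa" 1, "BB" 2} "BB")  ;=> 2, lookup is still correct: the hash only
                             ;   narrows the search, equality decides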

Well, in my understanding it’s not so much about collisions as it is about the hashes having to appear random.

They’re not GUIDs; collisions are going to occur, because the output range is restricted and the domain of inputs can be bigger than that of outputs. But if two inputs were to collide, it shouldn’t be because of an obvious pattern, so you shouldn’t be able to predict the probability of two inputs colliding.

For example, if abcd and abce hash to similar values, you start to notice a pattern: it looks like inputs that are close alphabetically produce related hashes, and maybe the beginning of the hash is the same:

abcd -> shdhs23
abce -> shdhs24

This would be a terrible crypto hash. You could quickly build a probabilistic model and reduce the input range based on observations of the output.

An output like shdhs35 most probably came from an input starting with abc, so now I have far fewer combinations left to try when cracking the password.

Also, they need to be slow, so that the cost of brute-forcing them is high and infeasible, though still fast enough not to prevent their use case. That’s why there is a range of crypto hashes from faster to slower, but in terms of security, slower is better.

Now generally, for the output to appear random, it also has to create an even/uniform distribution. That will inevitably also mean that collisions are rare. But it’s about more than just fewer collisions: for the probability distribution to be uniform, each new hash can be any value in the output range, with equal probability every time. Consider:

a -> 1
b -> 2
c -> 3
d -> 1
e -> 2
f -> 3

Let’s say we hash from ASCII characters to ints from 1 to 3. This avoids collisions as much as possible, but the output doesn’t look random at all; you can predict the hash from the input. So even though it is perfect at minimizing collisions, it’s still a weak crypto hash.

Disclaimer: I’m not actually a security engineer, just a senior software engineer with enough security know-how to design secure services and software. So I might be wrong.

You’re talking about Clojure’s hash function now? Or how to create a crypto hash?

From man sha1sum, man sha256sum, man sha512sum and Java’s UUID documentation, I get the following:

Algorithm    Range
Java UUID    128 bits
SHA1         160 bits
SHA256       256 bits
SHA512       512 bits

No security background here either, I’m just interested.

I mean for crypto hashes, not Clojure’s.

1 Like

I do not know enough about merkle trees to compare them to Clojure’s implementation.

I do know that Java’s .hashCode() method returns 32-bit integers, and so does Clojure’s hash function on immutable collections. Collisions on keys in hash maps, and on set elements in hash sets, are handled by putting the colliding elements in a linked list and scanning through it in linear time if you search for an element with the same hash value.

When the result is only 32 bits in size, then according to the Birthday Paradox, if you generate a random collection of sqrt(2^32) = 2^16 things, there is close to a 50% chance that at least two of those 2^16 = 65,536 values have the same 32-bit hash. That is if the hash functions are as evenly distributed as you can make them.

Thus the Java and Clojure built-in hash functions are made for speed more than they are for cryptographic strength. Cryptographic strength wouldn’t buy you much when the result is only 32 bits in size.
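For anyone who wants to sanity-check that estimate, the standard birthday-paradox approximation is easy to compute at the REPL:

;; P(at least one collision) ≈ 1 - e^(-n(n-1) / 2N) for n values in a space of size N
(defn collision-probability [n space-size]
  (- 1.0 (Math/exp (- (/ (* n (dec n)) (* 2.0 space-size))))))

(collision-probability (Math/pow 2 16) (Math/pow 2 32))
;;=> roughly 0.39, i.e. on the order of a coin flip for 2^16 values in a 32-bit space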

1 Like

@lilactown for benchmarks I don’t know, but for test cases I would start by just replicating the tests of rum.

1 Like

Blog link is down so didn’t read

If you can assume that two object references being different implies the data is different, it’s real fast!

Andy touched on this already: the problem with React.js performance is that this convenient memoization case only happens in nice little trees. But real app state is not nice. Throw in a bunch of ordered collections (not trees), and a single insert into the middle of a collection, or a single little edit to a record inside a collection, breaks equality on the whole collection, and now anything downstream from the entire collection has to recompute: all derived state, all vdom. And then it has to be diffed.

If your state is a graph (Dustin’s rule of UI programming: any sufficiently complicated UI contains an ad-hoc, informally specified, bug-ridden, slow implementation of a graph), it’s basically game over. Any trivial edit anywhere in the graph means you need to re-compute (poll) all queries of the graph.

And in UI, any computation is probably synchronous, and now your UI thread is blocked for seconds at a time.

Reagent reactions don’t solve this (Reagent state is trees). The solution is an incremental computation framework. Slow is smooth and smooth is fast. This is the thing after React.js and Clojure/Script is well positioned to solve it.

1 Like

In your opinion, does re-frame or datomic address this?

All traversals of a graph where you visit each node only once are trees, though. Wouldn’t that apply to rendering? Thus rendering is always on a tree and never a graph, even if the components self-reference like a graph.

Edit: Or forests to be more precise.

Datomic and DataScript are not incremental or reactive, they are value-based and you poll them when you want updates. So if your UI renders N collections and each collection is sorted and filtered, even a single datom added can impact the sort and the filter of every single query and they all have to be checked (e.g. refresh the page/data)

React.js / Reagent best case here is going to re-render at the collection level, and then N-1 rows will memoize out with the identical? check, but an O(n) computation was unavoidable because the collection sort order changed or the item count changed. So for a large item count, N=100k, the view render will look at each of the 100k items of the collection, then run shouldComponentUpdate 100k times, and it will return false 99,999 times and React will issue one DOM effect. This computation is synchronous (because your render functions are synchronous), so it blocks the UI thread, takes longer than 16ms, and drops animation frames.
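To see why the O(n) pass is unavoidable but the per-row memoization still works, here is a rough sketch of the data side of it (names made up for illustration):

(def rows (mapv (fn [i] {:id i :name (str "user" i)}) (range 100000)))

;; one small edit in the middle of the collection
(def rows2 (assoc-in rows [50000 :name] "edited"))

(identical? rows rows2)         ;=> false, so the collection-level memo misses
(identical? (rows 0) (rows2 0)) ;=> true, 99,999 rows still memoize out per-row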

What you want is for a delta like [:db/add 1001 :user/name "Dustin2"] to stream directly to subscribed queries, accumulate into the result collection in O(1) without recomputing the entire collection, and also stream directly into any bound entities and accumulate into the result map in O(1). This is basically the same as database incremental view maintenance. Now your view is basically a streamy reduce: you can send it an infinite stream of deltas which flicker into the view one update at a time, each individual update fits into 16ms, and it is smooth, which means it feels fast.
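A minimal sketch of what "accumulate the delta in O(1)" could look like (a made-up helper, not an existing library; a real incremental engine also has to maintain sort order, aggregates, and the query subscriptions themselves):

(defn apply-delta
  "Apply a single datom-style delta to a cached entity index without
   re-running any query over the whole database."
  [entity-index [op e a v]]
  (case op
    :db/add     (assoc-in entity-index [e a] v)
    :db/retract (update entity-index e dissoc a)))

(apply-delta {1001 {:user/name "Dustin"}}
             [:db/add 1001 :user/name "Dustin2"])
;;=> {1001 {:user/name "Dustin2"}}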

This technology doesn’t exist yet in Clojure/Script. You need new incremental datastructures and the interface will be different than clojure/core datastructures. OCaml has it. The PureScript guys (John De Goes’s team) tried to build it circa 2016 and afaik that effort failed.

https://blog.janestreet.com/incrementality-and-the-web/

2 Likes

This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.