In.mesh: one step closer to threads on the web šŸŽ†

Announcing in.mesh

Iā€™m pleased to announce an early sneak peek at in.mesh, a webworker library for Clojurescript. Yā€™all know how I like to release libraries on holidays, and itā€™s the 4th of July, yā€™all! :sparkler: :fireworks: :tada: :smile:

This library extracts out the non-SharedArrayBuffer (SAB) bits of tau.alpha. The security story around SABs was uncertain for a while - browser vendors settled on requiring new headers (COOP/COEP) on your server, unfortunately hamstringing their usability in some deployment situations - but with that settled, I dusted off the olā€™ repo and got another version working, with a demo running here: https://simultaneous.netlify.app/

That all depends on the magic of SABs though. Unlocking their full potential in Clojurescript will require more work around SAB-backed persistent data structures - itā€™s a longer term effort.

in.mesh just focuses on webworker communication and allows you to spawn workers programmatically. At the moment it still requires that you create separate builds - one for the main thread, another for the main worker, and maybe more, depending on what kind of repl usability you want - but Iā€™m working on simplifying it as much as possible. My ideal is getting to a place where you can follow the default quickstart tutorials for regular Clojurescript, shadow-cljs and figwheel, add the in.mesh lib, and start programmatically creating webworkers without having to deal with extra configuration. A stretch goal is to allow you to build a library on top of in.mesh such that your downstream consumers can also use your library without having to fuss with build configurations - something also requested previously here on Clojureverse.

Thatā€™s still under development, but in.mesh provides two other innovations that are worth looking into presently: spawning webworkers into a mesh and communicating between them using the in macro - thus the ā€œinā€ dot ā€œmeshā€.

The mesh

By default, in.mesh starts with a :root node. From there, you can spawn others:

(def s1 (mesh/spawn {:x 1}))

s1 ;=> "id/d37f262d-1566-45ad-9904-da671bb0cc9c"

You can also spawn workers with an explicit, more meaningful name:

(def s2 (mesh/spawn {:id ::s2 :x 2}))
s2 ;=> :in-mesh.figwheel.root/s2

Note: spawns immediately return a new ID, but the worker creation is asynchronous. Chaining work synchronously on a worker after creation wonā€™t be available until we can get blocking constructs ported back into in.mesh.

We now have two branch workers, spawned off of the root worker. In Calva, with the current Figwheel configuration, you can switch between the main thread (screen), root and branch builds.

If we require figwheel.replā€™s tools, we can then view the different webworkers connected to a given build:

(conns)
;Will Eval On:  Rosie
;Session Name     Age URL
;Rosie             0m /figwheel-connect
;Tanner            5m /figwheel-connect
;=> nil

Letā€™s switch to Tanner:

(focus "Tanner")
;"Focused On: Tanner"

Letā€™s see Tannerā€™s peers:

(keys @mesh/peers)
;=> ("root" "s2")

Hmm, Rosie (our ::s2 worker) is showing up as "s2" rather than ::s2. Thatā€™s a bug.

Bottom line: every time you spawn a worker, it automatically creates a connection to all other workers, forming a fully connected mesh. This makes communication between workers fairly transparent and low ceremony.
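Once the in macro (covered below) is available, you can verify that connectivity from any node. A quick illustrative check - the exact peer keys will depend on the IDs you chose:

(in "s2" (println :peers (keys @mesh/peers)))
;; from s2ā€™s side, this should list :root and the other spawned worker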

The in macro

Letā€™s send a message to ā€œs2ā€:

(in "s2" (println :hi :from :s2))
;:hi :from :s2

(in "s2" (println :my-id-locally-is (:id mesh/init-data)))
;:my-id-locally-is :in-mesh.figwheel.root/s2

The symbol mesh/init-data is being resolved on the ā€œs2ā€ side.

mesh/init-data
;=> {:id "id/64f6525e-4974-43fb-9ad5-6876a0b8ee00", :x 1}

Notice our ID is no longer :root because we switched to Tanner.

Local binding conveyance

The in macro will also automatically convey local bindings across the invocation:

(let [some-name :sally]
  (in "s2" (println :hi some-name)))
;:hi sally

Even locally bound functions get conveyed:

(let [some-name :sally
      fix-name (fn [x] (keyword x))]
  (in "s2" (println :hi (fix-name some-name))))
;:hi :sally

Not everything works, obviously (some things canā€™t cross the serialization boundary), and only locally bound variables get conveyed - not everything in the top-level scope:

(def some-name :sally)
(let [fix-name (fn [x] (keyword x))]
  (in "s2" (println :hi (fix-name some-name))))
;:hi nil

Note: In this configuration, :branch nodes share the :root build config, so saving the file (and thus redeploying the code for the saved namespace) will result in branch nodes also having some-name defined within the same namespace - so sometimes it will just work, without any conveyance. Iā€™m executing all of these forms in a comment block at the moment, so that you can see how conveyances are scoped in communication between build-separated clients.

Bound variables within functions also work:

(defn print-in [id some-name]
  (let [double-name #(str (name %) "-" (name %))]
    (in id (println :hi (double-name some-name)))))

(print-in "s2" :bob)
;:hi bob-bob

For situations where the automatic binding conveyance isnā€™t doing the trick, you can convey bindings manually:

(def some-name :sally)
(let [fix-name (fn [x] (keyword x))]
  (in "s2"
      [fix-name some-name]
      (println :hi (fix-name some-name))))
;:hi :sally

You canā€™t currently mix the two techniques together though - patches welcome.

Now letā€™s do some in chaining:

(def bob (mesh/spawn {:id "bob"}))
;repl.cljc:371 REPL eval error TypeError: Cannot read properties of null (reading 'postMessage')

Ah crap, thatā€™s a bug - we route all spawning activities to the :root node and their IDs are getting turned into strings, so the spawn function is broken for non-root nodes at the moment. Should be an easy fix. For now, letā€™s switch back to the root build (by clicking root in the build list) before creating new workers.


Okay, now letā€™s try our in chaining:

(def another-name :bill)
(in ::s2
    [another-name]
    (println :hi another-name :from (:id mesh/init-data))
    (in "bob"
        (println :hi another-name :from (:id mesh/init-data))))
;:hi bill :from :in-mesh.figwheel.root/s2
;:hi bill :from bob

Here we created a new value :bill; we conveyed that value to ::s2 and printed that value and the local ID; then we implicitly conveyed that value to "bob", printed it and then printed bobā€™s local ID.

In prior iterations of tau.alpha, I implemented more complex mechanisms like an executor service and an implementation of Clojureā€™s agents on top, all using this in chaining formalism. Itā€™s a lot easier to reason about the flow of data between workers when you donā€™t have to construct a new RPC handler for every possible kind of message and logic you want to use.

Anyway, thereā€™s more to delve into but thatā€™s probably a good intro for now. Again, my hope is that we can boil the build configurations down to the simplest possible thing, perhaps eliminating manual configuration altogether. The repo contains /figwheel, /shadow and /cljs example project folders and the Figwheel one is currently the most usable, but the Shadow one should be fixed up soon.

And I hope in.mesh can develop into a solid base for building higher level constructs on top of, including tau 2.0.

Finally, in the spirit of Independence Day, Iā€™d like to celebrate the freedom that Clojureā€™s power and simplicity allows for - I couldnā€™t imagine hacking together these kinds of tools in a language that doesnā€™t give you the ability to redefine itself. So thanks to everyone involved, cheers!!! :tada: :clojure: :clojurescript: :fireworks: :smiley:


Hey folks, quick update on this - good news! I got blocking semantics working (using a service worker hack). Not ready to release the update yet - hopefully tomorrow or in a few days - but I wanted to get an update out.

Blocking semantics

future

@(future (println :blah) (+ 1 2))
;:blah
;=> 3

You can yield as well:

@(future (println :blah) (yield 4) (+ 1 2))
;:blah
;=> 4

On the main thread, future returns a promise:

(-> (future (println :blah) (+ 1 2))
    (.then #(println :res %)))
;:blah
;:res 3

Where yield is especially useful:

(-> (future (-> (js/fetch "http://api.open-notify.org/iss-now.json")
                (.then #(.json %))
                (.then #(yield (js->clj % :keywordize-keys true)))))
    (.then #(println "ISS Position:" (:iss_position %))))
;ISS Position: {:latitude 46.5746, :longitude 3.4638}
;=> #object[in-mesh.core "e7e5a816-b530-4ddc-b389-d6b6713605af" {:status :pending, :val nil}]

Because it returns a promise on the main thread, you can use promesa or use your usual promise tricks:

(-> (js/Promise.all
     #js [(future 1)
          (future 2)
          (future 3)])
    (.then #(println :values (vec %))))
;:values [1 2 3]
;=> #object[Promise [object Promise]]
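You could express the same thing with promesa, for example. A small sketch, assuming promesa is added as a dependency - it drives the same js/Promise machinery shown above:

(require '[promesa.core :as p])

(-> (p/all [(future 1) (future 2) (future 3)])
    (p/then #(println :values (vec %))))
;:values [1 2 3]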

Of course, in a webworker, you can just use synchronous blocking semantics, returning the actual values:

(let [a @(future 1)
      b @(future 2)
      c @(future 3)]
  (println :values [a b c])
  [a b c])
;:values [1 2 3]
;=> [1 2 3]

injest: Auto-parallelizing transducification, now in CLJS

So, with that stuff in place, I was able to port the parallel thread operator =>> from injest to Clojurescript:

(defn flip [n]
  (apply comp (take n (cycle [inc dec]))))

(->> (range 1000000)
     (map (flip 100))
     (filter odd?)
     (map (flip 100))
     (map inc)
     (map (flip 100))
     (apply +)
     (println)
     time)
;"Elapsed time: 11452.300000 msecs"
;=> 250000500000

Thatā€™s the vanilla ->> operator. With any significant work, the auto-transducifying thread macro x>> doesnā€™t win you much, because you spend much more time on actual work than you do on boxing:

(x>> (range 1000000)
     (map (flip 100))
     (filter odd?)
     (map (flip 100))
     (map inc)
     (map (flip 100))
     (apply +)
     (println)
     time)
;"Elapsed time: 10722.200000 msecs"
;=> 250000500000

This is where the auto-transducifying, auto-parallelizing thread macro comes in handy:

(=>> (range 1000000)
     (map (flip 100))
     (filter odd?)
     (map (flip 100))
     (map inc)
     (map (flip 100))
     (apply +)
     (println)
     time)
;"Elapsed time: 5615.500000 msecs"
;=> 250000500000

On the main thread, =>> returns a promise. Here, weā€™re moving the println into the .then:

(-> (=>> (range 1000000)
         (map (flip 100))
         (filter odd?)
         (map (flip 100))
         (map inc)
         (map (flip 100))
         (apply +)
         time)
    (.then #(println :res %)))
;"Elapsed time: 5780.800000 msecs"
;:res 250000500000
;=> #object[in-mesh.core "516360f7-7593-4fba-bbc2-8c1a94cf4c6f" {:status :pending, :val nil}]

Coming soon

So thatā€™s pretty fascinating. Thereā€™s still some polish I have to add around the API; some resiliency around the worker pools; some better error handling; automatic transfer of transferables; nailing down simpler build configurations across all three main build systems (cljs built-in, figwheel and shadow); porting it to node and nbb/sci; and maybe one day a version of =>> that can work completely async, without the service worker hack.

But all in all, Iā€™m pretty satisfied with the result, doubling performance over non-parallel versions in the browser, even while weā€™re serializing everything across the workers. For some workloads, you can see 3 to 5 times the performance, but Iā€™m not going to get into a shootout in this post - more later on the metrics. Interestingly, once weā€™re automatically transferring the transferables, parallelizing work on Typed Arrays across worker pools with =>>, I think we will see speedups on par with what we see on the JVM.

Anyway, more to come - should have another beta out soon. Happy hacking!


Quick update on this:

The first official alpha release of inmesh is out: net.clojars.john/inmesh {:mvn/version "0.1.0-alpha.1"} - ā€œOne step closer to threads on the webā€ (GitHub: johnmn3/inmesh).

Now spawn, in, and future all provide:

  • simple implicit binding conveyance as well as explicit binding conveyance
  • derefability (synchronous results) in workers and returning promises in the main thread
  • yield for converting async functions into synchronous ones

For now, you have to declare some configuration details in an ns loaded by the screen/main thread. By the beta releases though, Iā€™d like to provide some automation so that you only need to declare configurations for advanced scenarios.

In general, spawn can take a half-second of overhead to launch, depending on loaded libs.

in usually takes 4 or 5 milliseconds of overhead for small invocations.

future usually takes 8 to 10 milliseconds of overhead for small invocations.
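If you want to eyeball those numbers yourself, a rough, unscientific check looks something like the following. Run it from inside a worker, where deref blocks; the worker ids here are just placeholders from the earlier examples and the actual figures will vary by machine and build:

(in "s2"
    (time @(in "root" :ping))   ; overhead of a small in round trip
    (time @(future :ping)))     ; overhead of a small future round trip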

Also, inmesh now comes with a wrapper for js/IndexedDB, providing a synchronous db-get call, similar to the synchronous API of js/localStorage. js/localStorage isnā€™t available in webworkers, so having this synchronous interface is super convenient for libs and frameworks that expect synchronous access to storage.
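For a sense of the shape of that API, hereā€™s a hypothetical usage sketch - the namespace alias and the db-set counterpart shown here are assumptions for illustration, not the published API:

(require '[inmesh.db :as db]) ; hypothetical namespace
(db/db-set :user/settings {:theme :dark}) ; assumed setter counterpart to db-get
(db/db-get :user/settings)
;=> {:theme :dark}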

As such, the repo now also provides a standard dashboard demo built on re-frame/mui-v5/comp.el/sync-IndexedDB, with application logic (reg-subs & reg-event functions) moved into a webworker, so that only React rendering is handled on the screen thread, allowing for buttery-smooth components backed by large data transformations in the workers.


Issues and PRs are welcome. Itā€™s a pretty early alpha and the APIs are subject to change. Might provide a pmap before the beta. Happy hacking!


@John_Newman this is really cool.

  • Do you have a complete cljs environment running in the web worker?
  • What is the protocol you are using for sending commands/receiving data?
  • Does it work on Safari?

Thanks @zcaudate!

Yeah, itā€™s a pure cljs artifact with a :webworker build target. The screen first launches a service worker node (:sw) and then a :root node; the :root node then launches a :db node and the :core node. The example dashboard uses the same :core artifact for all the workers except for the :sw node, so they all have the same compiled libraries. You can optionally target the :root, :db and worker pool nodes at a thinner artifact that doesnā€™t have your :core libs, but then you canā€™t use your libs in your future and =>> calls or in the workers you spawn. It depends on what kind of work you intend on getting done. Pure CLJS is available in all of them.

Protocol wise, itā€™s all just strings - stringifying everything and edn/read-string on the other side, then calling js/eval on the read-in functions, wrapped in some JavaScript to hoist the functions. It works in :none and :advanced, as long as the artifacts match on both sides of the ā€œwire.ā€ I tried transit a few years ago but didnā€™t see a huge benefit over just using strings, at least for just postMessage. Most messages end up being pretty small. Advanced-compiled functions end up being a pretty well minified bytecode over the wire.
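The rough shape of that round trip, outside the library (illustrative only - the real wire format and eval wrapping are more involved):

(require '[cljs.reader :as reader])

(def msg (pr-str {:op :invoke :args [1 2 3]})) ; sender: print data/forms to a string
(reader/read-string msg)                        ; receiver: read it back before dispatch/eval
;=> {:op :invoke, :args [1 2 3]}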

Yes, it does work in Safari. Took a lot of banging! :sweat_smile: Try opening the dashboard in Safari: https://johnmn3.github.io/inmesh/ and then either toggle the drawer or toggle the theme, then reload the page. The settings will be restored from IndexedDB by the :db worker, then the :core worker will ship them to the :screen thread and youā€™ll see your last setting restored. That demo is advanced compiled. We could pull the settings out of IndexedDB directly on first paint on the main thread, for a little faster data loading, but itā€™s pretty fast as is. And the whole point is when youā€™re working with many megabytes of data and want to offload the processing from the main thread, doing complex projections off of some normalized data set.

Before the js/eval stuff, it uses rpc calls dispatched via a defmethod. Thatā€™s only used for distributing ports to construct the mesh. Once the mesh is constructed, it then bootstraps the in macro that allows for js/evaling over the wire, and everything else after that is just invocation through the in macro.
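Conceptually, that bootstrap dispatch has a shape something like this (an illustrative sketch, not the libraryā€™s actual handlers or message keys):

(defonce peers (atom {})) ; stand-in for the peers atom shown earlier

(defmulti handle-rpc :op)

(defmethod handle-rpc :add-port ; hypothetical op name
  [{:keys [peer-id port]}]
  (swap! peers assoc peer-id port)) ; register the received MessagePort under the peer's id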

Edit: Oh, and in Safari all spawn calls are proxied to the screen because it doesnā€™t yet support nested workers. In Chrome and Firefox :root handles all spawns.

Wow. It is fast and quite compact too.

  • Whatā€™s the advantage of using ServiceWorkers over WebWorkers?
  • Are you sharing the mesh between tabs, or are you relying on IndexedDB for that?
  • How hard is it to debug this?

ā€“

Say you created a new function that wasnā€™t in the service worker - Iā€™m assuming you have to send that function into the worker via postMessage and have it evaled as well? Is that right? So are you building some sort of dependency graph as well, or do all the necessary functions get compiled into the worker file initially and youā€™re just calling them?

The Service Worker is just for being able to do blocking waits. You create a fresh work id and send the work to another worker, then issue a synchronous XHR request (which is still allowed in workers) to a fake url that the SW intercepts. The SW then waits for the other worker to post up the result. When the result comes in, the SW drops it into the response back to the original invoking thread, which wakes up with the result.
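A minimal sketch of the synchronous-XHR leg of that trick (illustrative only - the URL scheme and work-id plumbing here are assumptions, not the libraryā€™s code):

(defn blocking-result [work-id]
  (let [xhr (js/XMLHttpRequest.)]
    ;; the third argument `false` makes the request synchronous, which is still permitted in workers
    (.open xhr "GET" (str "/__fake-sw-endpoint/" work-id) false)
    (.send xhr)
    ;; the service worker intercepts this URL and replies with the result once it has been posted
    (.-responseText xhr)))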

Weā€™re not yet doing multi-tab comms - would need to use shared workers instead. Might be easy, havenā€™t tried yet.

Debugging is a little harder - in some situations, if an error is thrown on a worker pool thread, I catch it and ship it back to the calling thread to be thrown there, but I havenā€™t covered all the bases yet - still a lot of polish to be done there.

OTOH, implementing ciderā€™s debugger for Clojurescript only requires blocking semantics. So, maybe with inmesh, we can finally have a stepping debugger for CLJS in cider and similar IDEs!

Either all the variables one references must be compiled beforehand, available to both sides of the invocation, or one must ship them over via implicit or explicit binding conveyance.

(let [my-fn (fn [x] (+ x 2))]
  @(in :w1 (my-fn 1)))
;=> 3

That will work because the fn weā€™re implicitly shipping is pure cljs, which isnā€™t referencing anything else not known on the other side.

(def my-fn (fn [x] (+ x 2)))
@(in :w1 [my-fn] (my-fn 1))
;=> 3

Here we had to explicitly send my-fn over in the optional explicit conveyance vector. If my-fn is in an ns that is compiled on both sides of the invocation though:

@(in :w1 (my-fn 1))

Will just work.

Yeah, containerisation is the right way to go for the future. Iā€™d love it if that were the default for runtimes where you can just start and stop as needed.


Is this part of a product or a proof of concept? How do you see the end use case? The reason why I know a little bit about this stuff is because we wanted to share a web socket connection between tabs and it turned out to be stupidly hard.

Right now itā€™s a proof of concept, but Iā€™m hoping to develop it into a full-fledged open source library that can be used to accomplish hard things more easily in production code. Iā€™ll be dogfooding it on some in-house projects. Others are encouraged to try it out and kick the tires. If folks can try it and file issues, itā€™ll get to beta and RC1 faster.


Yep, thatā€™s awesome. Hope it gains traction. If immutable data structures do become standard for js, then this is probably one area that wouldnā€™t be affected that much.


A couple of updates to this lib:

Name change: cljs-thread

The library is now called cljs-thread. More on that below.

Namespace locals conveyance

Before, you couldnā€™t do this:

(def my-fn (fn [x] (+ x 2)))
@(in :w1 (my-fn 1))

Now you can. More in the docs on binding conveyance.

Nested function conveyance

Before, symbols referring to functions could be implicitly conveyed, but if functions were embedded within a data structure, they couldnā€™t be. Now we walk the conveyed params, look for fns, tag them and then rehydrate them on the other side, so you can do this:

(let [x {:v 2 :f +}]
  @(future (+ 1 @(future ((:f x) (:v x) 3)))))

There are still some papercuts here and there, but itā€™s definitely an improvement.
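The walk-and-tag step is conceptually something like the following (a sketch, not the libraryā€™s internals - the tag key and serializing fns via their printed JS source are assumptions):

(require '[clojure.walk :as walk])

(defn tag-fns [params]
  (walk/postwalk
   (fn [x]
     (if (fn? x)
       {:cljs-thread/fn-source (str x)} ; in CLJS, (str f) yields the function's compiled JS source
       x))
   params))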

Delayed results requests

Kind of an impl detail, but now results are not requested unless theyā€™re derefed. Before, for promise scenarios (including on the main thread), a request for the result was automatically kicked off, allowing a .then to have it more immediately. Now, you signal that you want the results asap by derefing the promise:

(-> @(spawn (+ 1 2 3))
    (.then #(println :ephemeral :result %)))

When not derefing, it is implied that you want to do a side-effecting thing, or that youā€™re pushing results across a circuit of the mesh in a forward-only fashion. When constructing multi-node computations, youā€™ll usually want to design them in a forward-flowing manner - like source ā†’ fan out ā†’ sink.

pmap, pcalls, pvalues and sleep

Per their clojuredocs examples from Clojure proper:

;; A function that simulates a long-running process by calling thread/sleep:
(defn long-running-job [n]
  (thread/sleep 1000) ; wait for 1 second
  (+ n 10))

;; Use `doall` to eagerly evaluate `map`, which evaluates lazily by default.

;; With `map`, the total elapsed time is just over 4 seconds:
user=> (time (doall (map long-running-job (range 4))))
"Elapsed time: 4012.500000 msecs"
(10 11 12 13)

;; With `pmap`, the total elapsed time is just over 1 second:
user=> (time (doall (pmap long-running-job (range 4))))
"Elapsed time: 1021.500000 msecs"
(10 11 12 13)

And they all convey local and namespace-scoped symbols:

(let [x 1]
  (pcalls #(long-running-job x) #(long-running-job (inc x))))

(pvalues
 (long-running-job 1)
 (long-running-job 2)
 (long-running-job 3)
 (long-running-job 4)
 (long-running-job 5))

pcalls is a macro here.

Stepping debugger

Thereā€™s now a rudimentary debugger. Setting up keybindings in your IDE makes it pretty usable, though.

=>> binding conveyance

Now, like the other macros, =>> also has binding conveyance.
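For example, a small sketch reusing the flip fn and =>> from earlier in this thread - a local binding used inside the parallel pipeline now gets shipped along automatically:

(let [n 100]
  (=>> (range 100000)
       (map (flip n))
       (apply +)))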

Moving to GR

Iā€™ve always thought the best open source libs tend to be sponsored either by the community or by some company. I tend to work a lot, so I canā€™t in good conscience ask clojurists-together to sponsor my open source work - I donā€™t have as much time to dedicate to open source as Iā€™d like. So, to give this library a better future, Iā€™ve decided to move its ownership under the umbrella of Guaranteed Rate, where I work. Weā€™re a CLJ/CLJS shop making heavy use of community provided tools and libraries, and weā€™re hoping to ā€œgive backā€ a little bit by taking over stewardship of this library as a company.

To commemorate this move of ownership to GR, we had a little renaming ceremony and decided to go with cljs-thread. Nice, simple, to the point - and conveys how it tries to give you an API similar to what youā€™d experience back in Clojure-land.

The moving of the repo hasnā€™t happened yet and we havenā€™t made any other formal announcements yet - Iā€™ll probably clean up the docs and code a bit before moving the repo over - but weā€™re hoping that yā€™all will help us iron out issues and polish it up into something we can all benefit from as we approach a beta release.

Thanks, and happy hacking!

