Project Loom is intended to explore, incubate and deliver Java VM features and APIs built on top of them for the purpose of supporting easy-to-use, high-throughput lightweight concurrency and new programming models on the Java platform. This is accomplished by the addition of the following constructs:
We envision tail-call elimination that pops one or perhaps even an arbitrary number of stack frames at explicitly marked call-sites. It is not the intention of this project to implement automatic tail-call optimization.
It sounds like Loom would change the game and allow Clojure and other JVM languages to implement first-class, full TCO, which isn't possible at the moment. Although I like Rich's points about the other merits of having recur even if full TCO were possible.
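For context on what the lack of full TCO costs today: without tail-call elimination, JVM languages fake constant-stack tail recursion by trampolining, i.e. returning a thunk instead of making the tail call and unwinding the thunks in a loop (this is essentially what Clojure's trampoline does). A minimal Java sketch of the pattern (all the names here are mine, purely for illustration):

```java
import java.util.function.Supplier;

public class Trampoline {
    // A "tail call" either finishes with a value (Done) or returns a thunk (More).
    sealed interface Step<T> permits Done, More {}
    record Done<T>(T value) implements Step<T> {}
    record More<T>(Supplier<Step<T>> next) implements Step<T> {}

    // Driver loop: unwinds the thunks iteratively, in constant stack space.
    static <T> T run(Step<T> step) {
        while (step instanceof More<T> more) step = more.next().get();
        return ((Done<T>) step).value();
    }

    // Written as plain recursion, sum(1_000_000, 0) would blow the stack.
    static Step<Long> sum(long n, long acc) {
        return n == 0 ? new Done<>(acc) : new More<>(() -> sum(n - 1, acc + n));
    }

    public static void main(String[] args) {
        System.out.println(run(sum(1_000_000, 0))); // prints 500000500000
    }
}
```

With Loom-style explicit tail calls, the driver loop and the per-call thunk allocation would presumably disappear: the marked call would simply replace the current stack frame.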
I like the design decision of exposing the low-level construct. It's a sound choice that leaves room for a wide range of possibilities, and it makes me really confident about the future of the JVM.
I think it will be more disruptive for the Java ecosystem than for Clojure, since Clojure can already solve problems of this kind with syntax manipulation. Project Loom's continuations are strictly more powerful, as they're not restricted to a single stack frame, but I'm still not sure how much of a game changer it is.
One other benefit I see is better debugging and monitoring. The main downside of syntax manipulation techniques is the mess they make of stack traces, and that could be a good reason to rely on host platform capabilities instead. This has already been discussed for ClojureScript: Async/Generator functions in CLJS (requesting feedback!)
Fibers
I remain dubious about structured concurrency. I still see it as an imperative solution to an imperative problem, in addition to being quite complicated. I just see more possibilities in the functional path.
Tail calls
That sounds like an orthogonal problem to me, I don’t quite understand why it’s part of the project.
From what I’ve seen so far, it looks like the big win of Fibers is that current code doing synchronous I/O on a Thread will happen to do async I/O when run on a Fiber. So your current MySQL driver will become async, like you have in Go (or better: it will still be synchronous, but you can start a gazillion threads at no cost, so you get easy synchronous coding without callbacks, and can run a huge number of them in parallel). Fibers will even provide a credible Thread object with thread-locals.
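For what it's worth, this is roughly how the idea eventually shipped: the Fiber prototype became "virtual threads" in JDK 21, where a blocking call parks the virtual thread instead of pinning an OS thread. A small sketch of the "gazillion cheap threads" claim, assuming a JDK 21+ runtime (the names below are the final API, not the Fiber-era one):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    // Spawn n tasks that each block briefly. On virtual threads this is cheap:
    // a blocked virtual thread parks rather than holding an OS thread.
    static int runTasks(int n) {
        var done = new AtomicInteger();
        // One new virtual thread per submitted task; close() waits for them all.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10); // "blocks", i.e. parks the virtual thread
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // implicit executor.close(): shutdown + await termination
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000)); // prints 10000
    }
}
```

The same loop with one platform (OS) thread per task would be dramatically more expensive, which is exactly the "synchronous coding at scale" point above.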
The big problem we have today, e.g. with core.async, is that you need async all the way down, because anything blocking will (ahem) block, and it’s not easy to know what is blocking and what isn’t. A database call? Most likely blocking. But, say, logging? Who knows for sure? What if it blocks for one second, but only when it needs to rotate files on Friday night?
That said, from a Clojure point of view, the more concurrency, the more the Clojure way pays off. If it’s expensive to read a synchronised object from 100 threads today, things won’t get better when you have 10,000 or 10,000,000…
I read that statement as saying that the “Clojure way” provides immutability by default, so much of the time you don’t need to read a synchronized object at all. When you do, you have Clojure’s standard concurrency constructs like atom and ref, which have already done a lot of the mechanical bits for you.
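To make the "mechanical bits" concrete: Clojure's atom is, under the hood, a compare-and-set retry loop over an immutable value, which is why uncoordinated reads are free. A rough Java analogue of (swap! a f), assuming java.util.concurrent.atomic (the swap helper is my own sketch, not a real library API):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

public class AtomDemo {
    // Rough analogue of Clojure's (swap! atom f): retry a compare-and-set
    // loop over an immutable value until our update wins the race.
    static <T> T swap(AtomicReference<T> ref, UnaryOperator<T> f) {
        while (true) {
            T oldVal = ref.get();           // read is always cheap, never locks
            T newVal = f.apply(oldVal);     // f must be pure: it may be retried
            if (ref.compareAndSet(oldVal, newVal)) return newVal;
        }
    }

    public static void main(String[] args) {
        var counter = new AtomicReference<>(0);
        swap(counter, n -> n + 1);
        System.out.println(counter.get()); // prints 1
    }
}
```

The important property is the comment on f: because the update function may be retried, it has to be a pure function of the old value, which is precisely the discipline immutability-by-default buys you.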
Ah, okay, but that’s still true for threads. The real benefit, I think, is the change that makes blocking I/O instead perform parking I/O when in the context of a fiber, if they succeed at doing it. It seems Fibers are also stackful, meaning you could park from inside a function call deeper in the stack. I’ve never used stackful coroutines before, so I can’t comment on whether that’s good or bad. @leonoel is right, though: most of this is very imperative overall, and has a lot of similarity with gotos. It’ll be interesting to see how it all evolves.
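On the stackful point: it means a plain blocking call buried several frames deep "just works", with no CPS transform or callback at any level, which is exactly what a single-stack-frame go block can't do. A sketch, assuming the virtual-thread API as it eventually shipped in JDK 21 (Thread.ofVirtual(); the fetch/handler names are hypothetical, for illustration):

```java
public class DeepPark {
    // A blocking call two frames deep in the stack. On a virtual thread,
    // Thread.sleep parks the fiber instead of blocking an OS thread.
    static String fetch() throws InterruptedException {
        Thread.sleep(5); // pretend this is blocking I/O
        return "data";
    }

    static String handler() throws InterruptedException {
        return fetch(); // ordinary nested call: no callback, no code rewrite
    }

    static String runOnVirtualThread() throws InterruptedException {
        var result = new String[1];
        Thread t = Thread.ofVirtual().start(() -> {
            try {
                result[0] = handler();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.join();
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnVirtualThread()); // prints "data"
    }
}
```

Because the whole stack is suspendable, parking can happen anywhere below the entry point, not just at syntactically marked points inside one transformed function body.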