That's cool, and another viable option. I'd like to see a variant like better-loop where you can also recurse to arbitrary locations in the loop.
I created a library which I use in production for 2 projects: GitHub - fmnoise/flow: Functional (and opinionated) errors handling in Clojure
The readme has pretty much everything, but in a few words: it uses instances of ex-info (or any other Throwable) as the error container, while everything else is assumed to be a non-error, and it has some functions as well as a let-like macro for building a flow using that convention. It works perfectly for my needs: zero dependencies, no new abstraction invented (although there's a possibility to do that using the supplied protocol), pretty easy to combine with the traditional exception-throwing approach, and a bit more tricky but also doable to combine with Either-style stuff.
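To make the convention concrete, here is a minimal sketch of the idea described above (a hand-rolled illustration of the convention, not flow's actual API; lookup-user, charge-user and send-receipt are hypothetical):

;; the convention: any Throwable is the error container, everything else is a value
(defn fail? [x] (instance? Throwable x))

;; hypothetical `then`-style helper: pass errors through untouched
(defn then [f x]
  (if (fail? x) x (f x)))

;; a pipeline where the first Throwable short-circuits the remaining steps
(->> {:id 42}
     (then lookup-user)   ; may return (ex-info "not found" {...}) instead of throwing
     (then charge-user)
     (then send-receipt))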
Thanks for this question, it sparked a very interesting discussion. I'm not going to add much to the discussion, but just throw in a small interesting technicality that might leave a bad taste in your mouth:
Several of the solutions offered here feel very monadic and/or "type-heavy" and do not feel idiomatic to me for use with Clojure. Like several posters in this thread, I've also explored the idea of having a pipeline that "understands" success/failure modes and conditionally does the right thing. I made a library of it, years ago, but after using it at work we decided that it just wasn't a nice way to write Clojure in larger-scale applications and we refactored away from it (I linked to it in passing above, but I'm including a more descriptive link this time, and I'll lift some of the readme into this thread so folks don't actually have to click through and read about why I archived it):
Some of Engine's ideas were too abstract to work well, but in particular:
In addition, the core concept of running an "Engine request" through your entire business logic produced monadic code that was very hard to read and non-idiomatic from a Clojure point of view. We've recently rewritten that code to use "native" Clojure to query data sources and access hash maps etc., and to create a simple pipeline of closures to be executed on success. The result is much simpler, more idiomatic code (which could still stand a bit more simplification but is already an improvement on the Engine-based code).
So where we ended up was building a queue of thunks (no-arg functions) to be executed in order on completion of a pipeline of operations. It's stateful and dynamic: we use a dynamic var holding an atom wrapping a (persistent) queue to hold the "execute on success" thunks, and swap! conj to add each thunk. It turned out to be simple and pragmatic, and it allows us to treat the "completion" as a consistent transaction.
The monadic code looks good "in the small" but in complex business logic it just didn't scale well. Similarly, our solution looks terrible "in the small" but turns out to be simple and surprisingly elegant for complex business logic.
I agree, I think the current "problem" is too simple. What we want is a way to describe a flowchart with code (see the rough sketch after this list).
We need to tackle a few more complexities I think:
Conditional branching on more than errors. Sure, an error happens, you want to just short-circuit the whole flow and return an error. But what if it isn't an error, but instead it's a lookup of a config or the computation of some value that dictates the decision of where to go next?
Looping, this is often needed in more advanced use cases of implementing a complex process. And here I mean looping back to prior steps so they are done over again, possibly with more data or changed data going to them at that point.
Handling errors at different levels. It's only a matter of time before you need to actually handle some of the errors you get; you won't always just short-circuit the whole flow and return an error to the user. You'll want to handle the errors, and you might need to handle them at various levels. Imagine a set of transactions where you need to add a Saga pattern to undo the prior changes if any of the later changes failed, for example, and then imagine that once recovered you want to retry the whole thing again in case it was a transient issue.
Actually getting an exception thrown by the runtime or something not under your control, and needing to handle it or short-circuit on it.
Reuse of parts of the flowchart in others. Eventually you'll see that chunks of a given flow exist in other flows, and you might want to share those across the board so changes to all of them can be made more easily, and so they remain consistent.
Composition of the steps into new flows. Ideally, you don't want the way you're doing things to couple the steps together; steps shouldn't have to care what happened before and what's happening after, and they shouldn't need to know anything related to the flow, otherwise they'll be hard to compose and reuse to build other flows. But you also don't want that to make it hard to wire up a new flowchart using your existing steps.
Debugging, it's only a matter of time before you get an error or a bug where things don't work, and you need to figure out where and how things broke down in order to fix it. How easy is it to introspect and identify the point of failure and what caused it?
Idempotency of some steps, or skipping over parts of the flow. When the process involves non-repeatable operations, say sending an email, and you have retries, or errors thrown afterwards and the user retries the whole operation, you don't want two emails sent out. Imagine sending the email is itself 5 steps composed together in a flow; maybe you want to skip over that, or you want to make that sub-flow idempotent on its own, etc.
Maybe we could come up with an actual smaller example that needs all of these? I think that would be a much better demonstration of the real-life challenges that can be encountered when dealing with real control flow in an API implementation which models a business process.
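As a rough, hypothetical sketch of the "describe a flowchart with code" idea mentioned above (step names and logic are made up): a flow can be a plain map of named steps, where each step returns the name of the next step plus updated data, which makes conditional branching and looping back to earlier steps straightforward.

;; each step returns [next-step-name updated-data]; :done terminates the flow
(def checkout-flow
  {:validate (fn [d] (if (:valid? d)
                       [:charge d]
                       [:done (assoc d :error :invalid-order)]))
   :charge   (fn [d]
               (cond
                 (:charged? d)         [:notify d]
                 ;; loop back to an earlier step, carrying changed data
                 (< (:attempts d 0) 3) [:validate (update d :attempts (fnil inc 0))]
                 :else                 [:done (assoc d :error :payment-failed)]))
   :notify   (fn [d] [:done (assoc d :notified? true)])})

(defn run-flow [steps start data]
  (loop [step start, d data]
    (if (= step :done)
      d
      (let [[next d'] ((steps step) d)]
        (recur next d')))))

;; (run-flow checkout-flow :validate {:valid? true :charged? true})
;; => {:valid? true, :charged? true, :notified? true}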
That's actually the approach proposed by Cognitect in GitHub - cognitect-labs/anomalies, and I just built a toolset around it. The idea isn't bad, but to me it's a bit weird that some maps can represent errors while other maps can't. It's basically the same as Either built on maps/records, so an ex-info with data containing the error details feels like a better error container to me. And that's the reason why I archived that repo and switched to flow.
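For comparison, a minimal sketch (the namespaced keys come from cognitect-labs/anomalies; the rest is illustrative):

;; an anomaly is an ordinary map, distinguished only by a namespaced key
{:cognitect.anomalies/category :cognitect.anomalies/not-found
 :cognitect.anomalies/message  "user 42 not found"}

;; vs. an ex-info instance as the error container: the type itself marks it as an error
(ex-info "user 42 not found" {:user-id 42 :reason :not-found})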
I wonder why no one has mentioned interceptors or middleware in this thread. Recently, a colleague did a cool experiment with metosin/sieppari to split a business procedure into a pipeline with short-circuits between some of the steps. Has anyone used interceptors to build something like this?
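Here is a minimal sketch of what that could look like with sieppari (the step logic is made up for illustration, and I'm assuming the usual interceptor convention where attaching a :response to the context in an :enter stage short-circuits the rest of the chain):

(require '[sieppari.core :as sieppari])

(def validate
  {:enter (fn [ctx]
            (if (get-in ctx [:request :valid?])
              ctx
              ;; assumed short-circuit: attaching a :response skips the remaining steps
              (assoc ctx :response {:status 400 :body "invalid order"})))})

(def charge
  {:enter (fn [ctx] (update ctx :request assoc :charged? true))})

(defn handler [request]
  {:status 200 :body (str "charged? " (:charged? request))})

;; (sieppari/execute [validate charge handler] {:valid? true})   ; runs all steps
;; (sieppari/execute [validate charge handler] {:valid? false})  ; short-circuits at validate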
I understand that monadic approaches may not feel idiomatic in Clojure. However, I wonder about promises in ClojureScript? Since we've stepped onto that landscape, we're haunted by monads because lots of functions in JavaScript nowadays return a promise.
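For example (a trivial ClojureScript sketch; render-user and report-error are hypothetical), promise-returning browser APIs push you into exactly this kind of chained, short-circuiting pipeline:

;; .then/.catch chaining is effectively monadic bind over js/Promise
(-> (js/fetch "/api/user/42")            ; returns a js/Promise
    (.then (fn [res] (.json res)))       ; also returns a promise
    (.then (fn [user] (render-user user)))
    (.catch (fn [err] (report-error err))))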
@fmnoise nice library you have out there. Thanks for sharing it. Feel free to add some examples on this gist, if you want.
@seancorfield I'm having difficulties picturing what you're trying to say and what the queue of thunks looks like. At least for me, it's hard to visualize abstract concepts without some examples. That said, I would love to read what the queue solution looks like and what idiomatic Clojure code to solve business-logic pipelines looks like for you. Would you mind sharing some snippets with us here (they don't have to follow the problem proposed in the thread if you don't want to)?
@didibus thanks for sharing the specs of a complex pipeline. I'll try to get a new example diagram covering some of your points and will share it here soon.
I don't have anything outside the context of work code (which is proprietary and can't be shared), but the concept is pretty simple: as you work through the (pure) business logic you (swap! *actions* conj #(some-func :args)) for "stuff" that needs to get done, in order, if the entire threaded pipeline succeeds. Then at the end of the (pure) business logic, if you succeeded, you "commit" those actions by doseq'ing over the queue and calling each thunk.
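A minimal sketch of that pattern as I understand it from the description above (the var and function names are hypothetical):

;; dynamic var holding an atom wrapping a persistent queue of thunks
(def ^:dynamic *actions* nil)

(defn defer! [thunk]
  (swap! *actions* conj thunk))

(defn place-order [order]
  ;; pure decision-making...
  (when (:notify? order)
    (defer! #(send-email! (:email order))))   ; side effect deferred as a thunk
  (defer! #(save-order! order))
  order)

(defn run-pipeline [order]
  (binding [*actions* (atom clojure.lang.PersistentQueue/EMPTY)]
    (let [result (place-order order)]
      ;; "commit": only runs if the pure pipeline succeeded (no exception thrown)
      (doseq [action @*actions*] (action))
      result)))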
For context, this grew out of code that originally used a monadic library that "forced" all external "reads" out to the front and all external "writes" to the end (Engine), so our focus was on lifting all side-effecting code out of the pipeline to the end. That goes back to my comment about the small examples in this thread being far too simple to illustrate the problems in the real world (and why I appreciated @didibus 's post about all the edge cases that need to be considered). Essentially: use the full range of Clojure's control structures, use exceptions, and separate logic from side effects (by pushing the side effects out to the edges, which we also see in imperative shell, functional core).
This sounds very similar to how missionary works. You first set up the graph of tasks to run, and then you execute the graph. Each task is just a function that takes a success and a failure callback, and returns a cancel function (although missionary abstracts this away so you don't deal with the functions directly). If one task fails, the entire graph is cancelled.
I looked at Missionary after I saw it mentioned in this thread and it doesn't appear to be anything like either Engine or what we are now doing at work (and I have to say the example code in the Missionary repo and in the Gist posted here is impenetrable as far as I'm concerned: short, cryptic names that don't tell me what they do, and bizarre syntax).
Alright Sean, I was trying to stay out of this thread but you pulled it out of me. You're right that Missionary needs better documentation and is hard to learn in its current state, so let me elaborate on why it matters. Missionary is a very low-level concurrency primitive for referentially transparent IO actions, RT discrete streams, and RT continuous signals. You might think of missionary as a reactive virtual machine with reactive assembly instructions. Missionary is a competitor to core.async, but with a much more modern functional programming approach that has benefitted from studying advances in functional effect systems, for example Scala's ZIO, as well as reactive systems like Jane Street's Incremental. Missionary is implemented with metaprogramming, not monads, but can express monadic control flow for IO actions, streams, and signals. Missionary has made it clear that pure functional programming and reactive dataflow programming unify; they are the same thing.
You'll be interested in hyperfiddle/photon, a new library from team Hyperfiddle that I teased on Twitter, which is a reactive and distributed dialect of Clojure/Script positioned for web development. Imagine React.js but full stack: incremental view maintenance all the way from database views to DOM views, as one streaming computation.
Photon implements a custom Clojure analyzer in a macro (like core.async) to compile Clojure syntax to an abstract DAG which is then executed in reactive fashion by missionary. Ironically, we implement try/catch syntax on purpose, compiling it back into missionary operations to get reactive try/catch (as well as reactive if, reactive control flow, reactive for, reactive fn, etc.). Photon has distributed, reactive closures. You can intuit this as distributed mapreduce: distributing your reactive program across the client/server distributed system, serializing the dynamic environment and streaming it on the fly. It's really fast, it is way better at web client/server datasync than hand-coded IO, and it composes properly. Distributed client/server expressions where the database flows directly to the view, with the full, undamaged composition power of Lisp.
The problem with try/catch in Clojure is that it interacts terribly with lazy sequences, which are total macro hackery and these hacks break equational reasoning in really unfortunate ways. To fix this, the language itself must be lazy. Which is the case with Photon.
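A minimal illustration of that interaction (plain Clojure, nothing Photon-specific): the exception escapes the try because the lazy sequence is only realized after the catch has gone out of scope.

(defn safe-div-all [xs]
  (try
    (map #(/ 10 %) xs)                       ; returns an unrealized lazy seq
    (catch ArithmeticException _ :caught)))

(safe-div-all [1 0])
;; the divide-by-zero only happens when the seq is realized (e.g. when printed),
;; outside the try, so :caught is never returned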
The syntax takes some getting used to, but once you have used it for a while it makes a lot of sense. Probably similar to the core.async syntax.
I'll try to explain why I think the approaches seem similar (keeping in mind that I'm just a beginner at missionary):
From how I understand your approach, you would have an atom of thunks, *actions*, that gets executed at the end of your business logic. Each thunk is just a no-arg function that gets serially executed at the end of the business logic (via doseq).
For example:
(defn business-logic [actions]
  ;; some pure business logic
  (swap! actions conj #(some-func1 :args))
  ;; some more business logic
  (swap! actions conj #(some-func2 :args)))

(comment
  (business-logic *actions*)        ;; pure
  (doseq [a @*actions*] (a))        ;; side-effectful
  )
The core of what missionary provides is the task abstraction, where a task is just defined as a function that takes a success and a failure callback and returns a no-args cancel fn. Missionary provides an interface on top of this abstraction to make it very simple to compose tasks into a graph via normal Clojure code.
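To make that contract concrete, here is a hand-rolled example of something satisfying it (just an illustration of the contract described above, not how missionary implements its operators):

(defn sleep-task
  "A task that completes with `value` after `ms` milliseconds."
  [ms value]
  (fn [success failure]
    (let [t (Thread. (fn []
                       (try
                         (Thread/sleep ms)
                         (success value)
                         (catch InterruptedException e
                           (failure e)))))]
      (.start t)
      ;; the no-arg cancel fn
      (fn cancel [] (.interrupt t)))))

;; ((sleep-task 1000 :done) #(println "ok" %) #(println "err" %))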
So, for example, using missionary to implement your approach (as far as I understand your approach), I would simply write:
(require '[missionary.core :as m])

(defn business-logic []
  (m/sp
    ;; some pure business logic
    (m/? (m/sp (some-func1 :args)))
    ;; some more business logic
    (m/? (m/sp (some-func2 :args)))))
(comment
  (def task (business-logic)) ;; pure
  (m/? task) ;; execute the task while blocking; side-effectful; cancel using normal thread interruption
  ;; OR, for non-blocking execution:
  (def cancel (task #(println "SUCCESS" %) #(println "ERROR" %))) ;; cancel by calling (cancel)
  )
The difference with the second approach is that your business logic can use the results from tasks (for example, some-func1 might be a task that calls an external API and returns something that you need for subsequent steps).
If you wanted to replicate your approach more fully, you could do:
(defn business-logic [actions]
  ;; some pure business logic
  (swap! actions conj (m/sp (some-func1 :args)))
  ;; some more business logic
  (swap! actions conj (m/sp (some-func2 :args))))
(comment
  (business-logic *actions*) ;; pure
  (def task (apply m/join vector @*actions*)) ;; returns a task that runs all the tasks concurrently (but all on the same thread in this case)
  (m/? task) ;; execute and run the tasks; returns the result of executing each sub-task in a vector
  )