How are clojurians handling control flow on their projects?

My choice has been to use cond-let

  (invalid-input? input)
  (err :invalid-input)

  :let [data (read-data somewhere)]

  (nil? data)
  (err :no-data-at-somewhere)

  :let [ok? (protect-business-rules data)]

  (false? ok?)
  (err :protected-rules-failed)

  :let [data' (transform data)
        ok?   (save-data! data')]

  (not ok?)
  (err :failed-saving-data)

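For readers who haven't seen it, the clauses above assume a `cond` variant that supports `:let` bindings between tests (as in Engelberg/better-cond). Just to make the example self-contained, here is a minimal sketch of such a macro -- not the real library's implementation:

```clojure
;; Minimal sketch of a cond supporting :let clauses, in the spirit of
;; better-cond. Clauses are test/expr pairs; a :let "test" introduces
;; bindings visible to all subsequent clauses.
(defmacro cond-let [& clauses]
  (when (seq clauses)
    (let [[test expr & more] clauses]
      (if (= :let test)
        `(let ~expr (cond-let ~@more))       ; bind, then keep matching
        `(if ~test ~expr (cond-let ~@more))))))
```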

I’m reposting the gist link – thanks @pieterbreed for adding the failjure example :raised_hands:

I’ve created a Gist with a problem example and different implementations for it based on what has been discussed here (it’s a work in progress).

Feel free to leave your comment there as well.


That’s cool, and another viable option. I would like to see a variant that is like better-loop, where you can also recurse to arbitrary locations in the loop.

I don’t understand; please expand. What is this better-loop?

After thinking about this for a second, here is what I pasted in the gist referenced in this part of the thread.

(ns example.user
  (:require [failjure.core :as f])
  (:import (java.util UUID)))

(defn validate-email
  [{:as user-record
    :keys [email]}]
  (if (re-matches #".+@.+\..+" email)
    user-record
    (f/fail ["Invalid e-mail format" {:email email}])))

(defn find-db-user-by-email
  [email]
  ;; Stubbed lookup; the e-mail values below are placeholders.
  (case email
    "taken@example.com" {:id    "df19e9e0-20a4-4aa8-9e89-320a5edc1950"
                         :name  "Alice"
                         :email "taken@example.com"}
    "down@example.com"  (throw (ex-info "Error establishing a connection to the database." {}))
    nil))

(defn validate-not-taken
  [{:as user-record
    :keys [email]}]
  (let [r (find-db-user-by-email email)]
    (if-not r
      user-record
      (f/fail ["Email already in use" {:email email}]))))

(defn save-user!
  [user-record]
  (assoc user-record :id (str (UUID/randomUUID))))

(defn create-user!
  [user-record]
  (try
    (let [result (f/ok-> user-record
                         validate-email
                         validate-not-taken
                         save-user!)]
      (if (f/failed? result)
        {:status 400
         :body   (f/message result)}
        {:status 200
         :body   "OK"}))
    (catch Exception e
      ;; `error` here is a logging fn (e.g. from a logging library)
      (error e "Exception while attempting to create new user")
      (throw e))))

Sorry, I realize I really didn’t explain myself and was imagining something not really obvious.

I am referring to GitHub - Engelberg/better-cond: A version of cond that supports :let clauses, and a number of other conveniences. and was thinking that I’d like something like this that could also let you loop inside it. I’m not sure it makes total sense, but I’m picturing something like:


  :let [valid-data (validate-data input)]

  (nil? valid-data)
  {:result :invalid-data}

  :loop :retry-get-user [user (get-user (:username valid-data)) attempts 3]

  (and (= :error user) (pos? attempts))
  (recur-to :retry-get-user (get-user (:username valid-data)) (dec attempts))

  :loop :retry-get-user-project [project (get-project user) attempts 3]

  (and (= :error project) (pos? attempts))
  (recur-to :retry-get-user-project (get-project user) (dec attempts))

  (nil? project)
  (recur-to :retry-get-user (get-user (:othername valid-data)) 3)

  {:result project})

Hi! I have tried several approaches to this problem and I use the following solution.


I created a library which I use in production on 2 projects: GitHub - fmnoise/flow: Functional (and opinionated) errors handling in Clojure
The readme has pretty much everything, but in a few words: it uses instances of ex-info (or any other Throwable) as the error container, while everything else is assumed to be non-error, and it provides some functions as well as a let-like macro for building a flow using that convention. It works perfectly for my needs: zero dependencies, no new abstractions invented (though there’s a possibility to do that using the supplied protocol), pretty easy to combine with the traditional exception-throwing approach, and a bit trickier but also doable to combine with Either-style stuff.
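In case it helps to see that convention concretely, here is a minimal hand-rolled sketch of the idea -- the `then` helper is an assumption for illustration, not flow's actual API:

```clojure
;; Sketch of the "Throwable = error, everything else = value" convention.
;; `then` is a hypothetical helper, not part of the flow library.
(defn then [value f]
  (if (instance? Throwable value)
    value          ; an error: pass it through untouched
    (f value)))    ; ordinary data: apply the next step

(-> 41
    (then inc)
    (then (fn [_] (ex-info "boom" {:step :validate})))
    (then inc))    ; never runs; the ex-info flows to the end
```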


Thanks for this question, it sparked a very interesting discussion. I’m not going to add much to the discussion, but just throw in a small interesting technicality that might leave a bad taste in your mouth:

user> (def nine 9)
;; => #'user/nine
user> (:pipeline/error nine)
;; => nil
user> (def kw :keyword)
;; => #'user/kw
user> (:pipeline/error kw)
;; => nil
user> (def some-error {:pipeline/error "this didn't work"})
;; => #'user/some-error
user> (:pipeline/error some-error)
;; => "this didn't work"

So you could build pipelines where you check for the presence of a :pipeline/error key, and abort if so.
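A minimal sketch of such a pipeline -- the `run-pipeline` helper is hypothetical, not from any of the libraries mentioned -- threading a map through steps and short-circuiting via `reduced` as soon as the key appears:

```clojure
;; Hypothetical helper: runs steps left to right, aborting on the first
;; step that attaches a :pipeline/error key to the accumulated map.
(defn run-pipeline [init & steps]
  (reduce (fn [acc step]
            (if (:pipeline/error acc)
              (reduced acc)   ; abort: propagate the error unchanged
              (step acc)))
          init
          steps))

(run-pipeline {:n 1}
              #(update % :n inc)
              (fn [_] {:pipeline/error "this didn't work"})
              #(update % :n inc))
;; => {:pipeline/error "this didn't work"}
```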

This is basically what GitHub - fmnoise/anomalies-tools: Anomalies handling tools has done.


I found this thread really interesting and enlightening. I recently took some old code and made it into a library which lives in this space (GitHub - kardan/taxa: An experiment in hierarchical domain logic in Clojure and ClojureScript.). It’s essentially control flow using variants, a registry and some utilities. Some of the projects mentioned here sound really similar.

Several of the solutions offered here feel very monadic and/or “type-heavy” and do not feel idiomatic to me for use with Clojure. Like several posters in this thread, I’ve also explored the idea of having a pipeline that “understands” success/failure modes and conditionally does the right thing. I made a library of it, years ago, but after using it at work we decided that it just wasn’t a nice way to write Clojure in larger-scale applications and we refactored away from it (I linked to it in passing above but I’m including a more descriptive link this time – and I’ll lift some of the readme into this thread so folks don’t actually have to click through and read about why I archived it):

seancorfield/engine: A Clojure library to implement a query → logic → updates workflow, to separate persistence updates from business logic, to improve testing etc.

Some of Engine’s ideas were too abstract to work well but in particular:

In addition, the core concept of running an “Engine request” through your entire business logic produced monadic code that was very hard to read and non-idiomatic, from a Clojure point of view. We’ve recently rewritten that code to use “native” Clojure to query data sources and access hash maps etc, and to create a simple pipeline of closures to be executed on success. The result is much simpler, more idiomatic code (which could still stand a bit more simplification but is already an improvement on Engine-based code).

So where we ended up was building a queue of thunks – no-arg functions – to be executed in order on completion of a pipeline of operations. It’s stateful and dynamic: we use a dynamic var holding an atom wrapping a (persistent) queue to hold the “execute on success” thunks and swap! conj to add each thunk. It turned out to be simple and pragmatic and allows us to treat the “completion” as a consistent transaction.
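Purely as an illustration of the shape described -- the names below are hypothetical, not the actual work code:

```clojure
;; Sketch of the "queue of on-success thunks" approach: a dynamic var
;; holding an atom wrapping a persistent queue; pure logic enqueues
;; side effects, which run only when the whole pipeline succeeds.
(def ^:dynamic *actions* nil)

(defn enqueue! [thunk]
  (swap! *actions* conj thunk))

(defn run-with-actions [business-fn]
  (binding [*actions* (atom clojure.lang.PersistentQueue/EMPTY)]
    (let [result (business-fn)]   ; pure logic; only enqueues thunks
      (doseq [thunk @*actions*]   ; "commit": run thunks in FIFO order
        (thunk))
      result)))
```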

The monadic code looks good “in the small” but in complex business logic it just didn’t scale well. Similarly, our solution looks terrible “in the small” but turns out to be simple and surprisingly elegant for complex business logic.


I agree, I think the current “problem” is too simple. What we want is a way to describe a flowchart with code.

We need to tackle a few more complexities I think:

  1. Conditional branching on more than errors. Sure, an error happens, you want to just short-circuit the whole flow and return an error. But what if it isn’t an error, but instead it’s a lookup of a config or the computation of some value that dictates the decision of where to go next?

  2. Looping: this is often needed in more advanced use cases of implementing a complex process. And here I mean looping back to prior steps so they are done over again, possibly with more data or changed data going to them at that point.

  3. Handling errors at different levels. It’s only a matter of time before you’ll need to actually handle some of the errors you get; you won’t always just short-circuit the whole flow and return an error to the user. You’ll want to handle the errors, and you might need to handle them at various levels. Imagine a set of transactions where you need a Saga pattern to undo the prior changes if any of the later changes failed, for example, and then imagine that once recovered you want to retry the whole thing again in case it was a transient issue.

  4. Actually getting an exception thrown by the runtime or something not under your control, and needing to handle it or short-circuit on it.

  5. Reuse of parts of the flowchart in others. Eventually you’ll see that chunks of a given flow exist in other flows, and you might want to share those across the board so changes to all of them can be made more easily, and so they remain consistent.

  6. Composition of the steps into new flows. Ideally, you don’t want the way you’re doing things to couple the steps together, steps shouldn’t have to care what happened before and what’s happening after, they shouldn’t need to know anything related to the flow, otherwise they’ll be hard to compose and reuse to build other flows. But you also don’t want that to make it hard to wire up a new flowchart using your existing steps.

  7. Debugging: it’s only a matter of time before you get an error or a bug where things don’t work, and you need to figure out where and how things broke down to fix it. How easy is it to introspect and identify the point of failure and what caused it?

  8. Idempotency of some steps, or skipping over parts of the flow. What if the process involves non-repeatable operations, say sending an email, and you have retries, or errors thrown afterwards and the user retries the whole operation, but you don’t want two emails sent out? Imagine sending the email is itself 5 steps composed together in a flow; maybe you want to skip over that, or you want to make that sub-flow idempotent on its own, etc.

Maybe we could come up with an actual smaller example that needs all of these? I think that would be a much better demonstration of the real-life challenges that can be encountered when dealing with real control flow in an API implementation which models a business process.


That’s actually an approach proposed by Cognitect in GitHub - cognitect-labs/anomalies, and I just built a toolset around it. The idea isn’t bad, but to me it’s a bit weird that some maps can represent errors while other maps can’t. It’s basically the same as Either built on maps/records, so ex-info with data containing the error details feels like a much better error container to me. And that’s the reason why I archived that repo and switched to flow.

I wonder why no one has mentioned interceptors or middlewares in this thread. Recently, a colleague did a cool experiment with metosin/sieppari, splitting a business procedure into a pipeline with short-circuits between some of the steps. Has anyone used interceptors to build something similar?
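To make the short-circuiting idea concrete without depending on sieppari's actual API, here is a hand-rolled sketch of the :enter half of an interceptor chain -- the `execute` fn and the context shape are assumptions for illustration, not sieppari's:

```clojure
;; Hand-rolled interceptor chain: each interceptor's :enter fn receives
;; the context map; attaching a :response short-circuits the rest.
(defn execute [interceptors ctx]
  (reduce (fn [ctx {:keys [enter]}]
            (let [ctx' ((or enter identity) ctx)]
              (if (:response ctx')
                (reduced ctx')   ; short-circuit: skip remaining steps
                ctx')))
          ctx
          interceptors))

(execute [{:enter #(assoc % :user :alice)}
          {:enter (fn [ctx] (assoc ctx :response {:status 403}))}
          {:enter #(assoc % :never-reached true)}]
         {:request {}})
;; the third interceptor never runs
```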

I understand that monadic approaches may not feel idiomatic in Clojure. However, what about promises in ClojureScript? Since we’ve stepped into that landscape, we’re haunted by monads, because lots of functions in JavaScript nowadays return a promise.

@jumar check @mjmeintjes’s post on this gist for a simple example using missionary.

@fmnoise nice library you have out there. Thanks for sharing it. Feel free to add some examples on this gist, if you want. :smiley:

@seancorfield I’m having difficulties picturing what you’re trying to say and what the queue of thunks looks like. At least for me, it’s hard to visualize abstract concepts without some examples. That said, I would love to read what the queue solution looks like and what idiomatic Clojure code to solve business-logic pipelines is for you. Would you mind sharing some snippets with us here (they don’t have to follow the problem proposed in the thread if you don’t want to)?

@didibus thanks for sharing the specs of a complex pipeline. I’ll try to get a new example diagram covering some of your points and will share it here soon. :raised_hands:


I don’t have anything outside the context of work code – which is proprietary and can’t be shared – but the concept is pretty simple: as you work through the (pure) business logic you (swap! *actions* conj #(some-func :args)) for “stuff” that needs to get done, in order, if the entire threaded pipeline succeeds. Then at the end of the (pure) business logic, if you succeeded, you “commit” those actions by doseq'ing over the queue and calling each thunk.

For context, this grew out of code that originally used a monadic library that “forced” all external “reads” out to the front and all external “writes” to the end – Engine – so our focus was on lifting all side-effecting code out of the pipeline to the end, which goes back to my comment about the small examples in this thread being far too simple to illustrate the problems in the real world (and why I appreciated @didibus 's post about all the edge cases that need to be considered). Essentially, use the full range of Clojure’s control structures, use exceptions, separate logic from side effects (by pushing the side effects out to the edges – which we also see in imperative shell, functional core).


This sounds very similar to how missionary works. You first set up the graph of tasks to run, and then you execute the graph. Each task is just a function that takes a success and failure callback, and returns a cancel function (although missionary abstracts this away so you don’t deal with the functions directly). If one task fails, the entire graph is cancelled.


I looked at Missionary after I saw it mentioned in this thread, and it doesn’t appear to be anything like either Engine or what we are now doing at work (and I have to say the example code in the Missionary repo and in the Gist posted here is impenetrable as far as I’m concerned: short, cryptic names that don’t tell me what they do, and bizarre syntax).

Alright Sean, I was trying to stay out of this thread but you pulled it out of me. You’re right that Missionary needs better documentation and is hard to learn in its current state, so let me elaborate on how important it is. Missionary is a very low-level concurrency primitive for referentially transparent IO actions, RT discrete streams and RT continuous signals. You might think of missionary as a reactive virtual machine with reactive assembly instructions. Missionary is a competitor to core.async, but with a much more modern functional programming approach that has benefitted from studying advances in functional effect systems (for example Scala’s ZIO) as well as reactive systems like Jane Street’s Incremental. Missionary is implemented with metaprogramming, not monads, but can express monadic control flow for IO actions, streams and signals. Missionary has made it clear that pure functional programming and reactive dataflow programming unify: they are the same thing.

You’ll be interested in hyperfiddle/photon, a new library from team Hyperfiddle that I teased on twitter, which is a reactive and distributed dialect of Clojure/Script positioned for web development. Imagine React.js but full stack: incremental view maintenance all the way from database views to DOM views, as one streaming computation.

Photon implements a custom clojure analyzer in a macro (like core.async) to compile Clojure syntax to an abstract DAG which is then executed in reactive fashion by missionary. Ironically, we implement try/catch syntax on purpose, compiling it back into missionary operations to get reactive try catch (as well as reactive if, reactive control flow, reactive for, reactive fn, etc). Photon has distributed, reactive closures. You can intuit this as distributed mapreduce distributing your reactive program across the client/server distributed system, serializing the dynamic environment and streaming it on the fly. It’s really fast, and it is way better at web client/server datasync than hand-coded IO, and it composes properly. Distributed client/server expressions where the database flows directly to the view, and with the full, undamaged composition power of lisp.

The problem with try/catch in Clojure is that it interacts terribly with lazy sequences, which are total macro hackery and these hacks break equational reasoning in really unfortunate ways. To fix this, the language itself must be lazy. Which is the case with Photon.
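The interaction mentioned is easy to demonstrate: a lazy seq built inside a try/catch is realized later, so the exception is thrown outside the handler. The `safe-divide` fn below is a made-up example:

```clojure
;; The lazy seq escapes the try before any element is realized, so the
;; ArithmeticException is thrown at the caller, not caught here.
(defn safe-divide [xs]
  (try
    (map #(/ 1 %) xs)   ; lazy: no division has happened yet
    (catch ArithmeticException _ :caught)))

(try
  (doall (safe-divide [1 0 2]))   ; realization throws Divide by zero here
  (catch ArithmeticException _ :escaped))
;; => :escaped (the inner catch never fires)
```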


The syntax takes some getting used to, but once you have used it for a while it makes a lot of sense. Probably similar to the core.async syntax.

I’ll try to explain why I think the approaches seem similar (keeping in mind that I’m just a beginner at missionary):

From how I understand your approach, you would have an atom of thunks that will get executed at the end of your business logic - *actions*. Each of the thunks is just a no-arg function that gets serially executed at the end of your business logic (via doseq).

For example:

(defn business-logic [actions]
  ;; some pure business logic
  (swap! actions conj #(some-func1 :args))
  ;; some more business logic
  (swap! actions conj #(some-func2 :args)))

(business-logic *actions*)   ;; pure
(doseq [a @*actions*] (a))   ;; side-effectful

The core of what missionary provides is the task abstraction - where a task is just defined as a function that takes a success and failure callback, and returns a no-args cancel fn. Missionary provides an interface on top of this abstraction to make it very simple to compose tasks into a graph via normal clojure code.
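Under that definition, a task can even be written by hand without the library at all -- the toy `const-task` below is an assumption for illustration, not part of missionary:

```clojure
;; A hand-rolled task per the definition above: a function of a success
;; and a failure callback that returns a no-arg cancel fn.
(defn const-task [v]
  (fn [success _failure]
    (success v)        ; complete immediately with v
    (fn cancel [])))   ; nothing to cancel for an immediate task

;; Running it: supply the two callbacks, keep the cancel fn.
(def cancel ((const-task 42)
             #(println "SUCCESS" %)
             #(println "ERROR" %)))
;; prints SUCCESS 42
```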

So, for example, using missionary to implement your approach (as far as I understand your approach), I would simply add

(defn business-logic []
  (m/sp
    ;; some pure business logic
    (m/? (m/sp (some-func1 :args)))
    ;; some more business logic
    (m/? (m/sp (some-func2 :args)))))

(def task (business-logic)) ;; pure
(m/? task) ;; execute the task while blocking; side-effectful; cancel using normal thread interruption
;; OR, for non-blocking execution
(def cancel (task #(println "SUCCESS" %) #(println "ERROR" %))) ;; cancel by calling (cancel)

The difference with the second approach is that your business logic can use the results from tasks (for example, some-func1 might be a task that calls an external API and returns something that you need for subsequent steps).


If you wanted to replicate your approach more fully, you could do:

(defn business-logic [actions]
  ;; some pure business logic
  (swap! actions conj (m/sp (some-func1 :args)))
  ;; some more business logic
  (swap! actions conj (m/sp (some-func2 :args))))

(business-logic *actions*) ;; pure
(def task (apply m/join vector @*actions*)) ;; returns a task that runs all the tasks concurrently (but all on the same thread in this case)
(m/? task) ;; execute and run the tasks; returns the result of each sub-task in a vector