6 years of professional Clojure

After 6 years of working professionally with Clojure I have gathered my thoughts and poured them out in Nanit’s engineering blog

Would be glad to get your feedback and opinions


Nice write-up and congrats on using Clojure professionally for 6 years!

How long did it take before you felt productive working with Clojure?

How does Nanit onboard new devs who don’t know Clojure? (Curious about how much help devs get and what expectations are for how quickly they’ll contribute.)

Have you also tried YourKit for profiling? Curious to know how it compares with VisualVM.

I stumbled upon a function that received a dictionary argument and found myself spending a lot of time figuring out what keys it holds. Sometimes I had to put a log in our integration environment to see what message it receives and what fields are available for me in that message. Sometimes I would go to the tests for that function and look for the example argument value we used in the tests, but that might not be enough because there might be other fields that exist in that dictionary and are just not being used in the function at the moment, so they might be missing from the test value as well. Sometimes I would look at the function’s call site to understand what argument had been passed and how it was built.

Can this issue be solved by documenting what keys a function’s dictionary argument accepts?

Clojure is far simpler than Haskell. Clojure is simpler than OCaml.
It is simpler than Java and JavaScript. It is simpler than TypeScript.
It is simpler than C and C++ and Rust. Its simplicity rivals that of Python.

Onboarding developers who want to learn Clojure isn’t going to take long.

Documentation might mitigate this, but my issue with documentation is that it tends to rot since it gets out of sync with the code.
A better solution would be testing, so the fixtures are supposed to demonstrate the input, but you will still be missing keys that your current logic does not rely on.
Another solution might be core.typed, but I have never used it on a large code base, so I can’t attest to how usable it is and how it affects the development cycle.
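One middle ground worth mentioning is clojure.spec, where the "documentation" of a map’s keys is checked at runtime and therefore can’t silently rot. A minimal sketch, with a hypothetical ::message shape and made-up key names:

```clojure
(require '[clojure.spec.alpha :as s])

;; Register a spec per key, then describe which keys the map
;; must (:req-un) or may (:opt-un) carry
(s/def ::user-id pos-int?)
(s/def ::email string?)
(s/def ::message (s/keys :req-un [::user-id]
                         :opt-un [::email]))

(defn handle-message
  [message]
  ;; the pre-condition both documents and enforces the expected keys
  {:pre [(s/valid? ::message message)]}
  (str "handling message for user " (:user-id message)))

(handle-message {:user-id 42})        ;; ok
;; (handle-message {:email "x"})      ;; fails the pre-condition
```

The spec still has to be kept up to date, but because it fails loudly when it drifts from reality, it tends to rot far less than a docstring.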

I agree that Clojure is simpler in its nature, but the comparison I referred to in the article is not onboarding engineers new to Clojure vs engineers new to other languages. My point was that if your code base is mostly Python, you have a good chance of finding an engineer who is already proficient with Python, so the onboarding process, at least for language-related topics, almost does not exist. Since the number of experienced Clojure engineers is significantly smaller, the chance they will have to learn the language and adapt to the new ecosystem is larger. So on average, if your team is built on Clojure, you will end up spending more time onboarding new engineers.

An interesting argument to make would be that while you spend more time onboarding on language-related topics, entering a large code base and getting familiar with the existing system is easier due to Clojure’s simplicity and functional nature.

Thank you, I’m glad you liked it!

I don’t remember exactly, but I think it took me about a month or so to start delivering value with Clojure in an idiomatic way.

When new engineers arrive at Nanit, we walk them through Clojure for the Brave and True and then they go through http://clojurekoans.com/

After they are familiar with the basics of the language, they build a RESTful API using our common libraries (http-kit, Compojure, Ring, HoneySQL, etc.).

They usually start delivering value after 2-3 weeks, with close guidance from the team’s senior engineers.

I have never tried YourKit but it does look like a more modern solution for that purpose. I will definitely give it a try the next time we see memory issues in our apps.


The JVM is a known memory eater […] it always seems to require more memory than needed to run the application

It probably doesn’t. The JVM is tuned to reserve all the available memory of the server it is running on in order to maximize throughput. That doesn’t mean it needs all of it to run, but it will use it all if given.

If you want to check how much memory you actually need, you can restrict the heap with the -Xmx flag, which dictates how much heap the JVM is allowed to use. A lot of applications can still run with quite a small -Xmx.
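For instance (the 256m value and the jar name are just placeholders, not a recommendation):

```shell
# Cap the heap at 256 MB and see whether the app still runs comfortably
java -Xmx256m -jar app.jar

# Ask the JVM what heap sizes it actually settled on
java -Xmx256m -XX:+PrintFlagsFinal -version | grep -iE 'MaxHeapSize|InitialHeapSize'
```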

Newer JVM GCs have some flags as well to push them to release more of the unused memory back to the OS and such, but there’s not a lot of good information on how to configure them to do so and how effectively they will do it.
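The flags I’m aware of for this (behavior varies by JDK version and collector; treat these as pointers to investigate, not a recipe):

```shell
# G1 (JDK 12+): run periodic GCs so idle heap gets uncommitted back to the OS
java -XX:+UseG1GC -XX:G1PeriodicGCInterval=30000 -jar app.jar

# Older heuristic knobs: shrink the heap sooner when occupancy drops
java -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 -jar app.jar
```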

The JVM is really only tuned for server applications, where there shouldn’t be anything else running on the machine beyond it. But if you see that you’re using up all the RAM, that doesn’t mean you need to get more RAM, just that the JVM won’t waste that RAM, and will use it to improve performance and avoid unnecessary GC passes.

To sum it up, I’m not a fan of the JVM

I feel like most people with that take don’t have an objective stance and are just reflecting bias. Not sure that’s the case for you, but the article didn’t offer any reasons for this statement.

You have a virtual machine that has a state of the art JIT, offering some of the best single and multi-threaded performance of any GC runtimes, which is able to run on an insane number of different CPU architectures and OSes, while not needing your program to change in any way as it handles all of the adaptation to those platforms for you, and which has a very sane self-contained dependency and deployment story.

How can you not be a fan?

The second topic I find difficulty with when working in large, unfamiliar Clojure codebases is typing. Clojure is a dynamic language, which has its advantages, but more than once I stumbled upon a function that received a dictionary argument and found myself spending a lot of time figuring out what keys it holds

This issue you face isn’t related to dynamic typing, it’s related to the lack of ADTs (abstract data-types).

For example, if you were to use records instead of maps to model all your program data, you wouldn’t have this problem. Yes, you could theoretically still not know what record a certain function is meant to receive, so a declared and validated argument type would help there, but in general you’d not have the problem you’re describing with this simple change, because a function taking a foo-context where you have a (defrecord FooContext ...) would let you see what FooContext contains.

Now even more so, the problem is caused by a lack of closed ADTs, so even records are open, but if Clojure had a way to create closed records that would be remediated.

So if Clojure had closed records and idiomatic Clojure was to use those to model your data, similar to how a lot of statically typed languages force you to do things, even though Clojure is dynamically typed, your problem would go away:

;; a ^:closed meta doesn't exist in Clojure, but let's pretend it
;; did, so the record would be closed to having keys added to it
;; that were not declared
(defrecord ^:closed FooContext
  [name email wtv])

(defn foo
  [foo-context]
  ;; the assertion guarantees foo-context is a FooContext,
  ;; so you know exactly which keys it carries
  (assert (instance? FooContext foo-context))
  ...)

Clojure could have chosen that data should be modeled in closed records like this, and by simply adding an assertion or a pre-condition to your functions, you’d now know unambiguously and with a guarantee what keys foo-context has. No static types required at all.

That’s not what Clojure chose to do though, it favours openness and generally doesn’t like mandatory keys because it creates the “maybe not?” problem of modeling optionality as a present key with a missing value. Also, working with such closed types isn’t as flexible, you can’t as easily merge/join/split the data, etc. It’s a design tradeoff.

Sometime I would look at the function’s call site to understand what argument has been passed and how it was built.

Personally, I’ve found looking at the function implementation is the quickest way to figure this out. Just look at what keys from the argument it uses, and if it passes the map to another function, follow that trail as well. It rarely takes more than 10 minutes to have it all figured out, even when the map is passed down and down and down a lot. Though personally I’d consider any such deep call stack a sign of a bad Clojure code base that needs refactoring.

I’d recommend keeping call stacks shallow, especially if passing a map along the stack.

Also, I recommend a convention I tend to use for this, which is that I don’t pass a map I took as an argument down to another function; I explicitly destructure:

(defn foo
  [{:keys [a b c d] :as foo-context}]
  (println (keys foo-context))
  (+ a b (bar {:c c :d d :e (/ a b)})))

(defn bar
  [{:keys [c d e] :as bar-context}]
  (println (keys bar-context))
  (* 12 c d e))

This is just a convention in my code bases, but basically I always use destructured keys, and even when I need to pass the map down, I will explicitly create the subset of the map that the further-down functions actually need.

So now you know clearly that foo needs to be passed a, b, c and d. And that bar needs to be passed only c, d and e.

I can still use the whole map with :as if I need to do any non-key based manipulation, like simply print all the keys in it and things like that.

But overall I don’t deny those “challenges”. There’s a lot of disdain for the JVM; people tend to like lean runtimes, where lean means low memory footprint, quick startup, quick warmup, and small bundle size, even if it actually runs slower overall. It is true that knowing what keys are in a map is a challenge and something to get used to taming and dealing with. And ya, on the recruiting and ramp-up side I agree as well.


I would turn to Clojure Spec for this – and I have used it specifically to explore and understand code I did not write. You can describe the required and optional keys of a hash map in Spec, and you can describe the possible values using predicates to any degree you want (Spec is substantially more powerful than a type system because it can describe runtime constraints using the full semantic power of the language itself – but Spec is not a type system and should not be used like one: a mistake some people make if they come to Clojure from a statically typed language background).

A Spec for a data structure can be as loose or as precise as you need it to be for your purposes. It can start out very simple (“this hash map always has these three keys but we don’t know/care about their values”) and can evolve into a detailed description of the data structures – and therefore the signatures of functions that accept and/or return those data structures.

Spec can be used in production for explicit validation. It can be used in dev/test for automatic runtime assertions. It can be used in dev/test for property-based testing (a la Haskell’s QuickCheck). It can be used in dev/test for generating compliant random data structures – so it can be used to create example data to explore a solution during development or seed example-based tests. I wrote up our various uses of Spec at work in this blog post about two years ago: An Architect’s View: How do you use clojure.spec (corfield.org) (we adopted Spec as soon as it became available in the Clojure 1.9 prerelease cycle).
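As a small illustration of the generation side (the ::user shape and key names here are made up; spec generators require org.clojure/test.check on the classpath):

```clojure
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

;; A deliberately loose spec: three keys, lightly constrained values
(s/def ::id pos-int?)
(s/def ::name string?)
(s/def ::active? boolean?)
(s/def ::user (s/keys :req-un [::id ::name ::active?]))

;; Generate compliant example data to explore the shape
(gen/sample (s/gen ::user) 3)
;; => something like ({:id 2, :name "", :active? false} ...)
```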


I use truss as a lightweight alternative to spec to ensure that my functions have the data that they need.
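For reference, a minimal sketch of what that looks like with truss’s have macro (my own usage, not taken from the post above):

```clojure
(require '[taoensso.truss :refer [have]])

(defn greet
  [user]
  ;; `have` returns its argument when the predicate passes,
  ;; and throws a descriptive error (form + offending value) otherwise
  (let [name (have string? (:name user))]
    (str "Hello, " name)))

(greet {:name "Ada"})   ;; => "Hello, Ada"
;; (greet {})           ;; throws an invariant-violation error
```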


Thanks @didibus for the detailed response. I really appreciate your opinions and the time you’ve taken to write them down.

Regarding the JVM - I know I can use the -Xmx flag to limit the heap size, and we do that in several services, but I do not like the initial/default configuration that directs the JVM to use all of the server’s memory. Even using the -Xmx flag, the JVM is not as efficient with memory (not sure about CPU) as other runtimes like Go. Moreover, allocating memory and GC’ing it consumes resources, and other runtimes (Go, for example) measure benchmarks by memory allocations, which is a more conservative and leaner approach.
Also, the argument about cross-architecture compatibility is less relevant for me, as we’re using Linux distros to run our applications with Docker on top.
Lastly, I still stand behind my argument that profiling memory usage on the JVM remotely hasn’t been a pleasant experience, to say the least. There might be other tools or methodologies I’m not aware of that make this process more efficient, and I’d be happy to hear about them.

About dynamic typing / ADTs - thank you for clarifying my argument, and I totally agree that it is a language design tradeoff made consciously by the Clojure design team (or Rich Hickey himself). I know of core.typed and clojure.spec, which try to provide some solution to this problem, but I’ve never worked with them on a large code base, so I can’t say how useful they are.
I like the idea of always destructuring keys in the function argument list, but it still won’t show me keys that are not currently used inside the function.

Thanks again for taking the time to write up your thoughts - I learned a lot from your reply.

Thanks for the article, it was a good read as well. These anecdotal pieces I think are really valuable, especially when coming from people who used Clojure for work.

Do you mean like if you need to add a feature and you don’t know if the map has some key that you’d need for the new feature?

Like if you had:

(defn foo
  [bar]
  (do-something (:name bar))
  ;; I want to add another piece of functionality here
  ;; and I would need the email, but I don't know if
  ;; bar has the email?
  (do-new-thing (:email bar)))

For that I’ve always used either records, Schema, or Spec, where I always define the structure for my domain model.

Also, you’re not exempt from this even when using closed ADTs and static types, because of the temporal element and the presence of optionality.

Here’s an example where the User map might have a session-token on it, but only after it went through the authentication function:

(defrecord User
  [name email session-token])

(map->User {:name "John"
            :email "[email protected]"})
;; => #cljs.user.User{:name "John",
;;                    :email "[email protected]",
;;                    :session-token nil}

Now let’s say you have that function again, but this time we need the session-token for our feature:

(defn foo
  [bar]
  (do-something (:name bar))
  ;; For our new thing, we'd need the session-token,
  ;; but since it's optional, we don't know if it'll be present
  ;; or not at the point this function is called
  (do-new-thing (:session-token bar)))

In languages with ADTs people have come up with the idea of micro-types for this problem, but I’m not sure people necessarily find it a worthy tradeoff. The idea would be that you would never have optional things, and instead would have a type for each valid combination of optionality:

(defrecord User
  [name email])

(defrecord SessionUser
  [name email session-token])

(defn foo
  [session-user]
  (assert (instance? SessionUser session-user))
  (do-something (:name session-user))
  ;; Now we know that session-token is set, because
  ;; the type is SessionUser
  (do-new-thing (:session-token session-user)))

This is the problem that I believe Rich Hickey is looking into and hoping to solve with Spec2, but I think it’s a hard problem, and one of the reasons why Spec2 is taking a while to come out.

That’s true, it’s probably not as relevant nowadays that CPU architectures have standardized and Linux is king on servers.

Go is interesting, but it works so differently that it gets hard to compare against the JVM, I think. In the end, to the user, Go does feel a lot leaner and often just as fast. That said, let’s say the Go runtime is #1; I think it’d be hard to say the JVM doesn’t at least compete to be #2. So the amount of “I don’t like the JVM, look at Go” seems very unfair to me; the fact that a single other runtime bests it doesn’t suddenly make it a terrible thing.

OK, but let’s get into the details, which I like to discuss. Go always performs AOT (ahead-of-time) compilation; that’s a very different runtime design. Most JVMs do JIT (just-in-time) compilation.

So one difference in memory use and startup time is that every JVM program must include raw bytecode, a full JVM bytecode interpreter, and a full JVM bytecode compiler. So even for a hello world app, you need to load an entire interpreter and compiler for JVM bytecode into memory when the program starts. As it performs JIT, it will also start to keep the compiled machine code in memory, and it will track a whole bunch of other metadata about the program’s runtime behavior, which it uses to drive how to perform JIT, what to optimize or not, etc. All those things add a lot of base memory and slow startup, even for just a hello world.

Go performs AOT compilation and distributes a fully compiled machine-code binary. So there’s no need to load an interpreter and a compiler into memory when the program runs, making a hello world app take much less memory.

You can see how much impact on start time and memory this has by using a JVM that supports AOT as well like GraalVM or the Android VM. When using AOT, JVM programs can also achieve instant startup and much smaller memory footprints.

There’s something else though: Go makes interesting use of OS threads. It doesn’t use many, instead having its own implementation of lean threads (fibers), which take a lot less memory, I think around 4KB each. Java uses OS threads, and each of those starts out using around 1MB, I think.

On top of the fact that OS threads are memory-hungry compared to Go’s lean threads, Java threads each get a fixed-size stack. So if the stack size is set to 1024KB, each thread will eat up 1024KB of stack even if the stack isn’t actually used. Java will reserve all that stack memory from the OS regardless (last I heard, at least).

Go has a variable-size stack, where it’ll dynamically allocate and release memory as the stack needs more or less.

So if your JVM app has 50 threads, that’s 50 OS threads at 1MB each, plus a stack which tends to default to 1MB as well, for 2MB per thread and its respective stack, for a total of 100MB. Add to that the full bytecode compiler and interpreter, all of the bytecode itself, and all the additional metadata, and you might be looking at 300MB of memory. That’s before your own code has even begun allocating anything to the heap.
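For what it’s worth, the per-thread stack size in that arithmetic is tunable; the 256k value below is just illustrative:

```shell
# Shrink each Java thread's stack reservation (the default is platform
# dependent, commonly 512KB-1MB) down to 256KB
java -Xss256k -jar app.jar
```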

In Go, you’d be looking at 50 fibers each using 4KB, and maybe 4 real OS threads for them to be scheduled over, for 4MB worth of OS threads; the stack for each fiber is variable, so it starts small and grows only as needed. And because it’s AOT compiled, you don’t need anything else in memory: no compiler, no metadata, no interpreter. That makes it roughly 4MB plus 200KB worth of memory.

Now Clojure adds even more to the JVM’s memory needs, because you also need to load the entire Clojure compiler and all of the Clojure runtime and its bytecode.

And now we get to your actual program. In Java, the stacks are already fully reserved, but you also can’t actually choose to allocate anything on them, so everything you create will be an object on the heap. Those objects aren’t lean either; since they are reflective, they hold a lot of meta information about themselves. Now, I’m not sure if Go does much better here. I’d assume a little bit, since it has structs, can make more use of primitives, and doesn’t have exceptions and all that. But let’s just say the “everything is an object on the heap” model tends to use a bit more memory; if you could put more primitives and packed primitive structures directly on the stack, or even on the heap but leaner than objects, it would probably help.

Anyways, that was just a fun deep dive. The runtimes of the JVM and Go differ a lot, so comparing them is a bit like apples to oranges, and it might be fairer to compare AOT with AOT, like GraalVM vs Go. But I understand that from a user’s point of view, who cares. If AOT meets your needs better, and I think nowadays it often does (with cloud and SaaS you own the servers yourself; you’re not selling on-prem software where you need to support all the different servers the clients might have), the write-once/run-everywhere a JIT provides might not be worth it. And when I/O has become the bottleneck, the 5% a JIT might gain by optimizing a hot routine also doesn’t really matter.

For those reasons I’m really happy to see the arrival of GraalVM, but it’s clearly not as polished as the Go AOT compiler quite yet.

Edit: One last thing though: Clojure would be pretty hamstrung if the JVM were not interpreted or JITed; that’s why you haven’t seen a Clojure hosted on Go yet. Runtime compilation for REPL-driven development and hot code reloading can’t be done with AOT, as far as I know.


Isn’t typing annotation going to incur less runtime overhead than spec? Spec, as far as I know, does runtime validation.

If you just want to specify type as a form of documentation, type annotation incurs no runtime cost.

Spec is far more than runtime validation; it can also do parsing, data generation, data shape transformation, coercion, etc.

This is an example of using Spec for macro argument parsing.

(require '[clojure.spec.alpha :as s]
         '[clojure.core.specs.alpha]
         '[clojure.pprint])

(s/def ::method-def
  (s/cat :name simple-symbol?
         :binding :clojure.core.specs.alpha/binding-form
         :body (s/* any?)))

(s/def ::defclass-args
  (s/cat :name        simple-symbol?
         :?docstring  (s/? string?)
         :*method-def (s/* (s/spec ::method-def))))

(defmacro defclass
  [& args]
  (clojure.pprint/pprint (s/conform ::defclass-args args)))

(defclass Foo
  "some doc"
  (foo [a] a)
  (bar [a] b))

The arguments will be parsed as

{:name Foo,
 :?docstring "some doc",
 :*method-def [{:name foo,
                :binding [:seq-destructure {:forms [[:local-symbol a]]}],
                :body [a]}
               {:name bar,
                :binding [:seq-destructure {:forms [[:local-symbol a]]}],
                :body [b]}]}

Common Lisp (particularly SBCL) is probably a counterpoint. I imagine other native Lisps/Schemes are too.

I don’t think so, but there’s a bit of semantics involved here. In the context I’m talking about, SBCL and other compiled Lisps with REPLs and hot code reloading are not AOT.

What SBCL does you could count as a JIT, in that you still need to load a full compiler in memory, which at runtime compiles a Lisp form or file into in-memory machine code that it can link against and load, all at runtime.

One definition could argue that’s not a JIT, because a JIT will generally perform runtime analysis, and so could choose to run things interpreted at first, gather some analysis data, and based on that decide how and what to compile; it might later decide to recompile things that were already compiled if it thinks it can better optimize them as it keeps gathering analysis data.

I would still argue that, while maybe not JITed in that sense, it is still not compiled ahead of time, and really is still compiled just in time, when the user launches the program or is about to load some form or file. And that means it must bundle, and load in memory, the full compilation machinery.

It might need less machinery in memory than the JVM, since it won’t need an interpreter or any of the fancy analysis, recompilation, and everything else the JVM might do for optimization. But you still cannot achieve what Go does, or any true AOT, where by AOT I mean the program has been fully compiled to machine code prior to the user launching it.

Edit: Well, I believe SBCL does have an AOT mode, as do some other Lisps, but when in that mode I don’t think you can use a REPL or do hot code reloading.

or any true AOT where by AOT I mean the program has been fully compiled to machine code prior to the user launching it.

Semantics (or maybe nomenclature) are important. If we follow this restriction, then we can’t include things like dynamically linked libraries, since they are linked/loaded at runtime and we have no guarantee they haven’t been recently compiled (even after the program has started). Maybe that’s a further distinction (statically linked, AOT compiled).

SBCL’s compilation process does in fact compile “everything”, yet it supports incremental compilation and code swapping. So the unit of compilation can be extremely small (like a function). This is, IMO, no different than having a live-coding setup with e.g. a C compiler that’s watching your code and swapping in incrementally compiled stuff as you make changes to the source (this is what Casey Muratori does in the Handmade Hero series on gamedev, basically using C as a scripting language). It is not a JIT; everything is statically analyzed and compiled once, ahead of usage. There isn’t any fallback to an interpreted or “less optimized” mode; the compiler optimizes according to the flags you set (just as with gcc or any other static language) and produces the relevant assembly, which is then loaded.

If you want this kind of thing, you need to pack along a compiler though; hence even stripped-down SBCL images are around 23MB or so (with core compression). They still support this development style (incremental AOT compilation + code swapping). What they don’t have is whole-program analysis (although the Stalin Scheme compiler does).

Go read the blog post I linked to about how we use Spec since it covers that. The TL;DR is that Spec covers a lot of different use cases and you would pretty much never use the instrumentation in production – but for dev/test, the instrumentation and data generation can be very useful, and that’s the scenario that was raised early on in this thread: to help understand the data structures that functions manipulate.

I’m not sure what you mean by “type annotation” in the context of Clojure but I expect you’re referring to Typed Clojure. That’s an academic, experimental system. It’s research-level material. We’ve tried using it at work a couple of times over the years but it really isn’t practical (for a number of complicated reasons – which underlines why this is still subject to academic research). I don’t think anyone uses Typed Clojure on a commercial project but it is definitely a fascinating piece of work. We ran into a lot of the same issues that CircleCI did: Why we’re no longer using Core.typed - CircleCI

That’s why I said it’s a matter of semantics, but the reality doesn’t change. Whether you prefer to put SBCL in the AOT category within your own mental taxonomy, prefer to put it in the JIT category, or create a third category for it does not change what the differences in the design are.

And the difference, within the context of my current discussion, is that of a “lean runtime”. I’m saying Go can be lean by doing a full-program compilation to target machine code and a statically linked binary. This gives Go its leanness, but with such a model it wouldn’t work (as far as I know) to have a REPL or hot code reloading. At best you could have a hybrid model, where during development it runs interpreted or JITed (or your third category of incremental AOT), and when distributed it becomes fully AOT with a statically linked binary (or dynamically linked against known, already-AOTed libs).

So SBCL and other Lisps don’t count in my mind; they don’t employ Go’s model, because they have to include a full compiler in the bundle, as well as in memory while the program is running. Loading code also takes a hit, at startup and at various other points in the program’s runtime, since things need to be compiled as they are loaded.

And in that sense, to me at least, their runtime is closer to the JVM model.

Yes, this seems obvious. If you don’t pack along a whole-program analyzer/compiler and live with closed-world assumptions, you reap the benefits of uncompromising stillness (namely space/infrastructure). You also live in a static world (hopefully you dotted all your i’s and crossed your t’s before deploying to production).

And loading code will also take a hit, so startup and other various point in the program runtime, since things will need to be compiled as they are loaded.

Not necessarily. Again, since everything can be statically compiled, you can just dump the memory/Lisp image and get efficient starts (SBCL already does this). Not so with the JVM.

You can also opt into a design like Stalin’s (the compiler, not the murderer), where you trade off having eval and load, and get whole-program optimization and the traditional statically linked compilation model. I think there are options in the Lisp realm.