Is there a sales pitch for switching to deps.edn from Lein in 2020?

especially if the alternative is so easy (just enable :aot :all in your project.clj)

But it’s not that simple at all unless you understand the exact details of the Leiningen/JVM/Clojure interactions. It’ll work fine until you hit some crazy error in straightforward code caused purely by stale AOT’d .class files. It might stay simple if you always “lein clean” before running any other lein task, but at some point that will be forgotten and time will be wasted, more time than the improved start-up time ever saves. I would only use AOT if it was actually needed and I had the time to fully research the mechanics and ensure it would never bite me.
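For context, the one-liner being discussed looks like this in a project.clj (project name, version, and namespaces here are hypothetical):

```clojure
;; project.clj sketch: with :aot :all, every namespace is
;; compiled to .class files under target/, which is exactly
;; what "lein clean" removes.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.10.1"]]
  :main my-app.core
  :aot :all)
```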

Same deal with deps.edn’s better dependency resolver. I know how to fix the problems when they pop up, even if I have no idea why they occur, but it cost me significant time to get to that point. I would not recommend lein to anyone for this reason alone. It’s fine to use Leiningen or AOT if you are already an expert in their use, but if you’re not, they can break your build out of the blue. I’d much rather spend my time coding than fighting my build tools.


To be fair, most of the issues with AOT aren’t due to Lein, but to Clojure AOT itself, specifically because it is transitive and always applies to the whole file. That makes sense when you’re building an uberjar. That’s why, if you are going to AOT (not for interop), it’s better to make sure it is the last step, done as part of the uberjar creation.
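A common way to follow that advice is to confine :aot to the :uberjar profile, so only `lein uberjar` compiles classes and day-to-day tasks keep running from source. A hypothetical sketch:

```clojure
;; project.clj sketch: no :aot in the base project map, so
;; `lein repl`, `lein test`, etc. run from source.  AOT only
;; happens during `lein uberjar`, via the :uberjar profile.
(defproject my-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.10.1"]]
  :main my-app.core
  :profiles {:uberjar {:aot :all}})
```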


AOT can also be important when you don’t want to deliver your plain-text Clojure source to a client. Obviously you can decompile class files, but there is more of a barrier there and it isn’t as easy as just opening the jar. I’ve been at companies where the legal advice was to not deliver the source in these cases, to make it clearer that decompiling is not part of the contract agreements.

I do see similar comments have been made, so it’s not too uncommon of a concern.

It’s been discussed here quite a bit already, but I’ll add a few more points.

Leiningen is basically an entire declarative, framework-style build tool. It is based around the unifying concept of the “project” (project.clj) and merging “profiles”. Plugins are written to operate within this framework to perform miscellaneous build tasks.

deps.edn seems to be designed with the more targeted goal of being a classpath-building tool that lets you configure it in various ways via aliases. It manages dependencies associated with constructing these classpaths [1].
In doing this, it facilitates builds being composed of separate tools that are targeted more specifically to the various sorts of tasks you need to do to build and deploy something.
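As a rough illustration (alias names are made up; the test-runner coordinates are an example, with the git SHA left as a placeholder), a deps.edn along these lines declares the base deps and lets each alias adjust the classpath for a specific task:

```clojure
;; deps.edn sketch: base deps plus task-specific aliases.
{:deps {org.clojure/clojure {:mvn/version "1.10.1"}}
 :aliases
 {;; `clj -A:dev` adds dev-only source paths to the classpath
  :dev  {:extra-paths ["dev"]}
  ;; `clj -A:test` adds test paths and a test-runner dependency,
  ;; pulled straight from git rather than a Maven repo
  :test {:extra-paths ["test"]
         :extra-deps  {com.cognitect/test-runner
                       {:git/url "https://github.com/cognitect-labs/test-runner"
                        :sha     "<pinned-commit-sha>"}}}}}
```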

I agree that deps.edn seems more aligned with many popular clj libs in the ecosystem in that it only attempts to do an isolated part of the overall build, and therefore favors composing the pieces however you want, i.e. using “simpler tools”.

However, when it comes to build tools there are tradeoffs that I think others may be able to relate to here as being similar to the (old) debates of Maven vs Ant in the Java ecosystem.
When you actually end up configuring a production-like build & deploy cycle for a project, you will eventually (likely) end up:

  1. composing a lot of tools somewhat ad hoc with deps.edn and friends, or
  2. using lein with a bunch of plugins, all conforming to the same declarative framework approach.

The deps.edn + other-tools ad hoc composition could be thought of as more “imperative” in style. It may also be more difficult to apply the same pattern across many projects, which is what a framework specializes in doing.
The lein approach is monolithic, but its configuration is defined in a more “declarative” way. This can lead to more patterned reuse, at the expense of some flexibility at times and the need to understand the framework system more upfront.

This Stack Overflow post (along with quite a few others if you search around) provides insight that I believe relates to this topic.

That said, you can actually write a lein plugin that uses these “simpler” composable tools, such as deps.edn, figwheel-main, etc. It’s already been done for deps.edn, as described in Combining tools.deps with Leiningen (note: I’m not sure how widely this has been used).

Lastly, from personal experience, I’ve experimented a little with the composing-simple-tools path (eg. deps.edn) vs the monolith framework (eg. lein) to try to understand the pros and cons of each.

A few takeaways:

a) Simpler tools can be nice since each part is more isolated and it’s clear where each step occurs. It can be easier to immediately understand and debug issues.
With Leiningen, you have to understand the framework more when debugging issues. If you understand it well, it becomes easier, but there is more overhead to learn. Debugging can still be more of a chore, since things often happen in the “framework stack”, which can be difficult to walk through.

b) Within an organization you may end up with many projects that are quite similar in structure and build/deploy process. I’ve found that sharing build tasks is less ad hoc and more repeatable with Leiningen, where you can have a bit more dynamism in the definition of the project.clj as well as write/use plugins to handle the individual tasks you want configured.
When composing individual tools, it may be redundant or manual (like using scripts) to communicate common “variables” across the build, such as project name + version + asset locations (web), etc.
When unifying around Leiningen projects and using plugins, the project is the “source of truth” you can use to communicate across all the tasks.

I’ve seen some deps.edn composition setups that ended up using several bash scripts to “glue”/“wire” things together. This was mentioned here already, I see.
This concerns me a bit, given the difficulty of writing portable bash scripts and the need to understand more ad hoc build steps per project in a language like bash (for someone who is more proficient in, and fonder of, bash than me, this may not be a concern).
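To make the concern concrete, here is a hypothetical glue script of the kind described; note how the project “variables” that a project.clj would hold in one place get restated in the script (and in every sibling script):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical glue script: name and version are duplicated
# here rather than read from a single project definition.
PROJECT="my-app"
VERSION="0.1.0"
JAR="target/${PROJECT}-${VERSION}.jar"

echo "deploying ${JAR}"
```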

c) It seems to me that some projects may be better suited to the piecemeal composition approach than to the framework approach, or vice versa. There may be times when the framework is very useful due to its common patterns and shared “glue”/“wiring” infrastructure.
There may be times when the framework is more of a complexity problem than a useful tool - such as when the lack of flexibility becomes too much within a project’s build setup.


[1] In addition to this point, it provides its own dependency resolution infrastructure and includes more features than the maven/aether/pomegranate libs used by Leiningen, such as pulling dependencies from git SHAs. This was mentioned in other posts here.


I agree with you. I think ideally, tools.deps would become the de facto universal Clojure classpath builder, dependency manager, and Clojure application launcher.

And there would be something else, which would become the de facto build tool, integrated with tools.deps for all its classpath-building and dependency-pulling needs. lein-tools-deps is one attempt, but as someone who’s poked at the Lein internals, there are so many assumptions that it will manage dependencies and the classpath as well that it’s a bit tricky. It’d be nice to see something more integrated, but which, like lein, offers a more holistic framework and bundles most common tasks already, in a way that they all work well together and with tools.deps.


Yes, you have a good point that Leiningen has too many assumptions about controlling certain parts of its core infrastructure - the classpath+deps manager being a key one.

I agree with it being nice seeing something more integrated - or at least more designed to be pluggable in this way.

This is why I gave up on boot-tools-deps. I originally created it as a migration aid from Boot (which we switched to at work in 2015 from Leiningen), since we already had our deps in external EDN files, albeit a different format. Ultimately, however, boot-tools-deps just didn’t work well in anything beyond the simplest project – and I suspect lein-tools-deps suffers from similar limitations: I don’t think an add-on / plugin / task can do the integration fully – I think support for deps.edn needs to become baked into those tools (lein/boot) at a fundamental level and they need to switch to tools.deps.alpha completely for dependency resolution.


Could you add a bit more detail about your minimalist tool setup? If not CIDER, do you use Cursive, or just a plain editor with paredit support? And for the socket REPL, you just use a regular terminal session to do any interactive work? A colleague of mine swears by Cursive, shadow, and nothing else. No hot reloading, even. I’m intrigued.

I used to use Emacs but switched to Atom several years ago. I’ve been using Chlorine for just over a year now with Atom. I blogged about my switch to Chlorine at the time.

I have a Paredit package installed in Atom and use it heavily for structural editing. I also have Parinfer installed – which I like for regular code entry and the ability to regroup expressions just via indentation.

Chlorine connects to a bare Socket REPL – in any process – and provides pretty much all the usual Clojure editing niceties: load file, eval form, eval top-level form, eval selection, view docs for a symbol, jump to definition, run tests, code completion, etc. Chlorine is also easy to extend using CoffeeScript in the init file. I’ve added support for Cognitect’s REBL, which I also run alongside Atom – I love REBL for visualizing data and for exploring code (it can inspect namespaces and vars and show dynamic call/caller information, as well as doing things like navigating through a database, based on the datafy/nav support in next.jdbc).

I never type into the REPL. I type into my editor and eval forms for every change I make. I do not use any sort of refresh/reload workflow. I have my REPL running for days, sometimes weeks. I use the add-lib branch of tools.deps.alpha so I can load new dependencies on the fly without restarting the REPL.
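For reference, the add-lib flow looks roughly like this (the API is from the unreleased add-lib branch of tools.deps.alpha, so it is subject to change; the library added here is just an example):

```clojure
;; Requires a REPL started with the add-lib branch of
;; tools.deps.alpha on the classpath.
(require '[clojure.tools.deps.alpha.repl :refer [add-lib]])

;; Fetch a dependency and add it to the *running* REPL's
;; classpath, no restart needed:
(add-lib 'hiccup/hiccup {:mvn/version "1.0.5"})

;; The new namespaces are now requirable as usual:
(require '[hiccup.core :as h])
```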

Periodically, I run test suites from the terminal but I mostly run tests inside my editor/REPL setup.

I posted a few videos of my Atom/Chlorine/REBL workflow on YouTube which may give you more insight.


Speaking of the add-lib branch, it seems to work well for you; any idea why it still isn’t merged into the default tools.deps?

Because they’re thinking of adding a version of add-lib to Clojure itself, according to Alex. They just don’t know exactly what it will look like.


Does socket repl have any other downsides compared to other repls? Why isn’t everyone doing this?

AFAIK, Cider doesn’t support it, so that might be one reason.

Because “everyone” is using nREPL, statistically. It’s old/established, it’s in all the books and tutorials. The Socket REPL only appeared in Clojure 1.8 four years ago. The prepl only appeared in Clojure 1.10.
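For anyone curious, that built-in Socket REPL needs no dependencies at all; since Clojure 1.8 it can be enabled with a JVM system property (the port number here is arbitrary):

```shell
# Start a Socket REPL server on port 5555 alongside the normal REPL
clj -J-Dclojure.server.repl='{:port 5555 :accept clojure.core.server/repl}'

# Then connect with anything that speaks a plain socket, e.g.:
#   nc localhost 5555
```

The same `{:port … :accept …}` map can live under `clojure.server.<name>` properties in any JVM launch, which is why editors like Chlorine can attach to “any process”.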

There aren’t as many features built for it. Currently, the way Chlorine gets certain operations is by injecting dependencies over the socket, mainly leveraging unrepl, and I think it does it in some custom way for ClojureScript.

And yeah, the other issue is Cider has a lot of work done on nREPL. The person behind unrepl was apparently working on adding support for injecting nREPL over a socket REPL, for the best of both worlds, but I think there are a few challenges involved.

Is there a specific reason you avoid auto-reloading?

I’m also working most of the time in a buffer connected to a REPL. I have to admit this is enough 95% of the time. But there are cases where auto-reloading helps (specifically in the context of local deps). Also, once you copy the resulting code into the file, you can immediately catch any errors (missing ns / vars / etc.) as soon as you save it.

I’ve been using Roll in most of my projects and I always enable the auto-reloading of both source code and Integrant components. Because deps.edn allows for local deps, I can watch any local lib folder and make experiments without starting another REPL.
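The local-deps mechanism being referred to is just a :local/root entry in deps.edn (the path and lib name here are invented for illustration):

```clojure
;; deps.edn sketch: a dependency on a sibling checkout.
;; Changes under ../mylib/src are visible to this project's
;; REPL without publishing a jar or starting another process.
{:deps {mylib/mylib {:local/root "../mylib"}}}
```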

I’ve never needed it – and I see lots of people get into trouble with auto-reload/refresh tooling, so I try to encourage folks to avoid it. Like all “magic”, it can do surprising things, and you have to write your code in a particular way to ensure you avoid weirdness that can crop up in certain situations.

Not sure what you mean here. I work in a monorepo with lots of local deps and never have a problem.

I suspect this is down to differences in workflow. I type into files – never into the REPL – if it isn’t ready to be “production code”, it goes inside a (comment ,,,) form (Rich Comment Forms – Stu Halloway calls them this because it’s a workflow that Rich himself uses). Every single change I make, I evaluate via a hotkey (either eval form or eval top-level form), so the code running in the REPL always stays up-to-date. I don’t even need to save changes for this – I can edit/eval, edit/eval, edit/eval as much as I want, and save changes whenever I feel like. I can load the current file into the REPL via a hotkey – although I do have to save before I do that.

The closest thing to a “reloaded” piece of workflow for me is a hotkey bound to remove-ns which I will occasionally use before the load key hotkey to ensure I have a (brief) clean slate for the (re)load of that namespace – to confirm that I don’t have declarations out of order etc.
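A minimal sketch of this workflow (namespace and function invented for illustration):

```clojure
(ns my-app.core)

(defn greet [n] (str "Hello, " n))

;; Rich Comment Form: nothing in here runs when the file is
;; loaded, but each form can be eval'd individually via hotkey.
(comment
  (greet "REPL")
  ;; occasional clean slate before re-loading the file, to
  ;; catch out-of-order declarations etc.:
  (remove-ns 'my-app.core))
```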

Both Eric Normand and Stu Halloway talk about very simple REPL-based workflows like mine. No typing into the REPL, no additional machinery, no reloading “magic”. Eric’s REPL-Driven Development course covers this in detail (well worth a month’s subscription to check this one course out). Stu has talked about REPL workflows in a couple of conference talks (that are available online), as well as in a recent podcast… can’t remember which one, sorry.


I had the wrong assumption that in Emacs + Cider, if you open up a REPL and then open a file from another project / folder, you lose the connection (and can’t eval the forms). But it seems it all works fine as long as the other project is in your deps.edn as a local dep.

You can even open a file that’s not defined in your deps and run sesman-map-with-buffer (going from memory, but I think that’s the name), and you can then eval and all.


I have no idea how Emacs/CIDER handles files outside your project. When I’m using Atom/Chlorine, I can open any Clojure file anywhere and eval it into my connected REPL.

As mentioned above – about two weeks ago – I use the add-lib branch of tools.deps so I can load additional dependencies, regardless of deps.edn files, which means even if I start the REPL in one project, I can work with sources from any other projects, because I can easily add those projects’ dependencies to my running REPL, and then eval source (and test) files from those projects.

That’s my workflow for modifying next.jdbc when I’m working on other projects and I want to bring in all the database drivers I test next.jdbc against. See the Rich Comment Form at the bottom of next.jdbc's fixtures file.
