Is there a sales pitch for switching to deps.edn from Lein in 2020?

In case some folks reading aren’t aware, you don’t need AOT to run programs via an uberjar – AOT really only serves two purposes:

  • to make startup time of the uberjar faster, and
  • to avoid the need to specify a main class at startup.

If you build an uberjar without AOT, you can run it with java -cp path/to/the.jar clojure.main -m entry.point – or if there’s a manifest specifying clojure.main, you can do java -jar path/to/the.jar -m entry.point
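
For anyone newer to this: without AOT the entry namespace doesn’t even need (:gen-class), a plain -main is enough (names here are just placeholders):

    ;; src/entry/point.clj – hypothetical entry point; note: no (:gen-class) needed
    (ns entry.point)

    (defn -main [& args]
      (println "app starting with" args))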

1 Like

Agreed. I had some really weird errors that popped up after making an uberjar. It was recommended to me on Slack to never use AOT (barring a real need), which took a little research because Leiningen AOTs by default when making an uberjar. No errors since. Apparently lein keeps target/ on the classpath at all times, which creates some really fun interactions when you are writing code in the REPL.
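
For reference, the default “lein new app” template marks the main namespace so that only the :uberjar profile AOTs it; dropping that :aot :all line is what gets you a source-only uberjar (sketch from memory, myapp.core is a placeholder):

    ;; project.clj excerpt (roughly what "lein new app" generates)
    :main ^:skip-aot myapp.core
    :profiles {:uberjar {:aot :all}}   ; remove :aot :all here for a source-only uberjar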

1 Like

Definitely avoid AOT if possible. I’d say the only case where I still use it is interop, and even then, if I can use other means, like Clojure’s Java API, I’d use those instead. And when doing AOT for interop, it helps to be selective as well: AOT only what’s needed, not the whole code base.
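
In project.clj terms, that means listing just the interop namespaces instead of :aot :all, e.g. (namespace is hypothetical):

    ;; project.clj excerpt – AOT only the gen-class namespace used for interop
    :aot [myapp.java-api]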

1 Like

This. I tried AOT / gen-class for interop in some stuff I was doing in my previous job, and it was not smooth. In the end, I used the Clojure Java API and a small “factory” function in Clojure that reified what I needed.
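
If it helps anyone, the shape of that is roughly the following (all names invented): a Clojure factory function returns a reify, and the Java side reaches it through clojure.java.api.Clojure, so no AOT or gen-class is involved.

    ;; src/myapp/factory.clj – hypothetical factory namespace
    (ns myapp.factory)

    (defn make-handler
      "Called from Java; returns an object implementing a plain Java interface."
      []
      (reify java.util.function.Function
        (apply [_ x] (str "handled: " x))))

    ;; Java side, via the public clojure.java.api.Clojure API:
    ;;   IFn require = Clojure.var("clojure.core", "require");
    ;;   require.invoke(Clojure.read("myapp.factory"));
    ;;   Function f = (Function) Clojure.var("myapp.factory", "make-handler").invoke();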

For a server side application, would it be bad practice to NOT deploy an uberjar to QA or the production server, but rely instead on raw deps/git? Basically, do a git pull for the version of the server code’s .clj files and then use clj -m to run the application.

Is that nuts?

1 Like

That’s what we do. It’s a calculated risk since you might get a left-pad situation on your hands. Plus releases are not truly self-contained. You have the same risks when building locally though, so the best way to be sure is to run a local maven mirror…

2 Likes

I’ve done a variant where I create a docker container where I download all the dependencies at build time, then just run with clojure -m myapp.main at runtime.
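
Roughly like this, in case it’s useful; the base image tag and namespace are just examples, and clojure -P (to pre-fetch deps) needs a reasonably recent CLI (older versions can run clojure -Spath for the same effect):

    # Dockerfile sketch: fetch deps at build time, run from source at runtime
    FROM clojure:openjdk-11-tools-deps
    WORKDIR /app
    COPY deps.edn ./
    RUN clojure -P                      # pre-download all dependencies into the image
    COPY src ./src
    CMD ["clojure", "-m", "myapp.main"]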

3 Likes

Another legitimate use case for AOT is when you do not want your source code deployed to your production servers.

2 Likes

Yeah, totally agree with the local Maven mirror. That is what we do as well.

I guess, theoretically, you could also have just a small launcher deps project on the production server, with one .clj file that starts up your application from the dependencies in the deps.edn, instead of putting the entire application’s .clj files out there. Then you only really need to update version numbers in the deps and restart.
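
Something along these lines, I imagine (coordinates and namespaces are made up): the server only holds a deps.edn pointing at the released application artifact, plus one tiny launcher file.

    ;; deps.edn on the server – the app itself is just a dependency
    {:paths ["src"]
     :deps  {com.mycompany/myapp {:mvn/version "1.2.3"}}}

    ;; src/launcher.clj – the single local .clj file
    (ns launcher
      (:require [myapp.main :as app]))

    (defn -main [& args]
      (apply app/-main args))

Bumping :mvn/version in that deps.edn and re-running clj -m launcher would then be the whole deploy.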

This was new to me. Left-pad: live pickup of breaking changes in a dependency: https://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/

If folks can get at source code on your production servers, you have bigger problems than can be solved by AOT :smiley:

I get that some folks might want to AOT their app if they are distributing it to end users and want it to be closed source and/or commercial (such as Cognitect’s REBL and, I suspect, Colin Fleming’s Cursive?).

For years, we ran from source code on our production servers, using Leiningen (lein run) at first, then Boot. We only switched to uberjar-based deployment in the last few years – and we used source uberjars (no AOT) up until very, very recently.

We use AOT (built into depstar) now purely to improve startup time so our automated rolling deployments come back into the cluster faster.
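
For the curious, with depstar 2.x the setup is roughly the alias below, invoked with clojure -X:uberjar (exact coordinates and option names may differ by version, so check depstar’s README; the version and main class here are placeholders):

    ;; deps.edn alias (sketch, depstar 2.x style)
    :aliases
    {:uberjar {:replace-deps {com.github.seancorfield/depstar {:mvn/version "2.1.303"}}
               :exec-fn      hf.depstar/uberjar
               :exec-args    {:jar        "target/app.jar"
                              :aot        true
                              :main-class myapp.main}}}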

1 Like

I’d say it’s not just left-pad. Resolving the dependency closure is not guaranteed to give the same result every time.

First off, a Maven repository isn’t immutable. A published artifact for a given version can be updated or removed, so the same artifact + version could return something different in the future.

Dependencies can also themselves be declared as RELEASE, LATEST, or SNAPSHOT. Even if you don’t do that, your transitive deps might. So even if no artifact version gets mutated, the closure could pick up a newer one.

Finally, the repos themselves can go down or be unavailable for some time.

So you could have an internal repo, configured to be immutable, which proxies the public ones, but you can still be at risk of the dependency closure resolving to newer deps. I think you can even use version wildcards and things like that.
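
For deps.edn users, pointing everything at such an internal proxy is just a matter of overriding :mvn/repos (the URLs are obviously placeholders):

    ;; deps.edn – route artifact fetching through an internal, append-only proxy
    {:mvn/repos
     {"central" {:url "https://repo.example.internal/maven-central"}
      "clojars" {:url "https://repo.example.internal/clojars"}}}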

An uberjar solves all of that. It doesn’t just lock every dependency to an exact version, it also bundles a copy of each of them into a single zip file.

With that, you are guaranteed that all your hosts will always be running exactly the same thing.

1 Like

And this was a real issue with Clojars, back before they added a CDN! We ended up running our own instance of Apache Archiva as a proxy for Clojars (and Maven) because we couldn’t rely on Clojars during our build processes. We also had a couple of third party libraries we used that weren’t on a public repo and we hosted those on Archiva as well.

Since we switched to CLI/deps.edn, we can have those third party libraries locally and use :local/root for their dependencies – and once Clojars’ CDN was up and running, we decom’d our Archiva instance (and we’ve never had a build failure due to an unavailable repo since).
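
i.e. something along these lines in deps.edn (path and coordinate invented):

    ;; a third-party jar that isn't published to any public repository
    {:deps {thirdparty/widget-sdk {:local/root "vendor/widget-sdk-3.2.jar"}}}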

1 Like

There are cases where having an additional security boundary where your full source code is not deployed onto all production servers is a good thing. If my production servers were to be breached, I don’t see why source code should also be immediately available, especially if the alternative is so easy (just enable :aot :all in your project.clj) and has other nice benefits as well (such as improved startup time).

All I’m trying to say is that I think AOT should not be dismissed as completely unnecessary.

1 Like

Thanks. These are very good points.

especially if the alternative is so easy (just enable :aot :all in your project.clj)

But it’s not that simple at all unless you understand the exact details of the Leiningen/JVM/Clojure interactions. It’ll work fine until you hit some crazy error in straightforward code caused purely by leftover AOT’d .class files. It might stay simple if you always “lein clean” before running any other lein task, but you know that at some point this will be forgotten and time will be wasted, more time than the faster start-up ever saves you. I would only use it if it was actually needed and I had the time to fully research the mechanics and ensure that it will never bite me.

Same deal with deps.edn having a better dependency resolver. I know how to fix the problems when they pop up, even if I have no idea why they do, but it cost me significant time to get to that point. I would not recommend lein to anyone for this reason alone. It’s fine to use Leiningen or AOT if you are already an expert in their use, but if you’re not, they can break your build out of the blue. I’d much rather spend my time coding than fighting with my build tools.

1 Like

To be fair, most of the issues with AOT aren’t due to Lein but to Clojure’s AOT itself, specifically because it is transitive and always applies to the whole namespace. That makes sense when you’re building an uberjar. That’s why, if you are going to AOT (other than for interop), you’re better off making it the last step, done as part of the uberjar creation.
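
In other words, the compile call only happens right before packaging, into a scratch directory that then goes into the jar. A bare-bones sketch (namespace and paths are placeholders):

    ;; build-script sketch: AOT as the very last step before assembling the uberjar
    (require '[clojure.java.io :as io])

    (.mkdirs (io/file "classes"))        ; *compile-path* must exist and be on the classpath
    (binding [*compile-path* "classes"]
      (compile 'myapp.main))             ; transitively compiles everything myapp.main loads
    ;; ...then bundle "classes", resources, and the deps into the jar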

3 Likes

AOT can also be important when you don’t want to deliver your plaintext Clojure source to a client. Obviously you can decompile class files, but there is more of a barrier there and it isn’t as easy as just opening the jar. I’ve been at companies where the legal advice was not to deliver the source in these cases, to make it clearer that decompiling is not part of the contract agreements.

I do see similar comments have been made, so it’s not too uncommon of a concern.

It’s been discussed here quite a bit already, but I’ll add a few more points.

Leiningen is basically an entire declarative, framework-style build tool. It is based around the unifying concept of the “project” (project.clj) and merging “profiles”. Plugins are written to operate within this framework to perform miscellaneous build tasks.

deps.edn seems to be designed with the more targeted goal of being a classpath-building tool that lets you configure it in various ways via aliases. It manages the dependencies associated with constructing these classpaths [1].
In doing this, it facilitates builds being composed of separate tools, each targeted more specifically at the various kinds of tasks you need to do to build and deploy something.

I agree that deps.edn seems more aligned with many popular clj libs in the ecosystem in that it only attempts to do an isolated part of the overall build, and therefore favors composing the pieces however you want, i.e. using “simpler tools”.
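
Concretely, a deps.edn often ends up being a handful of aliases, each wiring in one narrowly scoped tool (coordinates and the sha below are illustrative only):

    ;; deps.edn sketch – each alias composes one single-purpose tool
    {:paths ["src" "resources"]
     :deps  {org.clojure/clojure {:mvn/version "1.10.1"}}
     :aliases
     {:run  {:main-opts ["-m" "myapp.main"]}
      :test {:extra-paths ["test"]
             :extra-deps  {com.cognitect/test-runner
                           {:git/url "https://github.com/cognitect-labs/test-runner"
                            :sha     "..."}}          ; pin a real commit sha here
             :main-opts   ["-m" "cognitect.test-runner"]}}}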

However, when it comes to build tools there are tradeoffs that I think others here may relate to as being similar to the (old) Maven vs Ant debates in the Java ecosystem.
When you actually configure a production-like build & deploy cycle for a project, you will likely end up either:

  1. composing a lot of tools somewhat ad hoc with deps.edn and friends, or
  2. using lein with a bunch of plugins, all conforming to the same declarative framework approach.

The deps.edn + other tools ad hoc composition could be thought of as more “imperative” in style. It may also be more difficult to apply the same pattern across many projects - which is what a framework specializes in doing.
The lein approach is monolithic, but configuration is defined in a more “declarative” way. This can lead to more patterned reuse, at the expense of some flexibility at times and a need to understand the framework system more upfront.

This Stack Overflow post (along with quite a few others if you search around) provides insight that I believe relates to this topic https://stackoverflow.com/questions/14955597/imperative-vs-declarative-build-systems

That said, you can actually write a lein plugin that uses these “simpler” composable tools, such as deps.edn, figwheel-main, etc. For example, it’s already been done for deps.edn, as described in Combining tools.deps with Leiningen (note: I’m not sure how widely this is actually used).
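
From memory, the plugin configuration looks roughly like this (version and exact keys may have drifted, so treat it as a sketch and check the plugin’s README):

    ;; project.clj excerpt – let Leiningen source its dependencies from deps.edn
    :plugins [[lein-tools-deps "0.4.5"]]
    :middleware [lein-tools-deps.plugin/resolve-dependencies-with-deps-edn]
    :lein-tools-deps/config {:config-files [:install :user :project]}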

Lastly, from personal experience, I’ve experimented a little with the composing-simple-tools path (e.g. deps.edn) vs the monolithic framework (e.g. lein) to try to understand the pros and cons of each.

A few takeaways:

a) Simpler tools can be nice since each part is more isolated and it’s clearer where each step occurs. It can be easier to immediately understand and debug issues.
With Leiningen, you have to understand the framework more when debugging issues. If you understand it well, it becomes easier, but there is more overhead to learn there. Debugging can still be more of a chore since things often happen inside the “framework stack”, which can be difficult to walk through.

b) Within an organization you may end up with many projects that are quite similar in structure and build/deploy process. I’ve found that sharing build tasks is less ad hoc and more repeatable with Leiningen, where you can have a bit more dynamism in the definition of project.clj as well as write/use plugins to handle the individual tasks you want configured.
When composing individual tools, it can be redundant or manual (e.g. using scripts) to communicate common “variables” across the build, such as project name + version + asset locations (web), etc.
When unifying around Leiningen projects and using plugins, the project is the “source of truth” you can use to communicate across all the tasks.

I’ve seen some deps.edn composition setups that ended up using several bash scripts to “glue”/“wire” things together. I see this was already mentioned here too.
This concerns me a bit, given the difficulty of writing portable bash scripts, as well as the need to understand more ad hoc build steps per project in a language like bash (for someone who is more proficient in, or fonder of, bash than me, this may not be a concern).

c) It seems to me that some projects may be better suited to the piecemeal composition approach than to the framework approach, or vice versa. There may be times when the framework is very useful due to its common patterns and shared “glue”/“wiring” infrastructure.
There may be times when the framework is more of a complexity problem than a useful tool - such as when the lack of flexibility becomes too much within a project’s build setup.

Notes:

[1] In addition to this point, it provides its own dependency resolution infrastructure and includes more features than the maven/aether/pomegranate libs used by Leiningen - such as pulling dependencies from git SHAs. This was mentioned in other posts here.

8 Likes

I agree with you. I think, ideally, tools.deps would become the de facto universal Clojure classpath builder, dependency manager, and Clojure application launcher.

And there would be something else which would become the de facto build tool, integrated with tools.deps for all its classpath-building and dependency-pulling needs. lein-tools-deps is one attempt, but as someone who’s poked at the lein internals, there are so many assumptions that lein will manage dependencies and the classpath itself that it’s a bit tricky. It’d be nice to see something more integrated, but which, like lein, offers a more holistic framework and bundles the most common tasks in a way that they all work well together and with tools.deps.

2 Likes