Deploying to a production server

From the server’s point of view, a Clojure application is a Java application. It may be helpful to point out that Clojure adds no special concerns in this area, except that some situations call for AOT compilation for faster startup; “lein uberjar” can do that. But if you are not doing something like Kubernetes, you probably do not need AOT compilation.
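For a Leiningen project, turning on AOT for the uberjar is typically just a profile setting. A minimal sketch, assuming a hypothetical project name and main namespace (not from this thread):

```clojure
;; Hypothetical project.clj excerpt; project name and namespace are placeholders.
(defproject example-app "0.1.0"
  :dependencies [[org.clojure/clojure "1.11.1"]]
  :main example-app.core                  ; namespace containing -main
  :profiles {:uberjar {:aot :all}})       ; AOT-compile only when building the uberjar
```

With that in place, lein uberjar produces a -standalone.jar under target/ that starts faster because nothing has to be compiled at boot.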

This is what I do, if a single .jar deployment on a single Linux box is sufficient:

Either manually, or preferably through something like Ansible, I set up the server to have:

  • Java
  • nginx (as a reverse proxy for the app, terminating SSL; a sketch follows below)
  • a database if needed *

(* Nowadays I prefer to use Docker to run the various processes, but this is by no means necessary.)
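The nginx piece might look roughly like this; the domain, certificate paths, and app port below are placeholders, not taken from this thread:

```nginx
# Hypothetical nginx server block terminating SSL and proxying to the jar.
# Domain, certificate paths, and port 3000 are placeholders.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;            # the Clojure app listening on localhost
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```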

With CircleCI I make an uberjar (via lein uberjar, if it’s a lein project).
I scp this jar from CircleCI to the server.
I restart the jar.
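As a rough illustration only (the image, host, paths, and SSH key fingerprint are placeholders, not the poster’s actual setup), a CircleCI config along these lines might look like:

```yaml
# Hypothetical .circleci/config.yml sketch; host, jar path, and key fingerprint are placeholders.
version: 2.1
jobs:
  build-and-deploy:
    docker:
      - image: cimg/clojure:1.11
    steps:
      - checkout
      - run: lein uberjar
      - add_ssh_keys:
          fingerprints:
            - "aa:bb:cc:dd:ee:ff"          # deploy key registered in the CircleCI project
      - run: ssh-keyscan example.com >> ~/.ssh/known_hosts
      - run: scp target/uberjar/app-standalone.jar deploy@example.com:/opt/app/app.jar
      - run: ssh deploy@example.com 'sudo systemctl restart app'
workflows:
  deploy:
    jobs:
      - build-and-deploy:
          filters:
            branches:
              only: master
```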

The jar runs as an Ubuntu service, using systemd for instance.

Environment variables are put in that systemd unit file.
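A minimal systemd unit along these lines (paths, user, and environment variables are placeholders):

```ini
# Hypothetical /etc/systemd/system/app.service sketch; all values are placeholders.
[Unit]
Description=Example Clojure app
After=network.target

[Service]
User=deploy
WorkingDirectory=/opt/app
ExecStart=/usr/bin/java -jar /opt/app/app.jar
Environment=PORT=3000
Environment=DATABASE_URL=postgres://localhost/app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now app; Restart=on-failure also means systemd brings the app back up if it crashes, as recommended later in this thread.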

This way it’s pretty bare bones yet very stable IME.

I urge you to use something like CircleCI to help with the automation; then it’s just a git push to master and it auto-deploys. This is a creature comfort that looks small, but even for solo projects it gives lots of benefits:

  • building does not have to be done on your own PC
  • you can leave your REPL etc. running, and lein uberjar doesn’t screw things up locally
  • you have a well-documented (for yourself as well as others) playbook of what needs to be done to get something up in production.

If you want some example config YAML files, let me know and I’ll dig something up!

Here’s another approach based on systemd.

We build an “uberjar” for each application using depstar (Clojure CLI/deps.edn), with AOT compilation to improve startup speed and the -main namespace specified. Then we run each of those JARs with java -jar, so all we need in production is a JVM.
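As a hedged sketch (the depstar version, jar path, and main namespace are illustrative, not necessarily what this poster uses), such a deps.edn alias looks roughly like:

```clojure
;; Hypothetical deps.edn alias; version, jar path, and namespace are placeholders.
{:aliases
 {:uberjar {:replace-deps {com.github.seancorfield/depstar {:mvn/version "2.1.303"}}
            :exec-fn hf.depstar/uberjar
            :exec-args {:jar "target/app.jar"
                        :aot true
                        :main-class example.core}}}}
```

The JAR is then built with clojure -X:uberjar and run with java -jar target/app.jar.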

Curious: Do you handle SSL in your jar? Or is that pushed out to load balancers or something like fronting it with nginx? (In cases where it’s a webapp or API)

If you do, what is your approach?

SSL is handled on the load balancer for most things. We have some apps just fronted by Apache and SSL is handled in Apache.

In case it’s useful, I just wrote a blog post about how I deploy web apps to production.

I recommend you use systemd.
It will take care of restarting your app when it fails.

Java Service Wrapper might be good too for similar reasons:
https://wrapper.tanukisoftware.com/doc/english/introduction.html

Hi,

I was wondering: is deploying a *.war file to a servlet container a common practice in the Clojure world?

If one wants to do hot deployment, I don’t see an easy way to do it with *.jar deployment without some form of orchestration of at least two jar processes, where one can smartly back the other while the update is being carried out.

(I’m currently considering setting up Tomcat for that purpose, in combination with lein-uberwar. Using a servlet container is also appealing because I would be running on a cheap VPS for my personal projects, and this way I can save resources on memory usage, etc.)

Any kind of comment or feedback is much appreciated.

Kind regards,

It is not common, no. I generally only see folks doing this if they are restricted to WAR files by their ops folks.

Thanks! Is there a common practice on how to do hot deployment/hot swapping in production?

Not speaking from experience here, but two options I’ve heard of are:

  • Dokku, which does the hot swapping for you
  • Blue-green deployment: basically two machines behind a load balancer, and you update them one at a time

If your goal is to save money on hobby projects, do check out the blog post I linked to earlier, because the hosting I describe is free for small projects. I use it to host a half-dozen small projects, and they’re all free because the RAM + CPU usage is low.

I always forget this has a specific name! Yes, this is what we do for all our deployments: We have three instances of nearly all our processes and do rolling deployments across the cluster, spaced five minutes apart, all scripted.

Hi, thank you very much for your feedback. This comment helped out a lot: I was thinking along a similar approach, but I have never done it before and wasn’t very sure (I didn’t know what it was called either). I’ll look more into blue-green deployment.

I am doing some hobby projects right now :smile: will check your blog post out too! Once again thanks!

I asked a similar question a couple years ago and the answers were educational: What's your deployment method?

Back then I was doing war deployment with Immutant, but with its deprecation I’m starting to move to the systemd setup described by others here.

That’s not blue-green; yours is just called a rolling deployment (at least I don’t know of any other name).

Blue-green is when you have two fleets, the blue and the green, and you fully deploy to the one that is not bound to your load balancer; when the whole deployment is done, you switch your LB in one go to point to the other fleet.

Rolling is when you deploy to one host (or some percentage of hosts) at a time.

The advantage of blue-green is that rollback is immediate. The downside is that it’s more costly, since you need two fleets.
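As a concrete (and hypothetical) illustration of the switch-in-one-go part, with nginx standing in for the load balancer and placeholder ports for the two fleets:

```nginx
# Hypothetical sketch of a blue-green switch; ports and file paths are placeholders.
# The active fleet lives in a small included file that the deploy script rewrites
# (e.g. swapping 3001 for 3002), followed by `nginx -s reload`.

# /etc/nginx/conf.d/active_fleet.conf
upstream app_active {
    server 127.0.0.1:3001;   # blue fleet; the deploy script points this at green on cut-over
}

server {
    listen 80;
    location / {
        proxy_pass http://app_active;
    }
}
```

Rolling back is just pointing app_active at the previous fleet again and reloading, which is why rollback is immediate.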

Thanks for the clarification. I guess that’s why I didn’t think our approach had a specific name (other than a “rolling deployment”).

Calling a two-server deployment “blue-green” seems a bit disingenuous though, since it means you only have one server in production and therefore no redundancy. I’d say you need at least four servers to have a real “blue-green” deployment?

It’s supposed to be that you have double of everything. So if you plan to have redundant hosts, you’d have double of them as well.

Any service today with only one prod host does feel a bit weird, so a Blue-Green deployment with two fleets of only one host each seems illogical. Especially since, when you do Blue-Green, it’s because you want zero downtime. If you only have one host in your fleet, like you said, you’re at high risk of downtime if anything happens to that host.

To save money, what people do with cloud providers is this: they’ll deploy to Green, switch the LB over to it, wait a bit to make sure there is no issue and no need to roll back, and then they’ll release Blue, so you don’t pay for hosts that aren’t being used. On the next deployment, they’ll create a new Blue fleet, deploy to it, then switch the LB from Green to Blue, wait a bit, and if there’s no need to roll back they’ll release Green.

Oh, and you can also do a slow switch. Like say you have 5 hosts in Blue and 5 in Green. You can move one host at a time between Blue and Green, if you want a lower impact in case of issues. But the idea is that you don’t want to wait for the time it takes to redeploy in case of a rollback.

Also, Blue-Green can in some ways actually be easier to set up than a rolling deployment.

As for “rolling updates”, many cloud providers can provide them for free. For example, when you use AWS Elastic Beanstalk and configure it to use two or more instances, it will always try to perform upgrades so that your service remains available.
