How do you manage re-deployments with jar-based deployment?

Deploying your Clojure app is not hard: you can run a jar file with a port exposed, slap a war into a Wildfly/Tomcat/Glassfish server, run it from the REPL on the server with an exposed port, stick a ClojureScript app in a regular HTTP directory, and so on. My question is, for those doing uberjar deployments (I assume run with java -jar): how do you orchestrate your re-deployments? In particular, how do you stop the running version in order to start the new version? Do you somehow keep track of your process ID and kill it? Something more sophisticated?

This is the reason I’ve used Wildfly so happily in the past, but now that Immutant is long-deprecated and incompatible with Java 11+, I’m considering moving to something shell-script-based.

There are lots of options for turning a JAR-based process into a daemon service that can be monitored and managed with standard system tools. We started off with our own daemon.sh script, symlinked into /etc/init.d for each named service; it tracks the PID in a file, so all the standard service foo status, service foo start, service foo stop commands just work. As we've changed how each process is run over the years, we've simply updated daemon.sh, so the rest of the system is insulated from how we actually package and run a process.
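To make the PID-file idea concrete, here is a minimal sketch of what such a daemon.sh-style wrapper could look like. This is not the actual script: the names, paths, and the harmless demo command at the bottom are all illustrative assumptions, and a real init.d script would also dispatch on "$1" (start|stop|status|restart).

```shell
#!/bin/sh
# Sketch of a daemon.sh-style wrapper (illustrative, not the real script).
# A real version would be symlinked into /etc/init.d and dispatch on "$1".

start() {
    if status >/dev/null 2>&1; then
        echo "$APP_NAME already running"
        return 1
    fi
    # Detach the process and record its PID so stop/status can find it later.
    nohup $DAEMON_CMD >/dev/null 2>&1 &
    echo $! > "$PID_FILE"
    echo "$APP_NAME started (pid $(cat "$PID_FILE"))"
}

stop() {
    if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
        kill "$(cat "$PID_FILE")"
        rm -f "$PID_FILE"
        echo "$APP_NAME stopped"
    else
        echo "$APP_NAME not running"
        return 1
    fi
}

status() {
    # kill -0 sends no signal; it only checks that the PID still exists.
    if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
        echo "$APP_NAME running (pid $(cat "$PID_FILE"))"
    else
        echo "$APP_NAME stopped"
        return 1
    fi
}

# Demo with a harmless command; a real service would use something like
# DAEMON_CMD="java -jar /opt/apps/foo/foo.jar".
APP_NAME=demo
PID_FILE="$(mktemp)"
DAEMON_CMD="sleep 30"
start
status
stop
```

The point of the indirection is exactly what's described above: the service interface stays stable while the DAEMON_CMD behind it can change from a war container to java -jar to anything else.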

When we deploy new JAR files, an auto-deploy script does service foo stop, copies the new JAR to the expected location, and does service foo start. That's staggered across the cluster, and every process exposes a standard URL that the load balancer probes frequently to establish health, so requests are routed to another server while one is restarting. A restart loses in-flight requests and can deny a few more before the load balancer moves traffic, but since almost all our traffic is between our SPA and our REST API, and the SPA knows how to handle such failures, it's pretty much invisible to our users. We could make it more robust by interacting with the load balancer to drain requests before restarting the processes, but that hasn't been a priority so far.
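A staggered rollout like that could be sketched roughly as below. Everything here is an assumption for illustration — the hostnames, paths, health URL, and the DRY_RUN switch are made up, not the actual script — but the shape is the same: stop, swap the JAR, start, then wait for the same health endpoint the load balancer probes before touching the next node.

```shell
#!/bin/sh
# Rough sketch of a staggered auto-deploy; hosts, paths, the health URL,
# and the DRY_RUN mechanism are all illustrative assumptions.

SERVICE=foo
NEW_JAR=target/foo.jar
DEPLOY_PATH=/opt/apps/foo/foo.jar
HEALTH_URL=http://localhost:8080/health
HOSTS="app1 app2 app3"

# With DRY_RUN=1 the commands are printed instead of executed.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

deploy_host() {
    host="$1"
    # Copy the new JAR alongside the old one, then swap it in while
    # the service is stopped.
    run scp "$NEW_JAR" "$host:$DEPLOY_PATH.new"
    run ssh "$host" "service $SERVICE stop && mv $DEPLOY_PATH.new $DEPLOY_PATH && service $SERVICE start"
    # Block until the health probe passes again, so at most one node
    # is ever out of rotation.
    until run ssh "$host" "curl -fsS $HEALTH_URL >/dev/null"; do
        sleep 2
    done
}

DRY_RUN=1   # demo mode: just print what would happen
for host in $HOSTS; do
    deploy_host "$host"
done
```

Waiting on the health URL between hosts is what keeps the rollout "staggered"; adding a drain step against the load balancer before the stop would be the more robust variant mentioned above.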