What's your deployment method?

I’m very curious what deployment methods folks are using for their applications. I think it would be great to get a flavor of the options being used. Please include the general type of your application (e.g., web application, Java component, JS library, etc.). I’ll start in the comments.

On our CI/QA server, we build an uberjar for each process – using my fork of depstar – and push the JARs to the appropriate servers (using an SFTP process written in Clojure, run via clojure/deps.edn). Each server has an “auto-deploy” shell script that looks for updated JARs that are relevant to that server and shuts down the current service, updates the JAR file, and restarts the service.
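Roughly, each server’s auto-deploy script does something like the following (a minimal sketch only; the paths, service name, and systemd usage are illustrative assumptions, not our actual script):

#!/bin/bash
# Sketch of an auto-deploy check; paths, service name, and the use
# of systemd are assumptions for illustration.
STAGED=/opt/deploy/incoming/myapp.jar   # where the SFTP push lands
LIVE=/opt/myapp/myapp.jar               # the JAR the service runs

if [ "$STAGED" -nt "$LIVE" ]; then      # a newer JAR has arrived
  systemctl stop myapp                  # shut down the current service
  cp "$STAGED" "$LIVE"                  # swap in the new JAR
  systemctl start myapp                 # restart the service
fi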

We use this model for command-line / cron-based processes too (except there’s no service to stop/start).

We have a mix of web apps, pure REST API apps, and background (command-line) processes. Our web apps and APIs are load-balanced so when a service is shut down, requests just route to another server, and the auto-deploy process is staggered across the services to produce a “rolling restart” effect.

I’m a web developer, deploying full-stack web applications. I’ve used the java -jar method with helper shell scripts to upload and run the deployment, and have also looked into (but not applied) a Docker-based deployment method. My current deployment method of choice is to generate an Immutant uberwar and simply place it in a Wildfly standalone deployment directory, letting Wildfly handle the rest. For this I have a local shell script that runs lein immutant war and then scp’s the output to the target destination.

Since we use clojure/deps.edn for everything and depstar for uberjar building, we start our processes with java -cp path/to/the.jar clojure.main -m entry.point and we use the embedded Jetty server that is “standard” with Ring. We’ve never seen the need for an app server.
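For the curious, a launch script for one of these processes is little more than this (a sketch; the JAR path, main namespace, and JVM options are placeholders, not our real values):

#!/bin/bash
# Sketch of a service start script; JAR path, entry namespace, and
# heap size are placeholder assumptions.
exec java -Xmx512m \
     -cp /opt/apps/the.jar \
     clojure.main -m entry.point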

I’m curious as to what benefits you get from using WAR-based deployment and Immutant/Wildfly?

I started as a webdev who was forced to become an amateur devops practitioner by the nature of my edu workspace (this allowed me to justify using Clojure from the start), where I work with varied, unrelated applications on shared servers. Wildfly + Immutant has been terrific because, out of the box, it gave me a process almost as simple as the Django and PHP setups elsewhere in use in my operations, while also giving me some uptime guarantees, because Wildfly automatically performs A/B swapping when I drop a new version of the war into the directory. Of course, being a descendant of JBoss, Wildfly has all kinds of bells and whistles I haven’t yet mastered; from the beginning, though, I chose it because it was so easy and provided such uptime guarantees for my shared servers (where I currently host around a dozen totally different apps).

Historically we have been using uberwars deployed to Jetty/Tomcat. Since this has also been a learning process for me, I have experimented with the combination of a VPS + uberjar for some of our APIs, using Ansible for deployment. Neither of those has been perfect for us, though: the first was very brittle because of differing build environments and issues with the servlet containers across many redeploys; the second is a very complicated setup and requires all our servers to be directly connected to the internet.

We are in the process of switching fully to a Docker-based setup, where each Docker image contains a copy of the uncompiled code, which is run using the clojure CLI / tools.deps. The images are deployed to a private registry, from which the individual API servers get their images. The redeployment process has not yet been automated, so I don’t know whether that will end up being something triggered by our CI or something else.
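In shell terms, the flow we’re aiming for looks roughly like this (a sketch only; the registry host, image name, tag, and main namespace are made-up placeholders):

#!/bin/bash
# Hypothetical build/publish side (CI): the image contains the
# uncompiled source plus deps.edn; all names are illustrative.
docker build -t registry.internal:5000/myapi:abc123 .
docker push registry.internal:5000/myapi:abc123

# Hypothetical deploy side (API server): pull the image and run the
# code straight from source with the clojure CLI / tools.deps.
docker pull registry.internal:5000/myapi:abc123
docker run -d -p 8080:8080 registry.internal:5000/myapi:abc123 \
       clojure -M -m myapi.main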

The docker+tools.deps process sidesteps the issues with building uber{jar,war} files and removes the need for a complicated Ansible provisioning process for the servers and deployments, while the images are still immutable snapshots of the code. It also makes it easier for us to have a server setup with a load balancer on a public IP as the single point of entry to a private network of Docker containers.

We separate building from deploying; when we build (with Jenkins, and some help from my own https://github.com/l3nz/say-cheez to capture git revision and build numbers), we create an uberjar that is uploaded to S3.

Then we do a manual deployment through Ansible, referencing our artifact by name; it either goes to a small production canary set or is deployed rolling-style to production.
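As a rough sketch of the two halves (the bucket, playbook, and variable names are made-up placeholders, not our actual setup):

#!/bin/bash
# Build half (Jenkins): upload the versioned uberjar to S3.
aws s3 cp target/myapp-standalone.jar \
    s3://my-artifacts/myapp/myapp-$BUILD_NUMBER.jar

# Deploy half (manual): hand the artifact name to an Ansible playbook
# that targets either the canary group or the full production roll.
ansible-playbook deploy.yml \
    -e artifact=myapp-$BUILD_NUMBER.jar \
    -e target_group=canary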

I’m currently deploying cljdoc by downloading a zip file from S3 and restarting a systemd service that picks the version (~zip file) to use based on a configuration file. There is one script to update this file and another one to download and run the specified version. This is what I run when deploying:

./ops/deploy.sh $SHA
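Under the hood that boils down to something like this (a sketch of the shape described above, not the actual cljdoc ops script; the file locations and unit name are assumptions):

#!/bin/bash
# Hypothetical shape of ./ops/deploy.sh; paths and names are assumptions.
SHA=$1

# 1. Record the desired version in the config file the service reads.
echo "$SHA" > /etc/cljdoc/version

# 2. Restart the service; on startup it downloads the zip for the
#    configured SHA from S3 and runs that version.
systemctl restart cljdoc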

Unfortunately this restart procedure means that there’s maybe 30-40 seconds of downtime every time I deploy. @Webdev_Tory For that reason I’d be very interested in hearing more about your WildFly setup: it sounds like WildFly only swaps the service once it has started?

Wildfly swapping the service once it has started is the goal, yep. Lately I’ve had a couple of problems just because the database backend isn’t prepared for that swap and I haven’t written a graceful fallback into my software, so I actually get a couple of minutes of downtime until the system can reach the database. That’s on me, though, for not writing it robustly enough (and probably needing to tweak my DB server a bit).

After installing Wildfly with something like this (which is what I used when I set it up a few years back), I end up with a directory structure like this:

torysa@humpre:/srv/wildfly/standalone/deployments$ ls -R
.:
fttv  fttvstaging  funding  htmlvalidator  humgrants  isp  maladroit  pinfo  README.txt  scraper

./fttv:
fttv.war  fttv.war.deployed

./fttvstaging:
fttvstaging.war  fttvstaging.war.deployed

./funding:
funding.war  funding.war.deployed

./htmlvalidator:
htmlvalidator.war  htmlvalidator.war.deployed

./humgrants:
humgrants.war  humgrants.war.deployed

./isp:
isp.war  isp.war.deployed

./maladroit:
maladroit.war  maladroit.war.deployed

./pinfo:
pinfo.war  pinfo.war.deployed

./scraper:
scraper.war  scraper.war.deployed

Wildfly by default deploys them to, e.g., localhost:8080/scraper, and I front the apps with a reverse proxy from a regular web server (the example above is from an Nginx server, but we usually use Apache) to end up with the desired http://scraper.myurl.com. There’s a little bit of hassle involved with internal routing behind a reverse proxy, but it’s easy enough to work around. For deployment I have a local script that looks like this (using ssh config to make remote work easier):

#!/bin/bash
### scraper/publish.sh (the shebang must be the first line)
lein clean
lein immutant war
scp target/scraper.war humpre:/srv/wildfly/standalone/deployments/scraper/

I write my code, run ./publish.sh from the shell, and deployment is done.

Sean, mind sharing which load balancer you’re using in this setup? Does the proxy detect that a service is down, or does the deploy trigger the changes?

We have a number of layers. CloudFront (CDN/caching), F5 BigIP (load balancing), Apache (rewrites/proxies), Jetty/http-kit/Clojure. If an instance goes down, F5 routes traffic to other members of the cluster until the instance comes back.

For my freelance work, I use Ansible to provision hosts and Jenkins to build all-in-one .jars from specific branches and push them to the target host. There, everything is done with Docker via docker-compose, which takes care of databases, monitoring, and so on. I use an openjdk base image to run the jars. Ansible, Jenkins, and Docker are a pretty strong combination with the right setup, as you can have a lot of the infrastructure as code.
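The redeploy step for a host then amounts to something like this (a sketch; the host alias, paths, and the compose service name are placeholders):

#!/bin/bash
# Hypothetical redeploy; Jenkins has already built the all-in-one jar.
# The host alias, paths, and service name "app" are assumptions.
scp target/myapp-standalone.jar deploy-host:/srv/myapp/app.jar
ssh deploy-host 'cd /srv/myapp && docker-compose up -d --force-recreate app'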

Ideally, I’d like to utilize a container/image registry and do things a bit more properly, but at the moment it’s just not feasible.

I think I do much the same as other people. My dev and prod environments run on Linux: using SaltStack, containers, Kubernetes, SSH, and/or some scripts that pull GitLab/GitHub repos.
