How I initialize a new web application

I spent the last couple of days creating the first draft of a new web application, initially based on a template (i.e. a copy-paste of another current, working application). The new application will essentially have around four pages and two primary pieces of business logic: users employing the application to reserve seats for their scheduled exams in the lab, and admins adding students/classes/exams and lab hours/capacity so that the former works properly.

The application structure is based on the Luminus of a year ago, with upgrades and tweaks since then. The full process for these two days included the following steps, and I am happy so far, despite an initial “gee, nine hours with just one page to show?”. I’m curious how many of you go through a similar process, and I would love to learn about/explain any differences from what I do.

  • Initializing a new git repo for all the code
  • Upgrading and trimming down the dependencies (this new application is considerably simpler than the one the codebase is copied from)
  • Deleting non-relevant files from the clj/s “template”
  • Finalizing the draft of the SQL schema
  • Creating a properly credentialled postgres db on my machine
  • Creating the clj CRUD namespaces matching each table in the SQL schema (emacsed a quick bash script to auto-create all of these)
  • Migrating (creating) the tables in the database
  • Writing a draft of the complete usecase story tests to describe program operation
  • Refactoring from the copy-paste <— after this point I have a working REPL into my app
  • Debugging re-frame routing issues
  • Establishing bits of the front-end template:
    • Latest version of Font Awesome
    • Learning Bulma hamburger menu in Reagent (more concise and tasty than the burger JS code they suggest)
    • Fairly extensive CSS work (pleasure in Garden)
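The quick auto-creator mentioned in the steps above could just as well be sketched in Clojure itself rather than bash. The table names and the generated per-table API below are invented for illustration; the original used a bash script and a different layout:

```clojure
;; Sketch of a CRUD-namespace auto-creator: one generated file per table.
;; Table names, namespace layout, and the db helper functions referenced
;; in the generated source are all hypothetical.
(ns scaffold
  (:require [clojure.java.io :as io]
            [clojure.string :as str]))

(def tables ["students" "exams" "reservations" "lab_hours"])

(defn crud-ns-source
  "Source text for a CRUD namespace wrapping one table."
  [table]
  (let [ns-part (str/replace table "_" "-")]
    (str "(ns app.db." ns-part "\n"
         "  (:require [app.db.core :as db]))\n\n"
         "(defn create! [row]    (db/insert! :" table " row))\n"
         "(defn fetch   [id]     (db/get-by-id :" table " id))\n"
         "(defn update! [id row] (db/update! :" table " id row))\n"
         "(defn delete! [id]     (db/delete! :" table " id))\n")))

(defn write-crud-namespaces!
  "Write one generated CRUD namespace file per table under dir."
  [dir]
  (doseq [t tables]
    (let [f (io/file dir (str (str/replace t "_" "-") ".clj"))]
      (io/make-parents f)
      (spit f (crud-ns-source t)))))
```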

Still to do:

  • Deciding on the 2nd-party (in-house to my university, but not written by me) web services I will need to access, and obtaining credentials/licenses for them
  • Finalizing my business logic and database schema with the stakeholders
  • Implementing business-logic routes and interfaces
  • Planning production deployment: setting up Wildfly, but probably not Jenkins, since full CI/CD is overkill for this little app

In the other half of my job I use WordPress heavily, so I constantly need to weigh the effort here against the effort/outcome of implementing this there instead of in Clojure. But that’s another thread in the making.

Are there any extra things you do, or questions about what I do? I’d love to compare and learn.


It’s interesting to see it all laid out like this because I suspect many of us don’t think about a lot of those steps!

My workflow would have a REPL up and running much sooner – before I wrote any Clojure code! – and I’d be leveraging the add-lib branch of tools.deps.alpha so I can add dependencies to the running REPL without restarting.
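For anyone who hasn’t seen that flow, it looks roughly like this (the library coordinate is just an example, and you need a REPL started with the add-lib branch of tools.deps.alpha on the classpath):

```clojure
;; Requires a REPL running with the add-lib branch of tools.deps.alpha.
;; The hiccup coordinate below is only an example dependency.
(require '[clojure.tools.deps.alpha.repl :refer [add-lib]])

;; Pull a new dependency into the *running* REPL -- no restart needed.
(add-lib 'hiccup/hiccup {:mvn/version "1.0.5"})

;; The freshly added library is immediately usable:
(require '[hiccup.core :as h])
(h/html [:p "hello"]) ;; => "<p>hello</p>"
```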

I’d skip the CRUD namespaces I expect, and work directly with next.jdbc, and only refactor into separate namespaces later once I had a better sense of the shape of my code, and only then if I felt it was worth it.

I’d probably evolve the DB schema with migrations over time rather than trying to nail it down up front – I might even avoid a DB and just use atoms for data storage until more of the app has evolved, and then develop the schema from that instead (i.e., instead of starting with the DB).

If my code’s on GitHub, I’d leverage Actions for CI, from the get-go I expect.

If it’s a small app (as you say), I’d try to do as little “up front” as possible, and avoid complexity wherever possible (so I wouldn’t use an application server, I’d just use the embedded Jetty server).

I’d want to get a skeleton app up and running early on, without any front end polish, so I could get that in the hands of stakeholders, to iterate the app with them.


I would 100% approach this the way @seancorfield has outlined above, and that’s most of what I have to say about this that’s Clojure specific.

More generally, I see many developers doing things that feel to them like serious software engineering best practices, but which are basically a waste of time and effort for the stage of their project. For example, we can’t “finalize” our database schema at the beginning of our project (or, if we’re honest, ever). Our time in the early stages of a new project should not be spent trying to nail things down with finality, but rather with building the smallest unit of functionality from which we can grow (discover!) what exactly we’re building.


I’ll just add though that the data model, api model, chosen identifiers and the overall architecture are not things that you can cheaply evolve and modify along the way. Changes to these things come at a high cost, and sometimes it isn’t even possible to do so once the app starts being used in production.

I find people dramatically neglect the design phase for these, and that’s often the cause of long-term pain in a project. Those initial choices matter quite a bit. Exploratory programming and prototypes help a lot with figuring those things out, but you have to be careful, for there’s quite a slippery slope between your prototype staying a prototype and its becoming a production system before you’ve had a chance to evolve it into one.

Now, project size, complexity, intended scale, etc. all play into determining how much hammock time is needed.

I completely agree with you regarding the cost of changing APIs, data models, and names more generally once a system is in production. That’s exactly why it’s so important to prototype and live with the prototypes for a while during early development: we want to be as close to certain as possible by the time we ship, and thus concretize, the system.

To put my position another way: we avoid waterfall project management because to say at the beginning that we know exactly what’s going to happen over the entire life of a project is to deny that we will learn anything doing the work, which is almost never true for non-trivial projects. My contention is that this observation ultimately applies to all aspects of building complex software.

That said, a caveat to this is that there’s a form of “assembly line” programming (basically, building the same program over and over for different clients—think of the Rails boom a decade ago) where you can know quite a bit in advance about how long it will take and what the final system will look like. Not coincidentally, that’s also one of the few circumstances where frameworks make a certain kind of sense.

Hello, everyone!

Thanks, @Webdev_Tory, for bringing attention to this topic. I’m really appreciating the answers so far.

I’m going to address this point narrowly:

  • Creating a properly credentialled postgres db on my machine

I was comparing a lot of different approaches to back-end development with Clojure a while back, and I was getting tired of the mess of getting started with a standard Postgres db. This is what I do now if I want to set up a db for the hello project:

$ setup-db-commands
  createcommands DATABASE_NAME
$ setup-db-commands hello_dev
Follow the steps below to create a new database hello_dev and a matching user.

1. Create the user and the database

    $ sudo -u postgres createdb hello_dev
    $ sudo -u postgres createuser hello_dev

2. Set the password for the user

    $ sudo -u postgres psql
    postgres=# \password hello_dev
    Enter new password: b80d52308e0161b5f830487c6f9d16fac0ed7c09

3. Connect with one of the following connection strings:


(Before you ask: I tried having the script run the whole shebang. I ended up preferring the current solution, which just prints to the shell. The user stays in control, and you don’t have to mess with permissions and possibly break something.)

Download the script from this Gist:


Oh, super interesting discussion! :+1:

For my last two web applications I just started out in pure ClojureScript.
The data is modeled as clj data structures in the re-frame model, and the logic lives in cljc for later sharing with the back-end. These are very cheap to change as long as they are just sitting there, and they can easily be pre-filled with mock data. When I was done, I’d design the back-end and the API (usually just a question of the access granularity to the data) and then the persistence layer.
For mobile apps, mobile-first seems to be the way to go, because 80% of the things that work in the browser do not work in React Native, and it sucks to find that out too late.
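A sketch of that front-end-first style: the model is plain Clojure data pre-filled with mock entries, and the logic sits in a cljc namespace so a back-end can share it later. All names and the data shape below are invented (loosely inspired by the habit-tracker example):

```clojure
;; habits.cljc -- shared logic usable from both the cljs front-end and
;; a clj back-end. Names and data shapes are invented for illustration.
(def mock-db
  {:habits [{:id 1 :name "Exercise" :done-dates #{"2020-06-01"}}
            {:id 2 :name "Read"     :done-dates #{}}]})

(defn done-today?
  "Was this habit marked done on the given date string?"
  [habit today]
  (contains? (:done-dates habit) today))

(defn mark-done
  "Pure update: return db with the given habit marked done today."
  [db habit-id today]
  (update db :habits
          (fn [habits]
            (mapv #(if (= (:id %) habit-id)
                     (update % :done-dates conj today)
                     %)
                  habits))))
```

Because `mark-done` is a pure function over plain data, it slots into a re-frame event handler on the front end and, later, into a back-end route, unchanged.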

I did try back-end-first before, but I found it really tedious to have to go and change everything: persistence, back-end, front-end and API.

This does not mean that I would not subscribe to hammock-driven development. It’s just that I would start where the hardest problems are, the ones that force the most changes to the underlying model, and that is (for me) usually in the actual behavior of the thing I write as the user experiences it. This is of course not true for all problem domains. Mine were a browser game and a habit tracker for mobile (+ web).


Thanks everyone for the excellent discussion! I’d like to particularly address the DB part. As has been pointed out, this can be a tricky area because of changing needs and data models. This is very evident in my field, where I support professors: the data model is expected to shift as research progresses, and there is rarely a clear line between “dev” and “production”. For this reason we code into our DB schema the certainties (ids, creation dates, and generally which tables exist, which I have not found to change much) and then make heavy use of Postgres JSON fields. Every table has a JSON field that can easily be adjusted as the data model evolves. There are problems with this, particularly if changes are needed after the app has been released and data has been collected, but it allows for the needed flexibility during development.

An example of how the actual tables can probably be made concrete earlier (which might also help the researcher/stakeholder): suppose the project is about episodes of TV series; then we end up with a table for episodes and a table for series. Each table has a JSON field that is used as details expand: rather than altering tables by adding new columns, we add them to the JSON field. Later, perhaps the research becomes interested in the characters of a show; now we can either add to the JSON spec of a series, or extend with a migration that adds a new characters table and a linking table. The JSON field of the character table can be expected to change as our interests expand: do we care about a character’s gender? Actor? Life events? Migrations then make it possible to keep snapshots of your data over time and even to see the evolution of the data model, though more loosely for the JSON data.
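Sketched in next.jdbc terms, that certainties-plus-jsonb pattern might look like the following. The table layout follows the episodes example; `ds` is assumed to be an already-configured datasource, and the attribute names are invented:

```clojure
;; Fixed columns for the certainties (id, foreign key, created_at),
;; a jsonb column for everything still in flux. `ds` is assumed to be
;; a next.jdbc datasource; attribute names are invented.
(require '[next.jdbc :as jdbc])

(jdbc/execute! ds ["
  create table episodes (
    id         serial primary key,
    series_id  integer not null,
    created_at timestamptz not null default now(),
    details    jsonb not null default '{}'
  )"])

;; New attributes go into the jsonb field instead of an alter-table:
(jdbc/execute! ds
  ["update episodes
    set details = details || '{\"director\": \"Jane Doe\"}'::jsonb
    where id = ?" 1])

;; ...and can still be queried with the jsonb operators:
(jdbc/execute! ds
  ["select id from episodes where details ->> 'director' = ?" "Jane Doe"])
```

The trade-off is exactly as described: no migrations for attribute churn, at the cost of weaker typing and constraints on whatever lives inside `details`.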
