Question about layer dependencies

Hello everybody,

I developed a library for managing dependencies between the layers of an application. The intended usage is something like this:

; customer-repository.clj
(ns app.customer-repository)

(defn get-by-id [id])

(defn save [data])

; define functions to "export"
(def ctx-map {:customer-repository {:get-by-id get-by-id :save save}})

; main.clj - register the repository
(ns app.ctx-register
  (:require [app.customer-repository :as customer-repository]))

(ctx/define-ns :repository customer-repository/ctx-map)

; customer-service.cls
(ns app.customer-service)

(defn get-by-id [id]
	(let [customer-repo (ctx/get-ns :repository :customer-repository)]
		((customer-repo :get-by-id) id)))

; or maybe
(defn get-by-id [id]
	(let [get-by-id (ctx/get-fn :repository :customer-repository :get-by-id)]
		(get-by-id id)))

Also, this ctx has environment control to determine which implementation to use (test, prod, dev). Everything is very simple. With this I intend to manage the dependencies between the layers of the application, e.g. repository → service → view, etc. But I am unsure whether this style of dependency injection is a good idea/good practice in Clojure.


Welcome in! These typos caught my eye and gave me a smile.


I am in doubt if using this concept of dependency injection is a good idea/good practice in clojure.

Yeah, I share these doubts. I feel like the perceived value of dependency injection in other communities stems from a lack of data orientation. In some object-oriented environments, injecting a dependency might be the only way to achieve variation in behavior; in Clojure we have other mechanisms.

this ctx has environment control, to determine which implementation to use (test,prod,dev)

If this is the problem you’re trying to solve, we typically address this with environment variables that influence things like database connection strings so that dev, test, and prod can work on different data.
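For concreteness, a minimal sketch of that approach (the variable name DATABASE_URL and the fallback URL are illustrative conventions, not anything from the original post):

```clojure
;; Read the connection string from the environment, falling back to a
;; local development database when the variable is unset.
(defn db-url []
  (or (System/getenv "DATABASE_URL")
      "jdbc:postgresql://localhost:5432/dev"))
```

Dev, test, and prod then differ only in what the environment provides, not in which code path runs.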

Or perhaps you’re trying to solve a different problem? If so, what is it?


The code was really ugly, but I fixed it! =)

The problem I intend to solve would be solved with dependency injection in Java. With it I can omit some unnecessary information about dependencies in the lower layers of the application. I read here on the forum that, to ensure transparency, it would be good to pass dependencies as function parameters, but I don’t like that idea very much. Imagine that to use a certain service function I need to pass two different repositories: I would have to assemble that structure in the controller, creating unnecessary coupling. I want to solve this problem, in addition to swapping implementations per environment.
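To make the trade-off concrete, here is a sketch (not the poster’s actual code; all names are hypothetical) of the “dependencies as parameters” style being objected to, where the caller has to assemble the dependency map:

```clojure
;; The service takes a map of repository functions instead of looking
;; them up in a global context.
(defn get-customer [{:keys [customer-repo order-repo]} id]
  {:customer ((:get-by-id customer-repo) id)
   :orders   ((:find-by-customer order-repo) id)})

;; The caller (e.g. the controller) must build the map, which is
;; exactly the coupling being complained about:
(def deps {:customer-repo {:get-by-id (fn [id] {:id id :name "Ada"})}
           :order-repo    {:find-by-customer (fn [id] [])}})

(get-customer deps 42)
;; => {:customer {:id 42, :name "Ada"}, :orders []}
```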

I think this isn’t discussed in Clojure circles enough :slight_smile:

DI can be seen as partial function application, or as function invocation passing around a huge map of components (as in the “Component” approach, or less directly in the Pedestal interceptor approach).
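A hedged illustration of the “DI as partial application” view (the names here are made up for the example):

```clojure
;; The "real" function takes its dependency explicitly as the first
;; argument; here the db is just a plain map standing in for a
;; connection.
(defn get-by-id* [db id]
  (get db id))

;; "Inject" the dependency once, at system start, by partial
;; application; the rest of the code sees an ordinary one-arg function.
(def dev-db {1 {:id 1 :name "Ada"}})
(def get-by-id (partial get-by-id* dev-db))

(get-by-id 1)
;; => {:id 1, :name "Ada"}
```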

In Clojure you can mostly avoid this in the pure business logic, with just functions calling other functions. Outside Clojure circles I’ve seen this discussed as “dependency rejection” or “functional core, imperative shell”. Scott Wlaschin also has a brilliant blog series called “Six approaches to dependency injection”.

But I think this really misses out on the opportunities of open extension through polymorphism that clean architecture and other approaches give you. Java’s conventional DI and focus on developing to the interface makes this natural in a way I don’t see enough in Clojure, even though it’s seen as critical to reducing complexity by Rich himself. So I would encourage thinking about layering as you are - particularly if your app is likely to be long lived and pass through many hands :slight_smile:


I will read that article, thanks.

I’ve worked on systems where “flexibility” took the place of organization and separation. An example is when you use a persistence framework like ActiveRecord or GORM and spread database-access logic everywhere, eliminating the persistence layer. This greatly compromises refactoring and significantly increases coupling. Defining cross-tier contracts pays off as time passes and the application grows.


I think Stuart Sierra’s “Clojure in the Large” talk is worth watching on this topic.

I think it’s a bit dated now, and it felt very OOP-ish to me even at the time (and feels a bit more so now) but the concepts of separation and layering are approached well, as I recall.

At work, we use Component for managing start/stop lifecycle with dependencies, but for separating out dev/test/CI/QA/prod we use external configuration files (EDN), and we use Juxt’s Aero library to process those, which lets us combine env vars etc. with on-disk configuration in useful ways.
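A hedged sketch of that Aero setup (the file name, keys, and URLs are illustrative, not our actual config). A config.edn might look like:

```clojure
;; config.edn -- Aero's reader tags combine an env-var override with
;; per-profile defaults:
;;
;;   {:db-url #or [#env DATABASE_URL
;;                 #profile {:dev  "jdbc:postgresql://localhost/dev"
;;                           :prod "jdbc:postgresql://db.internal/prod"}]}

(require '[aero.core :refer [read-config]])

;; Load the config for a given profile, e.g. (load-config :dev)
(defn load-config [profile]
  (read-config "config.edn" {:profile profile}))
```

The same artifact then reads different values per environment without any code changes.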

We build a single artifact for each service in CI, and that artifact is deployed as-is to QA and production (and in theory you can build the same artifact locally and use it for dev/test but we tend to stick with the REPL). Having identical code in dev/test/CI/QA/production is important to us – so configuration needs to be external at runtime.


Thanks for the tip, I’ll watch the suggested topic.

For configuration we are using omniconf. I know there is no absolute right or wrong; my question is whether creating these “factory functions” based on configuration is good practice, or whether it could compromise long-term maintainability.

The vertical decomposition of a program is much more important than the horizontal, and »layers« should present themselves only as languages building on each other at consistent abstraction levels. I think any feeling of injection breaks that.

I cannot really make sense of the code you show, but I get a feeling that it reinvents parts of namespaces and parts of protocols.

For that specific question, I think it’s over-engineered and non-idiomatic for Clojure.


To me this is the correct answer, and very clearly stated. We take a similar (if not identical) approach - there is a single program, which solves the problem at hand, and through configuration it is leveraged appropriately in different environments/contexts.


this ctx has environment control, to determine which implementation to use (test,prod,dev)

If this is the problem you’re trying to solve, we typically address this with environment variables that influence things like database connection strings so that dev, test, and prod can work on different data.

Personally, I would recommend against environment variables for configuration. A number of notable platforms, including Kubernetes, promote the use of environment variables for this, probably because it’s the easiest way of injecting data into a container in a language-neutral fashion. However, it makes the behaviour anything but platform-neutral, which is harder to notice and a somewhat worse problem!

Is an env variable a mapping from an upper-case UTF-8 string ‘KEY’ to an arbitrary UTF-8 string value? No; neither the case nor the character encoding is specified.

Is an env variable a mapping from an arbitrary string key to an arbitrary string value? Yes, but also no; POSIX actually defines this:

These strings have the form name=value; names shall not contain the character '='.

…but it’s up to the individual applications to parse it themselves:

The array is pointed to by the external variable environ, which is defined as:

extern char **environ;

Soooo… Is an env variable just a string? Well, POSIX says so, but that’s assuming applications follow POSIX, and there’s nothing inherently ‘stringy’ about the char in extern char **environ; anyway. Add unusual character encodings, niche string substitution commands or exotic control sequences though, and things start to fall apart quickly. Give me an application that reads environment variables, and I’ll be able to give it garbage data that is literally invisible to the poor programmer trying to work out why it doesn’t work!

In practice, this is not often an issue, as programmers typically follow convention and choose sensible key names and values, and a somewhat POSIX-compliant environment is generally assumed. That said, I don’t think many users of environment variables realise how many opportunities there are for the data to get garbled between, say, their Kubernetes ConfigMap and their application. It only takes one weird design choice or oversight in a shell’s implementation to invisibly mangle your environment variables, and there can be many shells in a seemingly simple container, from a behemoth like GNU Bash to the most naive ‘exec’ function in a language’s standard library.

That’s a long way of explaining why I fully support using ‘contexts’ in application code directly to specify differences in behaviour for development, testing or production systems: they’re easy to instantiate directly in a REPL, and the application can read them from a nicely-defined JSON (or even better, CBOR) file, skipping any chance of interference by a shell :slight_smile:

I’m sorry if you’ve been hurt in the past.

Jokes aside, your points are well taken. In practice we specify the environment configuration in an edn file, override relatively few values from the actual machine environment (which is a standardized docker container w/ predictable OS/shell/etc…), validate the config values on startup, and generally just enjoy our time inhabiting planet earth.


Hello everyone,

Thanks for all the comments. The materials you pointed me to helped me think it through. I noticed that there are several ways to work, but no general consensus (as there is in OOP, for example).

My approach with “function factories” is not the best one when it comes to idiomatic FP for purists.

However, the system I developed using Clojure and these “factories” is a tiny integrator that receives data from one system, processes and cleans it, and sends it to another system. The code consists of validation, transformation, and CRUD. Lots of boilerplate, and tests with a fake API/database.

My conclusion in this case is that my approach is questionable but valid. I don’t see the need to create a superstructure with an embedded DSL to package a bunch of repetitive code. I also don’t see why I should overload all the functions with every dependency they need, since the factory itself is a stateless context. The only state in this context is the set of functions available for consumption according to the environment (test/dev/prod).

I even think that for a larger system this approach is still valid for the boilerplate, alongside other techniques such as DSLs, dependency rejection, etc.

Thank you all for your help!
