This is my issue with this library having been called “Component”. It is not meant to help you design “components” in any way; it doesn’t represent a way to build abstractions like that. It simply allows you to work with a reloaded workflow. As far as I understand everything from Stuart Sierra, this was the goal of Component and the sole driver for it.
I say that because I think this whole idea of “Component” confused people: “component” is a common term in software architecture, and now we have a confusion of terms. A component in an application should serve to solve a part of the domain problem, so that the combination of components can drive user features. Thus in software architecture a component is not generic to the domain, though your domain can itself be that of providing generic features.
“Domain model” is also confusing, because in DDD it means the combination of data + business logic.
I think in Clojure it is best to use a different mental model. First off, you have data, and you need to define how you will model the data relevant to your domain, and what of it your application should capture for what it intends to do.
I think here we’re all mostly using domain model to refer to data model, and maybe the latter would be more clear.
The data model will be hardest to change, and it couples everything that uses it strongly together. Data dependencies are most important, and I highly recommend data flow diagrams in that sense.
What data do you need to capture, how should you represent it, and where will it come from and go? That’s crucial to figure out. And because you can fail at this at first and can’t predict all future needs, it’s critical that your data modeling tool is flexible and can evolve, which is absolutely Clojure’s greatest strength.
Now it’s possible to have an implicit data model, in that you have no explicit definition of what data you have and how it is structured, no record of where it comes from or goes, etc. That’s especially true in Clojure, unless you use Spec, Records, Schema, Malli, etc. Even then, it doesn’t mean you shouldn’t think about it.
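To make that concrete, here’s a minimal sketch of making a data model explicit with clojure.spec — the disbursement entity and its key names are illustrative, not from any real system:

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical data model for a disbursement; names are illustrative.
(s/def :disbursement/amount number?)
(s/def :disbursement/currency #{:usd :eur})
(s/def :acct/disbursement
  (s/keys :req [:disbursement/amount :disbursement/currency]))

(s/valid? :acct/disbursement
          {:disbursement/amount 100.0
           :disbursement/currency :usd})
;; => true
```

The point isn’t the particular library — Malli or Schema would work just as well — it’s that the model now exists somewhere you can read it.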
Now that you have data, you should focus on the “pure business logic”, which is purely the world of deriving more data from existing data and transforming data. Given some data, I restructure it into another shape. Given some data, I derive more data. For example: given a balance and a disbursement, I add the amount of the disbursement to the balance.
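The balance/disbursement example, written as pure logic — a sketch with hypothetical names:

```clojure
;; Pure business logic: derive new data from existing data.
;; No I/O, no dependencies — just a function of its inputs.
(defn apply-disbursement
  "Given a balance and a disbursement, return the new balance."
  [balance disbursement]
  (+ balance (:disbursement/amount disbursement)))

(apply-disbursement 500.0 {:disbursement/amount 100.0})
;; => 600.0
```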
Finally you come to the “impure business logic”. This is the world of moving data around from one place to another, and having machines do things based on data. All the complexity lies here, though your problems start earlier, in failing to separate this impure business logic from its pure part, or failing to model the data it will leverage properly.
Assuming you’ve succeeded at this split, the impure logic is best modeled as workflows or state machines (the two are sides of the same coin). Now this part is non-trivial, and I think maybe in Clojure we spend so much time teaching people to separate their pure business logic and to model their domain data that we forget to teach anyone how to build these impure workflows/pipelines/state machines.
It is only in this latter part that dependency injection becomes a tool, or that libraries like Component come into play.
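As a sketch of what that can look like (all names hypothetical): the workflow reads at the edges, calls pure logic in the middle, and writes back out, with its dependencies passed in as a plain map — which is exactly where a library like Component would plug in:

```clojure
;; Impure workflow: move data around, delegating the derivation to pure logic.
(defn disburse!
  [{:keys [fetch-balance! save-balance!]} account-id disbursement]
  (let [balance     (fetch-balance! account-id)                       ;; impure: read
        new-balance (+ balance (:disbursement/amount disbursement))]  ;; pure: derive
    (save-balance! account-id new-balance)))                          ;; impure: write

;; At the REPL or in tests, the dependencies can be plain functions over an atom:
(def db (atom {:acct-1 500.0}))
(disburse! {:fetch-balance! #(get @db %)
            :save-balance!  #(swap! db assoc %1 %2)}
           :acct-1
           {:disbursement/amount 100.0})
@db
;; => {:acct-1 600.0}
```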
That is, apart from cross-cutting concerns, another aspect the Clojure community maybe doesn’t educate about enough. How do you log in your pure business logic? How do you monitor its behavior at runtime? Etc.
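One common answer (a sketch, not the only option) is to keep the pure logic loggable by having it return log events as data, and letting the impure shell do the actual logging:

```clojure
;; Pure logic stays pure: it *describes* what to log instead of logging.
(defn apply-disbursement+events
  [balance disbursement]
  (let [amount (:disbursement/amount disbursement)]
    {:result (+ balance amount)
     :events [{:event :disbursement-applied :amount amount}]}))

;; The impure caller logs the events and keeps the result:
(let [{:keys [result events]} (apply-disbursement+events
                                500.0 {:disbursement/amount 100.0})]
  (run! println events)   ;; or hand them to a real logger
  result)
;; => 600.0
```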
Now in my opinion, the impure business logic implementation emerges almost automatically if you’ve successfully modeled your data and separated the pure logic out of it.
But doing so is hard, and what tends to happen is that interleaving pure and impure code breaks the split: future recombination of behavior becomes difficult, coupling appears, code becomes rigid and inflexible, and changes to one part break others in unexpected ways.
Reuse and parameterization are the other challenge. If you have 10 different user commands, but they all share 50% of the same process, do you create a sub-workflow or sub-state-machine for that “shared” piece, and then use it inside the 10 others?
Or maybe instead of having 10 workflows, one for each, you have one workflow with branching behavior based on parameters passed?
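Both options can be sketched in a few lines (every name here is hypothetical, each step a plain function threading a command map through):

```clojure
;; Stub steps; in a real system these would do validation, enrichment, etc.
(defn validate [cmd deps] (assoc cmd :valid? true))
(defn enrich   [cmd deps] (assoc cmd :enriched? true))
(defn handle-a [cmd deps] (assoc cmd :handled :a))
(defn handle-b [cmd deps] (assoc cmd :handled :b))

;; Option A: extract the shared piece as a sub-workflow, reused by each command:
(defn shared-steps [cmd deps]
  (-> cmd (validate deps) (enrich deps)))

(defn command-a [cmd deps]
  (-> cmd (shared-steps deps) (handle-a deps)))

;; Option B: one workflow that branches on a parameter:
(defn handle-command [{:keys [command/type] :as cmd} deps]
  (let [cmd' (shared-steps cmd deps)]
    (case type
      :a (handle-a cmd' deps)
      :b (handle-b cmd' deps))))

(handle-command {:command/type :b} {})
;; => {:command/type :b, :valid? true, :enriched? true, :handled :b}
```

Option A keeps each command readable on its own; Option B keeps the overall flow in one place but grows a branch per variation.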
I don’t think there’s an easy way out here; that’s the challenge, and you need to make judgement calls, each option having its pros and cons. And oftentimes the complexity comes simply from accruing more and more user features.
That said, I’d love to see more discussions around code organization for these. How do people implement those workflows? Pipelines? State machines? In fact, do people prefer workflows over state machines, or vice versa? How do you organize them in the code? How do you pass dependencies between them? How do you interleave pure/impure behavior? How do you handle cross-cutting concerns, errors, retries, rollbacks, transactions, etc.?