The domain logic is modeled hierarchically, with entities, value objects and aggregate roots all being maps.
I modeled the datastore in my example relationally, so I imagined someone would use a SQL database. You can see it here: clj-ddd-example/src/clj_ddd_example/repository.clj at e99aa7b7013529e669ce06d400765ddd10b7214f · didibus/clj-ddd-example · GitHub
I’m pretending that you’d have two tables:
account-table with columns: account-number, account-balance-value, account-balance-currency
and
transfer-table with columns: transfer-id, transfer-number, debited-account-number, credited-account-number, transfered-amount-value, transfered-amount-currency, creation-date
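As a concrete sketch, those two tables might look like this as DDL run through next.jdbc (the column types and the `create-tables!` helper are my assumptions; only the column names come from the description above):

```clojure
(require '[next.jdbc :as jdbc])

;; Hypothetical schema for the two tables; types are guesses,
;; the example repo doesn't ship an actual schema.
(defn create-tables! [ds]
  (jdbc/execute! ds ["CREATE TABLE account (
                        account_number           VARCHAR(32) PRIMARY KEY,
                        account_balance_value    DECIMAL(19,4) NOT NULL,
                        account_balance_currency CHAR(3) NOT NULL)"])
  (jdbc/execute! ds ["CREATE TABLE transfer (
                        transfer_id                BIGINT AUTO_INCREMENT PRIMARY KEY,
                        transfer_number            VARCHAR(36) NOT NULL,
                        debited_account_number     VARCHAR(32) NOT NULL,
                        credited_account_number    VARCHAR(32) NOT NULL,
                        transfered_amount_value    DECIMAL(19,4) NOT NULL,
                        transfered_amount_currency CHAR(3) NOT NULL,
                        creation_date              TIMESTAMP NOT NULL)"]))
```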
So like I said, the database would be a SQL database, say MySQL. In DDD you often also apply CQRS, or a simplified form of it: basically, your reads and writes are separated.
In a simple form, you have one SQL database with the above two tables I mentioned. The repository gets an account from the account table, and it commits a transfer by updating the account table and inserting to the transfer table.
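A minimal sketch of that repository, using next.jdbc (the function names, key names, and the shape of the transfer map are my assumptions, not the repo's actual code):

```clojure
(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

(defn get-account
  "Reads one account row and rebuilds a domain map from it."
  [ds account-number]
  (let [row (sql/get-by-id ds :account account-number :account_number {})]
    {:number  (:account/account_number row)
     :balance {:value    (:account/account_balance_value row)
               :currency (:account/account_balance_currency row)}}))

(defn commit-transfer!
  "Atomically updates both account rows and inserts the transfer row.
  `transfer` is expected to be a map keyed by the transfer table's columns."
  [ds transfer debited-account credited-account]
  (jdbc/with-transaction [tx ds]
    (sql/update! tx :account
                 {:account_balance_value (-> debited-account :balance :value)}
                 {:account_number (:number debited-account)})
    (sql/update! tx :account
                 {:account_balance_value (-> credited-account :balance :value)}
                 {:account_number (:number credited-account)})
    (sql/insert! tx :transfer transfer)))
```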
You’ll see that I recently updated my example to show a more realistic design that handles the concurrent transfers better.
If your database is MySQL, the way I have it now in the application code is eventually consistent. When you get an account, you could read a stale balance, because the transfers are not synchronized. But the domain model for debiting and crediting an account returns an event that describes the change, not the new state of the account.
That means that when the application service receives that event back from the domain model, it uses the repository to commit the change, and at that point the repository takes care of atomically applying the change to both accounts and inserting the transfer.
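Because the event carries deltas rather than new states, the repository can apply it as a relative update, which is what makes the stale read tolerable. A sketch of that commit, assuming a hypothetical event shape (the keys are my guesses):

```clojure
(require '[next.jdbc :as jdbc])

;; Sketch: commit a transfered-money event by applying its deltas,
;; rather than overwriting balances with a possibly stale new state.
(defn commit-transfer-event!
  [ds {:keys [number debited-account-number credited-account-number amount]}]
  (jdbc/with-transaction [tx ds]
    (jdbc/execute! tx ["UPDATE account
                          SET account_balance_value = account_balance_value - ?
                        WHERE account_number = ?"
                       (:value amount) debited-account-number])
    (jdbc/execute! tx ["UPDATE account
                          SET account_balance_value = account_balance_value + ?
                        WHERE account_number = ?"
                       (:value amount) credited-account-number])
    (jdbc/execute! tx ["INSERT INTO transfer
                          (transfer_number, debited_account_number,
                           credited_account_number, transfered_amount_value,
                           transfered_amount_currency, creation_date)
                        VALUES (?, ?, ?, ?, ?, NOW())"
                       number debited-account-number credited-account-number
                       (:value amount) (:currency amount)])))
```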
This is eventually consistent, but during the “inconsistency window” it can allow some debits below $0, resulting in possible negative balances for some users.
We assume the domain is fine with that, by say supporting overdraft fees for example.
Now if using only a MySQL database, you could also simply, in the application service, wrap the whole thing in a transaction and take a FOR UPDATE lock on the account rows when you get the accounts, then commit the transaction in the commit-transfer at the end. This would be strongly consistent, using a pessimistic locking strategy. Since it’s all one database, MySQL handles this with ordinary row-level locking inside the transaction.
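A sketch of that pessimistic variant (function and column names are assumptions; in the real design the balance computation would be done by the pure domain model, inlined here for brevity):

```clojure
(require '[next.jdbc :as jdbc])

;; One transaction per command; the reads take FOR UPDATE row locks.
(defn transfer-money! [ds from-number to-number amount]
  (jdbc/with-transaction [tx ds]
    (let [select-locked
          (fn [n]
            (jdbc/execute-one!
              tx ["SELECT * FROM account WHERE account_number = ? FOR UPDATE" n]))
          ;; Lock rows in sorted account-number order, so two transfers
          ;; going in opposite directions can't deadlock each other.
          locked (into {}
                       (map (juxt :account/account_number identity))
                       (mapv select-locked (sort [from-number to-number])))
          from   (locked from-number)
          to     (locked to-number)]
      ;; With the rows locked, the domain model can safely return plain
      ;; new states, and we just overwrite them.
      (jdbc/execute! tx ["UPDATE account SET account_balance_value = ?
                          WHERE account_number = ?"
                         (- (:account/account_balance_value from) amount)
                         from-number])
      (jdbc/execute! tx ["UPDATE account SET account_balance_value = ?
                          WHERE account_number = ?"
                         (+ (:account/account_balance_value to) amount)
                         to-number]))))
```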
If you did this, you wouldn’t need to model domain changes as change events. You could just have the domain, like in my prior example, return the new state of the entities and aggregates, and the repository just overwrites the existing state with the new state. That is a simpler design in general, and it is well suited for backing a user frontend: because it is strongly consistent, the user doesn’t see what can appear as weird glitches while the data becomes eventually consistent.
You could also choose a strongly consistent but optimistic locking approach. This would make more sense if you use a NoSQL database like, say, DynamoDB, which doesn’t always support locks across documents.
In that scheme, you’d get the accounts along with a version number for each. You’d make the changes, and when you commit the transfer in the repository, it performs an atomic “compare-and-set”, which basically goes: update only if the version still matches what we read, otherwise fail. When it fails, you’d retry the entire application service logic, back from the get-accounts calls, until you finally succeed.
Here too, you wouldn’t need to model domain changes as change events, and it is sufficient to simply have domain changes return the new state of the entity/aggregate. The difference is depending on your concurency patterns it could avoid unnecessary locks and be faster, but if you have a high concurency it could actually be slower as well.
There’s another option for strong consistency: you can take a distributed lock outside your database, which locks the application service command itself. If you’re running a single instance, you’d just wrap the whole application service in a (locking ...), but in a multi-instance setup you’d need a distributed variant. You can be smart here and lock on the account-numbers so it’s only per-account locks.
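For the single-instance case, per-account locking could be sketched like this (the helper names are made up; a multi-instance setup would swap the lock objects for a distributed lock like one built on Redis or ZooKeeper):

```clojure
;; One lock object per account number, interned on first use.
(def locks (atom {}))

(defn lock-for [account-number]
  (-> (swap! locks update account-number #(or % (Object.)))
      (get account-number)))

(defn with-account-locks
  "Runs f while holding the locks for the two accounts of a transfer.
  Locks are always taken in sorted order so two transfers going in
  opposite directions can't deadlock."
  [account-numbers f]
  (let [[l1 l2] (map lock-for (sort account-numbers))]
    (locking l1
      (locking l2
        (f)))))

;; Usage sketch, wrapping a hypothetical application service command:
;; (with-account-locks ["acc-1" "acc-2"]
;;   #(transfer-money-command "acc-1" "acc-2" 30M))
```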
Finally, the eventually consistent approach I have now in the example is also a great fit for Kafka or other distributed streams like AWS Kinesis. You can still have MySQL as your source-of-truth persistent database, but you’d front it with Kafka. The change events wouldn’t get immediately committed by the application service as it gets them from the domain model; instead, the application service would publish the events to Kafka, and a separate consumer on Kafka would handle the messages by committing them to your MySQL database.
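The publish side could look roughly like this with the plain Java Kafka producer client (topic name, event shape, and serialization are all assumptions for the sketch):

```clojure
(import '(org.apache.kafka.clients.producer KafkaProducer ProducerRecord))

;; Sketch: instead of committing the change event, the application
;; service publishes it. Keying by transfer number keeps each
;; transfer's events on one partition; a separate consumer reads the
;; topic and calls the repository to commit each event to MySQL.
(defn publish-transfer-event! [^KafkaProducer producer event]
  (.send producer
         (ProducerRecord. "transfer-events"
                          (:transfer-number event)
                          (pr-str event))))
```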
You can, on top of that solution, implement a distributed lock like I said for cases where you’d want strong consistency, though in my opinion that kind of defeats the purpose: this approach is good for scaling out your writes, because the caller isn’t blocked waiting for the writes; they are made async and buffered in Kafka.
There might be more, but I guess the point is that it really works with many different approaches. Since your domain model and services are all pure, you can adapt around them to have the state stored wherever and however you want. That doesn’t mean you can easily swap one storage for another: your application service is still quite coupled to the details of your repository, which in turn is coupled to the storage solution it is designed for. And even your domain model is slightly coupled, in at least choosing whether it needs to return change events or new states, and for optimistic locking it might also need versions as fields on the entities, though that could possibly be handled as metadata in the application service itself.
One more thing, like I said: DDD separates reads from writes. So this domain model isn’t meant for queries and read-only use cases. It is meant for update or insert use cases.
For queries and read-only use cases, my example doesn’t show it; maybe I’ll add it later to be more complete. But basically you’d have something called a Finder, and that would just run queries on your database however best works with your database, and it can return the query results in whatever view structure best works for your query use case; it wouldn’t return things in the structure defined by the domain model. So this Finder can be a GraphQL-based API, or it could just have a bunch of functions for various queries like: (find-transfers-over 200 :usd)
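A sketch of such a Finder function (here I thread the datasource explicitly, and the column names match the hypothetical schema above; the result rows are shaped by the query, not by the domain model):

```clojure
(require '[next.jdbc :as jdbc])

(defn find-transfers-over [ds amount currency]
  (jdbc/execute!
    ds ["SELECT transfer_number, debited_account_number,
                credited_account_number, transfered_amount_value
           FROM transfer
          WHERE transfered_amount_value > ?
            AND transfered_amount_currency = ?"
        amount (name currency)]))

;; e.g. (find-transfers-over ds 200 :usd)
```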
In this design, your queries can run on the same database as your writes, but they don’t have to. You can replicate your data to other datastores better suited for querying. Even in the strongly consistent update and insert cases, this can work: your queries could still be eventually consistent, since the read-replica datastore might be behind, but when you go to make an update, you wouldn’t use the query data to compute the update; you’d go back to your repository and call get again for all the entities you intend to change, and change those, so that path can be strongly consistent again.
Hope that answered your question.