No problem.
I’d also like to add some details. The reason the producer should validate before sending a payload or storing data to a data-store is that you want to fail fast. Once the reader on the other side receives the payload and fails, you’re already in a state that’s much harder to recover from. Imagine storing a corrupted entity to disk: when you read it back later, you can’t just throw an error, undo the write, go back to the writer, have it catch the exception and recover, etc., because of the distributed nature of things. So you want to fail fast on the producer side, so you don’t corrupt things in ways that are harder to recover from. Ideally your REPL/tests catch it, and worst case you fail as soon as you hit prod and can quickly roll back with no data cleanup involved; otherwise you’ve left the DB with a bunch of broken entities, or your Kafka stream with a bunch of broken messages.
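For concreteness, here’s roughly what that looks like with spec on the producer side. This is just a sketch: `::order`, `send-order!` and `publish!` are made-up names for the example.

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical order entity, for illustration only.
(s/def ::id uuid?)
(s/def ::amount pos-int?)
(s/def ::order (s/keys :req-un [::id ::amount]))

(defn send-order!
  "Validates before handing the payload to the transport layer,
  so a broken entity never leaves this process."
  [publish! order]
  (when-not (s/valid? ::order order)
    ;; Fail fast on the producer side, with the full explanation attached.
    (throw (ex-info "Refusing to send invalid order"
                    {:explain (s/explain-data ::order order)})))
  (publish! order))
```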
Secondly, the reason you want your reader to conform is that conformance is the act of figuring out what kind of data you received and handling it appropriately based on that. Any system will evolve its payloads and entities over time, and in a distributed setting you can’t always force the sender to migrate to your new payload format. Even if they do, there’s always a deployment overlap if you want zero downtime: there will be in-transit messages in the old payload format while the newly deployed code starts sending the new format. So as a consumer of data, you must always be able to process both the old data and the new. That’s what conform lets you do: when you call conform on some data, the result tells you which of the many kinds and versions of data you just received, which lets you branch to the correct logic for it (see the sketch below). It also tells you when the data doesn’t match any of the types you support.
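A minimal sketch of that branching, with hypothetical v1 and v2 payload shapes; `s/or` tags each branch, so conform tells you which one you got:

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical v1 and v2 shapes of the same payload.
(s/def ::name string?)
(s/def ::full-name string?)
(s/def ::payload-v1 (s/keys :req-un [::name]))
(s/def ::payload-v2 (s/keys :req-un [::full-name]))

(s/def ::payload (s/or :v1 ::payload-v1
                       :v2 ::payload-v2))

(defn handle [data]
  (let [conformed (s/conform ::payload data)]
    (if (s/invalid? conformed)
      {:error :unsupported-payload}        ; matched none of the shapes we support
      (let [[version payload] conformed]   ; conform returns [branch-tag value]
        (case version
          :v1 {:handled :v1, :name (:name payload)}
          :v2 {:handled :v2, :name (:full-name payload)})))))

(handle {:full-name "Ada Lovelace"})
;; => {:handled :v2, :name "Ada Lovelace"}
```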
Now, one thing neither validate nor conform does is serialize the data for transport. I consider that orthogonal, and that’s why I don’t like schema libs that also do data conversion. Conform is not the same as convert.
So what you do is produce the payload as Clojure data and validate it with the spec for it. Then you hand that data to a layer that converts it into the transport format, where maybe it gets serialized to JSON. On the consumer side, you first receive that transport-encoded data, deserialize it back to Clojure data, and then conform it.
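As a sketch of that boundary, assuming cheshire for the JSON step and reusing the hypothetical `::payload` spec from above (`->wire` and `<-wire` are made-up names):

```clojure
(require '[cheshire.core :as json]) ; assuming cheshire as the JSON lib

;; Producer side: validate the domain data, then hand it to the transport layer.
(defn ->wire [payload]
  (assert (s/valid? ::payload payload) (s/explain-str ::payload payload))
  (json/generate-string payload))

;; Consumer side: deserialize back to Clojure data first, then conform.
(defn <-wire [wire-str]
  (s/conform ::payload (json/parse-string wire-str true))) ; true = keywordize keys
```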
Now if your transport is EDN-compatible, you don’t need to do much conversion, but the idea is the same: converting to/from the transport format is something I consider beyond the boundary of my app.
I prefer that to something like JSON Schema, because what I want is to work with my own domain representation; JSON is an implementation detail of my transport. With this approach you can support multiple transports and use the same schema validation/conformance on all of them.
And then for conversion you are free to use whatever lib you prefer, or hand-roll it.
It does mean there’s a chance your conversion has a bug that creates a bunch of broken payloads, though, so I tend to heavily test my conversion logic specifically for that (sketched below).
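One way to do that is a round-trip property test, sketched here with clojure.test.check and spec’s generators, again reusing the hypothetical `::payload`, `->wire` and `<-wire` from above:

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.properties :as prop])

;; Round-trip property: anything the spec can generate must still conform
;; after going through encode -> decode.
(def round-trip-prop
  (prop/for-all [payload (s/gen ::payload)]
    (not (s/invalid? (<-wire (->wire payload))))))

(tc/quick-check 100 round-trip-prop)
;; => {:result true, :num-tests 100, ...} when the conversion is lossless
```

This catches the class of bugs where the transport format silently drops or mangles something (e.g. JSON turning keywords into strings) before any of it reaches prod.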