I agree with your summary, and think it’s interesting to compare to the spec philosophy. The article you linked summarizes the plumatic/schema approach like this (all emphasis mine):
Spending time writing the boilerplate conversions became exceedingly inefficient, so we decided to do something about it. If schemas have all the information needed to do what we want, then we should be able to solve this problem once and for all and do away with the boilerplate. And that’s what we’re delivering out of the box in Schema 0.2.0: a completely automated, safe way to coerce JSON and query params using just your existing Schemas, with no additional code required.
That approach is driven by an explicit statement of the tool’s philosophy:
Schema’s design goal: enabling a single declarative definition of your data’s shape that drives everything you want to do with your data, without writing a single line of traversal code.
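For concreteness, here’s roughly what that buys you (a minimal sketch; the schema itself is invented, but `schema.coerce/coercer` and `json-coercion-matcher` are, as far as I know, the real entry points):

```clojure
(require '[schema.core :as s]
         '[schema.coerce :as coerce])

;; One declarative definition of the data's shape (invented for illustration)...
(def CommentRequest
  {:id   s/Int
   :tags #{s/Keyword}})

;; ...drives JSON coercion with no hand-written traversal code:
(def parse-comment-request
  (coerce/coercer CommentRequest coerce/json-coercion-matcher))

(parse-comment-request {:id 1 :tags ["news" "fun"]})
;;=> {:id 1, :tags #{:news :fun}}
```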
The thing is, however, that this is explicitly not spec’s design goal. The spec-tools article that this thread is about mentions this:
Transforming specced values is not in scope of clojure.spec, but it should [be].
Correct me if I’m wrong, but it seems from this statement that spec-tools’s creators think that the spec philosophy should be like schema’s: you define your desired data format once and that definition is used for everything.
But the spec team has been vocal that this is not their aim. The clearest statement of their argument against coercion that I’ve found is in a mailing list thread that the OP article links to, where Alex Miller says:
spec-tools combines specs for your desired output with a coercion function, making the spec of the actual data implicit.
Note the difference between describing what the data is and describing what the data should become.
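To make that difference concrete (a sketch; the spec name is invented, and I’m going from memory on spec-tools’s `st/coerce` and `string-transformer`):

```clojure
(require '[clojure.spec.alpha :as s]
         '[spec-tools.core :as st])

;; The spec describes the desired output: an integer.
(s/def ::age int?)

;; The actual data is a string, but no spec anywhere says so;
;; the transformer quietly bridges that gap.
(st/coerce ::age "42" st/string-transformer)
;;=> 42
```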
This next part is me going out on a limb a bit. What helps me make sense of Alex’s statement, and of the spec philosophy in general, is to swap out the name “spec” for the term that comes from its prior art: contracts. A spec is, philosophically speaking, a contract for what the data must be. It’s an agreement between data source and data consumer. If the data does not satisfy the contract, then something is wrong: the agreement has been broken, and you should flag the violation and find out what went wrong. You don’t automatically massage the data into some more desired form, because that makes the boundary of what the data needs to be inherently slippery.
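In code, the contract view looks something like this (a minimal sketch with an invented spec):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::age pos-int?)

;; A string here is a broken agreement, not raw material to massage:
(s/valid? ::age "42")   ;=> false
(s/explain ::age "42")  ;; prints an explanation of why "42" fails the contract
```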
Another aspect that occurs to me is that the problems spec is intended to solve do not include data transformation. It’s easy to see why: Clojure does not lack for efficient and expressive data transformation capabilities. That’s Clojure’s bread and butter; spec doesn’t need to “fix” data transformation. So why bundle that into spec, if doing so weakens or at least distracts from spec’s intended uses?
That’s why I find solutions like Sean Corfield’s approach so compelling:
My recommendation is to have a strictly non-coercive spec for the target data “type” / shape you want, and to have a second spec that combines the coercion you want with that spec. That way you have a way to tell if your uncoerced data conforms to the spec, as well as a way to do coercion in s/conform. They are – and should be – two separate specs and two separate operations. They represent different layers of abstraction inside your application (so “of course” they should be two separate specs, one built on top of the other).
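A minimal sketch of that layering, with invented spec names and my own choice of parsing details:

```clojure
(require '[clojure.spec.alpha :as s])

;; Layer 1: a strictly non-coercive spec for the target shape.
(s/def ::age pos-int?)

;; Layer 2: a spec that combines coercion with the spec above.
(s/def ::coercible-age
  (s/and (s/conformer
           (fn [x]
             (cond
               (pos-int? x) x
               (string? x)  (try (Long/parseLong x)
                                 (catch NumberFormatException _ ::s/invalid))
               :else        ::s/invalid)))
         ::age))

(s/valid? ::age "42")             ;=> false (uncoerced data fails the strict spec)
(s/conform ::coercible-age "42")  ;=> 42    (coercion happens inside s/conform)
```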
(Personally, I see a role for pure-Clojure transformation functions between those layers of specs.) The spec-tools article has this to say about that approach:
Runtime transformations are out of scope of clojure.spec, and the current best practice is to use normal Clojure functions first to transform the data from external formats into correct format and then validate it using the Spec. This is really a bad idea, as the structure of the data needs to be copied from specs into custom transformation functions.
To me, this seems unconvincing in the face of Sean’s point, which is that there is no single “structure of the data”: the data has multiple structures and needs to be treated as such.
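For what it’s worth, under that two-structures view, the “normal Clojure functions” practice the article criticizes comes out looking quite reasonable (a sketch; the wire/domain formats and names are invented):

```clojure
(require '[clojure.spec.alpha :as s])

;; Structure 1: the wire format, where numbers arrive as strings.
(s/def :wire/age string?)
(s/def :wire/user (s/keys :req-un [:wire/age]))

;; Structure 2: the domain format the application actually works with.
(s/def :domain/age pos-int?)
(s/def :domain/user (s/keys :req-un [:domain/age]))

;; An ordinary Clojure function carries data from one structure to the other.
(defn wire->domain [user]
  (update user :age #(Long/parseLong %)))

(s/valid? :wire/user {:age "42"})                  ;=> true
(s/valid? :domain/user (wire->domain {:age "42"})) ;=> true
```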