Type hinting in defrecord?

Is it possible in this case that, since the entity maps are clearly spec’ed out, we will not even need to create the defrecord?

Yeah, that’s the primary idea. The default recommendation from the Clojure core team is to leverage maps, fully-qualified keywords, and specs to handle your information model. You only move to records if you need their features (typically performance, custom protocol implementations, or a combination of both).

There are some potential pitfalls around serialization and differing record implementations (e.g. if the class changed at some point, the serialized object may not coerce back, unless you serialized it in a Clojure form that uses the reader literal). The other (for me more common) pitfall shows up at dev time, when we may be messing around in the REPL a lot: if we have some state defined that is a record, and we find a problem with, say, a protocol implementation on the record definition, we might go fix the implementation and reload the defrecord form. That creates a new underlying type for the record that is technically not compatible with any instances of the earlier record retained in memory (example from Ben Mabey):

user=> (defrecord x [a b])
user.x
user=> (def x1 (x. 1 2))
#'user/x1
user=> (defrecord x [a b])
user.x
user=> (def x2 (x. 1 2))
#'user/x2
user=> (= x1 x2)
false
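A quick way to see what happened (a small follow-on sketch, continuing the same REPL session as above): the reloaded defrecord compiled a brand-new class, so the older instance no longer passes an `instance?` check against the new type, even though both instances print identically.

```clojure
;; x1 was created from the first user.x class; x2 from the second.
;; Both print alike, but they are instances of different classes:
user=> (instance? (class x2) x1)
false
user=> (= (class x1) (class x2))
false
```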

They also conflict with the spec vision of using namespace-qualified keywords everywhere (I don’t really do this and tend to stick with unqualified keys by default, so it’s not a problem for me), since all the static fields in the record representation are accessed by unqualified keys (like :x vs. :some.ns/x). It’s not a huge problem with stuff like the data-specs, though, but some people love their qualified keys.
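To make the unqualified-key point concrete (a minimal sketch; the `Point` name is just for illustration): record fields always come back under plain keywords, so lookups written against qualified keys won’t see them.

```clojure
(defrecord Point [x y])

(def p (->Point 1 2))

(:x p)          ;=> 1, fields live under unqualified keys
(:some.ns/x p)  ;=> nil, a qualified key finds nothing
(keys p)        ;=> (:x :y)
```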

If you don’t need the benefits of the record, the recommendation is to stick with maps (there are some good discussions in the Maps Vs. Records and What Is the 2021 Recommendation for Specs threads on the interplay between records and maps).

To be fair, I had not really dug into the schema lib, since it was a precursor to spec and I wasn’t really paying attention to it even when it came out back in the day. It might be right up your alley, though: it looks actively maintained, has tight integration with validation and coercion, and covers much of the same ground as spec/malli (although no regex patterns and the like, from what I saw), with a stronger focus on concrete specification of nested data (e.g. no need for spec-tools data specs).
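For a flavor of that integration (a hedged sketch, assuming the plumatic/schema library is on the classpath; check its README for the current API), schema’s `s/defrecord` lets you annotate field schemas right in the record definition and then validate instances against the record itself:

```clojure
(require '[schema.core :as s])

;; Field schemas are attached inline with :- annotations:
(s/defrecord User
    [name  :- s/Str
     email :- s/Str])

;; The record class doubles as a schema for its own instances:
(s/validate User (->User "Ada" "ada@example.com"))
```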

I guess the bottom line is: you have options. The recommended default is general/extensible maps plus some verification/validation mechanism to help tame the potentially unbounded information model (e.g. spec, malli, or schema, with spec being Cognitect’s solution). You can still readily apply the same mechanism to records should you choose them, though. Records add a little more organizational overhead (more types, additional validation, more hinting) that may be redundant if you are going to spec them anyway (schema has some cool forms that integrate schema definition and record definition pretty nicely, though, and I think there are probably similar libs for spec and malli, or there could be if one were interested).
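As a rough sketch of applying one mechanism to both shapes (using clojure.spec.alpha with unqualified `:req-un` keys; the `::user` and `User` names here are just for illustration), the same spec can validate a plain map and a record instance:

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::name string?)
(s/def ::email string?)
;; :req-un matches the unqualified keys that record fields use
(s/def ::user (s/keys :req-un [::name ::email]))

(defrecord User [name email])

;; The same spec validates a plain map...
(s/valid? ::user {:name "Ada" :email "ada@example.com"})  ;=> true
;; ...and a record, since records participate in the map interfaces
(s/valid? ::user (->User "Ada" "ada@example.com"))        ;=> true
```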

Rich’s spec talk
Maybe Not - spec2 discussion and ‘situated programs’ and information models
