The downside of records is that they are no longer pure data, which makes serialization a problem: information modeled with records is harder to move to other processes, or to store and retrieve.
Most formats that have a schema suffer from this: you need the schema definition for the correct version of the serialized data, and you have to know implicitly which one it maps to, whereas schemaless formats evolve better over time because they are more flexible.
That’s why I say start with maps; use records if you need the performance boost and/or want to create an actual type to use with protocols for type polymorphism, though nowadays you can do that with maps as well.
That’s also where I’d recommend using Specs over records. Specs are much better at describing data than records, and much more flexible in how they can evolve alongside the data.
Just to give an example: with a map, you would model the type as data (if you cared about type):
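A minimal sketch of what that looks like (the `:user` value and the other keys are illustrative, not from the original):

```clojure
;; The type is just another piece of data in the map, under a :type key.
(def user
  {:type  :user
   :name  "Alice"
   :email "alice@example.com"})

;; Because the type travels with the data, it survives serialization as-is:
(pr-str user)
;; => "{:type :user, :name \"Alice\", :email \"alice@example.com\"}"
```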
But when using records, the type is implicit: it isn’t part of the data it models; instead it’s tracked by the runtime alongside the in-language instances of your data.
By having the type as data, your type info serializes automatically along with the rest. It is also more flexible, and can evolve to be more or less refined as needed. The downside is that polymorphic dispatch won’t be as performant.
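For instance, you can still get polymorphic dispatch over plain maps with a multimethod keyed on `:type` (the `display-name` function and the `:user`/`:org` types are made up for illustration); it’s just slower than protocol dispatch on a record:

```clojure
;; Dispatch on the :type key of the map. :type here acts as the dispatch fn,
;; since keywords are functions of maps.
(defmulti display-name :type)

(defmethod display-name :user [m] (:name m))
(defmethod display-name :org  [m] (str (:name m) " (org)"))

(display-name {:type :user :name "Alice"}) ;; => "Alice"
(display-name {:type :org  :name "Acme"})  ;; => "Acme (org)"
```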
And now, if you want a schema that tells you the data invariants for a certain entity, you can use spec instead of a record, which is even more precise.
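As a sketch, a spec for the kind of user entity above might look like this (the `::name`/`::email` keys and their predicates are assumptions for the example):

```clojure
(ns example.user
  (:require [clojure.spec.alpha :as s]))

;; Each key gets its own spec, then s/keys describes the entity's shape.
(s/def ::name string?)
(s/def ::email string?)
(s/def ::user (s/keys :req-un [::name ::email]))

(s/valid? ::user {:name "Alice" :email "alice@example.com"}) ;; => true
(s/valid? ::user {:name "Alice"})                            ;; => false, missing :email
```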
So I feel maps + spec are just superior to records, unless like I said, you have some very special performance consideration.
You can absolutely use it in production; we have at my work since it launched, with great success. The code works, and does what it does well. The reason it is alpha is that the team isn’t sure whether its current ergonomics and feature set are what the language should commit to forever. They wanted to see how people would use it, whether it would deliver on everything they wanted, get feedback, etc., before committing to spec fully for the language. That’s where Spec 2 comes in: they’re reworking some aspects based on what they learned from the alpha.
It isn’t alpha because it is buggy or anything like that, so it is safe to use in production.
As for best practices, I’d say spec your domain model and then validate explicitly using s/valid? or s/conform (not instrumentation) at specific places in your app. My recommendation is to have the producer of the data validate, the reader conform, and to do so at the boundary: before sending a payload, validate that it meets the spec, and as soon as you receive a payload, conform it. Similarly, before writing data to the DB, validate it, and after reading it back, conform it.
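A sketch of that boundary pattern, with a made-up `::payload` spec and a `transmit!` function passed in to stand for the real transport:

```clojure
(ns example.boundary
  (:require [clojure.spec.alpha :as s]))

;; Hypothetical entity spec; the keys are illustrative.
(s/def ::id int?)
(s/def ::payload (s/keys :req-un [::id]))

;; Producer side: validate before sending.
(defn send! [transmit! payload]
  (if (s/valid? ::payload payload)
    (transmit! payload)
    (throw (ex-info "Invalid payload" (s/explain-data ::payload payload)))))

;; Consumer side: conform as soon as the payload is received.
(defn receive [raw]
  (let [conformed (s/conform ::payload raw)]
    (if (= ::s/invalid conformed)
      (throw (ex-info "Invalid payload" (s/explain-data ::payload raw)))
      conformed)))
```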
On top of that, it’s good to spec the pure functions you want to thoroughly test, and then set up a generative test for them.
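Something along these lines, with a made-up pure function `clamp` as the subject (note that `stest/check` needs `org.clojure/test.check` on the classpath):

```clojure
(ns example.gen-test
  (:require [clojure.spec.alpha :as s]
            [clojure.spec.test.alpha :as stest]))

;; A pure function worth testing thoroughly.
(defn clamp [lo hi x]
  (max lo (min hi x)))

;; The :fn spec states the invariant relating inputs to output:
;; the result always lands between lo and hi.
(s/fdef clamp
  :args (s/and (s/cat :lo int? :hi int? :x int?)
               #(<= (:lo %) (:hi %)))
  :ret int?
  :fn #(<= (-> % :args :lo) (:ret %) (-> % :args :hi)))

;; Generates conforming inputs, calls clamp, and checks :ret and :fn.
;; Requires test.check as a dependency.
(comment
  (stest/check `clamp))
```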
Finally, you can spec a few other functions as documentation for what entities they take as input/output, when it helps readability, and turn on instrumentation at the REPL and when your tests run. But don’t use instrument in prod.
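For example (the `greet` function and its spec are illustrative), instrumenting at the REPL makes every call check its `:args` spec:

```clojure
(ns example.doc
  (:require [clojure.spec.alpha :as s]
            [clojure.spec.test.alpha :as stest]))

(s/def ::name string?)
(s/def ::user (s/keys :req-un [::name]))

(defn greet [user] (str "Hello, " (:name user)))

;; The fdef doubles as documentation of what greet takes and returns.
(s/fdef greet :args (s/cat :user ::user) :ret string?)

;; REPL / test-time only: checks :args on every call to greet.
(stest/instrument `greet)

(greet {:name "Alice"}) ;; => "Hello, Alice"
;; (greet {})           ;; would now throw a spec error
```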