I guess there can be different strategies employed; I can only speak to what I do. Also, not all NoSQL stores are exactly the same, so some stores may require different strategies. I’ve not used Crux, for example, so I can’t speak to it specifically.
In a semi-structured DB, I don’t do data migrations. You can always just add new fields or stop updating existing ones. What I do instead is make my code forever handle all possible schemas. So if 5 years ago documents had shape A, and since then have had shapes B, C and D, my code will know how to read/write all of them.
I use Spec for this. I’d have a multi-spec over the type of document.
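As a sketch of what that can look like (the `:doc/version` field, the shapes, and the field names are all made up for illustration), a multi-spec dispatching on a version field can validate every historical shape under one spec:

```clojure
(ns example.user-spec
  (:require [clojure.spec.alpha :as s]))

(s/def ::name string?)
(s/def ::first-name string?)
(s/def ::last-name string?)

;; Hypothetical shapes: version :a had a single :name field,
;; version :b split it into :first-name and :last-name.
(defmulti user-shape :doc/version)

(defmethod user-shape :a [_]
  (s/keys :req [:doc/version] :req-un [::name]))

(defmethod user-shape :b [_]
  (s/keys :req [:doc/version] :req-un [::first-name ::last-name]))

;; One spec that accepts any historical shape.
(s/def ::user (s/multi-spec user-shape :doc/version))

(s/valid? ::user {:doc/version :a :name "Ada Lovelace"})    ;; => true
(s/valid? ::user {:doc/version :b
                  :first-name "Ada"
                  :last-name "Lovelace"})                   ;; => true
```

New shapes just become new `defmethod`s; nothing about the old ones has to change.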
In the code base, I can normalize at the boundaries: when I read, no matter if it’s A, B, C or D, I convert it to a normal form X, and my code works with X. On writing back, I could choose to upgrade; say I write normal form X back, I could write it back in the shape of D.
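A sketch of that read-side normalization, reusing the same hypothetical versioned shapes (nothing here is a real store API):

```clojure
(ns example.normalize
  (:require [clojure.string :as str]))

;; Hypothetical: convert any historical shape to the latest one on read.
(defmulti ->normal :doc/version)

(defmethod ->normal :a [{:keys [name] :as doc}]
  ;; Old shape had a single :name; split it into the newer fields.
  (let [[first-name last-name] (str/split name #" " 2)]
    (-> doc
        (dissoc :name)
        (assoc :doc/version :b
               :first-name first-name
               :last-name last-name))))

(defmethod ->normal :b [doc] doc)  ;; already the normal form

(->normal {:doc/version :a :name "Ada Lovelace"})
;; => {:doc/version :b, :first-name "Ada", :last-name "Lovelace"}
```

Writing back in the newest shape is then just saving whatever `->normal` returns.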
But to be honest, I often don’t normalize like this. I tend to just have code everywhere become a multimethod over the document type. I find that better, because with normalizing, you end up having to make now-mandatory fields optional, since they’d be missing on older doc types. So I’d rather not do that and just handle each case on its own.
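A sketch of that style, with the same hypothetical shapes: the business logic itself dispatches on the document’s version, so each historical shape is handled directly and no field ever has to be made optional:

```clojure
;; Hypothetical: logic dispatches on the document version directly,
;; instead of normalizing first.
(defmulti display-name :doc/version)

(defmethod display-name :a [{:keys [name]}]
  name)

(defmethod display-name :b [{:keys [first-name last-name]}]
  (str first-name " " last-name))

(display-name {:doc/version :a :name "Ada Lovelace"})
;; => "Ada Lovelace"
(display-name {:doc/version :b :first-name "Ada" :last-name "Lovelace"})
;; => "Ada Lovelace"
```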
Sometimes I backfill data if it’s possible. Say I want to introduce a new field, and I can actually backfill the data for that field over all existing documents; I’d do that as a one-time job.
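A one-time backfill might look like this sketch, where `fetch-all-docs` and `save-doc!` are hypothetical stand-ins for whatever your store’s actual read/write API is:

```clojure
;; Hypothetical one-time job: add a :created-at field to every
;; document that doesn't have one yet.
(defn backfill-created-at! [fetch-all-docs save-doc!]
  (doseq [doc (fetch-all-docs)
          :when (not (contains? doc :created-at))]
    (save-doc! (assoc doc :created-at (java.time.Instant/now)))))
```

After the job runs once, code can treat the field as always present.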
For backups, well, I’ve only used managed document stores, and they generally come with their own replication and backup features. So I’d say that would be Crux specific.
Oh, and for connection pooling: no, normally these NoSQL DBs expose REST endpoints and don’t need an active connection beyond the underlying HTTP connection, so I’ve never had to pool them.