I think it would help here to talk about redundancy as a strategy for achieving fault-tolerant systems. In a fault-free environment, one where nothing can go wrong, you wouldn’t need redundancy at all.
If you frame it like that, you can do a cost-benefit analysis. For example: what is the extra effort of adding a specific form of redundancy, and what is the cost of the added complexity to maintain it and keep it in sync? And what value will it provide, i.e., which types of fault will it protect against, what would those faults cost if they occurred, and how often are they likely to occur?
Now, there’s no easy way to perform this analysis rigorously, so I’d recommend simply doing a gut check. Intuitively, you should be able to assess the cost of a given form of redundancy fairly quickly just by thinking about it a little. You can also go about it reactively: if an issue has occurred often enough, or has occurred once and caused a large problem, you can take on a project to add redundancy for that particular case. For other things, you can be more proactive, if in your experience you anticipate the problem will happen often and you already know its cost will be high.
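To make the gut check a bit more concrete, here is a minimal back-of-the-envelope sketch of the expected-value comparison described above. The function name and all the dollar figures are made up for illustration:

```python
# Back-of-the-envelope expected-value check for adding redundancy.
# All names and numbers are illustrative, not from any real system.

def redundancy_worth_it(fault_cost, faults_per_year, redundancy_cost_per_year):
    """Compare the expected annual cost of the fault against the annual
    cost of building and maintaining the redundancy that prevents it."""
    expected_loss = fault_cost * faults_per_year
    return expected_loss > redundancy_cost_per_year

# Example: an outage costs ~$10,000 and happens ~twice a year, while the
# redundant setup costs ~$5,000/year to build and maintain.
print(redundancy_worth_it(10_000, 2, 5_000))   # expected loss of $20,000 beats $5,000
```

In practice the inputs are rough guesses, which is exactly why a quick gut check is usually good enough here.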
And I quote Wikipedia:
> Providing fault-tolerant design for every component is normally not an option. Associated redundancy brings a number of penalties: increase in weight, size, power consumption, cost, as well as time to design, verify, and test. Therefore, a number of choices have to be examined to determine which components should be fault tolerant:
>
> - How critical is the component? In a car, the radio is not critical, so this component has less need for fault tolerance.
> - How likely is the component to fail? Some components, like the drive shaft in a car, are not likely to fail, so no fault tolerance is needed.
> - How expensive is it to make the component fault tolerant? Requiring a redundant car engine, for example, would likely be too expensive both economically and in terms of weight and space, to be considered.
- Use of specs and asserts to validate inputs on internal APIs
I’ll assume nothing else is validating this, i.e., clients do not validate before making the call. If the API performs dangerous side effects that cannot gracefully fail or be retried, such as moving money around or deleting records from a database, then adding validation to handle faulty input seems well worth it. You wouldn’t want to accidentally disburse too much money or delete the wrong table. I’m not sure that qualifies as redundancy, strictly speaking, but it is still a good strategy for tolerating faulty input.
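As a sketch of what I mean, here is a hypothetical internal API that validates its inputs with asserts before an irreversible side effect. The function, its parameters, and the limits are all invented for illustration:

```python
# Sketch: fail fast on bad input before a dangerous, hard-to-undo side
# effect. `disburse`, its parameters, and the limits are hypothetical.

def disburse(account_id: str, amount_cents: int) -> None:
    # Validate before touching money; better to crash than transfer wrongly.
    assert isinstance(amount_cents, int), "amount must be an integer number of cents"
    assert amount_cents > 0, "amount must be positive"
    assert amount_cents <= 100_000_000, "amount exceeds single-transfer limit"
    assert account_id.startswith("acct_"), "malformed account id"
    # ... perform the actual transfer here ...
```

One caveat: Python strips `assert` statements when run with the `-O` flag, so for checks that must survive in production you’d raise an explicit exception (e.g. `ValueError`) instead.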
- Use of development databases on PC, in Docker, and/or on a development DB server
I’ll assume you mean as opposed to using the prod DB directly during development? The first risk here is security- and privacy-related: giving devs access to real users’ data could be a privacy issue, and that access might open insecure channels into the data that can be abused, making users’ data easier to compromise. Another risk is accidentally corrupting the prod data or bringing down the database. Here my gut check says you always need a dev DB with fake user data, unless your production system doesn’t serve paying customers or run a business. If you’re just hosting your own data for yourself, or a private game server you use to play with friends only, go ahead; but if it handles paying users’ data, that’s a no-go in my book.
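One small pattern that makes the dev-DB setup cheap to maintain: resolve the database location from configuration so dev machines never point at prod by accident. The environment variable name and URLs below are illustrative assumptions, not from any particular stack:

```python
# Sketch: pick the database from the environment, defaulting to a local
# (or Docker-hosted) dev database seeded with fake data. The variable
# name and URLs are illustrative.
import os

def database_url() -> str:
    # In production, DATABASE_URL points at the real database. On a dev
    # box it is unset (or points at the Docker container), so we fall
    # back to a local dev database rather than ever touching prod.
    return os.environ.get("DATABASE_URL", "postgresql://localhost:5432/myapp_dev")
```

The nice property is that the safe choice is the default: forgetting to configure anything lands you on the dev DB, not on prod.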
- How many tests to write – use-case stories, full coverage unit tests, handler (route) tests, generative tests, etc
I’m also not sure these count as redundancy. In fact, I don’t think they even count as strategies for implementing fault tolerance. They seem more like strategies for fault avoidance: building systems less likely to have faults in the first place, rather than handling faults gracefully when they do occur. For this, I will simply link to the amazingly wise grand-master programmer Testivus: https://testing.googleblog.com/2010/07/code-coverage-goal-80-and-no-less.html
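Since the question mentions generative tests, here is a hand-rolled sketch of the idea using only the standard library; in a real project you’d likely reach for a property-based testing library such as Hypothesis instead. The function under test, `clamp`, is just a stand-in example:

```python
# Hand-rolled sketch of a generative (property-based) test. Instead of a
# handful of fixed cases, we generate many random inputs and check a
# property that must hold for all of them.
import random

def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

def test_clamp_generative(trials=1000):
    rng = random.Random(42)  # seeded so any failure is reproducible
    for _ in range(trials):
        x = rng.uniform(-1e6, 1e6)
        lo = rng.uniform(-1e6, 1e6)
        hi = max(lo, rng.uniform(-1e6, 1e6))  # ensure lo <= hi
        result = clamp(x, lo, hi)
        # Property: the result always lies within [lo, hi].
        assert lo <= result <= hi

test_clamp_generative()
```

The appeal over full-coverage unit tests is that one property covers an entire class of inputs, which is a lot of avoidance per line of test code.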