For years I’ve been trying to make systems more observable, and I’ve tried all kinds of logging approaches. For the last few weeks I’ve been using a very different approach to decomplect our most complex processes.
The processes now store all input, intermediate, and output data in a temporary folder. If the process fails, the folder is zipped and uploaded (to Google Cloud Storage), and an error message is logged that includes the process type and the command to download the zip file.
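A minimal sketch of that idea, assuming a hypothetical step function `run-step!`, a bucket name `my-bucket`, and the standard `zip` and `gsutil` command-line tools; all names are illustrative, not the actual implementation:

```clojure
(require '[clojure.java.io :as io]
         '[clojure.java.shell :as sh]
         '[clojure.tools.logging :as log])

(defn capture-dir
  "Create a fresh temporary folder for this process run."
  ^java.io.File []
  (.toFile (java.nio.file.Files/createTempDirectory
            "process-capture"
            (make-array java.nio.file.attribute.FileAttribute 0))))

(defn spit-edn!
  "Write one piece of data into the capture folder as an EDN file."
  [dir file-name data]
  (spit (io/file dir (str file-name ".edn")) (pr-str data)))

(defn run-with-capture!
  "Run a process step, capturing input and output; on failure,
   zip the capture folder, upload it, and log how to fetch it."
  [process-type input run-step!]
  (let [dir (capture-dir)]
    (spit-edn! dir "input" input)
    (try
      (let [output (run-step! input)]
        (spit-edn! dir "output" output)
        output)
      (catch Exception e
        (let [zip-file (str (.getPath dir) ".zip")
              gs-url   (str "gs://my-bucket/captures/" (.getName dir) ".zip")]
          (sh/sh "zip" "-r" zip-file (.getPath dir))
          (sh/sh "gsutil" "cp" zip-file gs-url)
          (log/error e (str process-type " failed; download debug data with: "
                            "gsutil cp " gs-url " .")))
        (throw e)))))
```

Intermediate results can be captured the same way by calling `spit-edn!` from inside the step.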
Full story here:
You might want to look at µlog. It can log EDN straight to a file, among other outputs, and it’s fully async. I’m using it with both console and Elasticsearch output, and it’s been great.
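For example, with µlog (`com.brunobonacci/mulog`) on the classpath, the console and Elasticsearch setup looks roughly like this; the event name and key-value pairs are made up for illustration:

```clojure
(require '[com.brunobonacci.mulog :as u])

;; publishers run asynchronously in background threads
(u/start-publisher! {:type :console})
(u/start-publisher! {:type :elasticsearch
                     :url  "http://localhost:9200/"})

;; events are plain EDN maps: a namespaced name plus key-value pairs
(u/log ::process-failed
       :process-type :invoice-import
       :capture-url  "gs://my-bucket/captures/example.zip")
```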
Two tricks I have seen in this area are:
For bulky data, log with Fressian (I think of it as binary EDN): GitHub - Datomic/fressian
Log the complete call with all parameters, so failed operations can be retried. Instead of logging just the data, which a skilled user would have to reassemble at the REPL to redo a failed operation, the entire call is logged: just cut and paste to replicate it. The sneaky thing about this is that it forces dev teams to get their dependency injections and other state-based arguments to the point where they can be printed easily to the log (without passwords).
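A hypothetical sketch of the second trick: on failure, log the whole call form with `pr-str` so it can be pasted back into a REPL. The function and config names are invented for illustration:

```clojure
(require '[clojure.tools.logging :as log])

(defn redact
  "Strip secrets before the config hits the log."
  [config]
  (dissoc config :password))

(defn logged-call
  "Call (f config args...); on failure, log a readable form of the
   complete call that can be pasted into a REPL to replay it."
  [f fn-sym config & args]
  (try
    (apply f config args)
    (catch Exception e
      (log/error e (pr-str (list* fn-sym (redact config) args)))
      (throw e))))

;; e.g. (logged-call charge-card! 'charge-card!
;;                   {:merchant-id 7 :password "hunter2"} 42M)
;; would log a form like: (charge-card! {:merchant-id 7} 42M)
```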