Maybe I need to be more specific: the difference I’m talking about is between recomputing on every keystroke and recomputing at the user’s discretion.
While there’s a spectrum here (recompute every x keystrokes, on every full symbol edit, every y seconds, etc.), I think doing it at the user’s discretion can actually be a good UX.
At first, a user might appreciate re-computation on every keystroke, until it becomes too slow, of course. Then they curse the tool and go back to their old workflow.
In both cases, the dependency graph of changes is tracked by the tool. So I’m not suggesting the user has to figure out what needs recomputing and manually re-evaluate each block in the right order; that should still be handled by the tool. That said, the user could choose how many changes to make before the graph is re-computed.
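To make that dependency tracking concrete, here’s a minimal sketch of how a tool could work out which downstream blocks need re-evaluation after an edit. The block names and `deps` map are made up for illustration; this uses Python’s standard `graphlib` for the topological ordering:

```python
from graphlib import TopologicalSorter

# Made-up dependency map: each block lists the blocks it reads from.
deps = {
    "load": set(),
    "clean": {"load"},
    "viz": {"clean"},
    "stats": {"clean"},
}

def dependents_of(changed, deps):
    """The changed block plus everything downstream of it, in a valid evaluation order."""
    order = list(TopologicalSorter(deps).static_order())
    dirty = {changed}
    for block in order:
        if deps[block] & dirty:
            dirty.add(block)
    return [b for b in order if b in dirty]

# Editing "clean" dirties "viz" and "stats", but never "load".
print(dependents_of("clean", deps))
```

The point is just that the tool, not the user, owns this ordering; the user only decides *when* to fire it.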
But to be honest, I think sometimes further isolation might be needed. Let’s say I’m playing with a code block that pulls in a large dataset and does some filtering and cleaning over it. Maybe some other block takes this as input, millions of rows, and generates some visualisation over it.
This visualisation block is really slow on the large input, and my code block itself is already pretty slow. So as I work on the filtering and cleaning, I’m constantly hammered by the visualisation re-computing over and over, even though I don’t care about it yet. I’m still figuring out the right filtering and cleaning I want. Why have this visualisation re-compute on every change?
In this scenario, I’d like to be able to say that only the code block I’m currently editing re-evaluates, nothing else. And even that block shouldn’t re-run on each keystroke, since it’s filtering and cleaning millions of rows: only once I’m interested in seeing the effect of my new batch of changes.
And only once I’m satisfied with that would I want to say: okay, now re-sync all dependent code blocks as well, so that I see the effect on my entire notebook.
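That two-step flow (re-run just the current block on demand, then re-sync stale dependents later) could be sketched roughly like this. The `Notebook` class, block names, and method names here are hypothetical, not any real tool’s API:

```python
from graphlib import TopologicalSorter

class Notebook:
    """Hypothetical sketch: evaluate one block on demand, sync the rest later."""

    def __init__(self, deps, compute):
        self.deps = deps        # block -> set of upstream blocks it reads
        self.compute = compute  # block -> zero-arg callable producing its value
        self.values = {}
        self.stale = set()

    def eval_block(self, block):
        """Re-run just this block; mark its dependents stale without running them."""
        self.values[block] = self.compute[block]()
        self.stale.discard(block)
        for other, ups in self.deps.items():
            if block in ups:
                self.stale.add(other)

    def sync(self):
        """Re-run every stale block, in dependency order, cascading downstream."""
        for block in TopologicalSorter(self.deps).static_order():
            if block in self.stale:
                self.eval_block(block)
```

So while I iterate on the filtering block, only `eval_block` fires; the expensive visualisation just sits there marked stale until I call `sync`.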
Now, I don’t know, maybe there’s a way to make this even nicer. Like: start by re-computing everything on change, but keep track of performance. If a block is found to take too long, it automatically drops off the auto-recompute chain and now requires a manual trigger. Visual indicators could make this intuitive to the user. And if the user really cared, maybe they could force it back to auto-recompute, and vice versa.
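A rough sketch of that auto-switching idea, assuming a simple wall-clock threshold (both the `Block` class and the cut-off value are made up):

```python
import time

SLOW_THRESHOLD = 1.0  # seconds; this cut-off value is made up

class Block:
    """Hypothetical block that drops off the auto-recompute chain when slow."""

    def __init__(self, fn, slow_threshold=SLOW_THRESHOLD):
        self.fn = fn
        self.slow_threshold = slow_threshold
        self.auto = True  # the user could force this flag back on

    def run(self):
        start = time.perf_counter()
        result = self.fn()
        if time.perf_counter() - start > self.slow_threshold:
            self.auto = False  # switch to manual triggering from now on
        return result

def recompute_chain(blocks):
    """Auto-run only the blocks still opted into auto-recompute."""
    for block in blocks:
        if block.auto:
            block.run()
```

The `auto` flag is exactly the kind of state a visual indicator could surface, and a click on that indicator could toggle it either way.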