Maybe I need to be more specific: the difference I'm talking about is between recomputing on every keystroke and recomputing at the user's discretion.
While there's a spectrum here (recompute every x keystrokes, on every full symbol edit, every y seconds, etc.), I think doing it at the user's discretion can actually be a good UX.
At first, a user might appreciate re-computation on every keystroke, until it becomes too slow, of course; then they curse the tool and go back to their old workflow.
In both cases, the dependency graph of changes is tracked by the tool. So I'm not suggesting the user has to figure out what to recompute and manually go to each block in the right order and re-eval them. This should still be tracked by the tool. That said, the user could choose how many changes to make before the graph is recomputed.
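As a minimal sketch of what "tracked by the tool, triggered by the user" could look like, here is a hypothetical `Notebook` class (all names are my invention, not any real tool's API): edits mark a block and its dependents dirty, and nothing runs until the user explicitly asks, at which point only dirty blocks re-run in dependency order.

```python
from graphlib import TopologicalSorter

class Notebook:
    def __init__(self):
        self.blocks = {}   # block name -> zero-arg callable
        self.deps = {}     # block name -> set of upstream block names
        self.dirty = set()

    def add_block(self, name, fn, deps=()):
        self.blocks[name] = fn
        self.deps[name] = set(deps)
        self.dirty.add(name)  # new blocks have never run

    def edit(self, name):
        # An edit dirties the block and everything downstream of it;
        # it does NOT trigger any computation.
        self.dirty.add(name)
        for other, ups in self.deps.items():
            if name in ups:
                self.edit(other)  # recurse; fine for a small DAG

    def recompute(self):
        # User-triggered: run only the dirty blocks, upstream first.
        ran = []
        for name in TopologicalSorter(self.deps).static_order():
            if name in self.dirty:
                self.blocks[name]()
                ran.append(name)
        self.dirty.clear()
        return ran
```

The point of the split is that `edit` is cheap and can happen on every keystroke, while `recompute` runs however many edits have accumulated, in one batch, at the user's discretion.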
But to be honest, I think sometimes further isolation might be needed. Let's say I'm playing with a code block that pulls in a large dataset and does some filtering and cleaning over it. Maybe some other block takes this as input, millions of rows, and generates some visualisation over it.
This visualisation block is really slow over the large input. My own code block is already pretty slow. So now as I work on the filtering and cleaning, I'm constantly hammered by the visualisation recomputing over and over even though I don't care about it yet. I'm still figuring out the right filtering and cleaning I want. Why have this visualisation recompute on every change?
In this scenario, I'd like to be able to say: re-evaluate only the code block I'm currently on, nothing else. And even that block, not on each keystroke, since it's filtering and cleaning millions of rows, but only once I'm interested in seeing the effect of my new batch of changes.
And only once I'm satisfied with that would I want to say: okay, now re-sync all dependent code blocks as well, so that I see the effect on my entire notebook.
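The two trigger levels above could be sketched like this, assuming hypothetical `blocks` (name to zero-arg callable) and `downstream` (name to list of direct dependents, assumed acyclic) mappings that the tool maintains:

```python
def run_block(blocks, name):
    # Level 1: evaluate just the block being edited, nothing else.
    blocks[name]()
    return [name]

def resync(blocks, downstream, name, seen=None):
    # Level 2: once satisfied, re-run everything downstream as well,
    # each dependent at most once even if reachable via multiple paths.
    seen = set() if seen is None else seen
    ran = []
    for dep in downstream.get(name, []):
        if dep not in seen:
            seen.add(dep)
            blocks[dep]()
            ran.append(dep)
            ran += resync(blocks, downstream, dep, seen)
    return ran
```

So iterating on the filtering block means repeated calls to `run_block`, and the expensive visualisation only runs when `resync` is explicitly invoked.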
Now, I don't know, maybe there's a way to make this even nicer. Like start by recomputing everything on change, but keep track of performance. If a block is found to start taking too long, it automatically drops out of the auto-recompute chain and now requires a manual trigger. Visual indicators could make this intuitive to the user. And if the user really cared, maybe they could force it back to auto-recompute, and vice versa.
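That auto-demotion idea could be sketched with a hypothetical `TimedBlock` wrapper (the name, the `pin_mode` override, and the threshold value are all my assumptions, not an existing API): any run that exceeds the threshold flips the block to manual mode, unless the user has pinned a mode explicitly.

```python
import time

class TimedBlock:
    def __init__(self, fn, threshold_s=1.0):
        self.fn = fn
        self.threshold_s = threshold_s  # arbitrary "too slow" cutoff
        self.auto = True     # currently in the auto-recompute chain?
        self.pinned = None   # user override: True, False, or None

    def run(self):
        start = time.perf_counter()
        result = self.fn()
        elapsed = time.perf_counter() - start
        if self.pinned is None:
            # Demote (or promote back) based on observed runtime; a UI
            # could surface self.auto as the visual indicator.
            self.auto = elapsed <= self.threshold_s
        return result

    def pin_mode(self, auto):
        # The user forces a mode; timing no longer flips it.
        self.pinned = auto
        self.auto = auto
```

A real tool would probably smooth over several runs rather than react to a single slow one, but the mechanism is the same: the mode is derived from measured cost, with a user override on top.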