@ericnormand Yeah, I’m not sure. It seems we’d have to differentiate between scaling relative to a particular architecture and problem context, and scaling with respect to generic polynomial difficulty.
The physics of this universe will probably imply that certain mechanisms are more efficient than others. Likewise, CPUs, GPUs, and FPGAs (discussed here) each process different algorithms at different levels of efficiency, regardless of an algorithm’s polynomial complexity. Amdahl’s Law is also a scaling law that applies here.
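For concreteness, Amdahl’s Law bounds the speedup from parallelizing a fraction p of a workload across s workers at 1 / ((1 − p) + p/s). A quick Python sketch (the 95% figure is just an arbitrary example, not from anything above):

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Upper bound on speedup when only `parallel_fraction` of the
    work can be spread across `workers` processors (Amdahl's Law)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# Even with 95% of the work parallelizable, the serial 5% caps the gain:
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
# 2 1.9, 8 5.93, 64 15.42, 1024 19.64 -> never better than 20x
```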
There are some papers out there about power laws and ‘scale-free’ architectures in software, where they claim that “distributions with long, fat tails in software are much more pervasive than previously established.”
It would also appear as though the naming of things in software development tends to create Zipf distributions, forming a sort of symbolic trie. Some blogs and papers explore Zipfian distributions in software, but I’d probably just chalk the phenomenon up to the trie efficiency you pointed out.
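A rough way to check the Zipf claim on a real codebase would be to tally identifier frequencies and see whether rank × frequency stays roughly constant. A minimal sketch, where the `./src` path and the identifier regex are just placeholders:

```python
import re
from collections import Counter
from pathlib import Path

# Tally identifier-like tokens across a source tree ("./src" is a placeholder path).
names = Counter()
for path in Path("./src").rglob("*.py"):
    names.update(re.findall(r"[A-Za-z_]\w*", path.read_text(errors="ignore")))

# Under a Zipf distribution, rank * frequency stays roughly constant:
for rank, (name, freq) in enumerate(names.most_common(15), start=1):
    print(f"{rank:>3}  {name:<20} freq={freq:<6} rank*freq={rank * freq}")
```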
I’d bet all kinds of scaling laws could be found across the various garbage collection schemes and memory management strategies in various langs. But then you could also imagine software designed with no notion of “garbage” at all. In an architecture-independent, context-independent way, though, can we say certain things that are true at all scales? I’m not sure.
What about the “actions” vs “calculations” distinction you’ve been discussing in other videos? I’ve been wondering whether, for any given calculation, there is some requisite percentage of underlying action. In other words, for any given intended effect there will be some necessary amount of unintended side effect (relative to the primary purpose of the calculation), such as time going by, heat building up, missiles accidentally launching, etc. And perhaps there is some minimum amount of ontological baggage for any given teleological action, which might be quantifiable via analysis of available software.
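To ground what I mean (as I understand your distinction: a calculation is a pure function, an action depends on when or how many times it runs), here’s a toy Python sketch, purely illustrative and not from the videos:

```python
import time

def total_price(prices: list[float], tax_rate: float) -> float:
    """Calculation: same inputs always give the same output, no side effects."""
    return sum(prices) * (1.0 + tax_rate)

def charge_order(prices: list[float], tax_rate: float) -> float:
    """Action: wraps the calculation, but running it also spends time
    and writes to the outside world."""
    total = total_price(prices, tax_rate)          # the calculation inside
    print(f"{time.time()}: charged {total:.2f}")   # the deliberate side effect
    return total
```

The question is whether even `total_price`, once it actually executes on hardware, necessarily drags along some minimum of action: clock cycles spent, heat dissipated, and so on.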