This is amazing, thanks for sharing. Maybe my bias shows, as I love Clojure, but I find myself in strong agreement with Rich Hickey, and I don’t really understand the point Alan Kay was making.
Interpreters depend on data; how can interpreters be a good idea while the thing they fundamentally depend on is a bad one?
I see it like this: symbols and artifacts are put together into data, which takes on measurable qualitative and quantitative meaning; from that meaning, more data can be derived using reason or logic, and decisions can be made about actions.
All of it seems like a good idea to me; more than that, it seems like a necessity for reason, intelligence, and free will.
Understanding requires interpretation, and you do need some grasp of the data representation: knowing that 42 is greater than 41 is itself an act of interpretation. But those are the shared tools of data representation we’ve all built, which let us capture more data, derive more understanding, and make even better decisions about our actions.
Alan Kay’s idea of an intergalactic-scale language is interesting. If you and someone else do not speak the same language, have no analogous concepts, have no agreed interpretation of any symbol or artifact, absolutely nothing agreed upon for communication, could you somehow derive a means to still communicate and understand each other?
At the same time, is this a practical concern? It doesn’t seem to be a problem we actually have: computers live in symbiosis with humans, we already have natural languages that we’re all taught and whose meanings we somewhat agree on, and we all share the same mental framework for thought and interpretation, the same feelings we can relate to, and so on. I think that can be taken as an axiom underlying your data representation; you add a few additional semantics to be learned through simple documentation or textbooks, and that’s good enough: you now have a pretty good data representation like EDN.
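To make that concrete, here is a toy sketch in Python (illustrative only, not real EDN tooling, and all names here are made up) of the idea: plain data plus a small, documented registry of tag handlers is enough shared semantics to interpret extended values, which is roughly how EDN’s tagged elements work.

```python
# A toy version of EDN-style tagged elements: plain data, plus a small,
# documented registry of tag handlers supplying the extra semantics.
# parse_tagged and HANDLERS are hypothetical names for this sketch.
import datetime

HANDLERS = {
    "inst": datetime.date.fromisoformat,   # "#inst" -> a date value
    "point": lambda v: tuple(v),           # "#point" -> an (x, y) pair
}

def parse_tagged(value):
    """Walk plain data; interpret ["#tag", payload] pairs via HANDLERS."""
    if (isinstance(value, list) and len(value) == 2
            and isinstance(value[0], str) and value[0].startswith("#")):
        tag, payload = value[0][1:], parse_tagged(value[1])
        return HANDLERS[tag](payload)
    if isinstance(value, list):
        return [parse_tagged(v) for v in value]
    if isinstance(value, dict):
        return {k: parse_tagged(v) for k, v in value.items()}
    return value

data = {"born": ["#inst", "1990-01-01"], "at": ["#point", [3, 4]]}
print(parse_tagged(data))  # dates and points recovered from plain data
```

The point is that the “few additional semantics” live in one small documented table; everything else stays ordinary, transmissible data.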
Objects also seem inadequate to me. For example, even if I were trying to convey how to interpret the data I’m sending you, I’d have to use data to do so.
Code is data, for example: you can train an ML model on code, on examples of desired input-to-output behavior, on the documentation accompanying the code, and on natural language as well, and the model can learn how to interpret the data and then learn to code from it, which is its own interpretation. That’s quite amazing.
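Since code-as-data is the crux here, a hand-rolled sketch (Python, with made-up names) of a micro-interpreter whose “programs” are ordinary nested lists shows how literally this can be taken:

```python
# A micro-interpreter: programs are plain data (nested lists), so they can
# be stored, inspected, transformed, or generated like any other data.
def evaluate(expr, env):
    if isinstance(expr, str):          # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):     # a literal, e.g. a number
        return expr
    op, *args = expr
    if op == "if":                     # special form: evaluate lazily
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    fn = env[op]                       # ordinary call: evaluate arguments
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b, ">": lambda a, b: a > b, "x": 40}
program = ["if", [">", "x", 10], ["+", "x", 2], 0]   # data that is also code
print(evaluate(program, env))                        # -> 42
```

Because the program is just a list, anything that can read and write lists (including another program, or a trained model) can manipulate it.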
But what would you do with an Object? It’s like a pre-trained model: can one model learn from another running model? Objects just seem impossible to convey. You could observe an Object as it runs, but that observation is just data again: document the input messages, see the output messages, observe what side effects were taken, and from that maybe you can figure out how it works and reproduce its interpretation.
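That last observation can be sketched too (Python, all names hypothetical): we can’t transmit a live object, but we can record its message traffic as data and check that a rebuilt object behaves the same.

```python
# An opaque, stateful object we cannot serialize or transmit directly.
class Counter:
    def __init__(self):
        self._n = 0
    def send(self, msg):               # message-passing interface
        if msg == "inc":
            self._n += 1
        return self._n

# Observe it: the only thing we can carry away is data about its behavior,
# a log of (input message, output) pairs.
original = Counter()
log = [(msg, original.send(msg)) for msg in ["inc", "inc", "peek", "inc"]]
# log == [("inc", 1), ("inc", 2), ("peek", 2), ("inc", 3)]

# From that data alone, verify a reconstruction behaves identically.
replica = Counter()
assert all(replica.send(msg) == out for msg, out in log)
```

The live object never crossed the wire; only the data about it did, which is exactly the point being made above.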