I think something that's missing is that denotational semantics tries to convey meaning to the computer, while a DSL, in my opinion, often tries to convey meaning to the programmer.
You might want to find a way to map your programming language, which is designed for humans, into a formal denotation, most likely inspired by an existing mathematical formalism (because inventing a new one is super hard). Once you've done that, you can more easily write programs that understand the semantics and validate them, modify them, translate them, and so on.
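A minimal sketch of what that mapping can look like, using a toy arithmetic DSL invented for illustration (the term encoding and the `denote` function are my assumptions, not anything standard):

```python
# Toy DSL: terms are nested tuples like ("add", x, y).
# The "denotation" maps each term to an ordinary Python integer,
# i.e. into a formalism (integer arithmetic) we already understand.

def denote(term):
    """Map a DSL term to its mathematical meaning (here, an int)."""
    tag = term[0]
    if tag == "lit":
        return term[1]
    if tag == "add":
        return denote(term[1]) + denote(term[2])
    if tag == "mul":
        return denote(term[1]) * denote(term[2])
    raise ValueError(f"unknown term: {tag!r}")

# Once meaning is a plain value, programs can check other programs:
# e.g. validate that a rewrite preserves semantics.
original = ("mul", ("lit", 2), ("add", ("lit", 3), ("lit", 4)))
rewritten = ("add", ("mul", ("lit", 2), ("lit", 3)),
                    ("mul", ("lit", 2), ("lit", 4)))  # distributed form

assert denote(original) == denote(rewritten)  # both denote 14
```

The point is only that the denotation gives a machine-checkable notion of "means the same thing", which is what makes validating or transforming programs tractable.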
And then you get into a place where... a place I think Haskellers often find themselves in, where you start to wonder: why teach the computer my semantics, when I could instead learn the computer's semantics? And that is how programming started. When we didn't have higher-level programming languages, you just learned the operational semantics of the machine and described your program directly in it, assembly instructions one after another. Until someone made a "higher level" programming language, where "higher" really means closer to human language and thought.
Now, with denotational semantics, I often see this happening: hey, why not just learn some existing mathematical theory of semantics, say category theory, or set theory, or Hoare logic, and write our programs in those languages instead, since the computer understands them better? It does make sense, sure, but now you've again shifted the burden onto the human to learn some other language that is "lower level", where "lower" means further removed from the natural way our language and thoughts work.
And maybe that's what a programmer needs to do in order to write correct programs whose behavior the computer can prove, or programs that the computer can split up and recombine in other equivalent orders for performance, compatibility, flexibility, and so on.
But there's another approach, which I think is closer to what DSLs are about: target a specific domain, one smaller in scope than all Turing computations, and create a language that appeals more to the human. That is "higher level" in this sense: easier for a human to understand. Now, yes, you might not be able to have your computer statically prove programs in such a language, or automatically parallelize them, or pull off all kinds of other neat tricks. But maybe you've made it easier for the programmer to quickly write a program that does what they want, and you've lowered their chances of making semantic mistakes, because the burden of translating them was eased as well.
That said, I think there is a middle ground here. As a programmer, it does help to learn other models of thought, because that broadens your tools for thinking about problems. Language can do that. So it might be that learning set theory, and then translating your problem into it, actually helps you better understand the problem and find solutions to it. Denotational semantics can help in that way: it gives you frameworks of thought and language. I just think we shouldn't forget that we are humans and our thoughts are not that straightforward, so DSLs very much have their place too.
Just think about `for` versus `map`. It seems most humans begin with `for` and eventually transition to `map`, `for` being a DSL of sorts, a higher-level language for `map` that is closer to our thoughts about it. Until we learn the semantics behind `map`, and then suddenly `map` no longer feels any more obscure than `for`.
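For concreteness, here is the same computation written both ways (plain Python, nothing assumed beyond the standard builtins):

```python
xs = [1, 2, 3, 4]

# The "for" version spells out the steps one at a time,
# closer to how we'd narrate the process out loud:
squares_for = []
for x in xs:
    squares_for.append(x * x)

# The "map" version names the whole transformation at once,
# once you've internalized what map means:
squares_map = list(map(lambda x: x * x, xs))

assert squares_for == squares_map == [1, 4, 9, 16]
```

Neither is more "correct"; the `for` version just matches untrained intuition, while the `map` version rewards having learned its semantics.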