Hash Array Mapped Tries
There exist “fast” persistent arrays, sets, and maps: fast enough that you can use them for a much larger subset of programming than previously thought. Prior to HAMTs and friends, the choice between immutable FP and mutable imperative/OOP/procedural was very sharply delineated. If you went with immutable maps, you were working with balanced binary trees and living with that performance envelope, or with singly linked lists, or with copy-on-write structures.
The design of the HAMT itself (an extension of Phil Bagwell’s earlier work with some novel optimizations) [and later RRB-Vectors] really blew my mind. We take it for granted since it’s baked into the language, but that data structure substantially blurred the performance/efficiency line that had existed for decades prior. There are still plenty of spaces where the gains from mutability are worth the trade-off in correctness and persistence, but I’m surprised at how far I can get with just the built-in HAMT-based stuff. They also enable value semantics for collections, which in turn enables efficient comparisons (short-circuiting the not= case) and underpins the immutable, thread-safe reference types.
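A minimal sketch of what this buys you in practice (the map size here is arbitrary, chosen only to make the point about cheap "copies"):

```clojure
;; assoc on a persistent map returns a *new* map; the original is
;; untouched, and the two maps share most of their tree structure
;; under the hood, so this is nothing like a full copy.
(def m  (zipmap (range 100000) (range 100000)))
(def m2 (assoc m :extra :value))

(count m)   ;; => 100000  (unchanged)
(count m2)  ;; => 100001

;; Value semantics: collections compare by value, not identity.
(= {:a 1 :b 2} {:b 2 :a 1})  ;; => true
```

The not= case is typically cheap because unequal hashes let the comparison bail out early.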
Stepping Debuggers Are Optional, Maybe Even Distracting
I went years without touching a stepping debugger after having leaned on them as a crutch in prior languages. A push away from mutation, a focus on immutable values, and a nice REPL meant that inspecting pieces of the program and their function dependencies was faster than stepping through a program to watch some boxes. As the CIDER debugger came online, I found myself using it maybe three times in five years; I was happy to have it, but it wasn’t necessary per se. I contrast this with the seeming Rube Goldberg machines other languages encourage.
All You “Really” Need is a REPL
Everything else (all the nice tooling and fancy IDEs, etc.) is really just window dressing. If all you have is a REPL, you have access to the entire language: the ability to inspect everything, redefine stuff live, compile things, extend the language, etc. I came up in the early days when the tooling wasn’t so great, so I ended up living in the REPL primarily, using it for things like documentation, searching namespaces, etc. I still do that these days, although the tools are very nice now.
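The documentation and namespace-searching workflow mentioned above is covered by the built-in clojure.repl namespace, for example:

```clojure
;; Everyday REPL introspection, no IDE required.
(require '[clojure.repl :refer [doc source apropos dir]])

(doc map)         ;; print map's docstring and arglists
(apropos "freq")  ;; find vars whose names match, e.g. frequencies
(source take)     ;; print the source of take
(dir clojure.set) ;; list the public vars in a namespace
```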
This isn’t a feature unique to clojure (arguably other lisps like Common Lisp and non-lisps like SmallTalk have even better introspective/live coding capability), but it was pretty mind-expanding when I encountered it.
The go Macro
The go macro flexes multiple technologies in Clojure to simultaneously enable a new programming paradigm in two (maybe three, if it’s ported to the CLR) language targets. It’s a showcase in metaprogramming where the macro expansion actually leverages a tools.analyzer implementation with multiple custom passes (written in Clojure) to transform the input into something with parking I/O. They basically lifted Go’s communicating sequential processes model into a macro as a library, and implemented a little compiler along the way. It reminds me of some of the stuff Paul Graham talks about in On Lisp, where he describes programs that spend the majority of their time in macroexpansion, doing ad hoc compilation without ever leaving the host Lisp.
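A small sketch of what that buys you, using core.async’s public API (this assumes core.async is on the classpath; the values are arbitrary):

```clojure
;; CSP-style channels as a library. The go macro rewrites its body
;; into a state machine; >! and <! "park" the logical process at a
;; channel operation instead of blocking a real thread.
(require '[clojure.core.async :refer [chan go >! <!!]])

(def c (chan))

(go (>! c (* 6 7)))  ;; a lightweight process that puts a value
(println (<!! c))    ;; <!! is the blocking take for ordinary threads
```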
Transducers (preceded by reducers)
I think the majority of the credit goes to Guy Steele’s talk, which formed the basis for reducers and then the later generalization into transducers. Very cool idea, and relatively simple to implement in hindsight.
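The generalization is easy to see in code: a transducer is the transformation itself, decoupled from any particular source or sink, so one composed xform can be reused across contexts:

```clojure
;; filter -> map -> take, composed once, with no collection in sight.
(def xform (comp (filter odd?) (map inc) (take 3)))

(into [] xform (range 10))        ;; => [2 4 6]
(sequence xform (range 10))       ;; => (2 4 6), lazily
(transduce xform + 0 (range 10))  ;; => 12, no intermediate collections
```

The same xform also works over core.async channels, which is part of what makes the idea so compelling.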
Will Byrd Implements Eval in miniKanren and Uses It for Program Synthesis
Magic. Let’s find all the programs that could eval to our input expression…