I'm Zach Tellman, ask me anything!

I went fairly deep on the philosophy of science (Kuhn, Popper, et al) and all I got out of it was a section of the book that explains why software is not like the physical sciences. Likewise, I read a bunch about “complex adaptive systems”, and all I got out of that was an analogy about butterflies. They don’t have any sort of generative mechanism, just a bunch of analogies and agent simulations. It’s interesting, but didn’t provide anything that easily mapped onto software.

A lot of the books I found useful were ones I had already read, actually. Simon’s Sciences of the Artificial and Alexander’s Notes on the Synthesis of Form were probably the two most important influences, and both were books I had first read a decade ago. I think all of the other books I read, relevant or not, helped me understand them more completely, especially the intellectual context in which they were written.


I have an appreciation for a lot of other niche languages, but based on what I actually use, I’d have to say Java. It’s not a very inspiring language, but it’s a very effective assembler for the JVM. I know exactly what my code will turn into, and can (usually) guess what the performance characteristics of that will be. I think it’s a very nice dual to Clojure in that way.

As far as other natural languages go, I've never really had the knack for them. I have a bit of Spanish, but that's it. Considering that a lot of my favorite authors wrote in other languages (Borges, Calvino, Lem), this has always been a regret of mine.

I'll start by saying I haven't used probabilistic languages, automatic differentiation, or similar mechanisms for anything serious. My understanding of them, as a whole, is that they try to replace a scalar value with something analogous (a statistical distribution, a tuple of a value and n-many derivatives, etc.). You then create a standard library, execution model, or rewrite mechanism that allows you to treat these non-scalar values as if they were scalar.

This is a pretty leaky abstraction any way you slice it. At the edges, you have to transform real scalars into pseudo-scalars and back again. Leveraging existing code that doesn’t live inside the scope of your framework can cause errors that expose its ugly innards.
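
To make the "pseudo-scalar" idea concrete, here's a minimal, hypothetical sketch of forward-mode differentiation with dual numbers (all names are invented for illustration). The leak shows up in `lift`: ordinary scalars have to be wrapped before they can participate, and anything outside the `d+`/`d*` vocabulary won't compose at all.

```clojure
;; A "scalar" carrying its own derivative.
(defrecord Dual [val deriv])

(defn lift
  "Turn an ordinary scalar into a pseudo-scalar with zero derivative."
  [x]
  (->Dual x 0.0))

(defn d+ [a b]
  (->Dual (+ (:val a) (:val b))
          (+ (:deriv a) (:deriv b))))

(defn d* [a b]
  ;; product rule: (fg)' = f'g + fg'
  (->Dual (* (:val a) (:val b))
          (+ (* (:deriv a) (:val b))
             (* (:val a) (:deriv b)))))

(defn derivative-at
  "Derivative at x of a function written in terms of d+ and d*."
  [f x]
  (:deriv (f (->Dual x 1.0))))

;; f(x) = x^2 + 3x, so f'(2) = 2*2 + 3 = 7
(derivative-at (fn [x] (d+ (d* x x) (d* (lift 3.0) x))) 2.0)
;; => 7.0
```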

Clojure is good at rewriting code, but bad at enforcing invariants on the input code, and bad at generating comprehensible errors when things go wrong. A fully mature version of clojure.spec might address these weaknesses, but right now that’s only theoretical.
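
As a small sketch of the direction that points in: clojure.spec can already attach an `:args` spec to a macro, and that spec is checked at macroexpansion time. The macro and spec below are invented purely for illustration.

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical rewriting macro that expects let-style bindings.
(defmacro with-duals [bindings & body]
  `(let [~@bindings] ~@body))

;; The spec describes the shape of the *input code*, and is checked
;; when the macro is expanded, before any rewriting happens.
(s/fdef with-duals
  :args (s/cat :bindings (s/coll-of any? :kind vector?)
               :body (s/* any?)))

;; (with-duals [x 1] x)  ; expands normally
;; (with-duals (x 1) x)  ; fails at macroexpansion with a spec explanation
```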

If you’re doing research with probabilistic languages rather than on them, using Python is probably the path of least resistance. There’s definitely potential for Clojure to be a powerful tool in this space, but that potential may not be easy to realize.


I'm finishing up the chapter on names, but one particular kind of name that I think doesn't follow any formalized rules is the name of an entire library. They range from very "bland" names that say exactly what the library does, to a single word whose meaning can be layered from the obvious to the obscure. Do you think it's important to be direct when naming libraries, or is that the one place devs can play in the mud?

I think a bit of whimsy when picking library names is fine, as long as it’s short and easy to say aloud. If people all standardize on a diminutive, like “lein” for “Leiningen”, you may have gone too far.


I'm a bit late here, but I just want to remind everyone that there are moderators around and that posts should adhere to the Code of Conduct.

Now I also have a few questions actually :slight_smile:

  1. What’s the story behind the name ideolalia.com?
  2. I think a bunch of your writing is really amazing. Early Adopters And Inverted Social Proof is an essay that I still think about frequently. How did you develop your writing and what advice would you give to people wanting to advance their writing skills? Would be curious about practical stuff (i.e. how you actually write an essay) as well as more long-term activities.

Thanks for doing this AMA, super happy to have you around! :tada:

Yes, it's true that passing faulty inputs causes weird errors in Clojure. That leakiness also carries over into any DSL that describes code itself.

Anglican uses a CPS transform and samples Monte Carlo traces of the program. It lets you pass in foreign functions (deterministic functions of the environment).
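
As a rough, from-memory sketch of what that usage looks like (the model and the `celsius->fahrenheit` helper are invented for illustration; check Anglican's documentation for the exact API):

```clojure
(ns example.thermometer
  (:require [anglican.core :refer [doquery]]
            [anglican.emit :refer [defquery with-primitive-procedures]]
            [anglican.runtime :refer [normal]]))

;; An ordinary, deterministic Clojure function -- "foreign" to the
;; probabilistic program, but usable inside it once declared primitive.
(defn celsius->fahrenheit [c]
  (+ 32 (* 1.8 c)))

(with-primitive-procedures [celsius->fahrenheit]
  ;; defquery CPS-transforms the body; sample and observe are the
  ;; probabilistic forms it introduces.
  (defquery thermometer [reading-f]
    (let [temp-c (sample (normal 20 5))]
      (observe (normal (celsius->fahrenheit temp-c) 1) reading-f)
      temp-c)))

;; Each element of the lazy sequence is one Monte Carlo trace of the
;; program; the :result values approximate the posterior over temp-c.
(def posterior-samples
  (map :result (take 1000 (doquery :lmh thermometer [75.0]))))
```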

It then calculates posterior distributions over the variables of interest from these traces. For autograd you work with linear algebra, but your scalar analogy still holds: it makes program execution differentiable. In combination, the two allow a Monte Carlo method that leverages neural networks to be efficient.

I will be doing research on these languages, and I feel that Python is a dead end for integrating probabilistic inference with programming languages. In a non-functional language, normal code cannot be reused through such a native mechanism, and the probabilistic language is more difficult to embed in the surrounding stateful system. In Clojure, on the other hand, I can even integrate Datomic database queries. But you are right that Clojure needs a lot of sustained work in this direction. Dragan Djuric is doing a lot for fast numerics, for example with his efficient bindings in Neanderthal. I thought you were also doing some data analysis at Factual(?)

What is your take on the ontological vs. teleological nature of a function? Your Twitter tagline mentions something about "utility is contextual", and I have found that teleology is always a subset of some ontology / ontology is always a superset of teleology. And somehow I think this is fundamental to programming, in some Aristotelian way. Eric Normand has been touching on these subjects recently too, w.r.t. 'calculations' being a subset of 'actions', etc.

It’s a play on “idiolalia”, which is a private language, like the ones twins invent for themselves. You might translate it as “speaking in ideas”. It was something I thought was clever in college, and just makes me want to roll my eyes now, but I haven’t gotten around to changing it.

I continually re-read my favorite pieces of writing, in the hope that their structure and rhythm will shape my own. I also write very slowly, edit obsessively, and don’t finish most of the essays I start. I’m not sure that’s a recipe for happiness, but it does keep the average quality high.

As to practical stuff, there’s not much magic to it. I write everything in Markdown, generally start in prose, but sometimes will fall back to bullet points or add a TODO note like “[segue here]” if I’m not quite sure how to stitch it all together. A friend of mine once described his writing process as “waiting until I feel like I’m about to vomit words”, and it’s more or less the same for me. I have a hard time spending X hours a day writing, and tend to procrastinate until I can’t stand it anymore.

If I ever start another major writing project, I’ll probably try to do something more structured, because I think everything I described above is inefficient and self-indulgent. It works for the odd essay, though.


I haven't been at Factual for a few years, but what they're doing is closer to "data transformation" than "data analysis"; the result of their data pipeline is consumed by other computers, not people. I think Clojure's pretty good at the former, but less so at the latter. That's a result of a lack of libraries rather than any inherent property of the language, but it would take a lot of work to achieve parity with Python.

I think you’re asking “are the intended uses of a function narrower than the possible uses of a function”, or maybe just “can functions be misused?” I’d say yes, but I think that’s true of any tool; there’s nothing stopping you from using a screwdriver as a hammer.

Maybe I’ve misunderstood, though. If so, I’m happy to try again.

Ok, I do not want to press too much on this. I am totally aware that this is a crazy effort; that is why I am asking whether you have any advice on a good strategy, or good stepping stones in this direction that Clojure developers might pick up. I am talking about building a community and providing value for everyday coding early on.

Btw. this is a cool format & thanks for taking the time!

Hey Zach, glad to hear that the book is going smoothly :)

I always enjoy your take on the ecosystem, so here’s a question for you -

What’s something you wish everyone who used or interacted with your open source libraries knew?

I have been remarkably bad at predicting what people would and wouldn’t pick up. I think if you pursue this, it should be because you want to make it for yourself, and any community engagement is icing on the cake. If that’s not true, you’ll likely burn yourself out.

That's a particular implication, yeah. I guess what I'm getting at is that there can be an ontological description of an affair, which describes it purely in terms of its state. And then there's a teleological description of an affair, which describes it in terms of the future states that its present states tend to bring about. A purely ontological description of a program might be its ones and zeros. A higher-level, teleological description is one that includes human-related names for things, which reflect purposes common to human life.

How is this relevant to programming? Well, if all teleological affairs can be described in purely ontological contexts, without respect to the futures they affect, then this isomorphism between function and state may be a kind of universalism. Clojure's focus on state, time, and identity has really helped me to think through these Aristotelian notions. Because Clojure plots change of state across a manifold, we are provided an abstraction that is isomorphic between state and function, through time.

Also, if teleology must always be embedded within an ontological context, this may imply a necessary amount of ontological (syntactical) baggage, for any given function, or animal, in the universe, etc…

At the risk of being an open source curmudgeon, I think a lot of my libraries are used more often than they have to be. Aleph and Manifold are fun, but most applications do not need more concurrency than the Java threading model gives them. I wish people knew that these are tools for specific problems, not pixie dust that makes things “faster” or “more scalable”.
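
For instance, a hypothetical sketch (the `fetch-a`/`fetch-b` functions are stand-ins for whatever concurrent work you're doing): for a couple of concurrent calls, plain JVM threads via `future` are usually plenty, and Manifold only starts to earn its keep when you're composing many asynchronous steps.

```clojure
;; Usually enough: plain JVM threads via clojure.core/future.
(defn fetch-both [fetch-a fetch-b]
  (let [a (future (fetch-a))   ; each runs on its own thread
        b (future (fetch-b))]
    {:a @a :b @b}))            ; block until both are done

;; Manifold earns its keep when you're composing asynchronous values,
;; e.g. deferreds, into larger pipelines.
(require '[manifold.deferred :as d])

(defn fetch-both-async [fetch-a fetch-b]
  (d/chain (d/zip (d/future (fetch-a))
                  (d/future (fetch-b)))
           (fn [[a b]] {:a a :b b})))
```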


It’s likely you have a much more nuanced conception of these terms than I do. I think of ontology as broadly relating to “existence” and teleology as relating to “purpose”, but that may be muddling my understanding of your question.

With that in mind, I’m going to focus on this:

This seems to imply that the interface describes the teleological aspects, and the implementation/model describes the ontological aspects. If so, that would imply the opposite of what you’re arguing; an interface can be discussed in the absence of a model, and the models can be changed without changing the interface.

Conversely, if you’re saying the ontology encompasses the environment in which the interface sits, then yes, we cannot judge how software is being used without considering where it is being used.

Is that closer to the mark?

Hey Zach, thanks for writing Elements of Clojure. The chapter on names was a fascinating read.

What books/resources on distributed systems or software in general would you recommend to read?

What has motivated you to create all your OSS libraries?

And what’s your typical workflow like for a new piece of code?

I think I agree with your definitions. Ontology is “that which is” and teleology is “that which ought.”

I'm not saying that purposeful effects (affects) - or the purposive interface - cannot be independent of implementation. Just that there is always some implementative, ontological context from which those oughtful statements are derived. And in our heads, those oughtful statements exist over an otherwise un-oughtful substrate. And when we put them into computers, they are again hosted over an otherwise un-oughtful substrate.

So, yeah, I'm saying that some ontology always encompasses a functional interface. But I'm not just saying that in the sense of "hey, everything is relative." I think that this ontological/teleological relationship may produce a certain amount of necessary syntactical baggage, which scales at some constant rate relative to problem size. Something akin to a universal, quasi-mathematical law of metabolic scaling: a contention between purposes and the incidental complexity it takes to realize them within a given hosting ontological context.

Software in general is pretty broad, but I’ve compiled a rough bibliography here that you might find interesting. I especially recommend Data and Reality.

As far as distributed systems go, Nancy Lynch’s Distributed Algorithms is a good formal resource, and Gerald Weinberg’s An Introduction to General Systems Thinking is a good informal resource.

Pretty much everything I do is to motivate myself to understand something better. If a project has an obvious path to completion, I generally never get around to doing it.

I start with a single file, and try to thread together simplistic end-to-end functionality, and then start filling in the pieces. Tests come later, but as I’m changing things there’s some small set of code I can run at the REPL to make sure everything still fits together. This helps me decide how the project should actually be structured.
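
A hypothetical illustration of that shape (names and functions invented, not from any real project): one namespace, a thin end-to-end path, and a `comment` block of forms to re-run at the REPL after each change.

```clojure
(ns myproject.core
  (:require [clojure.string :as str]))

(defn parse-line [line]
  (str/split line #","))

(defn summarize [rows]
  {:rows (count rows)})

(defn run
  "The simplistic end-to-end path; pieces get filled in later."
  [text]
  (->> (str/split-lines text)
       (map parse-line)
       summarize))

;; Not a test suite yet -- just a small set of forms to re-evaluate at
;; the REPL to check that everything still fits together.
(comment
  (parse-line "a,b")   ; => ["a" "b"]
  (run "a,b\nc,d"))    ; => {:rows 2}
```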
