Some background: core.matrix is a common interface implemented by multiple matrix/linear algebra libraries. In the best case, you can switch matrix libraries with a one-line code change. I have the sense that core.matrix doesn’t get a huge amount of affection these days, although I still see issues and PRs submitted, so maybe I’m wrong. I believe there’s more interest in the Neanderthal matrix library, which I am sure is quite justified. There is a core.matrix interface to Neanderthal, denisovan, but I understand that there are often advantages to writing code directly in Neanderthal.

Recently I was implementing a simple neural network algorithm that’s part of a proof by Siegelmann and Sontag (or see Siegelmann’s book), and thought I’d try Neanderthal. (I’d only used core.matrix in the past.) Neanderthal worked perfectly, but trying to run Siegelmann and Sontag’s algorithm in Neanderthal made me realize that Neanderthal’s floating point numbers weren’t sufficiently precise for the algorithm.

So I rewrote my (very few lines of) code in core.matrix, and implemented the network with `clojure.lang.Ratio`s in core.matrix’s `ndarray` library. `Ratio`s are native Clojure arbitrary-precision rational numbers, and `ndarray` is written in pure Clojure, so it preserves Clojure’s numeric type semantics. That was exactly what I needed. I got the algorithm working and finally understood the proof.

I’m not aware of any other language in which I could have done this! (For example, OCaml, my other favorite language, has a good matrix library and a couple of arbitrary-precision libraries, but no matrix library with arbitrary-precision numbers.) The fact that we have core.matrix in Clojure, with a common interface to several libraries, means that even in unusual use cases like mine, there is a good chance that there’s a matrix library that will work. And if you write to the core.matrix interface, you may not have to change your code when you need a different library. (Of course, it’s not always so simple. Sometimes core.matrix code that works with one library can become slow or even break when you switch libraries.)

I think that core.matrix was and is a very difficult project, for which Mike Anderson deserves enormous credit. Maybe it’s too difficult to sustain without a big community of users and contributors, and I don’t think it has that. Most people who need a matrix library quite reasonably want the fastest possible floating-point matrix code, and for that, writing directly to Neanderthal may be the best solution.

However, core.matrix has been incredibly useful to me. I needed to say that. I hope that it persists and grows.