Functional Programming and Algorithmic Efficiency

I’m in a class that digs into the running time of algorithms, with lots of O, Θ, and Ω. Over years of Clojure coding, I’ve gotten out of the habit of the sort of iteration-counting this subject promotes (I’ll abbreviate it “AE”); in functional programming we hate to lose the forest for those trees. We don’t generally ask how efficient (for) or (map) is. So what do you think – is functional programming at odds with conventional AE? Does it just not care? Or is there an angle I haven’t grasped?

1 Like

Many Clojure core functions explicitly state their Big O complexity. Some are linear O(n), some are amortized constant O(1). Performance concerns are why we have things like transducers (to avoid creating intermediate sequences, etc.). Given FP’s mathematical underpinnings, I’d say it’s even more aware of Big O and other performance/complexity issues.
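
As a rough illustration (my own sketch, nothing special about these functions): chaining lazy sequence ops realizes an intermediate sequence per step, while a transducer fuses them into a single pass:

    ;; Two ways to sum the doubled even numbers in a collection.
    (def xs (range 1000000))

    ;; Lazy-seq pipeline: each step produces an intermediate sequence.
    (reduce + (map #(* 2 %) (filter even? xs)))

    ;; Transducer pipeline: one fused pass, no intermediate sequences.
    (transduce (comp (filter even?) (map #(* 2 %))) + 0 xs)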

3 Likes

Excellent; this is just the sort of perspective I was looking for. I had suspicions, due to the FP/math connection, that the concept was there. But I’ve never noticed O complexity listed in the docs (although I don’t read the official pages as often as the alternatives with examples).

Examples in clojure.core – check the docstrings for these functions:

  • contains?
  • count / counted?
  • rseq
  • compare last and peek

And the basic data structures have Big O performance guarantees on a number of operations depending on whether they are Counted, Indexed, etc.
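
A quick REPL check of a couple of those guarantees (illustrative only):

    ;; Vectors know their size and support constant-time access from the end.
    (counted? [1 2 3])        ;=> true
    (peek [1 2 3 4 5])        ;=> 5   (constant time on a vector)
    (last [1 2 3 4 5])        ;=> 5   (walks the seq: linear time)

    ;; contains? is about keys, and is constant/log time on hashed/sorted colls.
    (contains? #{:a :b} :a)   ;=> true
    (contains? [10 20 30] 2)  ;=> true  (index 2 exists, not the value 2!)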

1 Like

There are also practical concerns that big O analysis tends to downplay.

Consider something like a vector lookup: even though the lookup operation is actually O(log32 n), you can pretend it’s O(1) for anything that will fit in memory. For example, for a 32GB vector of ints in memory, a lookup will cost you 7 or fewer traversal operations.

Why? Check out this article for the details: https://hypirion.com/musings/understanding-persistent-vector-pt-1
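
Back-of-the-envelope version of that claim (my own sketch; the real trie also has a tail and other details the article covers):

    ;; Rough depth of a 32-way trie holding n elements: ceil(log32 n).
    (defn trie-depth [n]
      (long (Math/ceil (/ (Math/log n) (Math/log 32)))))

    (trie-depth 1000000)           ;=> 4
    (trie-depth Integer/MAX_VALUE) ;=> 7  (the largest vector a JVM array index allows)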

2 Likes

As an aside, Andrei Alexandrescu’s talk on optimizing a medium-sized sorting algorithm (C++) is highly enlightening and entertaining, particularly since he basically says that, past a point, you should toss out conventional measures and book wisdom if you want actual performance. He wrestles to achieve seemingly classical algorithmic efficiency (in terms of comparisons and swaps), which blows up in his face, until he starts approaching things counterintuitively, guided by brutal mechanical sympathy and in some cases by “trying silly things.” The results are surprising (as are the measures he derives for actually predicting sorting performance).

There is a noted disconnect between many FP advocates who rest more on the mathematical side of the house (where implementation is a mere detail) and pragmatists who must wrestle with the machine (implementation pays the bills). Clojure is pretty up front about the expected complexity of its structures (most of which are layered on top of solid research or the well-known complexity of existing structures) and operations, via docs, the official site, books, etc. Perhaps what’s not well discussed is how cache-unsympathetic the prolific reliance on chasing pointers through the heap is (object arrays of objects, etc.), and the effect this has on performance. Then again, these end up being constant factors (supposedly), so they wash out of the complexity analysis.
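
A rough illustration of the kind of constant factor I mean (sketch only):

    ;; Both sums are O(n), but the first chases pointers to boxed Longs
    ;; through the vector's trie, while the second scans a flat primitive array.
    (def boxed (vec (range 1000000)))
    (def flat  (long-array (range 1000000)))

    (reduce + boxed)                               ; boxing + pointer chasing
    (areduce flat i acc 0 (+ acc (aget flat i)))   ; contiguous primitive reads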

4 Likes

Right. IME, O notation can be useful to gauge what order of magnitude to expect between different approaches given my data size. Other than that, I find the performance problems I run into are dominated by incidental things given the platform, data structures, etc.

2 Likes

I think of complexity as a question of algorithms, and of design and code reviews. FP is an orthogonal matter. Think of the RETE algorithm.

Just because something is O(1) doesn’t mean one can’t make a really, really slow program with it. FORTRAN is still used because it is designed to use cache, memory, and compute units efficiently. In modern supercomputers, I/O bandwidth is more important than memory-cache efficiency, and FORTRAN has a hard time expressing that complexity; C is better suited to today’s general-purpose supercomputing. There is still a place for Lisp and FORTRAN in supercomputing, just not for the general problem. The common stuff (LAPACK) is hand-tuned for each computing environment and uses modifiable memory for real-world efficiency. Because of those hand-coded optimizations, the advantages of FORTRAN are less important.

The algorithms can be programmed in FP. Eigenvalues don’t require modifying anything in place: when I solved them by hand, I didn’t write over the values on the page; I put each step on a separate part of the page. That’s great for understanding where I made mistakes, or for reading the work at all. But when it comes to programming it, a purely functional style is a Really Bad Idea. Premature optimization may be the source of all evil, but knowing that I need to reduce a billion-element matrix, deciding to avoid FP isn’t premature optimization; it’s a basic requirement. Actually, I do use FP to decompose the problem across all of the processors and networks; the actual inner coding is imperative, with reused memory.

When it comes to the real world, we need to understand both complexity and implementation efficiency. FP is just an implementation choice: it helps in some places and hurts in others.

1 Like

Yeah, I’d like to add that you cannot ignore the fundamentals of computer science: algorithms make assumptions about primitive operations, and concrete implementations of them make assumptions about hardware and specific machines.

You can take an algorithm and frame it within a set of existing operations assumed to be single execution atoms, and then reason about algorithmic complexity over such an abstract machine. Similarly, once you know the concrete machine (what instruction set the CPU has, whether a GPU is available, the type and size of RAM, etc.), you can reason about algorithmic complexity as well as the relative performance of primitive operations for that concrete execution scenario.

So fundamentally, your algorithms must be efficient on the abstract machine you intend to run them on, most likely a Turing machine, and they must also be efficient on the concrete machine you will implement and use them on; that second part sometimes shows up less in traditional Big O algorithmic analysis.

When it comes to FP, all of the above holds. The issue is that the abstract machine for FP, and its primitive-operation assumptions, would be those of some lambda-calculus machine, and that abstract machine has no concrete hardware. Most algorithms are instead analyzed abstractly against an abstract Turing machine and later implemented concretely on closely related hardware. That means programs implemented using FP, and algorithms defined within FP’s context, need to be translated to an imperative machine and concrete hardware. This translation has efficiency considerations that can’t be ignored, and practical FP takes them quite seriously.

Clojure is a great example of taking FP’s efficiency considerations to heart. It was one of the first languages to implement fast persistent immutable data structures, for example. It chose to build on top of HotSpot, which can do wonders in terms of managing GC for FP. A lot more choices take performance into account. For a long time, FP wasn’t a serious contender for practical software, because hardware was too slow, memory was too limited, etc. The resurgence of FP in recent years is not coincidental: part of it is due to hardware catching up, and part to algorithms specific to FP having made strides in optimization, such as the ones Clojure uses.
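
A tiny sketch of what those persistent structures buy you (illustrative; the details are in the trie article linked earlier):

    ;; "Updating" one slot returns a new vector; the two share almost all
    ;; of their internal tree nodes, so this is O(log32 n), not an O(n) copy.
    (def v  (vec (range 1000000)))
    (def v2 (assoc v 500000 :changed))

    (nth v 500000)   ;=> 500000    (original untouched)
    (nth v2 500000)  ;=> :changed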

Clojure gives a lot of escape hatches out of FP, and most of them are there because of performance concerns: mutable fields in deftype, defrecord using classes under the hood, access to volatile containers, being able to type hint to avoid reflection, allowing primitive math for some operations to avoid boxing, using mutable data-structure operations within constrained boundaries via transients, etc. Hopefully, one day, hardware and algorithms will evolve so that FP doesn’t need these escape hatches, but currently that is not always the case.
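
For instance, a rough sketch of two of those hatches (my own example, not from any particular library):

    ;; Transients: locally mutable building of a collection, returned
    ;; as an ordinary persistent vector at the boundary.
    (defn squares [n]
      (loop [i 0, acc (transient [])]
        (if (< i n)
          (recur (inc i) (conj! acc (* i i)))
          (persistent! acc))))

    ;; Type hints: primitive doubles in and out, no reflection or boxing.
    (defn hypot ^double [^double x ^double y]
      (Math/sqrt (+ (* x x) (* y y))))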

Bottom line: FP isn’t at odds with conventional AE, but conventional AE is focused on Turing-style models, because current concrete hardware is imperative and FP needs to be translated to it to be executable on real physical computers, so it makes sense for AE to focus on that. When you work in FP, you need to understand that translation as well, and analyze its overhead and its implications post-translation in terms of AE and concrete efficiency. Thus FP does add a layer of implementation challenges, mostly for fundamental building blocks like the implementation of data structures, GCs, compilers, etc.

3 Likes

This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.