This is why people say that in functional programming the order of execution doesn’t matter

I was having a discussion with someone online, trying to explain that let is very much a functional construct; they were under the impression that, since it lets you write imperative-looking code in Clojure, it must therefore be imperative.

So finally I came up with the following explanation, and in the process I think I've arrived at a simple account of why you'll hear that in functional programming the order of execution doesn't matter:

See how let:

(let [a 1
      a (+ (inc a) (inc a))
      _ (println a)
      a (+ a a)]
 a)

Can be rewritten simply as a nested composition of anonymous functions:

((fn[a]
  ((fn[a]
    ((fn[_]
      ((fn[a]
        a)
       (+ a a)))
     (println a)))
   (+ (inc a) (inc a))))
 1)

You would consider the latter representation functional programming, no? Well, let is just syntactic sugar for it: any let can be rewritten in terms of nothing but composed anonymous functions (aka lambdas).
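
In fact you can write that desugaring yourself as a macro. Here's a toy sketch, for illustration only (fn-let is a made-up name, and this is not how Clojure's real let is implemented, which compiles down to the let* special form), that produces exactly the nested shape above:

(defmacro fn-let
  "Toy let: desugars each binding into an immediately applied fn.
  Illustration only; not how clojure.core/let works internally."
  [bindings & body]
  (if (empty? bindings)
    `((fn [] ~@body))
    (let [[sym expr & more] bindings]
      `((fn [~sym] (fn-let [~@more] ~@body)) ~expr))))

(fn-let [a 1
         a (+ (inc a) (inc a))
         _ (println a)
         a (+ a a)]
  a)
;; prints 4 and returns 8, same as the let form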

And being functional, you can reduce that nested-lambda form by variable substitution, which gets rid of all the variables entirely (this could even happen at compile time), and evaluating the result gives the same answer:

(do
 (println (+ (inc 1) (inc 1)))
 (+ (+ (inc 1) (inc 1))
    (+ (inc 1) (inc 1))))

From the let form, or from its corresponding anonymous function form:

((fn[_]  
  ((fn[]
    (+ (+ (inc 1) (inc 1))
       (+ (inc 1) (inc 1))))))        
 (println (+ (inc 1) (inc 1))))

In this form you can see more clearly that the side effect is what causes impurity: the (println ...) can't itself be reduced away. It's the only place where order dependence matters, so we can't eliminate the wrapping function. Its sequencing relies on Clojure's left-to-right argument evaluation order, which is what lets you mix side effects into pure functions with predictable effect timing.
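
As a quick sanity check of that guarantee, Clojure evaluates arguments left to right before applying the function, so side effects placed in argument position fire in a predictable order:

(+ (do (println "left") 1)
   (do (println "right") 2))
;; prints "left" then "right", returns 3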

Back to the example: so here's what happens, you reduce that anonymous-function form into a do block (do being Clojure's imperative sequencing form):

(do
 (println (+ (inc 1) (inc 1)))
 ((fn[]
   (+ (+ (inc 1) (inc 1))
      (+ (inc 1) (inc 1)))))) 

And now you can further reduce the pure parts, which takes us back to what we had when we reduced the let:

(do
 (println (+ (inc 1) (inc 1)))
 (+ (+ (inc 1) (inc 1))
    (+ (inc 1) (inc 1))))

And finally this can be reduced further:

(do
 (println (+ 2 2))
 (+ (+ 2 2)
    (+ 2 2)))

(do
 (println 4)
 (+ 4
    4))

Until we arrive at its fully reduced form:

(do
 (println 4)
 8)

This reduction can all happen in parallel or in any order, and the result will always be the same.

Ok, and this last bit is very important: this is what people mean when they say that in functional programming the order of execution doesn't matter. The side effects must still be sequenced in their correct order, but all the pure computation can happen in arbitrary order, because it doesn't rely on a sequence of instructions mutating shared memory locations the way the imperative paradigm does. Instead it relies on the "reduction" process I just described, where, as you saw, you are free to reduce each part in whatever order you want (even in parallel), and you'll always end up with the same thing.
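
Here's a minimal sketch of that claim. The two identical pure subexpressions below can be computed on separate threads, finishing in whichever order the scheduler picks, and the sum is always the same (futures are just one convenient way to force an arbitrary order):

(let [x (future (+ (inc 1) (inc 1)))
      y (future (+ (inc 1) (inc 1)))]
  (+ @x @y))
;; => 8, no matter which future finishes first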

x = (1 + 2 + (3 + 4) + (5 + 6))

Does the result depend on which order we simplify the expressions? If not, then "order of execution" (or of evaluation/simplification) does not matter. It boils down to referential transparency and a substitution model of evaluation: e.g. (3 + 4) = 7, and that immutable fact means we can substitute. The order of substitution is irrelevant; we're just "recognizing" the immutable relationship that already existed. In (pure) FP, the end result of the computation is the result of substituting and simplifying these equalities. That means in languages like Haskell, where pure FP has primacy, the compiler can pick and choose how things are evaluated and substituted behind the scenes; e.g. it's fairly free to re-order and cache things to gain efficiency without affecting correctness.
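
The same example in prefix notation, where either inner group can be simplified first:

(+ 1 2 (+ 3 4) (+ 5 6))
;; (+ 1 2 7 (+ 5 6))  => (+ 1 2 7 11) => 21
;; (+ 1 2 (+ 3 4) 11) => (+ 1 2 7 11) => 21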

Yes, exactly. I don't know if referential transparency is the right concept here though. There must be some well-defined properties in the literature that spell out what's required for the substitution model of evaluation to work, and I suspect it's more than just referential transparency, or it might not always need referential transparency. It definitely requires pure functions, though when you have impure parts you can still keep track of their sequence within the "formula"; it's just that those parts can't be substituted themselves and will need to run imperatively.

Good point in asking whether the result could depend on the order in which you simplify things. I think it very much can: we know operator precedence can affect results. But the substitution rules normally define that, so it's not that all things can be substituted in any order; rather, substitution tells you what can be reduced when, and wherever that's the case the order doesn't matter.

Like:

(* 2 (- 1 2))

We know you need to subtract first and then multiply; that order very much matters to the result. But in the case of:

(* (+ 2 4) (- 1 2))

We also know that you can subtract first and then add, or vice versa; as long as you multiply last, you're all good.
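
Spelled out as two reduction sequences:

(* (+ 2 4) (- 1 2))
;; add first:      => (* 6 (- 1 2)) => (* 6 -1) => -6
;; subtract first: => (* (+ 2 4) -1) => (* 6 -1) => -6
;; either way the multiplication reduces last, once both arguments are values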

Also worth noting: practically speaking, I don't think many compilers/runtimes will actually evaluate things using substitution. It's more that substitution is fundamental to the functional paradigm, so what matters is that your semantics could be evaluated with substitution, even if the implementation uses other tricks to do so.

Edit: Oh, and being able to do a full reduction like this of an entire program at compile time is, I believe, the holy grail of compiler optimization for FP languages. You could eliminate lots of computation and really simplify the code down to its bare minimum.

Yeah, operator precedence/order of operations is unnecessary with clear prefix application rules (part of the reason I like lisps so much… zero ambiguity). Things can get dicey in Haskell with custom operators and having to juggle precedence.

Perhaps the best view (which lisps/s-expressions help with) is to look at these as an AST that we evaluate. It becomes extremely obvious that the only "order" that matters is the topological order defined by the dependencies between inputs. You also get niceties like turning the tree into a DAG to cache provably identical terms, and computing leaves in parallel. All of this is underpinned formally by the lambda calculus.
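
Here's a tiny evaluator sketch of that idea (my own illustration, not anything from the thread): memoizing on the subexpression itself effectively shares provably identical subtrees, like nodes collapsed into a DAG:

(def eval-expr
  (memoize
   (fn [expr]
     (if (seq? expr)
       (let [[op & args] expr]
         ;; resolve the operator symbol, recurse on the arguments;
         ;; structurally identical sub-lists hit the cache
         (apply (resolve op) (map eval-expr args)))
       expr))))

(eval-expr '(+ (+ (inc 1) (inc 1))
               (+ (inc 1) (inc 1))))
;; => 8; the repeated subterm (+ (inc 1) (inc 1)) is computed once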

Yeah, it's really the lambda calculus evaluation rules bent into a more programmer-friendly syntax/semantics.

I thought about how things combine into a DAG; more specifically, I think it actually acts as a multitree, if I'm not mistaken.

Edit: The reason I posted this, by the way, is that I've often seen people wrongly assume FP makes execution completely order-independent, which isn't correct: you can obviously still require some order of operations, and side effects almost always require ordering to be logically correct. It's more that, in this multitree view of the code base, the substitution at each level can happen independently of its siblings (and maybe a bit more), and that's where FP allows an order independence that imperative code generally wouldn't for the same computation.
