On functional, procedural and OO

class Foo {
   void bar( IQuux q ) {
      q.wibble();
   }
}

If IQuux is an interface, the implementation of wibble() could be absolutely anywhere – in user code, in library code, in code that hasn’t even been written yet. Given an interface IQuux, you have absolutely no idea where its implementations might be.

You just haven’t seen enough code. For the first half of the '90s, I worked for a software QA tools company. We wrote source code analyzers. We had hundreds of clients, all around the world. I got to see – and analyze in depth – hundreds of millions of lines of C, FORTRAN, and C++ across thousands of projects worldwide. Since then, I have also worked as a consultant doing mostly architectural reviews of projects and writing coding guidelines, again with a global client base in many languages. A lot of code.

Another downside of OOP: boilerplate. Something that OOP language designers are now working to address because they’ve seen how much less boilerplate there can be (in FP code). And even though the first round of value objects were still mutable, there’s a strong movement in several mainstream OOP languages to introduce better support for immutable (value) objects.

I’ve shown my credentials. I’ve dealt with a lot of different languages over more than three decades of professional software development and I’ve seen a huge amount of code, written in various styles, because of the industries I’ve worked in.

As I said above: your assertion about this simply isn’t true.

1 Like

Ya, for sure. I meant that, in terms of code organization (like being able to find all usages of a class), you still need to do some “search” over the code base, and over upstream packages as well. Just going to where the class is defined doesn’t show you the full picture of all the operations that involve the class in the application.

As an aside, I want to say that I appreciate your challenges. I’m glad there is someone to represent OO on this forum, otherwise we’d be at risk of just becoming an echo chamber. Even if the conversations sometimes tread a fine line between constructive and flame war, I think this one has been constructive.

I also think that some styles work best for some people and not others. I think this is an aspect that isn’t always acknowledged. To whatever extent you can point out technical differences, it still comes down to how effectively one can leverage the paradigm. If someone finds they are more effective with OO, it’s useless to argue with them. Similarly for FP. What can be done, though, is an exchange of thoughts, because sometimes being effective in one style is a matter of practice and learning.

We’ve talked a lot about OO modeling, because I think OO forces a certain model on you, so it’s natural to start by talking about it. In FP modeling, there are many ways to model things (the same is true in OO once you get into patterns, but the basic modeling there is more obvious).

So, on the prior topic of OO clubbing together data and the behavior over that data: one thing I want to bring up is that, the same way this is leveraged in OO, I actually take the idea one step further.

I see the inputs to a function as the fields of a class.

And in that sense, it’s a kind of really granular OO: imagine restricting yourself to one method per class.

This is why I don’t like writing functions that all operate on some common structure. There’s too much coupling involved in that. I prefer to think of each function in terms of what data it needs and what structure it wants it in.

That way, you can just introduce mappings between your functions and easily compose them.
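To make that concrete, here is a minimal sketch in Clojure (all names here are hypothetical, just for illustration): each function declares only the fields it needs, in the shape it wants them, and small mapping functions adapt one shape to the next so everything composes.

;; Each function takes exactly the fields it cares about, like a tiny one-method class.
(defn line-total [{:keys [unit-price quantity]}]
  (* unit-price quantity))

(defn apply-discount [{:keys [amount discount]}]
  (- amount (* amount discount)))

;; A mapping between the two shapes, so the functions compose:
(defn line->discountable [line discount]
  {:amount (line-total line) :discount discount})

(-> {:unit-price 10 :quantity 3}
    (line->discountable 0.1)
    apply-discount)
;= 27.0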

Even in OO this is known as good practice: the single responsibility principle. There’s also the advice that there shouldn’t be fields which are not used by all the methods; normally, if there are, that’s an indicator that the class should be broken apart into two.

Now, sometimes this is too granular. For me, finding the right level of granularity is key to good design. But in general, I also know I’ll get it wrong the first time. And I find that refactoring towards more functions operating over the same structure is easier than breaking an existing structure apart into two and splitting up the methods. That’s why I find it personally a better default to start at the most granular level, that of single functions, and bring together over time whatever needs to be brought together.

3 Likes

I know this is an old thread but discussions of this nature are fascinating to me, and I’d like to make a number of observations.

Firstly, regarding the above, the fundamental difference between FP and OOP is in-place mutation in OOP vs. immutability in FP. So in OOP, any data structure has some level of complexity by default, and the more nested it is, the more complex it becomes. In FP, on the other hand, no matter how deeply nested a data structure is, you do not have that complexity. Complexity is not a function of how deeply nested a structure is, but rather of how the value of one entity depends on another.

This is the core problem in OOP: you create two objects, where one references another, and suddenly you have tight coupling and with it complexity. The more objects you link to this structure, the more complex it becomes.

Take the following examples in Clojure.

;; Example OOP
(let [a (atom {0 1}) 
      b (atom {:a a})
      c (atom {:b b})]
  c)

The above is an example of how OOP works, modeled using Clojure primitives. If I want to build any data structure, I do it using mutable references, and the mechanism for dereferencing is built into the language. (I could also use deftype with mutable fields here, but the idea is the same.)

Contrast that with:

;; Example FP
(let [d {0 1}
      e {:a d}
      f {:b e}]
  f)
;= {:b {:a {0 1}}}

Here I’m expressing the same thing, but with immutable values instead. Now, this is a very simple example with only two nested levels, but even so, the immutable version is clearly less complex. Why? Because as soon as I create f, I know that the value it holds can never change. Nor can I change d or e. I can re-bind these symbols to different values, but I cannot change the actual values they are bound to.

That isn’t the same with Example OOP. Not only can I change the actual value of each of the items I have defined, but if either a or b changes, then c changes along with it. Hence there is no guarantee as to what c holds at the start of any operation I may wish to carry out using c, since something may have changed any of a, b or c without my knowing.

If I were to increase the nesting in both examples, then for each extra level in Example OOP I am adding extra complexity to the system, since I will need to add another mutable reference at that level.
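To illustrate that dependency, here is a small sketch in the same style as “Example OOP” above (hypothetical code, not from anyone’s project): once c holds a chain of mutable references, a change to the innermost atom shows up through c even though c itself was never touched.

(def a (atom {0 1}))
(def c (atom {:b (atom {:a a})}))

(get @(:a @(:b @c)) 0)
;= 1
(swap! a assoc 0 99)   ; mutate only the innermost reference
(get @(:a @(:b @c)) 0)
;= 99 – what c “holds” has changed without c ever being touched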

As such, @Richard_Heller when you say the more complex a data structure needs to be, the more limiting you find FP, I would have to completely agree with you on this. But that is because FP doesn’t deal with complex (read: mutable) data structures, it deals with immutable data structures, which by their very nature are not complex, no matter how much nesting you require.

You cannot have pure functions without immutable values. If you pass anything other than an immutable value to a pure function, it isn’t a pure function. Even if you pass in a reference (object) to a function and don’t change it within the function, on a host which supports concurrency that reference can be modified from outside of the function, so you lose functional purity. You can only pass immutable values, such as numbers, strings, booleans, and of course immutable data structures into pure functions.
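As a small sketch of that point (again with hypothetical names): the same computation is pure when handed an immutable map, and no longer pure when handed a mutable reference, because anything else holding that reference can change what the function sees between calls.

(defn sum-point [p]
  (+ (:x p) (:y p)))

(sum-point {:x 1 :y 2})
;= 3 – always, for this immutable value

(def p* (atom {:x 1 :y 2}))

(defn sum-point* [point-ref]
  (+ (:x @point-ref) (:y @point-ref)))   ; result depends on what the atom holds *now*

(sum-point* p*)
;= 3
(swap! p* assoc :x 10)   ; could just as well happen on another thread, mid-call
(sum-point* p*)
;= 12 – same argument, different result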

Say you have the following object,

obj = {
  foo: {
    bar: {
      a: 1,
      b: 2,
      c: 4,
      d: 3,
    }
  }
}

You notice that c should be 3 and d should be 4. Do you do this,

obj.foo.bar.c = 3
obj.foo.bar.d = 4

and be done with it? Nope! Setting c and d causes a new reference for bar, which needs to be assigned to foo, which causes a new reference for foo that needs to be assigned to obj. And obj, of course, is merely a part of a larger data structure that you’re doing divide and conquer on, so the new obj reference needs to be set everywhere obj was being used, with the same domino effect coming into play.

That gets old.

Yes, there are things like assoc-in to help with the dominoes and ease the pain. The fact that assoc-in even exists shows how much of a pain it is in the first place. Making the data immutable doesn’t make it less complex. It just makes it immutable.
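For reference, this is roughly what that update looks like with assoc-in, assuming obj is a plain immutable Clojure map (a sketch, not anyone’s actual code):

(def obj {:foo {:bar {:a 1, :b 2, :c 4, :d 3}}})

(-> obj
    (assoc-in [:foo :bar :c] 3)
    (assoc-in [:foo :bar :d] 4))
;= {:foo {:bar {:a 1, :b 2, :c 3, :d 4}}}

;; obj itself is unchanged; anything holding the old value still sees the old value.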

The point of immutability is to provide an alternative to the complexity inherent in OOP systems. The example you have given is something you would only do in OOP, not FP. Yes, you can do something like that in Clojure with atoms, and I have provided something similar in my “Example OOP” code fragment to show why this isn’t a good idea, but in general in Clojure you do not have one atom depend on another. Instead, you write immutable data structures, as I have shown in “Example FP”.

Yes, in OOP. Your example is written in OOP, and there is no FP equivalent. The “domino effect” you describe is the complexity present in OOP systems that FP and immutable data structures try to mitigate. There is otherwise no domino effect in FP in general, though if you try really hard in Clojure you can replicate it. But why bother, when it’s much easier to avoid the domino effect entirely and just use immutable data structures?

I cannot agree with you more, and that is part of the reason why FP languages have been gaining traction in recent years, because in at least this area, they are a lot simpler to work with than the equivalent in OOP.

You are mixing OOP and FP concepts in the one place where the two are orthogonal to each other. assoc-in is not something you can use to address the pain in OOP systems that you are describing. It is entirely for use with immutable data. In no way can it help with the dominoes since, as I have already stated, there are no dominoes.

1 Like

I’ve been catching up on this discussion and reading everything – interesting stuff, @Richard_Heller and @didibus and the others! Thanks for starting and carrying on this mystifyingly intense topic (and I didn’t think you came overly close to a flame war).

I’d like to ask, especially @Richard_Heller, where you see the difference between object orientation and object specificity, and between domain/data orientation and domain specificity. How do you define these concepts for yourself, and how do you relate them? You wrote something along the lines of having yet to see an FP codebase that’s less domain specific than an OO one. I feel like there’s a disconnect of terminology here: why would an FP codebase be less domain specific, and how would that relate to object orientation? This would also be my question to @seancorfield: was the analyzer code not highly specific to its domain, even if highly generic over its input range?

I’d also like to make a point regarding the idea of a (e.g. JavaScript) hash map, or any other generic data structure, being object oriented. While true in one technical sense (they’re kind of objects; even a JS object has hasOwnProperty), it’s not true with respect to the program domain, which a HashMap is no part of unless it’s a program about programs or data structures. That’s also the idea behind the term data structure: it’s not an object of the targeted domain, it’s an environment-provided structure, most likely accessible via data literals. This introduces the idea of data notation (and further branches out towards metaprogramming and ‘homoiconicity’), a concept widely used in data-oriented languages in combination with ubiquitous generic data structures. These are not objects you ask to accomplish something in your domain; in traditional OO languages this is all married together in a class (or several).

The reason we say “functional programming” is largely in recognition of how relationships between sets are modelled in category theory: the whole functional-mapping and injectivity (this may be mistranslated by me from the German Injektivität) business, where you get from one set of stuff to another with functions. FP is not necessarily object oriented; it may have any degree of OO support, as both @didibus and @Richard_Heller showed, and that’s because the “functional” in “functional programming” is not meaningfully comparable to how oriented towards objects (in the message-passing, encapsulation sense) a programming environment is. They’re arbitrarily independent of each other, and you see the chef’s choice in your language.

I agree fully with @Richard_Heller that much of the hate towards OO is really targeted at Java. Yet I feel all the examples I see are always in Java or in languages with similar weaknesses when it comes to actually embodying OO principles – the way Clojure embodies DO ones or Haskell functional ones (granted, there’s a lot of JS too, so YMMV). OO has powerful answers that replicate well fractally; you get a lot of mileage especially when the problem at hand is well specified, rigid, and lends itself to being implemented in a very normalized way. As soon as you’re talking about “problems” on a 21st-century person’s device, like a normal business application, I think we’re talking about something very different. These are constantly moving targets, constantly moving requirements, on an equally constantly shifting underlying layer of technology. A given language may be OO and very well suited for a business application, but the way you tend to have to write your code in e.g. Java makes it fundamentally at odds with having prior assumptions challenged. The shift to data orientation is precisely to allow more clients, and more kinds of clients, to take part in the data without knowing or caring how that changes. The industry has moved decidedly away from XML to JSON not, I believe, because XML is so verbose, but because XML tends to make your API very object specific (like Java does), and JSON doesn’t (like Clojure doesn’t). Object orientation barely comes into play at this level, in my opinion.

1 Like

Not sure what you’re asking here: I was talking about the huge body of source code we got to review, in support of my position that Richard was just plain wrong about reusability (in domain-specific OO code).

1 Like

I see, I read more into your answer in the context of the quote above it, sorry!

I was taught structured programming in Algol 68 and programmed professionally in a number of imperative languages. I was always very aware that I didn’t really know enough to design a software architecture from a bad specification, but nobody else I worked with seemed to understand it any better. I now know that what I was looking for was agile, incremental methods and experimental development. You can’t do top-down design until you fully understand the problem. You are very likely to make your biggest mistakes first, when you know least, and to embed them in the structure of your code, making changes very expensive.

I stopped programming just before OOP became widespread but I still worked alongside developers. I could see that objects were a powerful tool for domain modelling but I could never get my head around the object paradigm with single-inheritance. It felt tree-structured and I can see that the world isn’t. When I decided I needed to write code again, I tried Python then went to a talk about Clojure and moved over. I like Clojure but it doesn’t feel anything like a procedural language to me. I’ve felt very confused by it at times.

The data model moves from mutable in-place data to data representing a history of state flowing through time. I haven’t made anything in Clojure that is big or complex enough to need simplifying yet, but the idea of designing code structure around objects in the problem domain is still available to control complexity in FP. I think functions with optional objects are far preferable to having to invent a meaningless object whenever you need a function or procedure.

1 Like

Well… not all OOP languages are guilty of boilerplate. Java is surely king in the boilerplate department, and part of why I enjoy Clojure on the JVM, even if it had zero FP capabilities, is that it saves me from that boilerplate. I definitely reached a saturation point with Java where it felt like expressing any new piece of a system required 50 lines of boilerplate.

However, C++ never felt as boilerplate-driven as Java to me, and I say that with the utmost loathing of the language. (Caveat: I stopped writing C++ after the 1990s, so maybe it’s changed in that regard.)

Nor do I think of CLOS as having a lot of boilerplate, but my experience with it was always to dress it up with some define-class macro that let me essentially produce whatever boilerplate I wanted under the hood, i.e. if I wanted class-specific accessor functions instead of using slot-value I could do that.

Anyway, just a tiny bit of defense of OOP from the boilerplate charge. Obviously a lot of it is subjective. Just having a decent macro language would likely provide a substantial reduction in boilerplate even for Java. Okay, maybe not Java :slight_smile:

I think a combination of templates and the preprocessor helps folks avoid boilerplate in C++ and, yes, it is inherently less verbose than Java in general, but it suffers from other issues that impede readability and maintenance (the preponderance of “punctuation” in operators, for example). Sorry that you loathe the language – I spent eight years working on its design as part of the ANSI C++ Standards Committee :slight_smile:

1 Like

My deepest condolences.

4 Likes

This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.