Okay, my 2 cents. I hope they clarify.
@Yehonathan_Sharvit, your distinction between OOP and FP reminds me of the model where OOP first switches on the class, then it switches on the function, while FP first switches on the function, then it switches on the class. (Many OOP codebases use classes as an open way of creating what in a typed FP language might be a tagged union.)
I don’t think that’s quite the point you’re making, but it is related. The model sees FP and OOP as essentially isomorphic. The question is then which switch (class or function) will be the most beneficial for your problem. For example, are you expecting to add classes but have the same functions? Then OOP is better. If you are expecting to add functions but have the same classes, then FP is better.
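The two switch orders can be sketched in Clojure with a hypothetical shapes example (names are illustrative, not from any particular codebase):

```clojure
;; OOP-style: switch on the class first. Each type carries its own
;; implementation of each function.
(defprotocol Shape
  (area [this]))

(defrecord Circle [r]
  Shape
  (area [_] (* Math/PI r r)))

(defrecord Square [side]
  Shape
  (area [_] (* side side)))

;; FP-style: switch on the function first, then on a tag -- what a
;; typed FP language would do with a tagged union.
(defn area* [shape]
  (case (:tag shape)
    :circle (* Math/PI (:r shape) (:r shape))
    :square (* (:side shape) (:side shape))))
```

Adding a new shape touches one place in the OOP style but every function in the FP style; adding a new function is the reverse. That's the trade-off the switch-order model makes visible.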
I think it’s a useful reductionist view of the differences, but it doesn’t quite capture what is special about them as paradigms (thinking tools), since it sees both FP and OOP as merely dispatch mechanisms that you could easily translate between.
However, for the purposes of your book, this might be just what you want. My opinion might or might not be useful to you, but I’ll state it here.
OOP models things as Objects. Each object has References to zero or more other Objects. An Object with a Reference to another Object may send that Object a Message.
Computation largely happens by messages flowing through a large, complex, and, on the whole, unknowable network of object references. A little bit of computation happens in the Objects themselves (for instance, when you send the `+ 1` message to `3`, that terminates in an ADD instruction on the actual machine you’re running on). If there is class-based dispatch, some computation happens there, too (for instance, class-based dispatch can replace conditionals).
FP models things as Actions, Calculations, and Data. Actions are often known as effects or side-effects. They have an effect on the world outside of the software. Calculations are timeless computations. They do not depend on when or where they are run. Data is facts about the world; as facts, it doesn’t change, and it is inert (it can’t execute the way a Calculation can).
FP differs primarily by recognizing that Calculations and Data are easier to work with. Actions are harder, and so we should devote more attention to getting them right.
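The three categories can be made concrete with a tiny sketch (the names here are illustrative, not an API):

```clojure
;; Data: inert facts. They can't execute.
(def order {:id 17 :items [{:sku "A1" :qty 2} {:sku "B2" :qty 1}]})

;; Calculation: timeless. Same input, same output, regardless of when
;; or where it runs.
(defn total-qty [order]
  (reduce + (map :qty (:items order))))

;; Action: touches the world outside the program (here, stdout), so it
;; matters when and how often it runs.
(defn notify! [order]
  (println "Order" (:id order) "has" (total-qty order) "items"))
```

Note that the Calculation is trivially testable, while the Action needs its effect on the outside world accounted for — which is the sense in which Actions deserve more of our attention.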
I think Data Oriented Programming is the recognition that facts are

1. structured
2. interpreted in various ways, even in the same program
Haskell gets that data is structured, but the types make #2 very difficult. Yes, you can have different types for different means of interpretation, but they are difficult to know ahead of time and they are numerous.
A Data Orientation means you find an abstraction for storing data that allows you to capture the structure of the facts with high fidelity. How to capture that structure is the province of data modeling. You can then interpret those facts in various ways for the many purposes of your software.
This is what Relational Databases were designed for. Get the data in there with some structure. Then the query engine can do arbitrary queries on it in something like a declarative logic language.
Clojure has a different model, but it is a model, nonetheless. Associate values with names (maps of keyword->value). Sometimes you need a collection that maintains order (vectors). And sometimes you need to check for value containment in a collection (sets with `contains?`).
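In code, that model looks like this (the data here is made up for illustration):

```clojure
;; Maps associate values with names (keyword->value).
(def person {:name "Ada" :email "ada@example.com"})

;; Vectors when you need a collection that maintains order.
(def steps [:validate :enrich :save])

;; Sets when you need to check value containment.
(def admins #{"ada" "grace"})

(:name person)           ;; => "Ada"
(first steps)            ;; => :validate
(contains? admins "ada") ;; => true
```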
Data Orientation mainly gets data into some structure, then interprets it in various ways.
One benefit is that you can write functions at two levels of generality. The first level is domain-agnostic operations that operate on data in given structures. Since there are only a small set of possible structures, these functions are very reusable and often allow for combinatorial recombination. It’s like being able to extend the SQL language.
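A sketch of the first level, over hypothetical order data — none of these operations knows anything about orders, only about maps and sequences:

```clojure
(def orders
  [{:id 1 :customer "ada"   :total 30}
   {:id 2 :customer "grace" :total 70}
   {:id 3 :customer "ada"   :total 20}])

;; group-by, map, reduce, and into are domain-agnostic, so they
;; recombine freely -- a bit like composing SQL clauses.
(defn totals-by [k rows]
  (into {}
        (map (fn [[group rs]] [group (reduce + (map :total rs))]))
        (group-by k rows)))

(totals-by :customer orders)
;; => {"ada" 50, "grace" 70}
```

Because `totals-by` only assumes "maps in a sequence", the same function works unchanged on any other rows with a `:total` key.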
The second level is domain-specific operations that understand something not captured in the data structure. For instance, your code might know that it can take a map under the `:address` key and send it to a geolookup API to turn it into a lat-long. These kinds of operations are less reusable but obviously necessary.
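As a hedged sketch of that second level — `geolookup` is hypothetical, standing in for a call to some external geocoding API (an Action), stubbed here for illustration:

```clojure
;; Hypothetical: imagine an HTTP call to a geocoding service here.
(defn geolookup [address]
  {:lat 48.8584 :lng 2.2945})

;; Domain-specific knowledge: this particular map keeps a street
;; address under :address, and that address can be geocoded.
(defn add-lat-long [entity]
  (assoc entity :lat-long (geolookup (:address entity))))
```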
Most orientations focus on the second level of generality. Data orientation separates these out and leaves things as data.
Another benefit is that you can be very fluid with your interpretation of the data. For instance, different stages of a workflow may need different pieces of data all tied to a domain entity. Would you write different types for each of those aggregates of data? Having a generic data abstraction lets you deal with this fluidly. Some would argue it’s too fluid. I think spec was supposed to help with this (but wink wink I think it made it worse and is why spec 2 was started).
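A minimal sketch of that fluidity, with made-up workflow stages: one generic map carries the whole entity, and each stage just selects the keys it needs, with no per-stage type.

```clojure
(def order
  {:id 7 :items [:widget] :address "1 Rue X" :card-token "tok_123"})

;; Each stage's "aggregate" is just a selection over the same map.
(def shipping-view (select-keys order [:id :items :address]))
(def payment-view  (select-keys order [:id :card-token]))
```

Whether this fluidity is a feature or a liability is exactly the spec debate: the data abstraction doesn't tell you which keys a stage expects, so something else has to.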