What is place-oriented thinking?

I’ve heard Rich Hickey talk about it, and it’s mentioned here and there. I feel that it relates to immutability, references, and how Clojure manages state. But I don’t feel like I understand it.

Any takers?

Teodor

Perhaps you have already seen the talk “The Value of Values” by Rich Hickey (transcript and link to video here https://github.com/matthiasn/talk-transcripts/blob/master/Hickey_Rich/ValueOfValuesLong.md ), but if not, in the first few minutes he does a great job of distinguishing facts from places, and goes on to talk about the benefits of working with values.

That might not answer your question, but I believe that place oriented thinking is thinking not about immutable values/data/facts, but about the constructs in default-things-are-mutable programming languages that programmers have used to (try to) represent them.


Hello Andy!

Thanks for the link. I’ve seen the talk, but it seems like a rewatch is in order. And this definitely seems to be the talk to watch to cover place-oriented thinking.

Would this be your proposed definition?

Place oriented thinking is thinking not about immutable values/data/facts, but about the constructs in default-things-are-mutable programming languages that programmers have used to (try to) represent them.

– Andy Fingerhut

Sure, if one doesn’t mind a more conversational style of definition, rather than a formal way of putting it.

I think there was a similar thread a while back, but I can’t find it, so I’ll summarize some of the things I believe I mentioned last time.

Basically, it’s about code that reads its input from and writes its output to a specific place. A place is quite literal: think of an exact memory location, or its virtual-memory equivalent, like a specific cell in an array, or a specific field in a struct or on an object.

And place-oriented programming is a style where your code agrees on conventions of common places where data will be written to and read from.

Such a style encourages mutability, as actors in your code communicate with each other through this limited set of places, each allowed to write to and read from those places at any time, which means careful coordination needs to be put in place.
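To make that concrete, here’s a minimal sketch of that style in Clojure, using a mutable Java HashMap as the shared place. The names shared-state, record-order! and order-total are made up for illustration:

```clojure
;; A single, agreed-on mutable place that several functions write to and read from.
(def shared-state (java.util.HashMap.))

(defn record-order! [id total]
  ;; writes into the agreed-on place
  (.put shared-state id total))

(defn order-total [id]
  ;; reads from the same place; sees whatever was last written there
  (.get shared-state id))
```

Either function may run at any time, so any code touching shared-state has to coordinate carefully with every other piece of code that knows about that place.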

Now, in truth, you can’t really get away from this style on today’s hardware, because memory fundamentally works that way at the hardware level. You have known memory locations, known registers, and known virtual-memory places. But you can build layers above that which change the style for the code above them.

One such layer you can add is immutability of data, so that once a place has been written to, it can never be rewritten as long as it is being read from, i.e., as long as it hasn’t been reclaimed by the GC.

So each new piece of code wanting to make a change to the data must make the change in a new place. In the abstract, it looks as if functions just get copies of the data as input and return modified copies as output, exchanging this immutable data with each other and never touching data owned by someone else.
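In Clojure that looks something like this (the order map is just a made-up example):

```clojure
(def order {:id 1, :total 10})

;; assoc never touches `order`; it produces a new value in a new place.
(def discounted (assoc order :total 8))

order       ;=> {:id 1, :total 10}  (unchanged)
discounted  ;=> {:id 1, :total 8}
```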

Another aspect is that places are abstracted away and managed for you by lower-level constructs. So instead of agreeing that the value you want will always be at a specific offset in a struct, you agree that you can find the data given a key. This means the data structure can actually choose to store the data in many different places, as long as it can find it for you given the agreed-upon key.
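For example (the user map here is invented), callers only agree on the key, and the concrete storage is an implementation detail the data structure is free to change:

```clojure
(def user {:name "Ada", :email "ada@example.com"})

;; You ask for the data by key, not by memory offset.
(get user :email)   ;=> "ada@example.com"

;; The map decides where entries actually live; small and large maps
;; even use different concrete representations:
(type {:a 1, :b 2})                                   ;=> clojure.lang.PersistentArrayMap
(type (into {} (map (juxt identity inc) (range 20)))) ;=> clojure.lang.PersistentHashMap
```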

A related aspect is static dependencies on places. For example, a function that expects to find the database connection it needs in a known global location, say agreed-on-namespace/specific-symbol. That’s a place to look for something, and you can think of this as place-oriented programming too. The alternative is that the place isn’t known in advance, and the data is either passed to the function itself, so it’s simply handed the connection when called, or it is handed something it can use to find the connection.
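A sketch of the difference, where db-conn, query, find-user-from-global and find-user are all hypothetical names, and query just stands in for whatever real database call you would make:

```clojure
;; `query` is a hypothetical stand-in for a real database call.
(defn query [conn sql-and-params]
  ;; imagine this runs the query against `conn`
  nil)

;; Place-oriented: the function reaches into a known global place.
(def db-conn (atom nil))

(defn find-user-from-global [id]
  (query @db-conn ["select * from users where id = ?" id]))

;; Alternative: the connection is handed in, so no agreed-on place is needed.
(defn find-user [conn id]
  (query conn ["select * from users where id = ?" id]))
```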

Both of these relate a lot to FP vs. imperative/OOP. Immutability eliminates the shared memory places that could cause coordination issues. And pure functions, which operate only on their given inputs and return their results as outputs, solve the problem of depending on global places for input/output, where, if that place ever changes, you accidentally break a bunch of functions that were secretly relying on it always existing and never changing.


Thanks for an excellent explanation, @didibus. I like how you explained how immutability helps us get away from place-oriented thinking.

You might also want to review John Backus’s Turing Award lecture for related thoughts. He talks about how the architecture of the von Neumann machine led to one-at-a-time programming and thinking, and how thinking functionally is many-at-a-time, analogous to place-oriented thinking versus values.

Hello @dorab,

Thanks for your reply.

I can’t get the link working, any chance you can recheck it?

One-at-a-time vs. many-at-a-time really strikes a chord with me. I feel that relates to how values should evolve over time. Please do expand on that, if you’re interested.

Teodor

Sorry about the link. You are correct that it leads to a “permission denied”. Try this link instead. If that does not work, try searching for “backus turing award lecture”.

If you think about a “place” to store and update things, you have to think of “what have I stored in this ONE place” and “how do I update this ONE place”. If you think in terms of values, then you can easily imagine transforming multiple (perhaps all) values into new values in parallel, not necessarily one-at-a-time.
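For example (prices is just a made-up vector), the value-oriented version describes one transformation over all the values rather than updating places one at a time:

```clojure
(def prices [100 250 80])

;; One expression describes the transformation of every value at once.
(mapv #(* % 9/10) prices)          ;=> [90 225 72]

;; Because nothing is written back into a shared place, the same
;; transformation can run in parallel without extra coordination.
(vec (pmap #(* % 9/10) prices))    ;=> [90 225 72]
```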

To be pedantic, Backus actually talks more about the “von Neumann bottleneck” (the narrow one-at-a-time connection between the CPU and memory) rather than a “place” as we are discussing here. But the concepts are related.

