Book recommendations on concurrency/async programming

Any recommendations for good books dealing with async programming? Preferably clojure related but not necessarily.

I’m thinking specifically about patterns and concepts related to manifold’s abstractions - deferred and streams.
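For concreteness, this is the kind of thing I mean (a minimal sketch, assuming manifold is on the classpath):

```clojure
(require '[manifold.deferred :as d]
         '[manifold.stream :as s])

;; A deferred is a placeholder for a single value delivered later.
(def result (d/deferred))
(d/chain result inc #(* % 2) println)  ; compose callbacks as a pipeline
(d/success! result 20)                 ; delivers 20 -> prints 42

;; A stream is a channel-like source/sink of many values.
(def st (s/stream))
(s/consume #(println "got" %) st)
@(s/put! st :hello)                    ; prints "got :hello"
```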


This book is excellent: https://pragprog.com/titles/vspcon/programming-concurrency-on-the-jvm/

This book is also really good (to look at concurrency in different languages, including Clojure): https://pragprog.com/titles/pb7con/seven-concurrency-models-in-seven-weeks/


The unification of single-threaded, multi-threaded, asynchronous, and distributed programming

“Fire-and-Forget” describes a missile with independent guidance capability: it needs no external support, automatically tracks and strikes its target, and requires no control after launch. This improves the efficiency of the missile-launcher pairing and reduces the missile’s dependence on other systems for updated target information, so the launcher can attack the largest number of targets in the shortest time and improve its own survivability. The future direction of guidance technology is precisely this “Fire-and-Forget” precision guidance.

For the same reason, I think the development direction of concurrent and parallel programming is also “Fire-and-Forget”: asynchrony is unnecessary, and async/await is a backward model that will inevitably be eliminated. The focus should change from “code and function development” to “data control, data-flow management, data lifecycle management, data standardization, process improvement (process reengineering), thread coordination and optimization, etc.”

From the perspective of operations research, async/await should be abolished: when a thread has to wait, that thread should simply end. In a factory, for example, one workshop never stops mid-production to wait at the door of another workshop for its products. Each workshop interacts only with the warehouse. After the main thread (also a workshop) sends out its order data, the warehouse generates a production plan and sends data (messages) to the relevant workshops for production until the task is completed. There is no waiting anywhere in the process: the production plan is generated from the order, with the warehouse at the center, and each workshop produces independently and in parallel.

Therefore, from a management perspective, asynchronous waiting is ambiguous: people who understand the situation know the workshop is waiting for raw materials; people who do not may mistakenly think the workers are on strike. In any case, it wastes resources greatly.

Existing pragmatic “Fire-and-Forget” concurrency and parallelism technologies: software transactional memory (STM), multi-version concurrency control (MVCC), git.
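To make the STM entry concrete in Clojure terms, a minimal sketch using refs and dosync (the inventory numbers are made up): workers just fire transactions at the shared store, and conflicting transactions retry rather than any worker explicitly waiting on another.

```clojure
;; Clojure's built-in STM: refs + dosync, an MVCC-style design.
(def warehouse (ref {:parts 100 :products 0}))

(defn assemble!
  "Atomically consume a part and produce a product.
   On conflict the transaction retries; no explicit locks or waits."
  []
  (dosync
    (alter warehouse update :parts dec)
    (alter warehouse update :products inc)))

;; Fire off workers and let the STM coordinate them.
(run! deref (mapv (fn [_] (future (assemble!))) (range 10)))
(println @warehouse)  ; => {:parts 90, :products 10}
```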

The Grand Unified Programming Theory: The Pure Function Pipeline Data Flow with Warehouse/Workshop Model


The underlying pattern and concept is called a continuation. If you can wrap your head around how call/cc works in Scheme, it’ll go a long way toward understanding what’s going on with the async stuff. As far as books for that, SICP?
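Clojure has no first-class call/cc, but writing in continuation-passing style shows the same idea; a small illustrative sketch:

```clojure
;; Continuation-passing style: instead of returning a value, each
;; function takes an extra argument k, "the rest of the program",
;; and calls it with the result. Async callbacks have exactly this shape.
(defn add-cps [a b k] (k (+ a b)))
(defn mul-cps [a b k] (k (* a b)))

;; (2 + 3) * 4, with the continuation made explicit at each step:
(add-cps 2 3
  (fn [sum]
    (mul-cps sum 4
      (fn [product]
        (println "result:" product)))))  ; => result: 20
```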

This suggests you don’t fully grasp what async/await is doing. There is no blocking, there is no waiting. NodeJS, for example, can handle thousands of simultaneous connections with a single thread. Assuming, of course, that everything’s I/O bound because there’s only one thread.

Going from fire and forget to async/await is like going from goto statements to if/else, for, while, etc. The behavior is the same, but the code is much easier to deal with and far less prone to devolving into spaghetti code.
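In Clojure terms, core.async’s go blocks make the same point: code that reads as sequential “waiting” is compiled into callbacks (a state machine), so no thread sits idle. A minimal sketch:

```clojure
(require '[clojure.core.async :as a :refer [go chan <! >!]])

(def c (chan))

;; <! inside a go block *parks* the logical process; the underlying
;; pool thread is released to run other go blocks in the meantime.
(go (println "got" (<! c)))
(go (>! c 42))  ; delivers 42 -> the first block resumes and prints
```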


It’s only “parallel” if every workshop is producing the same product at the same time; otherwise it is “concurrent”.

In addition, inside the workshop, there are going to be multiple processes operating concurrently and, yes, those are absolutely asynchronous and they may well have to wait at times for either the inputs of the process to become available or for enough of the output of the process to be consumed in order to continue producing (backpressure).

The real world is inherently asynchronous and concurrent. We aim to minimize the waiting by streamlining and coordinating the processes where we can, and where we can arrange for workers (threads) to work on something else while their current process is “blocked” (parked), we can minimize the number of workers as well.
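For concreteness, here is what that waiting-on-output (backpressure) looks like in core.async (a sketch; the buffer size and delays are arbitrary):

```clojure
(require '[clojure.core.async :as a :refer [go chan <! >! timeout]])

(let [c (chan 2)]   ; fixed buffer of 2
  ;; Producer: parks on >! once the buffer is full.
  (go (doseq [i (range 10)]
        (>! c i)
        (println "produced" i))
      (a/close! c))
  ;; Slow consumer: takes one item every 100 ms, releasing the producer.
  (go (loop []
        (when-some [v (<! c)]
          (<! (timeout 100))
          (println "consumed" v)
          (recur)))))
```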

Fire-and-forget is a dangerous path to tread because as throughput increases you need more and more threads – so you will either run out of resources (threads) or you will end up blocking, waiting for threads to become available, which is definitely inefficient and why languages and systems have moved away from fire-and-forget and have adopted more sophisticated approaches to handling concurrency and coordination between asynchronous processes.

It’s somewhat of an irony that all the pain and patterns and workarounds we’ve adopted on the JVM for the limited capacity of threads will be mostly unnecessary once we have Project Loom available and we can pretty much treat threads (fibers) as an unlimited resource!
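For example, with a Loom early-access build, Clojure interop lets you do things like this (a sketch; `Thread/startVirtualThread` is the API in recent Loom previews and may still change):

```clojure
;; Start 10,000 concurrent tasks, one virtual thread each. Blocking
;; calls (sleep, I/O) park the virtual thread instead of pinning an
;; OS thread, so this is cheap.
(def vthreads
  (mapv (fn [i]
          (Thread/startVirtualThread
            (fn []
              (Thread/sleep 1000)
              (println "task" i "done"))))
        (range 10000)))

(run! #(.join ^Thread %) vthreads)  ; wait for them all
```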

There is no blocking in asynchronous code, but there is waiting.
JS does not support multi-threading, only asynchrony.

At the same time, in the same factory, some workshops produce the same product and some produce different products, so parallelism versus concurrency is not the main point of the discussion here.

They should definitely not be asynchronous, nor should there be waiting. If there is waiting inside an operation (thread, asynchronous thread, fiber), it must be because the operation’s design or the system’s resource-allocation ratios are unscientific. As long as there are unfinished tasks (data), no worker (thread, asynchronous thread, fiber) is allowed to wait. This is the most basic requirement of scientific management.

Scientific management is committed to improving efficiency through operations research. It divides work into indivisible elementary operations (threads, asynchronous threads, fibers), and then designs the optimal combination of operations for the available resources to achieve maximum efficiency. Waiting is not allowed within an operation (thread, asynchronous thread, fiber); this is the most basic requirement, and it is also what makes overall coordination and optimization most convenient. The most important design tool is the Gantt chart. The best implementation is the warehouse/workshop model used by factories, which is also the principle behind ForkJoinPool (the foundational technology of fibers), although ForkJoinPool’s designer did not realize this and provided no such guidance in the ForkJoinPool user guide.

The most typical case: Amazon actually used AI to monitor and dispatch employees, firing inefficient ones on the spot.

They will not run out of resources (threads) or eventually block, because the warehouse (plus dispatch center, i.e. the DBMS) arranges the maximum number of workers (threads, asynchronous threads, fibers) the system can sustain, scheduling production according to the optimal ratio of components (data).

They are independent and do not interfere with each other; each is responsible only for its own work. There is no need to watch for or wait on resources during production; the warehouse (plus dispatch center, i.e. the DBMS) is responsible for that.

This is consistent with the basic principles of ForkJoinPool (see the sketch after this list):

  • It uses an unbounded queue to hold tasks awaiting execution.

  • It uses internal queues to manage tasks and their subtasks, ensuring their order of execution.

  • ForkJoinPool can use a limited number of threads to complete many tasks that have parent-child relationships.

  • A “big task” is split into multiple “small tasks”, and the tasks are handed to the ForkJoinPool to execute.
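A minimal fork/join example via Clojure interop, to make that split-and-combine pattern concrete (the 1,000-element threshold is arbitrary):

```clojure
(import '[java.util.concurrent ForkJoinPool RecursiveTask])

(defn sum-task
  "Recursively split summing xs over [lo, hi) into subtasks."
  [^longs xs lo hi]
  (proxy [RecursiveTask] []
    (compute []
      (if (<= (- hi lo) 1000)
        ;; Small enough: just do the work directly.
        (loop [i lo acc 0]
          (if (< i hi) (recur (inc i) (+ acc (aget xs i))) acc))
        ;; Otherwise: fork one half, compute the other, join.
        (let [mid   (quot (+ lo hi) 2)
              left  (doto ^RecursiveTask (sum-task xs lo mid) .fork)
              right (sum-task xs mid hi)]
          (+ (long (.invoke ^RecursiveTask right))
             (long (.join ^RecursiveTask left))))))))

(let [xs (long-array (range 1000000))]
  (.invoke (ForkJoinPool/commonPool)
           ^RecursiveTask (sum-task xs 0 (alength xs))))
;; => 499999500000
```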

There is no direct relationship between fibers and asynchrony.

A fiber is made of two components — a continuation and a scheduler. As Java already has an excellent scheduler in the form of ForkJoinPool, fibers will be implemented by adding continuations to the JVM.

ForkJoinPool uses the warehouse/workshop model and the scientific management of operations research, which has been mentioned above.

A fiber is more like an Uber driver than an Uber employee (a system thread). Uber does not have to bear the minimum wage, paid sick leave, unemployment benefits, and other costs for drivers, so it can treat drivers (fibers) as an almost unlimited resource and dispatch them to complete tasks.

Conclusion

I simply object to waiting inside a thread: I think the waiting point is the natural boundary of the thread, and the thread should terminate when a wait occurs. The processing after the waiting point is handled by the dispatch center issuing new threads as needed.

No, there’s not. Unless your definition of waiting is “any code that isn’t currently running”, which would be a pointless definition. In GUI code, when you register a button-press event handler, is that handler “waiting”? No. There are no idle threads, no idle CPU time. Same thing with async/await. Nothing is idle.

That’s the whole point of async/await, to eliminate waiting in threads. Node has shown that for I/O heavy tasks, only a single thread is necessary.

This approach has been proven to be less efficient and to not scale as well.

My programming ideas come mainly from relational databases (PostgreSQL, FoxPro). I don’t think an RDBMS is any less efficient or scalable than JS.

Excuse me, I have never used JS; I am not familiar with JS and Node, so I can’t analyze it further.

An area that really hasn’t changed in decades – unlike the stuff we’re talking about in this thread which is constantly evolving, with new patterns of efficiency being identified fairly regularly… As you say, you “can’t analyze it further”.

To be sure, JS is by no means optimal. It’s a valid proof of concept, though. What async/await gets you for I/O, and what the Project Loom that Sean mentioned aims to get for general code, is thread behavior for the cost of a function call.

My algorithm is similar to Project Loom; it is in effect a data-driven version of Project Loom. They should be equivalent.

In a Gantt chart, there is no waiting inside a task (a bar in the chart; a thread or fiber); all waiting is global. When a wait occurs, the task (bar) ends; when the resource is obtained and work continues, it is already a new task (a new bar). “async/await” has a wait inside a task, which is completely wrong: it does not conform to the most basic principles of operations research (ref01: wiki, ref02). I don’t think an unscientific model can produce higher efficiency.

Hmm, if I understand what you mean, you’re saying to organize the rate of each “factory worker” so that when one needs the result of another, there is no “wait” time. Is that correct? Thus the system should be most efficient, with everything running at just the right speed?

And if I also understand you, you do not have the different “factory workers” exchange data between themselves or depend on each other at all. Instead you put everything in a “database” and coordinate the “factory workers” using STM (or some form of MVCC). And to parallelize them, you run them over a ForkJoinPool?

Is that correct?

If so, I do think it’s an interesting model. I’m assuming it would be something like: each FW submits itself to the ForkJoinPool, the ForkJoinPool runs them at some level of concurrency, and after they run they either submit themselves back to the ForkJoinPool or terminate? And maybe, if they see the “DB” does not contain anything for them to do, they simply submit themselves back to the ForkJoinPool?

And your DB is thus acting as a kind of execution plan, where what it contains determines what work the FWs do and in what order?

@didibus yes.

In the Gantt chart, the waiting points divide a large task into many smallest independent subtasks. A subtask is a bar, a bar is a workshop, a workshop is a pipeline, and a pipeline is a pure function (or its equivalent).

Apart from obtaining input parameters from the warehouse at the start and submitting output data to the warehouse at the end, the workshops are independent of each other and have nothing to do with the external environment. They do not need to know whether there is waiting, or whether there is a previous or next step.

In this model, the system dispatch center (warehouse) can safely arrange the order of completion of tasks with the optimal algorithm.
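A toy version of this dispatch loop, with entirely hypothetical names (the warehouse as an atom, each workshop a pure function tagged with the data it consumes and produces, and each run a fire-and-forget future):

```clojure
;; A toy warehouse/workshop dispatcher. Each workshop declares which
;; keys it reads (:in) and which it writes (:out); the dispatcher runs
;; every workshop whose inputs exist, as a fire-and-forget future.
(def workshops
  [{:in [:order]          :out :parts   :f (fn [{:keys [order]}] (* 2 order))}
   {:in [:parts]          :out :painted :f (fn [{:keys [parts]}] parts)}
   {:in [:parts :painted] :out :product :f (fn [{:keys [parts painted]}]
                                             (+ parts painted))}])

(defn dispatch!
  "Repeatedly start every workshop whose inputs are in the warehouse
   and whose output is not, until nothing is runnable."
  [warehouse]
  (loop []
    (let [ready (filter (fn [{:keys [in out]}]
                          (and (every? @warehouse in)
                               (not (contains? @warehouse out))))
                        workshops)]
      (when (seq ready)
        (run! deref
              (mapv (fn [{:keys [out f]}]
                      (future (swap! warehouse #(assoc % out (f %)))))
                    ready))
        (recur)))))

(def warehouse (atom {:order 10}))
(dispatch! warehouse)
@warehouse  ; => {:order 10, :parts 20, :painted 20, :product 40}
```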

“async/await” is just an unorganized, undisciplined, imprecise, and unsafe practice.


I see, well, I can’t speak to it, since I’ve never tried to design concurrency like this. That said, I think there’s one slightly orthogonal consideration that this doesn’t handle, which is Java’s blocking IO. That’s where we can all, you included, benefit from having a form of lightweight thread. Because even in your ForkJoinPool, if you have FW that do blocking IO, they’ll block the thread and the thread count will grow and grow, making things slower (due to thread creation being slow), and using up more memory.

I don’t think the situation you mention will arise. “Java’s blocking IO” is the bottleneck of the system; solving it inside the thread or in the system dispatch center is essentially equivalent, but it is easier to perform global optimization in the system dispatch center.

This is a manufacturing execution system (MES).

Data flow naturally supports parallelism and concurrency, which is different from traditional programming methods.

My algorithm does not necessarily use system threads; fibers can also be used.

Update 2020.12.01: add Warehouse/Workshop Model diagram


What you’re proposing is very similar to a modern out-of-order CPU architecture.
Instructions are fed into a dispatch area, which allocates them to different internal paths (allowing parallelism). When the results return, they enter a reorder buffer and then exit the entire system in order.

My architecture is simpler, more unified, and clearer.

This is the magic of the two guiding principles of scientific research (simplicity and unity). If you observe carefully in life and work, you will find more and more similar architectures. As I pointed out in the article:

The idea of simplicity and unity is an important guiding ideology of scientific research. Unification of theories is the long-standing goal of the natural sciences, and modern physics offers a spectacular paradigm of its achievement. Across the various disciplines one finds that the more universally applicable a unified theory is, the simpler it is; and the more basic it is, the greater it is.

The overall architecture of the Apple M1 chip is even more similar.

Apple M1 chip adopts Warehouse/Workshop Model

  • Warehouse: unified memory
  • Workshop: CPU, GPU and other cores
  • Product (raw material): information, data

There’s also a new unified memory architecture that lets the CPU, GPU, and other cores exchange information with one another; with unified memory, the CPU and GPU can access memory simultaneously rather than copying data between one area and another. Accessing the same pool of memory without the need for copying speeds up information exchange for faster overall performance.

reference: Developer Delves Into Reasons Why Apple’s M1 Chip is So Fast

And this will be major, because async/await, as done today, is a useless burden on the programmer (and yes, I’m looking at you too, core.async). Nearly always, I don’t care if it’s a thread or not or whether the underlying operation is synchronous or not.

But imagine real goroutines, with immutable data structures and all the goodies that come in the Clojure box…
