To block or not to block in go-blocks

In the light of this comment by @Timothy_Baldridge we’ve been reviewing our usage of core.async.

This makes a lot of sense: go blocks can get away with sharing a single thread pool because the assumption is that most of their time is spent “parked”. But it also means you have to be pretty careful about what you do inside a go block, something people might only realize after a lot of their async code has already been written.

This blog post from 2013 by Martin Trojer explains in more depth what exactly the issue is.

Not using >!! / <!! in the calling context of a go block is straightforward advice, but what about other kinds of blocking calls? In principle any kind of IO should be avoided, including HTTP requests (unless they’re async), dereferencing a future, or even a simple println (if the process attached to standard out can’t keep up, then all your go blocks will grind to a halt). You could also argue that computationally intensive work inside go blocks should be avoided.
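To make the distinction concrete, here’s a minimal sketch (assuming a plain core.async setup) of the difference between a blocking and a parking take inside a go block:

```clojure
(require '[clojure.core.async :as a])

;; BAD: a blocking take inside a go block pins one of the few
;; dispatch-pool threads until a value arrives.
(defn bad-consumer [in]
  (a/go (a/<!! in)))   ; <!! blocks the dispatch thread

;; GOOD: a parking take releases the dispatch thread while waiting;
;; the generated state machine resumes the body when a value is ready.
(defn good-consumer [in]
  (a/go (a/<! in)))    ; <! parks instead of blocking
```

The same goes for >!! versus >!, and the same logic applies to any blocking IO: lift it out onto a dedicated thread and park on the resulting channel.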

In general it seems there’s not much awareness about this, and I’m wondering how people deal with this in practice. Where do you draw the line? Do you look out for this in code reviews? Any particular patterns that you’ve found useful?

At Nextjournal we created a wrapper namespace that performs some extra checks, and this certainly caught some cases that we otherwise wouldn’t have spotted. We’re looking for go-blocks that don’t contain >! / <! / alt! / alts!, and for calls to the blocking equivalents inside a go-block’s dynamic scope.
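A hypothetical sketch of what such a wrapper could look like (the names `safe-go` and `*in-go?*` are made up for illustration; this is not Nextjournal’s actual code):

```clojure
(require '[clojure.core.async :as a])

;; Dynamic marker for "we are inside a go block's dynamic scope".
(def ^:dynamic *in-go?* false)

(defmacro safe-go
  "Like a/go, but marks its dynamic scope so the blocking
  wrappers below can detect misuse."
  [& body]
  `(a/go (binding [*in-go?* true] ~@body)))

(defn safe-<!!
  "Like a/<!!, but throws when called inside a safe-go block."
  [ch]
  (when *in-go?*
    (throw (ex-info "Blocking take inside a go block" {:op '<!!})))
  (a/<!! ch))
```

One nice property: go blocks convey dynamic bindings across park points, so the marker survives parking. It won’t, however, follow work handed off to other threads, so it catches direct dynamic-scope violations rather than every possible one.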

Does this hold water, or is it a bad idea? What downsides can you see in this approach?


Personally I don’t think that having a layer of code to keep an eye on go blocks is a good idea. Understanding how core.async works would be much better. Or, at best, having a linter tool for this purpose makes more sense to me.

For example, one of the things to keep in mind while working with go blocks is that, since the generated code for the block is slower than the original code, it’s better to keep it as short as possible (in order to have a smaller number of SSA blocks). But that’s not something you can validate programmatically, because it heavily depends on the domain.

I agree,

Another thing to keep in mind is that maybe you don’t need a go block there.
I’ve found quite a few places in our code base where go blocks could be replaced with one of the pipeline functions. This kind of pushes the need to deal with go blocks to the edges, dodging the issue altogether :stuck_out_tongue:
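For instance, a hand-rolled go-loop that transforms values can often be replaced by `pipeline`, which manages parallelism and backpressure for you (a small sketch):

```clojure
(require '[clojure.core.async :as a])

;; pipeline runs the transducer over n concurrent workers while
;; preserving the input order of values.
(defn inc-all [coll]
  (let [in  (a/chan)
        out (a/chan)]
    (a/pipeline 4 out (map inc) in)   ; 4 concurrent workers
    (a/onto-chan! in coll)            ; feed the input and close it
    (a/<!! (a/into [] out))))

;; For transforms that do blocking IO, pipeline-blocking runs the
;; work on dedicated threads instead of the go dispatch pool.
```

Usage: `(inc-all [1 2 3])` returns `[2 3 4]`, with order preserved despite the parallelism.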


I kind of like this approach. Not to be a party spoiler, but I think that go blocks are, in a sense, like mutable objects shared between threads: very nice when they work, but ready to blow up in your face in an undebuggable way if you make a mistake. How do you know if anything is blocking when you use a third-party library? And what is “non-blocking enough”? We came all the way to Clojure, with its immutable collections and sane multithreading, just to be bitten by a missing “!”?

This is very different from Go, where the whole environment is built with goroutines in mind (and they have problems of their own there).

So my feeling is that if you can avoid/hide go blocks (like you do with “raw” threads in most programming languages), it would be better.

There’s some overhead to this approach, but it’s not too high. Linters won’t help here, since Clojure supports higher-order functions, and tracking those goes way beyond what a source-code linter can do.

I thought there was a ticket for this in the core.async bug repo, but I can’t find it now. I know this approach has been discussed, and aside from the (minimal) performance problems I’m not aware of any good arguments against it.


Wouldn’t it only deadlock if they’re waiting on each other? Otherwise, wouldn’t it just result in lower performance?

Also, I always got confused about async IO. Doesn’t it just handle the blocking thread for you? So you might not have an application thread waiting for the IO result, but there is still a thread waiting for it somewhere. Or can the OS somehow wait for the IO in a more efficient way than a thread?

Finally, why even allow blocking takes and puts in go blocks? Are there times when you’d want to do that? Otherwise, couldn’t the macro just fail at expansion?

In core.async you can run your blocking code in a thread body, which gets executed on a different thread pool dedicated to blocking operations.
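A small sketch of that pattern:

```clojure
(require '[clojure.core.async :as a])

;; a/thread runs its body on a cached pool meant for blocking work
;; and returns a channel that will receive the body's result.
(defn slurp-async [path]
  (a/thread (slurp path)))   ; blocking IO is fine here

;; A go block can then park on the result without tying up a
;; dispatch thread:
(defn file-length-ch [path]
  (a/go (count (a/<! (slurp-async path)))))
```

The key point is that the blocking `slurp` happens on the dedicated pool, while the go block only ever parks.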

Right, I think my question was more related to the article the OP linked. It mentions how parking with async IO results in more throughput than using blocking IO with threads, saying that’s because you’ll hit a thread-count limit in Java, and threads consume a lot of memory, so you can’t have that many.

But there has to be a mechanism in place somewhere to handle the IO. So if it’s not an application thread, it must be something in either the OS or the hardware.

I used to think it was just an OS thread that blocked on the IO, periodically polling to see if it’s done.

So my question is: why is OS-managed async IO cheaper than application-based async IO using blocking threads? Does the OS create a more lightweight multitasking construct than a thread to handle the IO, or is it handled in hardware?

That’s reassuring. If someone comes across any older discussion around this that’d be super interesting.

My question exactly. Would you do a Datomic query in a go block? Would you slurp a file? Would you output logging messages?

Either that, or they’re all waiting on the same resource. The point is, if you have a lot of go blocks, and if they can block, then at some point you’ll exhaust the “processor count + 2” thread pool, and all go blocks in your system stop working until one unblocks and parks.
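The failure mode can be sketched like this (the dispatch pool defaults to 8 threads, tunable via the `clojure.core.async.pool-size` system property):

```clojure
(require '[clojure.core.async :as a])

;; Each of these go blocks BLOCKS a dispatch thread instead of
;; parking. Launch as many as the pool has threads, and every other
;; go block in the process stalls until a sleeper finishes.
(defn saturate! [n ms]
  (dotimes [_ n]
    (a/go (Thread/sleep ms))))   ; blocking sleep: hogs a thread

(comment
  (saturate! 8 10000)
  ;; this innocent go block now can't run for ~10 seconds:
  (a/<!! (a/go :hello)))
```

Replacing `Thread/sleep` with a parking `(a/<! (a/timeout ms))` makes the problem disappear, because parked go blocks don’t occupy a thread at all.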

Asynchronous IO can be implemented with polling, but more commonly it’s done with interrupts. In other words, the OS will receive a signal when the operation is ready, either from a hardware controller or a software subsystem.

Hi @plexus,

I think that’s an interesting approach, especially since you can extend it as you find new things that shouldn’t be done inside a go block. There’s no general solution for Clojure, but you can have a solution that asymptotically approaches your particular needs.

What should go inside a go block? I myself had to make a big mental shift when I started using core.async in anger: applying the same purity analysis to the code inside go blocks. Eventually, it turned into the more general care that you need for parallel programming: treating computation as a resource to be shared and managed. Luckily, core.async provides exactly what you need for that: queues with backpressure.

I would add a checkbox to my code review checklist for go blocks. I’d ask the question: “I’ve only got 8 threads that all of these go blocks run on. Does this particular go block hog those resources?”

Similarly, you should be very careful with anything that creates a new thread per channel value. I’ve seen that in production before. The reason it is ill-advised is because it throws away backpressure. You cannot deal with infinite threads. You should instead spin up a fixed number of threads and let them handle each value as they can. In small tests, a thread per value might work fine. Then in production, things spin out of control until the process locks up.

So that would be checkbox #2 in my code review checklist. I would ask: “Does this go block read a value and create a new thread to handle it?” Even if it only happens occasionally (like in one branch of many), it’s bad. Threads are precious, and you should think in terms of worker pools with queues to feed them. I used to treat the core.async/thread macro like a future, but even futures use a thread pool.
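One way to keep a fixed worker count (a sketch; `pipeline-blocking` covers many of the same cases more declaratively):

```clojure
(require '[clojure.core.async :as a])

;; A fixed pool of workers pulling from a channel: the channel's
;; buffer provides backpressure, and the thread count never grows.
(defn start-workers
  "Starts n worker threads draining `in` with `handle`.
  Returns their result channels; closing `in` stops them."
  [n in handle]
  (doall
   (for [_ (range n)]
     (a/thread
       (loop []
         (when-some [v (a/<!! in)]
           (handle v)
           (recur)))))))
```

Producers putting onto `in` will park (or block) once the buffer is full, which is exactly the backpressure that thread-per-value throws away.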

Rock on!


For that matter, I might amend your dynamic call checker to exempt calls made inside a thread body.

I wasn’t satisfied with my knowledge about this, so I researched it and wrote a blog post about it here.

Bottom line: it really depends on which operating system and kernel version your app is running on. Java NIO 2 will use whatever the best available IO mechanism is. Sometimes that means the threads are just moved from your application into the OS, or it means Java will wrap blocking IO in threads for you. Other times it will in fact use the OS’s non-blocking or async IO mechanisms.
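For the genuinely asynchronous case, NIO.2’s completion-handler API can be bridged onto a core.async channel. A sketch (it assumes the whole file fits in a single read, which is fine for illustration):

```clojure
(require '[clojure.core.async :as a])
(import '(java.nio ByteBuffer)
        '(java.nio.file Paths OpenOption StandardOpenOption)
        '(java.nio.channels AsynchronousFileChannel CompletionHandler))

;; The OS (or a JVM-internal pool, depending on platform) completes
;; the read; the callback delivers the result onto the channel, so
;; no application thread sits waiting on the IO.
(defn read-file-async [path]
  (let [out (a/chan 1)
        ch  (AsynchronousFileChannel/open
             (Paths/get path (make-array String 0))
             (into-array OpenOption [StandardOpenOption/READ]))
        buf (ByteBuffer/allocate (int (.size ch)))]
    (.read ch buf 0 nil
           (reify CompletionHandler
             (completed [_ n _]
               (a/put! out (String. (.array buf) 0 (int n)))
               (.close ch))
             (failed [_ e _]
               (a/put! out e)
               (.close ch))))
    out))
```

A go block can park on the returned channel with `<!`, making the read non-blocking end to end.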

Can you? I guess in a situation where you value responsiveness or throughput over latency.


This is the classical gotcha of core.async. I’ve been “bitten” by it when writing a system that used core.async at its core. Everything was working fine in development, but in production I got deadlocks. With real data, real users, real work. The culprit was a library call three levels down in the go block, which was doing blocking IO. It was one of my libraries, so I could change it to non-blocking IO, and that fixed it. But it took time before I wrapped my head around the “spirit” of core.async. Call it an awareness problem. It comes up time and again. Timothy knows something about it (with countless warnings on Slack). I don’t think there’s a real solution to that, because asynchronous systems are not intuitive. core.async is won over by sweat and tears. It takes practice.
I’m not a fan of solutions like the one suggested, because they don’t capture most cases. You’ve explained it yourself. Any kind of blocking operation can result in deadlock. Most of the time the unsuspecting user will do some kind of work via a library call, not realizing that it blocks, and that’s a no-no.

Is there any way to detect that a thread is blocking?

A quick Google search suggested that if you inspect the stack trace, you could see whether any of the methods come from a blocking IO namespace.
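A rough development-time probe along those lines (the regexes are heuristics, and `"async-dispatch"` matches the names core.async gives its dispatch-pool threads):

```clojure
;; Sample all live threads and flag core.async dispatch threads whose
;; current stack includes java.io / java.net frames, i.e. threads
;; that look like they're doing blocking IO right now.
(defn blocked-dispatch-threads []
  (doall
   (for [entry (Thread/getAllStackTraces)
         :let [^Thread t (.getKey entry)
               frames    (.getValue entry)]
         :when (re-find #"async-dispatch" (.getName t))
         :when (some (fn [^StackTraceElement el]
                       (re-find #"^java\.(io|net)\." (.getClassName el)))
                     frames)]
     (.getName t))))
```

Polling this from a watchdog in development could surface offenders, though it only catches blocking that happens to be in progress at sampling time.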

Maybe there’s a way to instrument this in development?

The problem is that most likely you will not see any blocking, because in general, e.g., reading a small cached file takes zero gazillioseconds; until you deploy to production, said file is on a remote disk, and remote connectivity starts to suck…
The same thing can happen if you have a very long computation: if you keep forking, all computations will start more or less in parallel and finish very late; on a fixed thread pool, they will just block when the pool is full (though they will likely be processed more quickly).

I don’t think there’s a real solution to that, because asynchronous systems are not intuitive. core.async is won over by sweat and tears. It takes practice.

I don’t believe that programmers have to improve by getting used to crying and pain. In a world where code is data, there must be another way. Maybe if we could represent the code as a graph, with some special visual emphasis on async and blocking operations via analytical tools, that unintuitiveness could become very obvious.

I am always in favor of handing my sweat and tears to the language designers. Consequently, I would cheer if I were presented with a system such as you propose. However, please remember that core.async is already a code/data transformation system: a deep-walking macro that goes far and wide, and whose complexity made people say they were grateful they didn’t have to write it.
My view on knowledge acquisition is that it involves a good deal of effort. This is not a sadomasochistic position, but rather a stoic one. In other words, I am aware that I might not fully apprehend an abstraction the first time it is presented to me, and that to fully appreciate its ramifications I will need to put some work into it.
