Structured concurrency

Roman Elizarov
5 min read · Sep 12, 2018

Today marks the release of version 0.26.0 of the kotlinx.coroutines library and the introduction of structured concurrency to Kotlin coroutines. It is more than just a feature: it marks an ideology shift so big that I’m writing this post to explain it.

Since the initial rollout of Kotlin coroutines as an experimental feature in Kotlin 1.1 at the beginning of 2017, we’ve been working hard to explain the concept of coroutines to programmers who are used to thinking of concurrency in terms of threads, so our key analogy and motto was “coroutines are lightweight threads”. Moreover, our key APIs were designed to be similar to thread APIs to ease the learning curve. This analogy works well in small-scale examples, but it does not explain the shift in programming style that coroutines bring.

When we are taught to program with threads, we are told that threads are expensive resources and that one should not go around creating them left and right. A well-written program usually creates a thread pool at startup and then uses it to offload various computations. Some environments (notably iOS) go as far as to say they “deprecate threads” (even though everything still runs on threads). They provide a system-wide, ready-to-use thread pool with corresponding queues that you can submit your code to.

But the story is different with coroutines. It is not just OK, but also very convenient, to create coroutines as you need them, since they are so cheap. Let us take a look at a couple of use-cases for coroutines.

Asynchronous operations

Suppose that you are writing a front-end UI application (mobile, web, or desktop — it does not matter for this example) and you need to perform a request to your backend to fetch some data and update your UI model with the result of it. Our original recommendation was to write it like this:

fun requestSomeData() {
    launch(UI) {
        updateUI(performRequest())
    }
}

Here we launch a new coroutine in the UI context with launch(UI), invoke the suspending function performRequest to make an asynchronous call to the backend without blocking the main UI thread, and then update the UI with the result. Each requestSomeData call creates its own coroutine, and that is fine, isn’t it? It is not much different from the asynchronous programming practices you find in other languages, ranging from C# and JS to Go.

But here is a problem. These asynchronous operations can take a long, long time to complete if there is a problem with the network or the backend. Moreover, these operations are usually executed in the scope of some UI element, like a window or a page. If an operation takes too long to complete, a typical user closes the corresponding UI element and goes off to do something else or, worse, reopens the UI and tries the operation again and again. But the previous operation is still running in the background, and we need some mechanism to cancel it when the corresponding UI element is closed by the user. In Kotlin coroutines this had led us to recommend quite tricky design patterns that one had to follow to ensure this cancellation is handled properly (as sketched just below). Moreover, you always have to remember to specify a proper context, or your updateUI might end up being invoked from the wrong thread and subtly ruin your UI. This is error-prone. A simple launch { … } is easy to write, but it is not what you should be writing.
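To make that concrete, here is a rough sketch of the kind of manual bookkeeping this style forced on you, written against the pre-0.26.0 API used in the snippet above. It is an illustration only: the class, its onClosed callback, and the list of jobs are hypothetical, and performRequest and updateUI stand in for real application code.

class SomeUiElement {
    // every launched coroutine has to be tracked by hand
    private val jobs = mutableListOf<Job>()

    fun requestSomeData() {
        jobs += launch(UI) {  // remember the UI context every single time
            updateUI(performRequest())
        }
    }

    fun onClosed() {
        // forgetting this loop leaks the background work
        jobs.forEach { it.cancel() }
    }
}

Structured concurrency is about making this kind of bookkeeping the library’s job rather than yours.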

On a more philosophical level, you rarely launch coroutines “globally”, like you do with threads. Coroutines are always related to some local scope in your application, an entity with a limited lifetime, like a UI element. So, with structured concurrency we now require that launch is invoked in a CoroutineScope, an interface implemented by your lifetime-limited objects (like UI elements or their corresponding view models). You implement CoroutineScope once and, lo and behold, the simple launch { … } inside your UI class, which you write many times, becomes both easy to write and correct:

fun requestSomeData() {
    launch {
        updateUI(performRequest())
    }
}

Note that an implementation of CoroutineScope also defines an appropriate coroutine context for your UI updates. You can find a full-blown example of a CoroutineScope implementation on its documentation page.
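For reference, a minimal implementation along those lines might look like the sketch below. It uses the current package and dispatcher names (kotlinx.coroutines, Dispatchers.Main); the destroy, performRequest, and updateUI members are hypothetical placeholders for real application code.

import kotlinx.coroutines.*
import kotlin.coroutines.CoroutineContext

class SomeUiClass : CoroutineScope {
    private val job = Job()

    // Every coroutine launched from this class is a child of `job`
    // and runs on the main thread by default.
    override val coroutineContext: CoroutineContext
        get() = job + Dispatchers.Main

    // Called when the UI element goes away; cancels all of its coroutines.
    fun destroy() {
        job.cancel()
    }

    fun requestSomeData() {
        launch { // resolves to this.launch: scoped and on the right thread
            updateUI(performRequest())
        }
    }

    // hypothetical helpers standing in for real application code
    private suspend fun performRequest(): String = TODO("call the backend")
    private fun updateUI(result: String) { /* render the result */ }
}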

In more complex cases, your application may have a number of different scopes, with lifetimes tied to different entities, so you’d use named scopes explicitly, like viewModelScope.launch, which has since become the default recommended practice in the most recent updates to the documentation.
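As a sketch of that practice, assuming an Android ViewModel with the androidx.lifecycle viewModelScope extension; performRequest and showResult are hypothetical stand-ins for real application code:

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.launch

class SomeViewModel : ViewModel() {
    fun requestSomeData() {
        // viewModelScope is cancelled automatically when this ViewModel is
        // cleared, so the request cannot outlive the screen it belongs to
        viewModelScope.launch {
            showResult(performRequest())
        }
    }

    private suspend fun performRequest(): String = TODO("call the backend")
    private fun showResult(result: String) { /* update observable UI state */ }
}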

For those rare cases where you need a global coroutine whose lifetime is limited only by the lifetime of your application, we now provide the GlobalScope object, so what was launch { … } for a global coroutine before now becomes GlobalScope.launch { … }, and the global nature of this coroutine becomes explicit in the code.

Parallel decomposition

I’ve given numerous talks on Kotlin coroutines, presenting the following sample code that shows how to load two images in parallel and combine them later — an idiomatic example of parallel decomposition of work with Kotlin coroutines:

suspend fun loadAndCombine(name1: String, name2: String): Image {
    val deferred1 = async { loadImage(name1) }
    val deferred2 = async { loadImage(name2) }
    return combineImages(deferred1.await(), deferred2.await())
}

Unfortunately, this example is subtly wrong on many levels. The suspending function loadAndCombine itself is going to be called from inside a coroutine that was launched to perform some larger operation. What if that operation gets cancelled? Then the loading of both images still proceeds, unfazed. That is not what we want from reliable code, especially if that code is part of a backend service used by many clients.

The solution we used to recommend was to write async(coroutineContext) { … }, so that loading of both images is performed in child coroutines that get cancelled when their parent coroutine is cancelled.
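For the record, that old recommendation looked roughly like this. Note that this is a sketch of the pre-0.26.0 style, where async was a top-level function and coroutineContext referred to the caller’s context; it will not compile against current versions of the library.

suspend fun loadAndCombine(name1: String, name2: String): Image {
    // Passing the caller's context makes each async a child of the calling
    // coroutine, so cancelling the caller cancels both loads.
    val deferred1 = async(coroutineContext) { loadImage(name1) }
    val deferred2 = async(coroutineContext) { loadImage(name2) }
    return combineImages(deferred1.await(), deferred2.await())
}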

It is still not perfect. If loading of the first image fails, then deferred1.await() throws the corresponding exception, but the second async coroutine, which is loading the second image, still continues to work in the background. Solving that is even more complicated.

We see the same problem in our second use-case. A simple async { … } is easy to write, but it is not what you should be writing.

With structured concurrency, the async coroutine builder has become an extension on CoroutineScope, just like launch. You cannot simply write async { … } anymore; you have to provide a scope. A proper example of parallel decomposition becomes:

suspend fun loadAndCombine(name1: String, name2: String): Image =
    coroutineScope {
        val deferred1 = async { loadImage(name1) }
        val deferred2 = async { loadImage(name2) }
        combineImages(deferred1.await(), deferred2.await())
    }

You have to wrap your code into a coroutineScope { … } block that establishes the boundary of your operation, its scope. All the async coroutines become children of this scope and, if the scope fails with an exception or is cancelled, all of its children are cancelled, too (see the self-contained demo below).
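Here is a small self-contained demonstration of that behaviour (not from the original post), runnable against current kotlinx.coroutines; the delays and string results are arbitrary:

import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch {
        coroutineScope {
            // two children of this scope, each pretending to load an image
            val first = async { delay(1_000); "A" }
            val second = async { delay(1_000); "B" }
            println("combined: ${first.await()}${second.await()}") // never reached
        }
    }
    delay(100)   // let the children start
    job.cancel() // cancelling the parent cancels the scope and both asyncs
    job.join()
    println("done: the combined result was never printed")
}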

Further reading

There is more philosophy behind the concept of structured concurrency. I highly recommend reading “Notes on structured concurrency, or: Go statement considered harmful”, which draws a good analogy between the classic goto-statement versus structured-programming debate and the modern world, where languages give us ways to launch concurrent asynchronous tasks in a completely unstructured way. It has to end.


Roman Elizarov

Project Lead for the Kotlin Programming Language @JetBrains