Next article in the series: now that you know about the dangerous/complicated primitives, don't ever touch them again. Instead, use the high-level safe concurrency/parallelism mechanisms in your programming language: futures/promises, nurseries, channels, observers, actors, monitors. Ideally, these should be built-in, but a library whose API composes well into most programs will also do.

Data races can be statically removed by carefully restricting certain parts of the language design; see Pony: https://tutorial.ponylang.io/#what-s-pony-anyway

Bonus: learn aspects of deadlocking by playing a game: https://deadlockempire.github.io/
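To make the channel style concrete, here is a minimal sketch (Go, purely as an illustration; any language with channels looks much the same). The channels are the only shared state, so there is nothing to lock:

    package main

    import "fmt"

    // square is a worker: it reads jobs from one channel and writes results
    // to another, so the workers never share mutable state directly.
    func square(jobs <-chan int, results chan<- int) {
        for n := range jobs {
            results <- n * n
        }
    }

    func main() {
        jobs := make(chan int)
        results := make(chan int)

        // Start a few workers.
        for i := 0; i < 3; i++ {
            go square(jobs, results)
        }

        // Feed the jobs and close the channel so the workers know to stop.
        go func() {
            for n := 1; n <= 5; n++ {
                jobs <- n
            }
            close(jobs)
        }()

        // Collect exactly as many results as jobs were sent.
        for i := 0; i < 5; i++ {
            fmt.Println(<-results)
        }
    }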
The article is very nice, thanks a lot for it. Especially since I hear the words concurrency and parallelism often thrown around without any distinction.

Very off topic, but I have read several times the argument that the rise of functional programming is due to its easy concurrency (since functions don't have side effects), and that concurrency becomes more and more important because Moore's law is dead (i.e. we can't scale the hardware up, we have to add cores to our processors).

Could someone with more experience comment on that? Is concurrency really easier in functional languages, and is the rising importance of concurrency a valid reason to look into functional programming?
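To make the claim concrete, this is roughly how I understand it (a sketch in Go standing in for the functional style; the example is mine, not the article's): because the function has no side effects, every call can run on its own core, and the only coordination needed is waiting for all of them to finish.

    package main

    import (
        "fmt"
        "sync"
    )

    // double has no side effects, so calls can run in any order, on any core.
    func double(x int) int { return 2 * x }

    func main() {
        in := []int{1, 2, 3, 4}
        out := make([]int, len(in))

        var wg sync.WaitGroup
        for i, v := range in {
            wg.Add(1)
            go func(i, v int) {
                defer wg.Done()
                out[i] = double(v) // each goroutine writes its own index, no race
            }(i, v)
        }
        wg.Wait()
        fmt.Println(out) // [2 4 6 8]
    }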
Does anyone know the history behind the distinction between concurrency and parallelism presented here? The most frequent reference I see is Pike's "Concurrency is not parallelism" talk, but I'm curious who first came up with this distinction.
> The sched_yield() puts the calling thread to sleep and at the back of the scheduler’s run queue.

Not necessarily, but it is fine for this purpose I suppose. See https://news.ycombinator.com/item?id=21959692

Glad to see lock hierarchies mentioned. Barriers are new to me so that was nice.

IMO, it would be nice to at least have a mention of lock-free techniques and their advantages and disadvantages.
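For the curious, a minimal lock-free sketch (Go with sync/atomic, just for illustration; the article's C would reach for stdatomic or compiler builtins instead):

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        var counter int64
        var wg sync.WaitGroup

        for i := 0; i < 8; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    // Lock-free increment: a hardware atomic replaces the mutex.
                    atomic.AddInt64(&counter, 1)
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // always 8000, and no thread ever blocked on a lock
    }

The upside is that no thread blocks while holding a lock (no deadlock, no priority inversion); the downside is that anything richer than a counter gets subtle fast: CAS retry loops, the ABA problem, memory reclamation.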
There is a good body of knowledge around dealing with concurrency issues within a single process. We have tools (locks, semaphores, ...) to deal with the complexity, as well as programming paradigms that help us write code which minimizes data races. It's interesting to realize that in a world with an increasing number of micro-services manipulating shared resources (a shared database, shared cloud resources), or even multiple nodes backing a single micro-service all reading and writing to shared resources, similar concurrency bugs arise all the time. Unlike in a single process, where you can use locks and other primitives to write correct code, there is no locking mechanism we can use to protect access to these global shared resources. We have to be more thoughtful to write correct code in the presence of pervasive concurrency, which is easier said than done.
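One partial substitute is optimistic concurrency control: read a version along with the data, do the work, and make the write conditional on the version being unchanged. A rough sketch, assuming Go and a hypothetical accounts table with a version column (the schema and placeholders are made up for illustration):

    package accounts

    import "database/sql"

    // withdraw retries a conditional update: the UPDATE only succeeds if the
    // row's version is unchanged since we read it; otherwise we re-read and retry.
    func withdraw(db *sql.DB, id, amount int64) error {
        for {
            var balance, version int64
            err := db.QueryRow(
                "SELECT balance, version FROM accounts WHERE id = ?", id,
            ).Scan(&balance, &version)
            if err != nil {
                return err
            }

            res, err := db.Exec(
                "UPDATE accounts SET balance = ?, version = version + 1 WHERE id = ? AND version = ?",
                balance-amount, id, version,
            )
            if err != nil {
                return err
            }
            if n, _ := res.RowsAffected(); n == 1 {
                return nil // our write won; nobody interleaved between read and write
            }
            // Another writer got there first: loop and try again.
        }
    }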
Is anyone aware of good examples that can be used to explain and implement parallelism/concurrency that are not the banker's? I have seen it too many times.
No mention of volatile variables or the concept of stale CPU cache reads when a value is written from another core. I think it's a pretty common and fundamental concept that should be in a write-up such as this.
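To make the visibility problem concrete, a sketch (in Go, where sync/atomic plays the role that volatile/atomics play in C and Java; my example, not the article's):

    package main

    import (
        "fmt"
        "sync/atomic"
        "time"
    )

    var ready int32 // written by one goroutine, read by another

    func main() {
        go func() {
            time.Sleep(10 * time.Millisecond)
            atomic.StoreInt32(&ready, 1) // a plain `ready = 1` would be a data race
        }()

        // Without the atomic load, the compiler/CPU may keep re-reading a stale
        // value and the loop could spin forever.
        for atomic.LoadInt32(&ready) == 0 {
            time.Sleep(time.Millisecond)
        }
        fmt.Println("saw the write")
    }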
My experience is that single-threaded execution, replicated with local data for each instance and fine-grained remote lookups when needed, is a much easier way to maintain code. Concurrency and all that synchronisation is damn hard to code and debug.
Thank you for the great read! Wondering how io_uring would fit into this picture... would be very interested in the author's take: https://kernel.dk/io_uring.pdf
I'm a bit disappointed that the article doesn't explain the need for a memory/consistency model and how it interacts with CPU caches. Locks are the easy part, and the article makes you think that with them you can now write at least simple concurrent programs.

Why is that? I'm pretty sure that the author's intention is not to equip the readers with the tools to make buggy programs, yet that is exactly what happens here.
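Agreed. The classic trap is publishing data behind a flag: whether the reader is guaranteed to see the data at all depends on the memory model, not on locks. A small sketch of the pattern (Go's memory model and sync/atomic here, standing in for whatever the article's language provides):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    var (
        msg  string
        done atomic.Bool
    )

    func setup() {
        msg = "hello"
        // The atomic store creates a happens-before edge: a reader that observes
        // done == true is also guaranteed to observe the write to msg.
        done.Store(true)
    }

    func main() {
        go setup()
        for !done.Load() {
            // Busy-wait, kept deliberately simple. With plain (non-atomic) reads
            // and writes of done, nothing would order the write to msg before the
            // write to done, and this could print an empty string or never finish.
        }
        fmt.Println(msg)
    }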