As a proponent of structured concurrency [1], I am happy to see this post.<p>I personally think that Rust would be the best language ever if it did a few things:<p>* Got rid of async in favor of structured concurrency built into the language.<p>* Made compile times competitive with C.<p>I think that much of the complexity of the language would be vastly reduced by adopting structured concurrency and dropping async, because of the very compatibility mentioned in the article.<p>I also think that Rust is unique among languages in that it would benefit the most from structured concurrency, because its borrow checker would interact with structured concurrency in great ways.<p>Compile times are still a problem, though.<p>[1]: <a href="https://gavinhoward.com/2019/12/structured-concurrency-definition/" rel="nofollow noreferrer">https://gavinhoward.com/2019/12/structured-concurrency-defin...</a>
This approach is extremely popular. C++ has it with parallel-for-style libraries.<p>It has some downsides. Joining multiple threads before allowing execution to continue leaves some performance on the table: if the number of threads is limited to the hardware concurrency, then any thread that reaches the join point is typically unavailable to deal with the remaining work or with other algorithms.<p>In gamedev, with concurrency limited by the hardware, it is more beneficial to let every worker thread participate all the time. Instead of joining back to sequential execution, one uses task dependencies. If the continuation is a separate task that is scheduled "when all" of its dependencies are done, then no threads are unavailable due to waiting. This way, if you have multiple parallel algorithms in flight, every thread is available for any task. Instead of me handwaving it here, it is probably better to check a presentation from Sean Parent [1]<p>1. <a href="https://sean-parent.stlab.cc/papers-and-presentations/#better-code-concurrency" rel="nofollow noreferrer">https://sean-parent.stlab.cc/papers-and-presentations/#bette...</a>
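<p>A minimal Rust sketch of the "when all" idea (the counter-based scheduling below is my own illustration, not Sean Parent's code): whichever dependency finishes last fires the continuation itself, so no worker is parked at a join point.<p><pre><code>use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// The continuation that should run "when all" dependencies are done.
fn continuation() {
    println!("all dependencies finished, continuation runs");
}

fn main() {
    let remaining = Arc::new(AtomicUsize::new(3));
    let mut workers = Vec::new();

    for i in 0..3 {
        let remaining = Arc::clone(&remaining);
        workers.push(thread::spawn(move || {
            println!("dependency {i} done");
            // fetch_sub returns the previous value, so 1 means "I was last".
            // The last dependency schedules the continuation itself; nobody
            // sits blocked waiting for the others.
            if remaining.fetch_sub(1, Ordering::AcqRel) == 1 {
                continuation();
            }
        }));
    }

    // These joins only keep the demo process alive; in a real scheduler the
    // workers would go back to the task queue for other work instead.
    for w in workers {
        w.join().unwrap();
    }
}
</code></pre><p>In a real engine the continuation would be pushed onto the same task queue the workers drain, which is what gives you the "every thread is available for any task" property.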
It would be really nice if the Rust standard library were to get structured concurrency similar to what Ada has:<p><a href="https://learn.adacore.com/courses/Ada_For_The_CPP_Java_Developer/chapters/11_Concurrency.html" rel="nofollow noreferrer">https://learn.adacore.com/courses/Ada_For_The_CPP_Java_Devel...</a><p><a href="https://en.wikibooks.org/wiki/Ada_Style_Guide/Concurrency" rel="nofollow noreferrer">https://en.wikibooks.org/wiki/Ada_Style_Guide/Concurrency</a><p>It allows multiple concurrent tasks to run within a parent block of code and terminate at the end of the block.<p>It is extremely useful for embedded and parallel programming, which would help Rust further succeed in those areas.<p>So much concurrent and parallel Rust code relies on third-party libraries because the standard library offers primitives that work but lack the "creature comforts" that developers prefer.<p>It is good that futures-concurrency offers better choices for Rust developers, but ultimately it would be great for Rust to adopt better concurrency APIs.
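<p>For plain threads, the standard library does already have a small piece of this: std::thread::scope joins every thread spawned inside the block before the block exits, so scoped threads can safely borrow local data, much like Ada tasks terminating at the end of their parent block (async Rust is where an equivalent is still missing). A minimal sketch:<p><pre><code>use std::thread;

fn main() {
    let mut data = vec![1, 2, 3, 4, 5, 6, 7, 8];

    // Every thread spawned inside the scope is joined before `scope` returns,
    // so the tasks can borrow `data` directly -- no Arc, no 'static bound.
    thread::scope(|s| {
        for chunk in data.chunks_mut(4) {
            s.spawn(move || {
                for x in chunk {
                    *x *= 2; // each task mutates its own disjoint slice
                }
            });
        }
    }); // all tasks have terminated here, like the end of an Ada block

    assert_eq!(data, [2, 4, 6, 8, 10, 12, 14, 16]);
}
</code></pre>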
Concurrency, parallelism, async, and multithreading are my favourite subjects, though I am not an expert.<p>Thank you for this article.<p>I think the ergonomics of modern programming languages for concurrency and parallelism are not there yet, but I like structured concurrency.<p>In one of my projects I generate a dependency graph of work and then parallelise across each node's egress edges in the graph: each node's thread simply .join()s the threads of all of that node's ancestors. This is how <a href="https://devops-pipeline.com/" rel="nofollow noreferrer">https://devops-pipeline.com/</a> works.<p>One approach to getting performance from parallelism is to split/shard your data between threads and represent your concurrency AND parallelism as a tree of computation in which the leaves never need to communicate until they complete. This leads to high performance because of Amdahl's law and the avoidance of synchronization during processing (there is a sketch of this at the end of this comment). Single-core performance is fast, and if you multithread it right you get a further boost; a single-threaded program can even be faster than naïve multithreading because of synchronization overheads ( <a href="http://www.frankmcsherry.org/assets/COST.pdf" rel="nofollow noreferrer">http://www.frankmcsherry.org/assets/COST.pdf</a> )<p>My other approach is to create a syntax to represent multithreaded, concurrent and parallel state machines.<p>Kafka makes it fairly easy to parallelise pipelines, and Go pipelines can be created, but I just want a notation!<p>Here's my notation, which I run on 2 threads:<p><pre><code> thread(s) = state1(yes) | send(message) | receive(message2);
thread(r) = state1(yes) | receive(message) | send(message2);
</code></pre>
This syntax is inspired by Prolog facts. thread(s) and thread(r) are facts that the program waits for; they need to be fired before the rest of the pipeline after the = runs. When state1(yes) is fired, the state machine moves to the next state after the | pipe symbol. One thread sends a message with send(message) and the other receives it with receive(message). You can also put multiple facts in a state line, which provides asynchrony: multiple events join into one state, kind of like multiple nodes connecting to a single node in a graph:<p><pre><code> order(orderno) checkout-received(orderno) = take-payment | save-order(orderno) | send-email-confirmation
</code></pre>
This waits for the separate order(orderno) and checkout-received(orderno) events and then moves on to the take-payment action.<p>I have a simple Java parser and runtime for this syntax. What I like about it is that it combines state machines, parallelism and asynchrony.
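<p>To make the two-thread example concrete, here is my rough hand-translation of what thread(s) and thread(r) might correspond to in plain Rust with channels (this is not the Java runtime itself, just an illustration of the semantics described above):<p><pre><code>use std::sync::mpsc;
use std::thread;

fn main() {
    // thread(s) = state1(yes) | send(message)    | receive(message2);
    // thread(r) = state1(yes) | receive(message) | send(message2);
    let (tx_msg, rx_msg) = mpsc::channel::<&str>(); // carries "message"
    let (tx_msg2, rx_msg2) = mpsc::channel::<&str>(); // carries "message2"

    let s = thread::spawn(move || {
        // state1(yes) -> send(message) -> receive(message2)
        tx_msg.send("message").unwrap();
        let reply = rx_msg2.recv().unwrap();
        println!("s received {reply}");
    });

    let r = thread::spawn(move || {
        // state1(yes) -> receive(message) -> send(message2)
        let msg = rx_msg.recv().unwrap();
        println!("r received {msg}");
        tx_msg2.send("message2").unwrap();
    });

    s.join().unwrap();
    r.join().unwrap();
}
</code></pre>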
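<p>And, as promised above, a minimal sketch of the shard-per-thread idea (the shard count and data are arbitrary): each worker owns its shard and computes a partial result with no communication until it finishes, so the only synchronization is the final join-and-combine step, which keeps the serial fraction from Amdahl's law small.<p><pre><code>use std::thread;

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    let shards = 4;

    // Leaves of the tree: each thread gets an owned shard and works alone.
    let handles: Vec<_> = data
        .chunks(data.len() / shards)
        .map(|shard| {
            let shard = shard.to_vec(); // hand each thread owned data
            thread::spawn(move || shard.iter().sum::<u64>())
        })
        .collect();

    // The only synchronization: join the workers and combine partial results.
    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(total, 500_500);
}
</code></pre>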