It's really a pity the "synchronous" terminology took off, as if the layout of the source text had any particular relationship to the efficiency of the underlying scheduling and multitasking primitives. In the end, what matters for code complexity is how the threads can interact with each other (shared memory, message passing, etc.) and how thoroughly those constraints are enforced; what matters for performance is the efficiency and power of the underlying scheduler and processing. How your code looks in the editor doesn't enter into it.

Things with bad thread-interaction stories and inefficient or ineffective schedulers aren't going to be the Next Big Language, no matter how cool they sound today.

The results in this article are a lot less weird or counterintuitive once you understand that.
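To make that concrete, here's a minimal stdlib-Python sketch (not from the article): the handler below is written in an entirely "blocking" style, and the same source text could run on OS threads (as here) or on cooperative green threads with a monkey-patched socket layer. Only the scheduler underneath changes, not how the code looks.

```python
# Minimal sketch: a blocking-looking echo handler whose scheduling model is
# decided entirely by the server/runtime it is plugged into, not by its syntax.
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self) -> None:
        # "Blocking" reads and writes; the scheduler underneath decides
        # whether another task runs while this one waits on the network.
        for line in self.rfile:
            self.wfile.write(line)

if __name__ == "__main__":
    # Here the scheduler is the OS thread scheduler (one thread per connection).
    with socketserver.ThreadingTCPServer(("127.0.0.1", 9000), EchoHandler) as srv:
        srv.serve_forever()
```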
Eh, I would just limit the front process to coroutines that accept incoming connections and receive complete responses from remote procedure calls, using neither threads nor polling. Launch a lot of worker processes to divide and conquer the actual work, and restart some of them on the other end of a TCP/IP connection if scaling becomes necessary. But that's my preference.
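A rough sketch of that layout, assuming Python's asyncio and a local process pool stand in for the front process and the worker processes (the port, handler, and worker function below are illustrative, not from the slides):

```python
# Front process: only accepts connections and completed results.
# Worker processes: do the actual work, divide-and-conquer style.
import asyncio
from concurrent.futures import ProcessPoolExecutor

def do_work(payload: bytes) -> bytes:
    """Placeholder for the real work, running in a worker process."""
    return payload.upper()

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    loop = asyncio.get_running_loop()
    payload = await reader.read(65536)
    # Hand the work to a worker process; the front process just awaits the result.
    result = await loop.run_in_executor(workers, do_work, payload)
    writer.write(result)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    workers = ProcessPoolExecutor(max_workers=4)  # the worker pool
    asyncio.run(main())
```

Swapping the local pool for workers reached over a TCP/IP connection is what would let you restart or add workers on another machine when scaling is needed.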
Previously: http://news.ycombinator.com/item?id=1551776

An interesting set of slides. Also interesting is the URL. Is this seriously the only copy of this slide deck?
The performance analysis is pretty bad. Comparing synchronous I/O performance against asynchronous I/O solely in terms of bits per second transferred tells only a small part of the story. It doesn't show how well the server scales as more and more connections are added, which is absolutely essential information. It would have been far better if the author had plotted the connection rate and response time against the transfer size per connection.
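Something along these lines (a hypothetical asyncio client sketch; the host, port, and payload are assumptions, not from the article) would get at that: hold the transfer size fixed, sweep the number of concurrent connections, and record both connection rate and response time.

```python
# Rough benchmark sketch: fixed transfer size, increasing concurrency,
# reporting requests/second and median response time at each level.
import asyncio
import statistics
import time

HOST, PORT = "127.0.0.1", 8888   # the server under test (assumed)
REQUEST = b"x" * 1024            # fixed transfer size per connection

async def one_request() -> float:
    start = time.perf_counter()
    reader, writer = await asyncio.open_connection(HOST, PORT)
    writer.write(REQUEST)
    await writer.drain()
    await reader.read(65536)     # read the response
    writer.close()
    await writer.wait_closed()
    return time.perf_counter() - start

async def sweep() -> None:
    for concurrency in (1, 10, 100, 1000):
        t0 = time.perf_counter()
        times = await asyncio.gather(*(one_request() for _ in range(concurrency)))
        elapsed = time.perf_counter() - t0
        print(f"{concurrency:5d} conns: "
              f"{concurrency / elapsed:8.1f} req/s, "
              f"median {statistics.median(times) * 1000:6.1f} ms")

if __name__ == "__main__":
    asyncio.run(sweep())
```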