
High performance services using coroutines

69 points by gmosx about 10 years ago

6 comments

themartorana about 10 years ago
Writing high-performance server software is a whole other world from writing services. In the first, data copies can cause unacceptable slow-downs of a fraction of a millisecond - and those copies may be hidden all the way down at the driver level. When writing services (which are just as often written in Ruby or Python), trading off a few milliseconds for safety is often a worthy thing.

I spend most of my time writing services, and when looking at a language like Go, there's a reason the default is pass-by-value. Passing around pointers to multiple co-routines is asking for a race condition, and in non-GC languages, null pointers. Services are rarely written with the precision of high-performance servers.

I don't envy the server writers (although I can see how it would be fun!). Giving up a few milliseconds per request to make sure my co-routines aren't sharing pointers is worthwhile, and I appreciate the safety that gives me. I'm sure someone will mention that Rust could give me the safety I was looking for in a non-GC language, but that's the point, isn't it - that by being able to game the system, you can gain a few precious microseconds here and there that enforced safety might cost you.
aaronlevin about 10 years ago
This sounds very similar to warp, the Haskell web server known for its high performance on many-core machines. The Web Application Interface (WAI) is also coroutine based (I believe): http://www.aosabook.org/en/posa/warp.html
marktangotango about 10 years ago
>> If not, the coroutine will yield, and at the same time, be scheduled (migrated) to another set of threads responsible for the 'slow' requests (ones that need to block). Those will just schedule the coro in their own scheduler, and perform the read again and continue.

>> Alternatively, we could just yield the coro, waiting for a new special coroutine that would run in a background thread just executing readahead() and then re-scheduling (making runnable) the original coro, waiting for readahead.

Seems to me this scheme will ultimately be limited by the slower requests. Performing fast and slow operations together is essentially Little's Law [1], where the average time dominates. However, if the slow/blocking reads were also async, I think you'd eventually be limited by I/O speed?

[1] http://en.wikipedia.org/wiki/Little%27s_law
SixSigma about 10 years ago
Syscalls are slow. See my previous HN post:

https://news.ycombinator.com/item?id=8961582
rubiquity about 10 years ago
The end implementation sounds a lot like... Erlang. With the exception that this solution avoids allocations that Erlang probably makes due to immutability.
amelius about 10 years ago
Coroutines are essentially a form of cooperative multitasking. I'm not sure we should be using that in this day and age, especially considering that requests are not purely I/O bound in all cases, and could depend on actual computation.