
High performance services using coroutines

69 points · by gmosx · about 10 years ago

6 comments

themartorana · about 10 years ago
Writing high-performance server software is a whole other world from writing services. In the first, data copies can cause unacceptable slow-downs of a fraction of a millisecond - and those copies may be hidden all the way down at the driver level. When writing services (which are just as often written in Ruby or Python), trading off a few milliseconds for safety is often worth it.

I spend most of my time writing services, and when looking at a language like Go, there's a reason the default is pass-by-value. Passing pointers around to multiple coroutines is asking for a race condition - and, in non-GC languages, dangling pointers. Services are rarely written with the precision of high-performance servers.

I don't envy the server writers (although I can see how it would be fun!). Giving up a few milliseconds per request to make sure my coroutines aren't sharing pointers is worthwhile, and I appreciate the safety that gives me. I'm sure someone will mention that Rust could give me the safety I was looking for in a non-GC language, but that's the point, isn't it - by being able to game the system, you can gain a few precious microseconds here and there that enforced safety might cost you.
(comment #9254424 not loaded)
aaronlevin · about 10 years ago
This sounds very similar to warp, the Haskell web server known for its high performance on many-core machines. The Web Application Interface (WAI) is also coroutine-based (I believe): http://www.aosabook.org/en/posa/warp.html
(comment #9254846 not loaded)
marktangotango · about 10 years ago
>> If not, the coroutine will yield and, at the same time, be scheduled (migrated) to another set of threads responsible for the 'slow' requests (ones that need to block). Those will just schedule the coro in their own scheduler, perform the read again, and continue.

>> Alternatively, we could just yield the coro, waiting for a new special coroutine that would run in a background thread, just executing readahead() and then re-scheduling (making runnable) the original coro waiting for readahead.

Seems to me this scheme will ultimately be limited by the slower requests. Performing fast and slow operations together is essentially Little's Law [1], where the average time dominates. However, if the slow/blocking reads were also async, I think you'd eventually be limited by I/O speed?

[1] http://en.wikipedia.org/wiki/Little%27s_law
(comment #9255071 not loaded)
SixSigma · about 10 years ago
Syscalls are slow. See my previous HN post: https://news.ycombinator.com/item?id=8961582
rubiquity · about 10 years ago
The end implementation sounds a lot like... Erlang, with the exception that this solution avoids allocations that Erlang probably makes due to immutability.
(comment #9255057 not loaded)
amelius · about 10 years ago
Coroutines are essentially a form of cooperative multitasking. I'm not sure we should be using that in this day and age, especially considering that requests are not purely I/O-bound in all cases and could depend on actual computation.
(comment #9255990 not loaded)