
A Solution to CPU-intensive Tasks in IO Loops

46 points by pors over 13 years ago

7 comments

ot over 13 years ago
<i>A watchdog thread can be running every n milliseconds. This is very low load on the system. [...] If the loop has not moved onwards since the last sample or two it can be deemed stalled. [...] But you can move the other events in the affected loop to a fresh thread; you can go sideways when you’ve detected a blocking task.</i><p>Congratulations, you just invented (a very inefficient version of) pre-emptive multitasking.
Comment #3555709 not loaded
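The watchdog scheme ot quotes can be sketched in a few lines. This is an illustrative Python toy under assumptions of mine (the function and variable names are invented, not from any real framework): the loop thread bumps a progress counter per event, and a watchdog that sees the counter stall "goes sideways" by letting a fresh thread drain the remaining events.

```python
import queue
import threading
import time

def run_loop(events, tick):
    """Drain events one at a time, bumping the shared counter so the
    watchdog can observe progress."""
    while True:
        try:
            handler = events.get(timeout=0.1)
        except queue.Empty:
            return
        tick[0] += 1
        handler()

def watchdog_demo():
    events = queue.Queue()
    tick = [0]            # progress counter sampled by the watchdog
    results = []

    events.put(lambda: time.sleep(0.5))            # stand-in CPU-bound task
    for i in range(3):
        events.put(lambda i=i: results.append(i))  # quick IO-style handlers

    loop = threading.Thread(target=run_loop, args=(events, tick))
    loop.start()

    # Watchdog: sample every 50 ms; if the counter has not moved since the
    # last sample, deem the loop stalled and hand the queued events to a
    # fresh thread.
    last = tick[0]
    time.sleep(0.05)
    while loop.is_alive():
        if tick[0] == last:
            fresh = threading.Thread(target=run_loop, args=(events, tick))
            fresh.start()
            fresh.join()
            break
        last = tick[0]
        time.sleep(0.05)
    loop.join()
    return results
```

With the watchdog, the three quick handlers finish long before the blocked handler wakes up; without it, they would wait out the full half second behind it.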
rektide over 13 years ago
The final question/answer revolves around:<p><i>Hellepoll has the concept of a task tree - tasks can be subtasks of others, and this simplifies tidy-up when one aborts. This explicit linking of tasks and callbacks can be used to determine what gets migrated when a task blocks, to ensure that the cascade of events associated with a request do not themselves get split, but fire in the originating thread and in the right order even if at some point the thread triggers the blocking watchdog.</i><p>He asks, in closing,<p><i>I am not a node.js user but I wonder if this approach could be transparent in node and not actually break any of the API contract there?</i><p>This is the work being done on 0.8, with domains and isolates. It exists explicitly to allow this kind of task/work parenting: <a href="https://groups.google.com/forum/#!msg/nodejs/eVBOYiI_O_A/-mACjP-CHtsJ" rel="nofollow">https://groups.google.com/forum/#!msg/nodejs/eVBOYiI_O_A/-mA...</a>
Comment #3555873 not loaded
moonchrome over 13 years ago
Why don't you just lock each connection handler to one thread at a time and dispatch events on a thread pool? That way connection-level events are always synchronous, but event handlers are spread over the thread pool; you get optimal load balancing because events fill the pool (no processes), and the thread pool can use its own logic to grow if one channel handler used blocking IO and is blocking a pool thread.<p>This is pretty much what netty does with OrderedMemoryAwareThreadPoolExecutor?<p><a href="http://netty.io/docs/stable/api/org/jboss/netty/handler/execution/OrderedMemoryAwareThreadPoolExecutor.html" rel="nofollow">http://netty.io/docs/stable/api/org/jboss/netty/handler/exec...</a><p><pre><code>-------------------------------------&#62; Timeline ------------------------------------&#62;

Thread X: --- Channel A (Event A1) --.   .-- Channel B (Event B2) --- Channel B (Event B3) ---&#62;
                                      \ /
                                       X
                                      / \
Thread Y: --- Channel B (Event B1) --'   '-- Channel A (Event A2) --- Channel A (Event A3) ---&#62;</code></pre>
Comment #3556471 not loaded
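moonchrome's scheme (serialize per channel, parallelize across channels) can be sketched with a shared pool plus a per-channel FIFO. This is a hypothetical Python toy of mine, not netty's implementation; the class name is invented.

```python
import threading
from collections import deque
from concurrent.futures import ThreadPoolExecutor

class OrderedChannelExecutor:
    """Run each channel's events in submission order, one at a time,
    while events for different channels run in parallel on a shared pool.
    A toy analogue of netty's OrderedMemoryAwareThreadPoolExecutor."""

    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._lock = threading.Lock()
        self._queues = {}      # channel -> deque of pending events
        self._running = set()  # channels currently draining on the pool

    def submit(self, channel, event):
        with self._lock:
            self._queues.setdefault(channel, deque()).append(event)
            if channel not in self._running:
                self._running.add(channel)
                self._pool.submit(self._drain, channel)

    def _drain(self, channel):
        while True:
            with self._lock:
                pending = self._queues[channel]
                if not pending:
                    self._running.discard(channel)  # channel idle again
                    return
                event = pending.popleft()
            event()  # run outside the lock so other channels proceed in parallel

    def shutdown(self):
        self._pool.shutdown(wait=True)
```

Because only one `_drain` task exists per channel at a time, a channel's events never interleave, while independent channels fill the pool freely.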
halayli over 13 years ago
Check out the lthread_compute_begin() and lthread_compute_end() functions. They allow you to block inside a coroutine without affecting other coroutines (example at the end of the page).<p>I prefer coroutines over IO loops because they result in simpler and cleaner code. And with the lthread_compute feature, you get the advantages of real threads plus the lightness of coroutines.<p><a href="https://github.com/halayli/lthread" rel="nofollow">https://github.com/halayli/lthread</a>
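For the flavor of lthread_compute_begin()/lthread_compute_end() without the C: asyncio's `run_in_executor` does the analogous thing, running the blocking section on a worker thread while suspending only the calling coroutine. A small sketch; the helper names are mine, not lthread's.

```python
import asyncio
import time

def blocking_work(n):
    # Stand-in for a CPU-bound or blocking call that would stall the loop.
    time.sleep(0.2)
    return n * n

async def compute(loop, n):
    # Analogue of the lthread_compute_begin()/end() bracket: the blocking
    # section runs on a worker thread; only this coroutine waits for it.
    return await loop.run_in_executor(None, blocking_work, n)

async def main():
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    # Four blocking sections overlap instead of serializing on the loop.
    results = await asyncio.gather(*(compute(loop, i) for i in range(4)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
```

Run back to back on the loop the four calls would take about 0.8 s; offloaded, the whole batch finishes in roughly the time of one.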
jconley over 13 years ago
This is almost exactly what ASP.NET and IIS do in recent iterations.<p><a href="http://blogs.msdn.com/b/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx" rel="nofollow">http://blogs.msdn.com/b/tmarq/archive/2007/07/21/asp-net-thr...</a>
Comment #3555781 not loaded
rektide over 13 years ago
<i>So the server has multiple threads. If a handler blocks in one thread, another thread can pick up incoming requests. So far, so good. In return for needing to carefully synchronise access to shared state, we get to efficiently share that state (even if its just a hot cache of secure session cookies - things you don’t want to be validating every incoming request etc) between many threads and multiplex incoming requests between them.</i><p>Sharing state is bad, m'kay? Let Node do its thing (<i>So the server has multiple threads. If a handler blocks in one thread, another thread can pick up incoming requests.</i>).<p>It's aggravating that this model requires any given handler to be able to service any given request, tbh. Shared state is a folly. A serializing-token scheme might work well: if a request fails to find the data local to its core, it passes a serializing token for the request around the ring of handlers, asking either (a) for the required data, or (b) that the recipient take the token and run with the data.<p>Serializing tokens are a concept Matt Dillon spoke of often at the inception of DragonflyBSD; much like locks, except that ownership is not relinquished (someone always holds the token) but instead phase-changed, yielded to another; it's a responsibility rather than a stateful ownership.<p>Sadly, that token-ownership negotiation requires some kind of interruption of the currently-occupied worker thread: if that thread could be interrupted to do other things, this serializing-token negotiation might be an acceptable bargain (ending with (a) no, I'm busy using that set, (b) sorry, I had the data and was free, so I completed it, or (c) here's the data; I'm busy and not using it). But it does still require thread interruption. If the worker thread can yield frequently and resume, finding the answer might be a small enough, invisible enough calculation to plaster over there being interruptions at all; that's essentially the hope.
The result would be the mating of green threading with location-aware, latency-aware multi-processing.
Comment #3556536 not loaded
Comment #3556209 not loaded
superrad over 13 years ago
Doesn't Erlang effectively do this, allowing its processes to execute only a certain number of VM instructions (reductions) before allowing a switch to another process?
Comment #3557770 not loaded
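The shape of that reduction-counting scheduler can be mimicked with Python generators, each `yield` standing in for one reduction. A toy sketch of mine, nothing like the BEAM's actual internals:

```python
from collections import deque

def run_scheduler(tasks, budget=3):
    """Round-robin generator tasks, preempting each after `budget` steps,
    a toy version of Erlang's per-process reduction counting."""
    trace = []
    ready = deque(tasks)
    while ready:
        name, task = ready.popleft()
        for _ in range(budget):
            try:
                next(task)       # burn one "reduction"
                trace.append(name)
            except StopIteration:
                break            # task finished: do not requeue it
        else:
            ready.append((name, task))  # budget exhausted: requeue so others run
    return trace

def work(steps):
    for _ in range(steps):
        yield  # one unit of work per reduction

# "a" needs 5 reductions, "b" needs 2; with budget=3, "a" is preempted
# after three steps so "b" gets to run before "a" finishes.
trace = run_scheduler([("a", work(5)), ("b", work(2))])
```

The forced preemption is what keeps one long-running process from starving the rest, which is exactly the property the IO-loop watchdog in the article is trying to retrofit.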