<i>So the server has multiple threads. If a handler blocks in one thread, another thread can pick up incoming requests. So far, so good. In return for needing to carefully synchronise access to shared state, we get to efficiently share that state (even if it's just a hot cache of secure session cookies - things you don't want to be validating on every incoming request etc) between many threads and multiplex incoming requests between them.</i><p>Sharing state is bad, m-kay? Allow Node to do its thing (<i>So the server has multiple threads. If a handler blocks in one thread, another thread can pick up incoming requests.</i>).<p>It's aggrieving that this model requires any given handler to be able to service any given request, tbh. Shared state is a folly. A serializing token scheme might work well: if a request fails to find the data local to its core, it passes a serializing token for the request around the ring of handlers, asking each one either (a) for the required data, or (b) to take the token and run with the request itself.<p>Serializing tokens are a concept Matt Dillon spoke of often at the inception of DragonFly BSD; much like locks, except that ownership is never relinquished - someone always holds the token - but instead phase-changed, yielded to another. It's a responsibility passed along more than a stateful ownership.<p>Sadly that token ownership negotiation requires some kind of interruption of the currently-occupied worker thread: if that thread could be interrupted to do other things, this serializing token negotiation might be an acceptable arrangement (ending with (a) "no, I'm busy using that set", (b) "sorry, I had the data and was free, so I completed it", or (c) "here's the data, I'm busy and not using it"). But it does still require thread interruption. If the worker thread can yield frequently and resume, answering the token might be a small enough, invisible enough calculation to plaster over there being interruptions at all; that's essentially the hope. The result would be the mating of green threading with location-aware, latency-aware multi-processing. Two rough sketches of both halves below.
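<p>To make the ring concrete, here's a minimal Go sketch (goroutines standing in for green threads). Everything in it - the Token and Handler types, the ring-of-channels topology, the session key - is invented for illustration; it's a toy of the idea, not DragonFly BSD's tokens or anything Node does:<p><pre><code>package main

import "fmt"

// Token carries an unresolved request around the ring until some
// handler resolves it.
type Token struct {
	key   string      // which data set the request needs
	reply chan string // where the resolution is sent
}

// Handler owns some data local to "its core" and tracks which sets
// it is currently busy using.
type Handler struct {
	id   int
	data map[string]string
	busy map[string]bool
	in   chan Token // tokens arrive here
	next chan Token // pass the token on around the ring
}

func (h *Handler) run() {
	for tok := range h.in {
		val, have := h.data[tok.key]
		switch {
		case have && h.busy[tok.key]:
			// (a) "no, I'm busy using that set" - keep the token moving.
			h.next <- tok
		case have:
			// (b) "I had the data and was free, so I completed it".
			// (Outcome (c) would hand back the raw data instead.)
			tok.reply <- fmt.Sprintf("handler %d served %s=%s", h.id, tok.key, val)
		default:
			// Never had the data: pass the token to the next handler.
			h.next <- tok
		}
	}
}

func main() {
	const n = 3
	chans := make([]chan Token, n)
	for i := range chans {
		chans[i] = make(chan Token, 1)
	}
	handlers := make([]*Handler, n)
	for i := range handlers {
		handlers[i] = &Handler{
			id:   i,
			data: map[string]string{},
			busy: map[string]bool{},
			in:   chans[i],
			next: chans[(i+1)%n], // ring topology
		}
	}
	handlers[2].data["session:42"] = "valid" // only handler 2 holds the set
	for _, h := range handlers {
		go h.run()
	}

	// A request lands on handler 0, misses locally, and circulates a token.
	reply := make(chan string)
	chans[0] <- Token{key: "session:42", reply: reply}
	fmt.Println(<-reply) // resolved once the token reaches handler 2
}</code></pre>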
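<p>The "yield frequently and resume" half is the easier part to sketch: the busy worker polls its token channel between small chunks of work, so the negotiation hides in the gaps rather than needing preemptive interruption. Again a toy; the chunking and channel names are made up:<p><pre><code>package main

import (
	"fmt"
	"time"
)

// worker does its own computation in small chunks, and at each yield
// point services any pending token without blocking.
func worker(tokens <-chan string, done chan<- struct{}) {
	for chunk := 0; chunk < 5; chunk++ {
		time.Sleep(10 * time.Millisecond) // a small slice of real work

		select {
		case key := <-tokens:
			// Negotiation happens here, invisibly between chunks:
			// answer (a), (b), or (c) and carry on.
			fmt.Println("negotiated token for", key, "between chunks")
		default:
			// No token waiting; resume work immediately.
		}
	}
	done <- struct{}{}
}

func main() {
	tokens := make(chan string, 1)
	done := make(chan struct{})
	go worker(tokens, done)

	tokens <- "session:42" // a token arrives mid-computation
	<-done
}</code></pre><p>If the chunks are fine-grained enough, the token round-trip disappears into the worker's natural pauses - which is exactly the green-threads-meet-locality hope above.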