So, as someone who has been working heavily with coroutines and continuations for decades in a number of different languages across the gamut of programming paradigms, I don't really understand why these runtimes aren't "interoperable", and I am hoping I just have a different idea of what that word means than the people who use it in the context of Rust.<p>Like, right now I maintain a large, almost-entirely-asynchronous C++ codebase using the new C++20 co_await monstrosity, and while I find the abstraction ridiculously wide and a bit obtuse, I have never had trouble "interoperating" different "runtimes", and I am not even sure how one could screw it up in a way that would break that... unless maybe these "executors" are some attempt to build some kind of pseudo-thread, but I guess I just feel like that's so "amateur hour" that I would hope Rust didn't do that (right?).<p>So, let's say you are executing inside of a coroutine (the context is unspecified, as it doesn't matter). When this coroutine ends, it will transfer control to a continuation it was given. It now wants to block on a socket, maybe managed by Runtime A (say, Boost ASIO). That involves giving Runtime A a continuation of this coroutine (from the point past that transfer of control), which Runtime A will later execute.<p>Now, after Runtime A calls me--maybe on some background I/O thread--I decide I would prefer my task to be executing in Runtime B. I do this sometimes because I might have a bit of computation to do, and I don't want to block an I/O thread, so I would prefer to be executing inside of a thread pool designed for slow background execution.<p>In this case, I simply await Runtime B (which in this case happens to be my lightweight queue scheduler). I don't use any special syntax for this, because all of these runtimes fully interoperate: I used await to wait for the socket operation, and now I use await to wait until I can be scheduled.
The way these control transfers work is also identical: I pass a continuation of myself after the point of the await to the scheduler, which will call it when I can be scheduled.<p>Now remember, at the beginning of this I was noting that something unspecified had called me. That is ostensibly a Runtime C here (maybe I was waiting for a callback from libwebrtc--which maintains its own runloop--because I asked it to update some ICE parameter, which it does asynchronously). It doesn't matter what it was, because now that "already happened": that event occurred, and the continuation I provided was already executed and has long since completed <i>and returned</i>, as I went on immediately to pass a continuation to someone else rather than blocking.<p>Is this somehow not how Rust works? Is await some kind of magic "sticky" mechanism that requires the rest of this execution to happen in the context of the "same" runtime which is executing the current task? I have seen people try to do that--I am looking at you, Facebook Folly--but, in my experience, attempts to do that are painfully slow, as they require extra state and cause the moral equivalent of a heavyweight context switch for every call, as you drag in a scheduler in places where you didn't need one.<p>But, even when people do that, I have still never had an issue making them interoperate with other runtimes, so that can't be the issue at its core. I guess I should stare at the key place where the wording in this article just feels weird to me... I/O and computation are fairly disjoint, so I can't imagine why you would ever want your I/O scheduler to do "double-duty" and also handle "task queues". When I/O completes, it completes: that doesn't involve a "queue". If you want to be part of a queue, you can await a queue slot. But it sounds like tokio is doing both? Why?