Local async executors and why they should be the default

220 points by rklaehn almost 2 years ago

29 comments

pdimitar almost 2 years ago

Just a personal take: after not coding with Rust for several months, I find it more and more difficult to return to the async code I was writing.

The whole thing just reads... ugly and inconsistent. It needs too much already-accumulated knowledge. As the article correctly points out, you need a bunch of stuff that is seemingly unrelated (and syntactically speaking you would never guess it belongs together). And as other commenters pointed out, you need to scan a lot of docs -- many useful Tokio tools are not just unpromoted, they are outright difficult to find at all.

Now don't get me wrong, I worked on projects where a single Rust k8s node was ingesting 150k events per second. I have seen and believed and I want to use Rust more. But the async story needs the team's undivided attention for a long time at this point, I feel.

Against my own philosophy and values I find myself attracted to Golang. It has a ton of ugly gotchas and many things are opaque... and I still find that more attractive than Rust. :(

This article is a sad reminder for me -- I am kind of drifting away from Rust. I might return and laugh at myself for this comment several months down the line... but at the moment it seems that my brain prefers stuff that's quicker to grok and experiment with. Not to mention writing demos and prototypes is objectively faster.

If I had executive power in the Rust leadership I'd definitely task them with taking a good hard look at the current state of async and start making backwards-incompatible changes (backed by new major semver versions, of course). More macros or simply better-reading APIs might be a very good start. Start making and promoting higher-order concurrency and parallelism patterns, i.e. the `scoped_pool` thingy, for example.
insanitybit almost 2 years ago

I'm sympathetic to this point but I think that:

a) Saying Node + Deno are good is a stretch. Node has horrible performance, even for simple routing. And I'll source that[0].

b) Saying that adding `Send + Sync + 'static` bounds is a serious burden is, to me, overstating things.

> the far better model for writing performant servers.

It's completely workload dependent. For a chat server it's almost definitely not going to be more performant and you may end up with worse latency.

> it only costs you friction everywhere else in your entire codebase, and quite often performance as well.

I am unconvinced tbh. I do not believe that adding Send + Sync + 'static bounds is onerous, I do not believe *satisfying* those bounds is hard (it's almost always just a matter of *moving* the value), and I do not believe that the vast majority of programs benefit from TPC architecture.

I recognize that there is a problem here - that we are optimizing for one runtime at the expense of others - but I am not convinced at this point that the problem matters.

[0] https://www.techempower.com/benchmarks/#section=data-r21&test=json
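For what it's worth, the "just a matter of moving the value" point can be seen with plain std threads, which impose the same `Send + 'static` bound as `tokio::spawn`. A minimal illustrative sketch:

```rust
use std::thread;

fn main() {
    // An owned Vec is Send, and becomes 'static once ownership moves into
    // the closure - no extra synchronization is needed to satisfy the bound.
    let data = vec![1, 2, 3];
    let handle = thread::spawn(move || data.iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 6);
}
```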
runiq almost 2 years ago

This is bad editorializing. You're putting words in the author's mouth that cannot even be found on that page.

Don't do that.

Edit: Thanks to the mods or whoever fixed it.
resonious almost 2 years ago

> Yes the RwLock and mpsc comes from Tokio and lets you .await instead of blocking a thread, but these are not async primitives, these are multi-threading synchronization primitives.

The only reason all this async stuff even exists is because we want concurrency. We want to say "while this one task waits for I/O, this other task will do stuff". So it's not too surprising to me that an intro to async would include synchronization primitives. Those primitives aren't really "thread"-specific if by thread you mean OS thread. When you do async like this, you're basically re-implementing OS threads in user space.
weinzierl almost 2 years ago

*"If you write regular synchronous Rust code, unless you have a really good reason, you don't just start with a thread-pool. You write single-threaded code until you find a place where threads can help you, and then you parallelize it, [..]"*

I cannot agree more with that. As someone who's done a good deal of Java in my day job, I can tell you a thing or two about spawning threads willy-nilly. At least it is easier to avoid in Rust, but I'd still prefer it the other way round: opt-in, instead of opt-out.
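That opt-in progression can be sketched with std alone (the workload here is made up for illustration): start sequential, and parallelize only the spot where measurement says threads help, using scoped threads so no 'static bound gets in the way:

```rust
use std::thread;

fn main() {
    let chunks: Vec<Vec<u64>> = vec![(1..=500).collect(), (501..=1000).collect()];

    // Start with the simple sequential version...
    let sequential: u64 = chunks.iter().flatten().sum();

    // ...then opt in to threads for just this hot spot. Scoped threads may
    // borrow `chunks` directly, so nothing needs to be Arc'd or cloned.
    let parallel: u64 = thread::scope(|s| {
        let handles: Vec<_> = chunks
            .iter()
            .map(|c| s.spawn(move || c.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });

    assert_eq!(sequential, parallel);
    assert_eq!(parallel, 500_500); // sum of 1..=1000
}
```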
namjh almost 2 years ago

> Making things thread safe for runtime-agnostic utilities like WebSocket is yet another price we pay for making everything multi-threaded by default. The standard way of doing what I'm doing in my code above would be to spawn one of the loops on a separate background task, which could land on a separate thread, meaning we must do all that synchronization to manage reading and writing to a socket from different threads for no good reason.

Why so? Libraries like quinn[1] define a "no IO" crate for a runtime-agnostic protocol implementation. That way we don't force ourselves to use synchronization primitives.

Also, IMO it's relatively easy to use a Send-bounded future in a non-Send (in other words, single-threaded) runtime environment, but it's almost impossible to do the opposite. Ecosystem users can freely use a single-threaded async runtime, but ecosystem providers should not. If you want every user to only use a single-threaded runtime, it's a major loss for the Rust ecosystem.

Typechecked Send/Sync bounds are one of the holy grails that Rust provides. Although it's overkill to use multithreaded async runtimes for most users, we should not abandon them, because they open an opportunity for high-end users who might seek Rust for their high-performance backends.

[1]: https://github.com/quinn-rs/quinn
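To illustrate the sans-IO ("no IO") idea - the `LineProto` type below is a toy invented for this sketch, not quinn's API - the protocol logic consumes bytes and emits results, and never touches a socket, a thread, or a runtime, so the caller decides about synchronization:

```rust
// A toy sans-IO line protocol: feed bytes in, get completed lines out.
// All I/O (and any Send/Sync concern) stays outside this type.
struct LineProto {
    buf: String,
}

impl LineProto {
    fn new() -> Self {
        LineProto { buf: String::new() }
    }

    // Accept an arbitrary chunk of input; return any lines completed so far.
    fn feed(&mut self, input: &str) -> Vec<String> {
        self.buf.push_str(input);
        let mut lines = Vec::new();
        while let Some(pos) = self.buf.find('\n') {
            let line: String = self.buf.drain(..=pos).collect();
            lines.push(line.trim_end().to_string());
        }
        lines
    }
}

fn main() {
    let mut proto = LineProto::new();
    // Chunk boundaries don't have to line up with protocol boundaries.
    assert!(proto.feed("hel").is_empty());
    assert_eq!(proto.feed("lo\nwor"), vec!["hello".to_string()]);
    assert_eq!(proto.feed("ld\n"), vec!["world".to_string()]);
}
```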
lionkor almost 2 years ago

I find all the async stuff in Rust incredibly ugly and cumbersome, and it's one of the biggest reasons I still prefer C++. C++ lets me just write single- or multithreaded code, because none of the dependencies force their `async` stuff on me. Yeah, it's up to me to ensure things are synchronized, but I'd rather do that than try to figure out how to get some dependency that isn't meant to use async to work in some async move closure.
pie_flavor almost 2 years ago

The thing the article calls bad is not the thing the examples illustrate. The set-up is supposed to be that multithreading is a pain, but none of the examples actually show that. Take the initial list. "These are not async primitives, *these are multi-threading synchronization primitives.*" Yeah, but they're multi-threaded forms of stuff you still need. If you needed RwLock in multi-threaded mode, you need RefCell in single-threaded mode, and the API is virtually identical. If your state needed to be in an Arc, it'll need to be in an Rc. And if you needed sync::mpsc, you'll instead need... sync::mpsc.

A Send bound is no great burden. The only type that you will regularly interact with that is not Send is the guard type from a Mutex or RwLock, which is *good*, because if you hold it across a long await you can slow down your app - a bug prevention mechanism that does not exist in straight multithreading. The only point that actually illustrates a threading-caused problem is the thing about multithreading sockets, which it admits is almost imperceptible, and which you can also solve by not doing that.

Almost everything the author identifies as a parallelism problem is a 'static problem. Tokio is missing a scoped spawn like std has, and if it gained one then the much-described multithreading woes would reduce to basically nothing.
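The API parity being described can be sketched in a few lines of std-only code - the multi-threaded and single-threaded forms of shared mutable state read almost identically:

```rust
use std::cell::RefCell;
use std::rc::Rc;
use std::sync::{Arc, RwLock};

fn main() {
    // Multi-threaded form: Arc<RwLock<T>>.
    let shared = Arc::new(RwLock::new(0));
    *shared.write().unwrap() += 1;
    assert_eq!(*shared.read().unwrap(), 1);

    // Single-threaded form: Rc<RefCell<T>> - virtually the same API,
    // with borrow checking at runtime instead of cross-thread locking.
    let local = Rc::new(RefCell::new(0));
    *local.borrow_mut() += 1;
    assert_eq!(*local.borrow(), 1);
}
```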
distcs almost 2 years ago

Can a mod fix the title please? The poster of this story has editorialized the title so badly that it has no connection with the actual title.

Actual title: Local Async Executors and Why They Should be the Default

Posted title: Async rust – are we doing it all wrong?

Really? Why this kind of terrible editorializing?
Animats almost 2 years ago

It's a frustrating area. As I've mentioned before, I'm writing a high-performance metaverse client in Rust, something which has many of the problems of both a web browser and an MMO. If you want a good-looking metaverse, it takes a gamer PC with multiple CPUs and a good GPU to deal with the content flood. (This is why Meta's Horizon looks so bad.) Now you have to use all that hardware effectively.

So what I'm writing uses threads. About 20 of them. They're doing different things at different priorities towards a coordinated goal. This is different from the two usual use cases - multiple servers running in the same address space, and parallel computation of array-type data.

Concurrency problems so far:

- Single-thread async is simple. Multi-thread async is complicated. Multi-thread async plus other threads not managed by the async system isn't used enough to be well supported.

- Rust is good at preventing race conditions, but it doesn't yet have a static deadlock analyzer. It needs one.

- Different threads at different priorities do work in both Linux and Windows, but not all that well. With enough low-priority compute-bound threads to keep all CPUs busy, high-priority threads do not get serviced in a timely manner. I was spoiled by previous work on QNX, which, being a true real-time operating system, takes thread priorities very seriously. On QNX, compute-bound background work has almost no effect on the high-priority stuff. Linux just doesn't work well at 100% CPU utilization. Unblocking a lock does not wake up a higher-priority waiting thread immediately. This can delay high-priority threads unnecessarily.

- The WGPU crowd has spent a year getting their locking sorted out so that you can load content into GPU memory while the GPU is rendering something else. It's a standard feature of Vulkan graphics that you can do this, but it has to be supported at all levels above Vulkan too. For me, that's WGPU and Rend3. That stack is close enough to ready to test, but not ready for prime time yet.

- There's no way to cancel a pending HTTP request made with "ureq". "reqwest" supports that, but you have to bring in all the async and Tokio stuff, which means you now have multi-thread async plus other threads. This is only a problem for what I'm doing when the user closes the window and the program needs to exit quickly and cleanly. I'm getting a 5-10 second stall at exit because of this.

- Crossbeam-channel is not "fair"; it's possible to starve out some requests. Parking-lot is fair, but doesn't have thread poisoning, which means that clean shutdowns after a panic are hard.

- Running under Wine with 100% CPU utilization with multiple threads results in futex congestion in Wine's memory allocation library, and performance drops by over 99%, with all CPUs stuck in spinlocks. The program is still running correctly, but at about 0.5 frames per second instead of 60 FPS. Bug reported and recognized by the Wine crew, but it's hard to fix. I can make this happen under gdb running my own code and see all those threads in the spinlocks, so I was able to file a good bug report. But I haven't generated a simple test case. It's a Wine-only problem; it doesn't affect Microsoft Windows.

So that's life in a heavily threaded world.

Individual CPUs have not become much faster in over a decade. Everybody has been stuck at 3-4 GHz for a long time now. CPUs with many cores are widely available in everything from phones to game consoles. To use modern hardware effectively, you need threading.
mgaunard almost 2 years ago

Multithreading, why does everyone always do it wrong?

One of life's big questions.
dpc_01234 almost 2 years ago

Async is and probably will always be less usable than blocking Rust. It is a very, very useful mode of operating when you really need two of its biggest benefits: lightweight cooperative concurrency and task cancellation, but it comes at a big usability cost.

Rust software should use async tactically - in places where it is needed. Unfortunately, handling HTTP, which is a large part of many applications, is actually a place where async has benefits. But if you plan to run your HTTP behind nginx anyway (for TLS termination), even there a blocking HTTP server might be a good idea.

> If you write regular synchronous Rust code, unless you have a really good reason, you don't just start with a thread-pool. You write single-threaded code until you find a place where threads can help you, and then you parallelize it,

I disagree with this one. When you work on a software project you should have the basic architecture figured out already, and a main part of that is breaking your software into structurally parallel parts that can work independently. Adding ad-hoc parallelism after the fact works only for small-scale things and will lead to a rather accidental concurrency architecture.

Then for each part (groups of threads), figure out if it *needs* async. Between each part you'd communicate via channels or some shared data structures that can rather easily be made to work with both async and blocking code.

So e.g. an async HTTP server, which benefits from lightweight async concurrency, makes rpc-like channel-based calls to blocking-IO / CPU/business-logic-intense workers (that don't benefit from async) where it makes sense. Each part is written in the best "type of Rust" for its use case.

Or if you need the ability to cancel certain computations inside the larger framework (e.g. simulating agents etc.) you might want to nest an async executor inside blocking code.

Note: there are a lot of program archetypes out there (CRUD, ETL, data-intensive, embedded, frontend SPA, native mobile app) and I've noticed that many people are boxed into the type they happen to work on. CRUD applications (which are very common) are often 90% HTTP handling and it might make sense to write them whole in async Rust.
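The rpc-like channel pattern between an async front and blocking workers can be sketched std-only (the `Square` request type is invented for illustration): each request carries its own reply channel, so the worker side needs no async at all.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical request type: the reply channel travels with the request.
struct Square {
    n: u64,
    reply: mpsc::Sender<u64>,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Square>();

    // A blocking CPU/business-logic worker thread; an async HTTP handler
    // could just as well be the one sending requests into `tx`.
    let worker = thread::spawn(move || {
        for req in rx {
            req.reply.send(req.n * req.n).unwrap();
        }
    });

    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Square { n: 7, reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 49);

    drop(tx); // closing the request channel lets the worker loop end
    worker.join().unwrap();
}
```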
dvt almost 2 years ago

> If you know anything about asynchronous sockets it should be that multi-threading a socket doesn't actually yield you more requests / second, and it can actually lower it...

Re-read this a few times, and I'm fairly convinced it is not generally true. The author is also being a bit confusing about what exactly he means by "socket" here. Because while it's true that multi-threading over a server socket (e.g. the one that binds to the port when you launch the server) will not yield performance gains, multi-threading clients (that have their *own* sockets, including file descriptors) definitely *will*. That's the whole point of nginx thread pools[1]. Note that nginx does zero "CPU-bound" work, it literally just serves files.

Node/Deno being single-threaded is purely a limitation of Javascript. Tomcat, Jetty, etc. are all multi-threaded. I'm a bit tired, so I can't comment on the rest of the post in detail, but this was a bit of a red flag.

[1] https://gist.github.com/denji/8359866
datadeft almost 2 years ago

For me the biggest issue with async is the management of multiple dependent async calls. It has some weird things going on and I am not sure which pattern to use exactly. Some functions expect exactly the same async fn signature, some don't, and I am not sure why, or which one to use.
dathinab almost 2 years ago

Rust async's design was probably the most correct decision for what Rust is: a general systems programming language which you can use in most situations people today use C, C++ and more.

If Rust were only targeting web server programming, the right decision might have been no async and green threads; it's just much easier to use.

But that Rust would likely never have succeeded, as most of its initial success cases were for use cases where you wouldn't want green threads.

Nicely, we might still get no async and green threads: in the form of runtimes which run WASM-compiled Rust code in a node-like fashion. Probably in combination with some serverless/edge-compute providers which hopefully will be nice to use.
fsckboy almost 2 years ago

the title should be "Local Async Executors and Why They Should be the Default (Rust)"
zokier almost 2 years ago

While the article mostly focuses on the cognitive cost, which I deeply sympathize with, I do wonder about the runtime performance cost. Are there any good benchmarks actually quantifying the impact of all that extra thread-safety and the hoops that it adds? I'm not asking simply due to personal interest in seeing the numbers, but also because if we want to steer the community towards this non-threadsafe direction it would help to have material to back the ideas, and I suspect the Rust community would be more responsive to complaints about perf than about cognitive cost.
rich_sasha almost 2 years ago

Moaning aside, what is (if any?) the direct equivalent to single-threaded asyncio, like Python or node.js, in Rust?

The thing I enjoy about async in Python is that it's very easy to write "thread"-safe code - you know exactly where you might give up execution context, and 90% of the time you have no need for mutexes and locks. But as this article complains, in Rust it seems to be sync or multi-threaded.
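There are real answers here - tokio's current-thread runtime with `tokio::task::LocalSet`, or `futures::executor::LocalPool` - but the core idea can be shown with a toy std-only busy-polling executor. The point is what's missing: no `Send` bound anywhere, so `!Send` types like `Rc` are fine inside the future. (This sketch is not how a production executor works; real ones park on the waker instead of spinning.)

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker is enough because we poll in a loop rather than sleeping.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Drive one future to completion on the current thread.
// Note the complete absence of a Send bound on F.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    // Rc is !Send, which a single-threaded executor happily accepts.
    let rc = std::rc::Rc::new(41);
    assert_eq!(block_on(async { *rc + 1 }), 42);
}
```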
ikekkdcjkfke almost 2 years ago

In C# I always wondered why they couldn't hide the async/await logic for most cases. I never need to fire off two IO futures at the same time, so just make the thread do other stuff if I'm waiting for IO feedback; don't make me type out async/await in all impacted functions, let the compiler figure out when it can process other stuff.
evanrelf almost 2 years ago

> Posted on June 9, 2022
packetlost almost 2 years ago
I&#x27;m still not sure what async (cooperative multitasking) gives over green threads (preemptive userspace multitasking)
gwbas1c almost 2 years ago

When I was between jobs, I decided to learn Rust by writing some async code. I really got stung and spent days doing things that would take minutes in C# or Node.

Part of the problem is that, in Rust, stack memory is easier to use than heap memory. When writing traditional threaded code, this isn't much of an issue, because naturally most of our code is working with values on the stack.

BUT: When we look more closely at how async works in C# or Javascript, the compiler, under the hood, breaks up an async method into multiple methods and puts values that appear to be on the stack onto the heap. (Of course, I'm oversimplifying.) It just works, and it works well.

But, in Rust, making something async implicitly moves what appears to be on the stack onto the heap. It can quickly become hard to reason about.

I wish I had known about the techniques that this article describes. Maybe they would make my code easier? In my current hobby projects, I'm doing traditional blocking IO because it's not "worth it" to write async. (In comparison, in Node, async code helps avoid nesting callbacks within callbacks, and in C#, doing IO async is a best practice.)
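One std-only way to see the "stack moves into the state machine" effect described above: a local that is live across an `.await` must be stored inside the future itself, which is why the future's size grows with its "stack" (and why spawning or storing such futures often means boxing them onto the heap). The future below is never run, only measured.

```rust
fn main() {
    let fut = async {
        let buf = [0u8; 1024]; // looks like a local stack array...
        std::future::ready(()).await; // ...but it's live across this await...
        buf.len() // ...so the compiler stores it inside the future
    };
    // The 1 KiB "stack" array is part of the future's state machine.
    assert!(std::mem::size_of_val(&fut) >= 1024);
}
```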
paletteOvO almost 2 years ago

async/await is ugly and hard to use and understand for me. It is pretty reasonable that Rust chose it, because it is a zero-cost abstraction. But I just don't like it.
samsquire almost 2 years ago

This article argues against multithreaded executors by default, and that synchronous code is easier to read and more practical than async code. I completely understand this.

My hobby and main interest is multithreading, async, coroutines, parallelism, so I love articles like this - thank you.

I am trying to design a solution that lets us have our cake and eat it too. I want multithreaded coroutines or multithreaded async executors by default. I am trying to design a server and runtime that is largely parallel, concurrent, efficient, easy to understand, easy to reason about, async, and easy to read and maintain. I want:

* Tokio is not bad, but I think a codebase that uses async Rust requires a high level of skill, cognitive load and understanding. It's not straightforward!

* synchronous straight-line code is the easiest to read and follow

* promises and callbacks aren't easy to read or follow for control flow

* making a single-threaded program parallel after writing it is almost a complete rewrite

* work-stealing thread pools solve the starvation problem, so they're good!

* IO shouldn't block CPU and CPU shouldn't block IO

* I am trying to design a syntax that is data-oriented, which means programs can be parallelised after writing them, so parallelisation comes for free

* we can use the LMAX Disruptor pattern for efficient cross-thread communication. I use a lock-free multiconsumer multiproducer ringbuffer in my programs.

I have an epoll server which multiplexes clients/sockets over threads; this is more efficient than a thread per socket/client. I need to change it into a websocket server.

Imagine you're a search engine company and you want to index links between URLs. How would you model this with async Rust and thread pools?

    task download-url
        for url in urls:
            download(url)

    task extract-links
        parsed = parse(document)
        return parsed

    task fetch-links
        for link in document.query("a")
            return link

    task save-data
        db.save(url, link)

How would you do control flow and scheduling and parallelism and async efficiently with this code?

`db.save()` and `download()` are IO-intensive, whereas `document.query("a")` and `parse` are CPU-intensive.

I think its work diagram looks like this: https://github.com/samsquire/dream-programming-language/blob/main/Slide2.PNG?raw=true

I've tried to design a multithreaded architecture that is scalable, which combines lightweight threads + thread pools for work + control threads for IO epoll or liburing loops. Here's the high-level diagram:

https://github.com/samsquire/ideas5/blob/main/NonblockingRuntime.drawio.png

The secret is modelling control flow as a data flow problem and having a simple but efficient scheduler.

I wrote about schedulers and binpacking work into time here:

https://github.com/samsquire/ideas4#196-binpacking-work-into-network-and-time-and-server-communication

I also have a 1:M:N lightweight thread scheduler/multiplexer:

https://github.com/samsquire/preemptible-thread
binary132 almost 2 years ago

Rust acolytes need to come to terms with the fact that if they want Rust to be the next C++, it's going to be the next C++. Heck, even C++ async is simpler. Anyway, who needs a package manager when you've got a package manager?
chrismsimpson almost 2 years ago

Suggesting that single-threaded concurrency is *the right way to do it* when building tooling is completely asinine.
dathinab almost 2 years ago

> why multi-threaded task executors should be the default

Many, many reasons, and it's subtle and complicated.

For one, ironically, it's just way easier to use, especially for less experienced programmers. In a task system like that, it's just way easier to accidentally cause major issues with many tasks on one thread than with multiple threads (at least with the guard rails Rust provides for threading safety). On the other hand, the performance overhead of multi-threaded-by-default is just fine for a ton of use cases, to the point you could argue worrying about it is premature optimization.

Though it's important to state that a lot of this comes from the ecosystem around Rust and not Rust itself; as stuff like `LocalSet` shows, you can have a non-multi-threaded runtime, and there is no reason all the libraries you might use couldn't provide non-thread-safe versions. Some do. Just many decided that avoiding the performance overhead of being thread safe isn't worth the maintenance overhead and additional foot guns it can bring.

Now naturally you can say "but node/deno/etc.", but they are a completely different beast than "just" not being multi-threaded by default. E.g. they don't have multi-threaded code at all - just single-threaded code communicating through serialized messages (kinda). They also handle all I/O completely separated from your application code and don't have any non-serialized communication between threads, etc.

Interestingly, if you look at the design choices of the I/O event loop (reactor) of tokio, there are some conceptual similarities. Also, AFAIK there is a company building something similar to the node model for Rust using WASM.

I mean, in the end the node-style approach is great for building servers, but Rust isn't just for building servers; it's much more general, with much more low-level use cases.

Now, the main point where Rust could improve quite a bit is to make it much easier to write a library which works very efficiently in both cases: code which crosses thread boundaries and code which doesn't. Currently you are often split between implementing it twice, doing terrible, unusable generics tricks, or using tons of `cfg` (probably generated using macros/annotations) and hoping no dependency accidentally enables the multi-thread feature when you don't need it. None of this is really viable, but it's a surprisingly hard problem. Currently the best idea I can come up with is generic modules which make the "terrible to use generics tricks" usable, but it's probably not enough by a long stretch. (Even if a solution is found it might not work for Waker, and even if it does you still might want Sync/Send Wakers in some cases.)
hoangnguyenvu almost 2 years ago
reason?
captainmuon almost 2 years ago

Wait, async is multithreaded by default in Rust? For me the whole point of using async in JavaScript or Python (originally with Twisted's @inlineCallbacks) was to get concurrency without threads.

Imagine writing code for a computer game bot: move left, wait for enemy, attack enemy... You normally can't write it like this because it would block the rest of your program. Async allows you to go from "program sequential" to "bot sequential", for lack of better terms. If you are IO bound there is no need for threads. I often like to use one network thread and one GUI thread to keep things separate, and to prevent occasional blocking in one from causing latency in the other. You just need a method to post calls to the other event loop. This works well in Python, as well as with Qt-based apps.

C# on the other hand made the same "mistake" of being very general. I think you can even have a coroutine suspend on one thread and wake on another. It looks like you switch threads in the middle of a function.

I guess that is useful when you want to write performant multithreaded servers. I just want to write easy sequential code without worrying about locks or state machines.