
Rust without the async (hard) part

149 points, by taldridge, nearly 3 years ago

16 comments

Matthias247, nearly 3 years ago
> The problem is that threads just don't work in practice for massive concurrency.

That's an assumption that is repeated very often recently, and measured very rarely. The truth is that the number of applications for which they don't work is surprisingly low. I work at a well-known cloud provider, and lots of people would be really surprised which applications at the largest scale work fine with a thread-per-request model. 50k OS threads are not really an issue on modern server hardware. While it might not be the most efficient approach [1], it will not perform so badly that it causes an availability impact either.

There are obviously some exceptions to that [2], but I encourage people to measure instead of making assumptions. Unless one finds themselves in a weekly meeting about server efficiency or scaling cliffs, both models probably work.

[1] It really depends on the workload, but people might find an efficiency degradation (e.g. measured as BYTES_TRANSFERRED/CPU_CORES_USED) of 20% at a concurrency level of 1000, or maybe only at a concurrency level of 10k. Coarse-grained work items (e.g. sending a large file to a socket) will show a lower degradation.

[2] Load balancers, CDN services, and e.g. chat applications which maintain a massive amount of mostly idle client connections can be such environments. They have a high amount of concurrency that needs to be managed, but less "active concurrency". If all clients were active at the same time, those environments would run out of disk IO or network bandwidth far before CPU or memory become an issue.
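To make the thread-per-request model concrete, here is a minimal thread-per-connection echo server sketch using only the standard library; the address and buffer size are placeholders, and a real server would cap thread count and handle errors. Each connection gets its own OS thread, which is the model the comment argues scales further than commonly assumed.

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // One OS thread per connection; the thread's stack is the only
        // per-connection state that has to be managed explicitly.
        thread::spawn(move || {
            let mut buf = [0u8; 1024];
            while let Ok(n) = stream.read(&mut buf) {
                if n == 0 {
                    break; // peer closed the connection
                }
                if stream.write_all(&buf[..n]).is_err() {
                    break;
                }
            }
        });
    }
    Ok(())
}
```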
zaphar, nearly 3 years ago
Anything using the green/lightweight or OS thread model is usually easier to use, at the cost of some runtime performance. Whether the runtime performance matters for your use case can only be determined by measuring.

The perception that async Rust is where you should start for concurrent Rust, because it's built in and everyone uses it, perhaps should be revisited. I would argue that the other options are worth considering first, and that dropping down to low-level async code might be warranted when you need the performance it gives and that justifies the increased development cost.
lewantmontreal, nearly 3 years ago
I use Rust for the amazing types, map/filter/reduce, and, even though I never write macros myself, beautiful libraries like serde and clap. I do often need to use async to wait on multiple network requests at once, although I'm not quite comfortable with it.

Requesting URLs n-at-a-time took me a while (https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=aec5e4b0e1ca012288c02dabd05a9b0e). In particular, rust-analyzer itself cannot figure out `buffer`'s type here.

You can consider me very intrigued by Lunatic.
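The n-at-a-time pattern the comment refers to is usually spelled with the `buffer` or `buffer_unordered` combinators from the `futures` crate. A sketch along those lines, assuming `reqwest` and `tokio` as the client and runtime; the URLs and the concurrency limit are placeholders:

```rust
use futures::stream::{self, StreamExt};

#[tokio::main]
async fn main() {
    let urls = vec![
        "https://example.com/a",
        "https://example.com/b",
        "https://example.com/c",
    ];

    // Turn the URLs into a stream of request futures and poll at most two at a time.
    let bodies: Vec<_> = stream::iter(urls)
        .map(|url| async move {
            let resp = reqwest::get(url).await?;
            resp.text().await
        })
        .buffer_unordered(2)
        .collect()
        .await;

    for body in bodies {
        match body {
            Ok(text) => println!("{} bytes", text.len()),
            Err(e) => eprintln!("request failed: {e}"),
        }
    }
}
```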
wongarsu, nearly 3 years ago
> However, if you are doing web apps or any networking stuff, massive concurrency benefits are almost always too important to ignore

My problem is more that even if I don't need massive concurrency (say, in a client that only talks to a single server, in a serial manner), I'm still more or less forced into async code because that's what the ecosystem switched to. Whether or not you benefit from async, not using it goes against the grain and generally makes your life harder, despite threads being much better from a language-ergonomics point of view.
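For the serial single-server case described here, one way to stay out of async code is `reqwest`'s blocking API (behind its `blocking` cargo feature); a sketch with a placeholder URL. It does still bundle an async runtime under the hood, which is arguably the commenter's point about the ecosystem:

```rust
fn main() -> Result<(), reqwest::Error> {
    // A plain blocking client: no executor, .await, or Future types in user code.
    let client = reqwest::blocking::Client::new();
    let body = client
        .get("https://example.com/status")
        .send()?
        .text()?;
    println!("{body}");
    Ok(())
}
```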
cshenton, nearly 3 years ago
Why isn't imperative event loop programming more widely used? It's a reasonably common pattern for games networking libraries like Enet, and has the added bonus that you get to design exactly how you lay out the memory of all your in-flight work, and therefore have it be easily debuggable.
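A sketch of that imperative event-loop style in Rust, using the `mio` crate (the readiness layer tokio itself builds on); the address, capacities, and per-connection handling are placeholders. All in-flight state lives in plain data structures owned by the loop, which is what makes the memory layout explicit and easy to inspect:

```rust
use std::collections::HashMap;

use mio::net::{TcpListener, TcpStream};
use mio::{Events, Interest, Poll, Token};

const LISTENER: Token = Token(0);

fn main() -> std::io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(128);
    let mut listener = TcpListener::bind("127.0.0.1:9000".parse().unwrap())?;
    poll.registry()
        .register(&mut listener, LISTENER, Interest::READABLE)?;

    // Explicit, flat per-connection state instead of suspended futures.
    let mut connections: HashMap<Token, TcpStream> = HashMap::new();
    let mut next_token = 1;

    loop {
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            match event.token() {
                LISTENER => loop {
                    // Accept until the nonblocking listener reports WouldBlock.
                    match listener.accept() {
                        Ok((mut stream, _addr)) => {
                            let token = Token(next_token);
                            next_token += 1;
                            poll.registry()
                                .register(&mut stream, token, Interest::READABLE)?;
                            connections.insert(token, stream);
                        }
                        Err(e) if e.kind() == std::io::ErrorKind::WouldBlock => break,
                        Err(e) => return Err(e),
                    }
                },
                token => {
                    // Read from connections[&token] and drive its protocol state here.
                    let _ = connections.get_mut(&token);
                }
            }
        }
    }
}
```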
thecompilr, nearly 3 years ago
For me, async is about ergonomics first of all. When you perform parallel tasks on multiple threads it is hard and ugly (in cross-platform Rust at least) to implement any sort of intricate cross-communication, as communication between threads is asynchronous by nature. And it is very much impossible to stop a thread externally.

Async Rust lets you implement different combinators on async tasks and cancel them effortlessly.

As for performance, tokio is not exactly a zero-cost abstraction. Just run perf on a tokio program to see how big an overhead it introduces. It has claimed to be zero-cost from the start, and since then it has done at least two major performance overhauls to prove the point. That being said, I love tokio and its ecosystem, but it is the ergonomics, not the speed, that I love. async-std was much slower for the networking use case I had, so overall tokio is as good as it gets.
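As a small illustration of the cancellation ergonomics mentioned here: cancelling async work is just dropping the future, for example via `tokio::time::timeout`. The sleep below stands in for some slow network task:

```rust
use std::time::Duration;
use tokio::time::{sleep, timeout};

#[tokio::main]
async fn main() {
    let slow_task = async {
        sleep(Duration::from_secs(10)).await; // placeholder for real I/O
        "done"
    };

    // If the task does not finish within one second, its future is simply
    // dropped; nothing has to be stopped externally, unlike an OS thread.
    match timeout(Duration::from_secs(1), slow_task).await {
        Ok(result) => println!("finished: {result}"),
        Err(_) => println!("cancelled after timeout"),
    }
}
```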
jstx1, nearly 3 years ago
I've done some beginner Rust and Go programming (read "the books" on both, written small programs) and I'm wondering which one to spend more time on or try to get a job with in the future. When I see discussions like this one about Rust, I start to worry that it's unnecessarily complicated and difficult to work with, and that this will only get worse in the future, to the point that it won't be a good fit for many of the use cases it's pitched for. Am I wrong to think this?
verdagon, nearly 3 years ago
Does anyone else get the feeling that we (as a field) are missing something basic about concurrency? Like there's a really elegant solution just around the corner that has the low overhead of async/await without the complexity. Or, put another way, the ease of goroutines but without GC.

I know it sounds crazy. I recently dove into the area, and was pretty surprised at how many interesting building blocks there are out there. It feels like if we just combine them in the right way, we'll discover something that works a lot better.

Off the top of my head:

Google discovered a way to switch between OS threads without the syscall overhead. All it needs is to solve the memory overhead. [0]

Zig discovered a way to use monomorphization to enable colorless async/await. If someone could figure out how to make it work through polymorphism / virtual dispatch, that would be amazing. [1]

Vale discovered a possible way to do structured concurrency in a memory-safe way that's easier than existing methods. [2]

Go [3] and Loom [4] show us that we can move stacks around. Loom is particularly interesting, as it shows we can move the stack to its original location, a unique mechanism that could solve some other approaches' problems with pointer invalidation.

Cone is designing a unique blend of actors and async/await, to enable simpler architectures. [5]

We're close to solving the problem, I can feel it.

[0] No public docs on it, but TL;DR: we tell the OS the thread is blocked, and manually switch over to it by saving/manipulating registers.

[1] https://kristoff.it/blog/zig-colorblind-async-await/

[2] https://verdagon.dev/blog/seamless-fearless-structured-concurrency

[3] https://blog.cloudflare.com/how-stacks-are-handled-in-go/

[4] https://youtu.be/NV46KFV1m-4

[5] Can't find the link, but it was a discussion on their server.
the__alchemist, nearly 3 years ago
This sounds like what I'm looking for for building a set of networking/pentest tools, i.e. being able to spawn an arbitrary number of IO-bound processes without the overhead of OS threads, and without the contagion and fracturing of async.

There may still be some fracturing here, i.e. in the first example (but not the others, inexplicably?) `lunatic::net` rather than `std::net`.
zokier, nearly 3 years ago
Has anyone seen any recent, solid benchmarks of a thread-per-connection web application? What is the break-point load where its performance actually starts to regress and async really becomes useful?
brickbrd, nearly 3 years ago
What does `stream.write_all(&number_as_bytes).unwrap();` do if the socket buffer is full? Does it block the virtual thread running this function? Or does the stream keep buffering? Or is it sending the message to some other process which accumulates those messages? What if I don't want this thread to block and would rather do something else instead?

I believe all of these cases are handled. I just cannot find sufficient documentation to understand the details of how this works.
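For reference, on a plain blocking `std::net::TcpStream`, `write_all` loops over `write` and blocks the calling thread until the kernel has accepted every byte, roughly as sketched below. Whether Lunatic suspends only the lightweight process at that point, rather than an underlying OS thread, is exactly the detail the question asks about; this sketch only shows the std contract being mirrored:

```rust
use std::io::{self, Write};

// Roughly what `write_all` does on a blocking writer: each `write` call may
// block while the socket buffer is full, and the loop continues until every
// byte has been handed off or an error occurs.
fn write_all_like(stream: &mut impl Write, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        match stream.write(buf) {
            Ok(0) => return Err(io::Error::new(io::ErrorKind::WriteZero, "wrote zero bytes")),
            Ok(n) => buf = &buf[n..], // advance past what was accepted
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```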
mamcx, nearly 3 years ago
Is there an alternative to `actix` that can use this model?

Because it sounds interesting, but the hard part is that you need a request/webserver combo to have a chance.

And then there's the DB side...
beebmam, nearly 3 years ago
I've been asking this for months, and I can't seem to find an answer anywhere:

I'm unable to get debugger breakpoints in async functions in Rust to actually break.

Is this a known bug with async Rust? Or is this simply unsupported (yet)? It seems like a really broken experience currently.
ithkuil, nearly 3 years ago
Is it possible to use this on a non-wasm target?
amelius, nearly 3 years ago
Meanwhile, GoLang allows thousands of threads without problems.
bruce343434, nearly 3 years ago
> However, if you are doing web apps or any networking stuff, massive concurrency benefits are almost always too important to ignore

No, you will benefit from parallelism/multithreading. Why only use one core? Multitasking, as it was once called, or "async" as it is now, is fundamentally _synchronous_ because everything still happens on one core. It's just that the order of execution may be a bit wonky, which technically all code already suffers from at the microscopic level with instruction reordering and out-of-order execution. You almost certainly don't *need* multitasking unless you are writing an OS for embedded.