I'm not feeling the async pressure

337 points, posted by pauloxnet over 5 years ago

17 comments

LennyWhiteJr, over 5 years ago

I wish he would have commented on .NET's async implementation. Microsoft really got it right here, and it's arguably the best implementation I've seen.

All .NET async APIs take an optional cancellation token parameter, which solves his flow-control problem by allowing the async request to be canceled at any time. If the token is canceled, the async task will (or should) throw an OperationCanceledException, which can then be cleanly handled in a standard try/catch block up the stack.

The best part about this is that it pervades the entire .NET runtime, the APIs, and the code examples, and has excellent documentation on correct usage patterns. Sure, a third-party library could choose not to support cancellation tokens, but it would be going against the entire .NET ecosystem by doing so. Every other async implementation I've seen has really seemed like a haphazard bolt-on.

I honestly don't know how I would write robust async code without cancellation tokens, so I guess he has a point when it comes to the JavaScript and Python ecosystems.
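There is a partial analogue on the Python side of this thread: asyncio cancellation also surfaces as an exception that can be handled in a try/except up the stack. A minimal sketch, with the coroutine name and the timeout chosen only for illustration:

    import asyncio

    async def fetch_data():
        try:
            await asyncio.sleep(10)          # stand-in for a slow I/O call
            return "result"
        except asyncio.CancelledError:
            # cleanup runs here, then the cancellation propagates up the stack
            print("fetch_data cancelled, cleaning up")
            raise

    async def main():
        task = asyncio.create_task(fetch_data())
        try:
            # give up after 1 second; wait_for cancels the task on timeout
            return await asyncio.wait_for(task, timeout=1.0)
        except asyncio.TimeoutError:
            print("caller handled the cancellation cleanly")

    asyncio.run(main())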
pantulis, over 5 years ago

IMHO all the rage about async comes from an era when interpreted languages were the main trend, as they demonstrably increased developer productivity -- I'm looking at you, Rails, Django. Those platforms were not designed for runtime speed, so it made sense to push the bottleneck to the most immediate backend system in the chain (i.e. your trusty database, which is much faster than your agile framework of choice; remember the discussions of ORMs versus pure SQL?).

Then there came async frameworks a la Node, Twisted et al. and changed everything. Again, in my opinion, async code is harder to reason about than synchronous code.

Things to keep in mind:

- Are you really working at scale? Does the arguably added complexity of async benefit your particular use case? Especially when SPA technologies allow you to build simpler backend-for-frontend systems (pure API, no HTML rendering). And not only regarding pure operational performance: Rails is still impossibly hard to beat when it comes to productivity.

- New players like Go and Rust have async capabilities, but you don't necessarily need to use them to perform closer to native speed, hence they become simpler solutions than Node, Ruby, or Python. Guess that also applies to old dogs with new tricks on the JVM (Micronaut, Quarkus...)
ncmncm, over 5 years ago

I wonder if the advent of async/await in all the popular and upcoming programming languages will be seen, after some (buffering) interval, as a disaster of major proportions. And I wonder how the programming world will respond to the disaster.

It seems advisable to begin that response now. The linked article might be the beginning of such a response, but it seems too tentative. We may need an Iron Law of Flow Control, visibly acknowledged and observed in each system that uses an async/await facility, at the point of use, or with a note explaining where it is handled farther back up the chain.

TCP vs IP is an excellent example of such an alternative: IP does not bother with buffering, except as a completely local performance optimization, and happily drops packets at the first hint of trouble, assured that somebody closer to the source has buffered copies of whatever they actually care about.
majke, over 5 years ago

> *In most async systems … you end up in a world where you chain a bunch of async functions together with no regard of back pressure.*

Yup. Back pressure doesn't compose in the world of callbacks / async. It does compose if designed well in a coroutine world (see: Erlang).

> *async/await is great but it encourages writing stuff that will behave catastrophically when overloaded.*

Yup. It's very hard, and in larger systems impossible, to do back pressure right with the callbacks / async programming model.

This is how I assess the software projects I look at. How fast the database is, is one thing. What does it do when I send it 2 GiB of requests without reading the responses? What happens when I open a bazillion connections to it? Will previously established connections have priority over handling new connections?
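The "write a lot, never read" probe described above can be run with nothing more than a raw socket. A crude sketch, where HOST and PORT are placeholders for the service under test and 2 GiB is the figure from the comment:

    import socket

    HOST, PORT = "127.0.0.1", 8080      # placeholders for the service under test
    TARGET = 2 * 1024 ** 3              # ~2 GiB of request bytes
    chunk = b"x" * 65536

    sock = socket.create_connection((HOST, PORT))
    sent = 0
    while sent < TARGET:
        # Never call recv(): once the peer stops draining its buffers,
        # sendall() blocks here -- that blocking is TCP back pressure
        # reaching the client. The question is whether the server stays
        # healthy while this connection is stalled.
        sock.sendall(chunk)
        sent += len(chunk)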
fyp, over 5 years ago

I really wish we had more control over the scheduling of async tasks.

For a JavaScript example I ran into recently, say I am firing off a fetch for each image that comes into view in a large gallery. If I suddenly scroll down to the 1000th image, a naive implementation might fire off 1000 fetches for all the images we scrolled past. Then you'll be waiting a long time before the images in your current viewport are loaded.

Backpressure can save you a little bit here. Say you do the semaphore trick mentioned in the article and only allow a maximum of, say, 10 fetches in flight at once. Then if you quickly scroll through, all the subsequent fetches after the initial ones should fail, including the ones at the viewport you stop at. But since the queue is short, when the images in your current viewport retry, they should now succeed.

This works but it isn't ideal. Ideally I would be able to just reprioritize the newer fetches to be LIFO instead of FIFO. Or maybe inspect what's currently queued up (and how big the queue is) so I can cancel everything that I don't need.

The backpressure solutions might just be a symptom of async tasks not being controllable in any way once started, which is why you're forced to commit to it or not from the start, even if that might not be the best point in time to make that decision.
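For reference, a minimal asyncio sketch of the "semaphore trick" the comment refers to; the `fetch_image` coroutine and the limit of 10 are illustrative. As noted above, it caps how many fetches are in flight but gives no way to reorder or inspect what is already queued:

    import asyncio

    async def fetch_image(url, limit):
        async with limit:                  # waiters queue up FIFO behind the semaphore
            await asyncio.sleep(0.1)       # stand-in for the actual HTTP request
            return url

    async def main():
        limit = asyncio.Semaphore(10)      # at most 10 fetches in flight at once
        urls = [f"image-{i}.jpg" for i in range(1000)]
        await asyncio.gather(*(fetch_image(u, limit) for u in urls))

    asyncio.run(main())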
j88439h84, over 5 years ago

As mentioned in the article, Python's Trio solves all of these issues much better than asyncio does.

https://trio.readthedocs.io
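For context, a small Trio sketch of the style the comment is pointing at -- structured concurrency plus an explicit capacity limit. The `worker` coroutine and the numbers are illustrative:

    import trio

    async def worker(n, limiter):
        async with limiter:            # back pressure: only 10 workers run at once
            await trio.sleep(0.1)      # stand-in for real work

    async def main():
        limiter = trio.CapacityLimiter(10)
        async with trio.open_nursery() as nursery:
            for n in range(100):
                nursery.start_soon(worker, n, limiter)
        # the nursery block only exits once every child task has finished

    trio.run(main)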
kd5bjo, over 5 years ago

> So why is write not doing an implicit drain? Well it's a massive API oversight and I'm not exactly sure how it happened.

Separating these two operations allows code to use multiple write() calls to build up a single record atomically before yielding control to the system, where some other task might also write to the same stream. This reasoning is only valid if the program is running on a single thread, but that's a reasonable architecture decision for many programs.
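The split described above is visible in asyncio's stream API: write() only appends to a buffer, and the separate drain() is where back pressure is actually applied. A minimal sketch, with host, port, and payload as placeholders:

    import asyncio

    async def send_record(writer: asyncio.StreamWriter):
        # several writes build up one record in the buffer...
        writer.write(b"HEADER\n")
        writer.write(b"BODY\n")
        writer.write(b"END\n")
        # ...and only drain() awaits until the transport is ready again,
        # which is the point where back pressure is applied
        await writer.drain()

    async def main():
        reader, writer = await asyncio.open_connection("127.0.0.1", 8080)
        await send_record(writer)
        writer.close()
        await writer.wait_closed()

    asyncio.run(main())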
christiansakai, over 5 years ago

Why is Go mentioned here? AFAIK, Go's goroutines aren't async in the NodeJS sense; they are just lightweight user-space threads, so the code blocks like regular threaded code.
jph, over 5 years ago

Ideally async and backpressure will advance to make better use of scheduling, prioritizing, quality-of-service shaping, termination conditioning, and the like.

As an example, there's a big difference between the async needs of one paying customer who's loading one medical-chart web page vs. some free third-party web crawler attempting a daily full-text scan of an entire site.

Async with cost functions feels like a promising area for real-world use cases.
Doxin, over 5 years ago

It seems to me that all this boils down to is the fact that await/async makes back pressure something you need to deal with explicitly. Having the default be buffering isn't ideal, but since each application will have its own idea of what to do with backpressure, it'd be hard to figure out a different default that works better.

In any case, all this can be solved without major rewrites by making sure every awaitable is awaited *at some point in the future*. Instead of doing this:

    while connection.accept():
        handle_connection()

you might do something like this:

    connection_pool = []
    while connection.accept():
        connection_pool.append(handle_connection())
        if len(connection_pool) >= MAX_CONNECTIONS:
            await wait_any(connection_pool)

And there you go. Any time there are more than MAX_CONNECTIONS in flight, the program stops accepting new connections, providing back pressure. It's more code, but it's also defining exactly HOW to provide back pressure. Your specific use case might warrant providing back pressure based not on connection count but on CPU usage or average response times. You might want to have a single global maximum connection count instead of one per thread. All of these aren't much more complex than what I've shown above as long as you keep the cardinal rule in mind: any awaitable MUST be awaited.

And in fact in Python -- and other programming languages too, I bet -- you get a warning on exit if there are any awaitables that never got awaited. In my opinion that should be an error. I can't think of a single scenario where it'd be proper form to never await an awaitable. You might await immediately, sometime in the future, or at the end of the program. But you never don't await at all.

Threading isn't any easier, or harder. If you spawn a thread for each connection you run into the same issue as await unless you do something about it. If you use a pool of threads you get backpressure for free, but the same goes for a pool of awaitables!

tl;dr: async/await has the exact same problems as threads, the tooling around async/await is just less mature. Rewriting async code to provide back pressure is near trivial.
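A runnable asyncio rendering of the pattern sketched above, with asyncio.wait standing in for the hypothetical wait_any and handle_connection as a placeholder for real work:

    import asyncio

    MAX_CONNECTIONS = 100

    async def handle_connection(i):
        await asyncio.sleep(0.1)           # stand-in for real request handling

    async def accept_loop():
        pending = set()
        for i in range(10_000):            # stand-in for an accept() loop
            pending.add(asyncio.create_task(handle_connection(i)))
            if len(pending) >= MAX_CONNECTIONS:
                # stop accepting until at least one handler finishes:
                # this pause is the back pressure
                done, pending = await asyncio.wait(
                    pending, return_when=asyncio.FIRST_COMPLETED)
        await asyncio.gather(*pending)     # every awaitable gets awaited eventually

    asyncio.run(accept_loop())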
mayoff, over 5 years ago

Backpressure is the big thing that Reactive Streams adds to Rx/ReactiveX.

https://www.reactive-streams.org/
iforgotpassword, over 5 years ago

My main problem with async is that it's much harder to build a mental model of what exactly is going on. Admittedly, I've only had to deal with existing code so far, doing only minor work in it, so you can definitely say it's due to lack of experience. However, it seems that while problems like concurrent access to shared memory known from traditional threaded code don't exist, having to think about what can block etc. is an equal burden.
grok2, over 5 years ago
How do actor model languages (like ponylang) handle this? It seems like not having back-pressure support would be a fundamental issue with the language.
carapace, over 5 years ago

FWIW, having climbed Twisted's learning curve long ago, the current async-all-the-things fad looks so childish to me. I remember when Tornado came out and I was like, why would you use a go-kart when there's a free *Maserati* right there?

https://twistedmatrix.com/trac/
davidjnelson, over 5 years ago

It doesn't solve the issue of calling an API that doesn't return a promise, but the tslint rule "no-floating-promises" flags missing awaits, which is quite useful. Also useful is flagging awaits on expressions that don't return a promise, which can be done with the tslint rule "await-promise".
panitaxx, over 5 years ago

In Node.js you can use streams or RxJS. Both handle backpressure quite nicely, and they work on bytes as well as on objects.
andrewstuart, over 5 years ago

The thing I like about async Python is being able to run subprocesses and capture and process their stdout/stderr.
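A minimal sketch of what the comment describes, using asyncio's subprocess support; the ls command is just an example:

    import asyncio

    async def run_and_capture(*cmd):
        proc = await asyncio.create_subprocess_exec(
            *cmd,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )

        async def read_stream(stream, label):
            # process output line by line while the subprocess runs
            while True:
                line = await stream.readline()
                if not line:
                    break
                print(label, line.decode().rstrip())

        await asyncio.gather(
            read_stream(proc.stdout, "out:"),
            read_stream(proc.stderr, "err:"),
        )
        return await proc.wait()

    asyncio.run(run_and_capture("ls", "-l"))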