
Async Rust is not safe with io_uring

212 points by ethegwo · 7 months ago

22 comments

withoutboats3 · 7 months ago
This has nothing to do with async Rust; monoio (and possibly other io-uring libraries) are just exposing a flawed API. My ringbahn library, written in 2019, correctly handled this case by having a dropped accept future register a cancellation callback to be executed when the accept completes.

https://github.com/ringbahn/ringbahn
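(Not ringbahn's actual code, but a minimal sketch of the idea, with made-up names such as CompletionSlot and on_accept_cqe: the future's Drop leaves a note for the reactor, and whichever side sees the completed accept last closes the fd instead of leaking it.)

    // Hypothetical sketch of the "cancellation callback on drop" idea; not ringbahn's API.
    use std::os::fd::{FromRawFd, OwnedFd, RawFd};
    use std::sync::{Arc, Mutex};

    // Shared between an Accept future and the reactor that reaps CQEs.
    enum SlotState {
        Waiting,           // future alive, accept still in flight
        Cancelled,         // future dropped before the CQE arrived
        Completed(RawFd),  // kernel finished the accept; fd not yet claimed
    }

    struct CompletionSlot(Mutex<SlotState>);

    impl CompletionSlot {
        fn new() -> Arc<Self> {
            Arc::new(CompletionSlot(Mutex::new(SlotState::Waiting)))
        }
    }

    struct Accept {
        slot: Arc<CompletionSlot>,
        // submission bookkeeping elided
    }

    impl Drop for Accept {
        fn drop(&mut self) {
            let mut state = self.slot.0.lock().unwrap();
            match std::mem::replace(&mut *state, SlotState::Cancelled) {
                // CQE already arrived: we own the fd, so close it rather than leak it.
                SlotState::Completed(fd) => drop(unsafe { OwnedFd::from_raw_fd(fd) }),
                // CQE not here yet: the Cancelled marker tells the reactor to clean up.
                _ => {}
            }
        }
    }

    // Called by the reactor when the accept CQE is reaped.
    fn on_accept_cqe(slot: &CompletionSlot, fd: RawFd) {
        let mut state = slot.0.lock().unwrap();
        if matches!(*state, SlotState::Cancelled) {
            // The future was dropped: close the orphaned connection.
            drop(unsafe { OwnedFd::from_raw_fd(fd) });
        } else {
            // Future still alive: hand the fd off; its poll() will claim it.
            *state = SlotState::Completed(fd);
        }
    }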
ordu · 7 months ago
> "The title of this blog might sound a bit dramatic, but everyone has different definitions and understandings of 'safety.'"

Still, in the Rust community "safety" is used in a very specific sense, so I don't think it is correct to use whatever definition you like while speaking about Rust. Or, at the least, the article should start with your specific definition of safety/unsafety.

I don't want to reject the premise of the article, that this kind of safety is very important, but for Rust, unsafety without using "unsafe" matters much more than an OS dying from leaked connections. I read through the article looking for Rust's kind of unsafety and found that I had been tricked. It is very frustrating; it reads to me like a lie with some lame excuses afterwards.
xyst · 7 months ago
Notably, the io_uring interface has been a significant source of vulnerabilities. Last year, Google's security team decided to disable it in their products (ChromeOS, Android, GKE) and production servers [1].

Containerd maintainers soon followed Google's recommendations and updated the seccomp profile to disallow io_uring calls [2].

io_uring was also called out specifically by the kernel security team for exposing increased attack surface, long before the Google report was released [3].

Seems like less of a Rust issue and more a matter of bugs in io_uring? I suppose user-space apps can provide a band-aid fix, but ultimately it needs to be handled in the kernel.

[1] https://security.googleblog.com/2023/06/learnings-from-kctf-vrps-42-linux.html?m=1

[2] https://github.com/containerd/containerd/pull/9320

[3] https://lwn.net/Articles/902466/
smatija · 7 months ago
This reference at the bottom of the article was very interesting to me: https://without.boats/blog/io-uring/

"So I think this is the solution we should all adopt and move forward with: io-uring controls the buffers, the fastest interfaces on io-uring are the buffered interfaces, the unbuffered interfaces make an extra copy. We can stop being mired in trying to force the language to do something impossible. But there are still many many interesting questions ahead."
n_plus_1_acc · 7 months ago
It's not about memory safety, as you might assume from the title. There's no soundness bug involved.
api · 7 months ago
There are async libraries, like glommio (which I'm using for a new project), that I think avoid this, but they require you to factor things a little differently than tokio does.

Maybe cancellation itself is problematic. There's a reason it was dropped from threading APIs, and AFAIK there is no way to externally cancel a goroutine. Goroutines are like async tasks with all the details hidden from you, since Go is a higher-level language.
whytevuhuni · 7 months ago
I don't get it. What's the ideal scenario here?

That going to the sleep branch of the select should cancel the accept? Will cancelling the accept terminate any already-accepted connections? Shouldn't it be *delayed* instead?

Shouldn't newly accepted connections be dropped only if the listener is dropped, rather than when the listener.accept() future is dropped? If listener.accept() is dropped, the queue should live with the listener object, and thus the event should still be available in that queue on the next listener.accept().

This seems more like a bug in the runtime than anything else.
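(For readers who haven't read the article: this is roughly the shape being discussed, written here against tokio's familiar API rather than the io_uring runtime the article uses. On a readiness-based runtime like tokio's default, dropping the accept future in the timeout branch is harmless; the article's concern is completion-based io_uring runtimes, where the kernel may already have accepted a connection by the time the future is dropped.)

    // Sketch of the select-with-timeout pattern under discussion (tokio-flavoured).
    use std::time::Duration;
    use tokio::net::TcpListener;
    use tokio::time::sleep;

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:8080").await?;
        loop {
            tokio::select! {
                res = listener.accept() => {
                    let (stream, peer) = res?;
                    println!("accepted connection from {peer}");
                    drop(stream); // a real server would spawn a task to handle it
                }
                _ = sleep(Duration::from_millis(100)) => {
                    // Timed out: the accept() future is dropped here. On an io_uring
                    // runtime the kernel-side accept may already have completed,
                    // which is the connection-loss scenario the article describes.
                    println!("tick: doing other periodic work");
                }
            }
        }
    }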
lifthrasiir · 7 months ago
This aspect of io_uring does affect a lot of surface APIs, as I have experienced at work. At least for me, I didn't have to worry much about borrowing, though.
ciconia · 7 months ago
Disclaimer: I'm not a Rust programmer.

Having written a few libs for working with io_uring (in Ruby), cancellation is indeed tricky with regard to keeping track of buffers. This is an area where working with fibers (i.e. stackful coroutines) is beneficial. If you keep the metadata (and even the buffers) for the ongoing I/O op on the stack, there's much less book-keeping involved. Managing the I/O op's lifetime, especially cleaning up, becomes much simpler, as long as you make sure not to return before receiving a CQE, even after having issued a cancellation SQE.
Sytten · 7 months ago
We badly need a way to express cancellation safety other than documentation. This is not just an io_uring problem; a lot of futures in tokio are not cancel-safe. Are there any RFCs on the subject?
pjdesno · 7 months ago
Since io_uring has similar semantics to just about every hardware device ever (e.g. NVMe submission and completion queues), are there any implications of this for Rust in the kernel? Or in SPDK and other user-level I/O frameworks?

Note that I don't know a lot about Rust, and I'm not familiar with the rules for Rust in the kernel, so it's possible that either it's not a problem or the problematic usages violate the kernel coding rules (although in the latter case it doesn't help with non-kernel frameworks like SPDK).
wg0 · 7 months ago
I have tried to learn Rust; the borrow checker is no problem, but I can't get my head around lifetimes, and then Rc, Box, Arc, and pinning, along with async Rust, are a whole other story.

Having programmed in raw C, I know Rust is more like TypeScript: once you try it after writing JavaScript, you can't go back to plain JavaScript for anything serious. You would rather have some guard rails than no guard rails at all.
duped · 7 months ago
While it's true that the "state" of a future is only mutated in the poll() implementation, it's up to the author of the future implementation to clone/send/call the Waker provided in the context argument to signal to the executor that poll() should be called again, which I believe is how one should handle this case.
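(A minimal sketch of that handshake using only std types, no io_uring involved: poll() stashes a clone of the Waker from the Context, and whoever finishes the work calls wake() so the executor knows to poll again. The Delay and Shared names are made up for illustration.)

    // Hand-rolled future illustrating the Waker handshake described above.
    use std::future::Future;
    use std::pin::Pin;
    use std::sync::{Arc, Mutex};
    use std::task::{Context, Poll, Waker};
    use std::thread;
    use std::time::Duration;

    #[derive(Default)]
    struct Shared {
        done: bool,
        waker: Option<Waker>,
    }

    struct Delay(Arc<Mutex<Shared>>);

    impl Future for Delay {
        type Output = ();

        fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
            let mut shared = self.0.lock().unwrap();
            if shared.done {
                Poll::Ready(())
            } else {
                // Store a clone of the waker so the completing side can tell the
                // executor to poll us again. Re-storing on every poll handles the
                // case where the task has since been given a different waker.
                shared.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }

    fn delay(dur: Duration) -> Delay {
        let shared = Arc::new(Mutex::new(Shared::default()));
        let completer = Arc::clone(&shared);
        thread::spawn(move || {
            thread::sleep(dur);
            let mut s = completer.lock().unwrap();
            s.done = true;
            if let Some(w) = s.waker.take() {
                w.wake(); // signal the executor: poll() should be called again
            }
        });
        Delay(shared)
    }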
whalesalad · 7 months ago
I am really confused that Rust was not designed to do async out of the box. Am I wrong that third-party libraries (like tokio) are required to do this?
MuffinFlavored · 7 months ago
How common a pattern is it to accept in a loop but also on a timeout, so that you can preempt and go do some other work?
NooneAtAll3 · 7 months ago
> // we loose the chance to handle the previous one.

lose?
amoss · 7 months ago
Who is Barbara?
jerf · 7 months ago
There are certain counterintuitive things that you have to learn if you want to be a "systems engineer" in a general sense, and this whole async thing has been one of the clearest lessons to me over the years of how seemingly identical things sometimes cannot be abstracted over.

Here by "async" I don't so much mean async/await versus threads, but these kernel-level event interfaces, regardless of which abstraction a programming language lays on top of them.

At the 30,000-foot view, all the async abstractions are basically the same, right? You just tell the kernel "I want to know about these things, wake me up when they happen." Surely the exact way in which they happen is not something so fundamental that you couldn't wrap an abstraction around all of them, right?

And to some extent you can, but the result is generally so lowest-common-denominator as to appeal to nobody.

Instead, every major change in how we handle async has essentially obsoleted *the entire programming stack based on the previous ones*. Changing from select to epoll was not just a matter of switching out the fundamental primitive; it tended to cascade up almost the entire stack. Huge swathes of code had to be rewritten to accommodate it, not just the core where you could do a bit of work and "just" swap out select for epoll.

Now we're doing it again with io_uring. You can't "just" swap out your epoll for io_uring and go zoomier. It cascades quite a ways up the stack. It turns out the guarantees that these async mechanisms provide are very different and very difficult to abstract. I've seen people discuss how to bring io_uring to Go, and the answer seems to basically be "it breaks so much that it is questionable whether it is practically possible". An ongoing discussion on an Erlang forum seems to imply it's not easy there either (https://erlangforums.com/t/erlang-io-uring-support/765); I'd bet it reaches up "less far" into the stack, but it's still a huge change to BEAM, not "just" swapping out the way async events come in. I'm sure many similar discussions are happening everywhere about how to bring io_uring into existing code, both runtimes and user-level code.

This does not mean the problem is unsolvable, by any means. This is not a complaint, or a pronouncement of doom, or an exhortation to panic, or anything like that. We did indeed collectively switch from select to epoll. We will collectively switch to io_uring eventually. Rust will certainly be made to work with it. I am less certain about the ability to easily write shared libraries that work efficiently in both environments, though; if you go lowest-common-denominator enough to work in both, you're probably taking on the very disadvantages of epoll in the first place. But programmers are clever and have a lot of motivation here. I'm sure interesting solutions will emerge.

I'm just highlighting that, as you grow in your programming skill, your software architecture abilities, and general systems engineering, this provides a very interesting window into how abstractions can not just leak a little, but leak a *lot*, a long way up the stack, much farther than your intuition may suggest.

Even as I am typing this, my own intuition is still telling me "Oh, how hard can this really be?" And the answer my eyes and my experience give my intuition is, "Very! Even if I can't tell you every last reason why in exhaustive detail, the evidence is clear!" If it were "just" a matter of switching, as easy as it *feels* like it ought to be, we'd all *already* be switched. But we're not, because it isn't.
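(One concrete way to see why the change reaches so far up the stack is buffer ownership. The signatures below are hypothetical, neither trait comes from a real library: a readiness-style read can borrow the caller's buffer for the duration of the call, while a completion-style read has to take ownership of the buffer until the kernel reports completion. Everything written against the first shape, from traits to buffer pools to cancellation handling, changes when you move to the second.)

    // Hypothetical signatures only; neither trait is from a real library.
    use std::future::Future;
    use std::io;
    use std::task::{Context, Poll};

    // epoll-style (readiness): the kernel touches the buffer only inside the
    // call, so borrowing the caller's &mut [u8] is fine.
    trait ReadinessRead {
        fn poll_read(&mut self, cx: &mut Context<'_>, buf: &mut [u8]) -> Poll<io::Result<usize>>;
    }

    // io_uring-style (completion): the kernel may write into the buffer at any
    // point until the completion arrives, so ownership of the buffer moves into
    // the operation and comes back out with the result.
    trait CompletionRead {
        type ReadFuture: Future<Output = (io::Result<usize>, Vec<u8>)>;
        fn read(&mut self, buf: Vec<u8>) -> Self::ReadFuture;
    }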
lsofzz · 7 months ago
<3
newpavlov · 7 months ago
Yet another example of async Rust being a source of unexpected ways to shoot yourself in the foot... Async advocates can argue as long as they want about "you're holding it wrong", but to me it sounds like people arguing that you can safely use C/C++ just by being "careful".
zbentley · 7 months ago
My hot take is that the root of this issue is that the destructor side of RAII *in general* is a bad idea. That is, registering custom code in destructors and running it *invisibly, implicitly, maybe sometimes but only if you're polite* is not and never was a good pattern.

This pattern causes issues all over the place: in C++ with headaches around destruction failure and exceptions; in C++ with confusing semantics around destruction of incompletely-initialized things; in Rust with "async drop"; in Rust (and all equivalent APIs) in situations like the one in this article, where failing to remember to clean up resources on I/O multiplexer cancellation causes trouble; and in Java and other GC-ful languages, where custom destructors create confusion and bugs around when (if ever), and in the presence of what future program state, destruction code actually runs.

Ironically, two of my *least* favorite programming languages are examples of ways to mitigate this issue: Golang and JavaScript runtimes.

Golang provides "defer", which, when promoted widely enough as an idiom, makes destructor semantics explicit and provides simple and consistent error semantics. "defer" doesn't actually *solve* the problem of leaks and partial state being left around, but it gives people an obvious option to solve it themselves by hand.

JavaScript runtimes go to a similar extreme: no custom destructors, and a stdlib/runtime so restrictive and thick (vis-a-vis I/O primitives like sockets and weird in-memory states) that it's hard for users even to get into sticky situations related to auto-destruction.

Zig also does a decent job here, but only with memory allocations/allocators (which are ironically one of the few resource types that can be handled automatically in most cases).

I feel like Rust could have been the definitive solution to RAII-destruction-related issues, but it chose instead to double down on the C++ approach, to everyone's detriment. Specifically, because Rust has so much compile-time metadata attached to values in the program (mutability-or-not, unsafety-or-not, movability/copyability/etc.), I often imagine a path not taken in which automatic destruction (and custom automatic destructor code) was only allowed for types and destructors that provably interacted *only with in-user-memory state*. Things referencing other state could be detected at compile time and required to deal with that state in explicit, non-automatic destructor code (think Python context managers, or drop handles requiring an explicit ".execute()" call).

I don't think that world would honestly be too different from the one we live in. The Rust runtime wouldn't have to get much thicker: we'd have to tag data returned from syscalls that *don't* imply the existence of cleanup-required state (e.g. select(2) and allocator calls, since we could still automatically run destructors that only interact with cleanup-safe, user-memory-only values), and untagged data (whether from e.g. fopen(2), an unsafe/opaque FFI call, or an asm! block) would require explicit manual destruction.

This wouldn't solve all problems. Memory leaks would still be possible. Automatic memory-only destructors would still risk lockups due to e.g. page faults/CoW dirtying or infinite loops, and could still crash.

But it would "head off at the pass" tons of issues, not just the one in the article:

Side-effectful functions would become much more explicit (and not as easily concealable with if-error-panic-internally); library authors would be encouraged to separate external-state-containing structs from user-memory-state-containing ones; destructor errors would become synonymous with specific programmer errors related to in-memory twiddling (e.g. out-of-bounds accesses) rather than failures to account for every possible state of an external resource, and as a result automatic destructor errors unconditionally aborting the program would become less contentious; the surface area for challenges like "async drop" would be massively reduced or sidestepped entirely by removing the need for asynchronous destructors; destructor-related crash information would be easier to obtain even in non-unwinding environments...

Maybe I'm wrong and this would require way too much manual work on the part of users coding to APIs that require explicit destructor calls.

But heck, I can dream, can't I?
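(For the Rust side of this, the closest thing available today to the "explicit destructor call" idea is the hand-rolled pattern below: a consuming close() that can return errors, with Drop kept only as a best-effort backstop. Connection here is a made-up type, not a real library API.)

    // Sketch of the "explicit destruction" pattern: errors surface through close(),
    // and Drop only covers the case where the caller forgot to call it.
    use std::io::{self, Write};
    use std::net::{Shutdown, TcpStream};

    #[must_use = "call close() so shutdown errors are not silently ignored"]
    struct Connection {
        stream: Option<TcpStream>,
    }

    impl Connection {
        fn new(stream: TcpStream) -> Self {
            Connection { stream: Some(stream) }
        }

        // Explicit, fallible teardown: flush and shut down, reporting any error.
        fn close(mut self) -> io::Result<()> {
            if let Some(mut stream) = self.stream.take() {
                stream.flush()?;
                stream.shutdown(Shutdown::Both)?;
            }
            Ok(())
        }
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            // Backstop only: the caller skipped close(), so errors cannot be reported.
            if let Some(stream) = self.stream.take() {
                let _ = stream.shutdown(Shutdown::Both);
            }
        }
    }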
sylware · 7 months ago
Dude, machine code generated with gcc/clang is not safe in the first place. This is only the tip of the iceberg.