> The fact that Rust developers who are interfacing with the Linux project seem completely unaware of the downsides of RAII reminds me of when the US ambassador to Denmark thought that their collaborators biked to work because they were too poor to own a car.

I would imagine kernel developers working with Rust are quite aware of the downsides of RAII when it comes to large synchronous drops, and aware of arena allocation patterns. They are also likely aware of both arenas that run destructors on drop and arenas that don't, and the relative merits of each. They are *also* likely aware of arenas that you can garbage collect over time, as enabled by generational index patterns.

The good news is that if profiling shows this to be a bottleneck, it is relatively easy to safely switch code that does allocations to using an arena instead (you'd have to add a lifetime parameter to everything tied to the arena).

RAII remains a great way to solve many real problems in systems programming, and banning tools which enable it on the off chance that you'd run into performance issues with synchronous drops, *without any data to back it up*, doesn't seem wise.
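As a rough userspace sketch of that "add a lifetime parameter to everything tied to the arena" point (this uses the `bumpalo` crate purely for illustration; kernel code would use its own allocator, and the type names here are made up):

```rust
use bumpalo::Bump;

// Anything that stores references into the arena carries its lifetime.
struct Node<'arena> {
    value: &'arena u32,
    next: Option<&'arena Node<'arena>>,
}

fn main() {
    let arena = Bump::new();

    // Individual allocations are just pointer bumps into the arena...
    let a = Node { value: arena.alloc(1), next: None };
    let b = Node { value: arena.alloc(2), next: Some(arena.alloc(a)) };

    println!("{} -> {:?}", b.value, b.next.map(|n| n.value));

    // ...and the whole batch is released in one shot when `arena` drops.
    // Note: bumpalo does not run `Drop` for the values it holds; the
    // `typed_arena` crate is the variant that does, which matches the
    // "arenas that run destructors on drop, and arenas that don't" split.
}
```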
This article implies that batch allocations are some kind of panacea and that RAII has no place there.

Neither of these implications has any case made for it, just links to other publications that do the same.

It also alludes to downsides but never defines them, and it doesn't document any downsides of the alternatives either.

Despite deep familiarity with the space and an understanding of both sides, I can't really see it saying much of anything, in fact.
This is a "Structure of Scientific Revolutions" scenario: <a href="https://press.uchicago.edu/ucp/books/book/chicago/S/bo13179781.html" rel="nofollow">https://press.uchicago.edu/ucp/books/book/chicago/S/bo131797...</a><p>In any field, be it engineering or the sciences, accomplished and intelligent practitioners find themselves resisting new technology for reasons that are ultimately inscrutable and personal --- for example, fear of the unknown or fear of new technology devaluing their in the old. <i>Because</i> these practitioners are experienced and intelligent, they're able to construct elaborate plausible-sounding technical arguments against the new technologies. But since they're <i>starting</i> with the opposition and working <i>backwards</i> to a rationale, what these practitioners do when they argue against the new technology isn't so much science as apologetics.<p>Apologies can be frustrating because, superficially, they resemble earnest argumentation --- but because, in apologetics, the authors starts with a conclusion and works backwards to an argument, an apology contains insidious logical traps that are difficult to detect and disarm, especially when the new technology (as all new technologies do) contain genuine gaps and flaws.<p>It's because of this dynamic that many fields advance "one retirement at a time". We should all make a conscious effort, when evaluating new technology, to distinguish earnest technical criticisms from justifications for feeling averse to change --- and consciously suppress the latter.
RAII is all about making the compiler do what "disciplined" C code is already doing manually. As mentioned elsewhere in this thread, a common C coding style and idiom is to goto to the end of your scope, where all the free calls go. The resulting behavior is pretty close to RAII.

The argument in favor of RAII is that it's a lot less error-prone to let the compiler write that part for you.

A fine goal of course, but I feel like this post is overstating it, or perhaps conflating it with something else (e.g. the aside about trying to convince the GPU API maintainer that the issues they hit also affect drivers written in C may have little or nothing to do with Rust or RAII).
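A toy Rust sketch of that "let the compiler write the cleanup" point (the names are invented for illustration; in C this would be the `out:` label with the free/unmap calls):

```rust
struct MappedBuffer {
    data: Vec<u8>,
}

impl Drop for MappedBuffer {
    fn drop(&mut self) {
        // In C this would live under a cleanup label at the end of the scope.
        println!("releasing {} bytes", self.data.len());
    }
}

fn process(fail_early: bool) -> Result<(), &'static str> {
    let buf = MappedBuffer { data: vec![0u8; 4096] };

    if fail_early {
        // No goto needed: `buf` is dropped on this early return too.
        return Err("bail out");
    }

    println!("processed {} bytes", buf.data.len());
    Ok(())
    // ...and dropped here on the success path.
}

fn main() {
    let _ = process(true);
    let _ = process(false);
}
```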
Biased and unsubstantiated article, written by the "VP of Community at the Zig Software Foundation".

Lack of RAII in Zig is one of the reasons why I refuse to write any software in it: it's such a backwards design decision that it throws decades of programming language safety in the bin. `defer` can be forgotten and is not enforced at compile time; it's a terrible solution. Also, `defer` can be implemented in terms of RAII if needed.

Yes, yes... arena allocation can be faster than deallocating every object one by one, Casey is a convincing/charismatic individual with extreme and biased opinions and an extremely narrow programming domain, and RAII can be misused and cause slowdowns.

Every programmer who understands RAII knows these things. Guess what: when it's provably better, you can use RAII for the whole arena and not each individual component. Also, data-oriented design is not inherently incompatible with abstractions and RAII.

I'm tired of seeing all these misconceptions when, TBH, it's mostly a skill issue.
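A minimal sketch of the "defer can be implemented in terms of RAII" claim: a hand-rolled scope guard whose destructor runs a closure at scope exit (the `scopeguard` crate is the usual off-the-shelf version of this).

```rust
// A guard that runs its closure when it goes out of scope, RAII-style.
struct Defer<F: FnMut()>(F);

impl<F: FnMut()> Drop for Defer<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    // Bound to a named variable so it lives until the end of the scope.
    let _cleanup = Defer(|| println!("runs at scope exit, like Zig's defer"));
    println!("body of the scope");
    // `_cleanup` drops here, so the closure fires after this line.
}
```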
> personally hope that the Linux kernel never adopts any RAII, as I already have to waste way too much time for other slow software to load.

The only example given, and an irrelevant one, is that of a poorly written app (Visual Studio) when the topic is a GPU driver. So is the GPU driver written with RAII slow? Is there a faster non-RAII version?
The confusing part of this article is the argument that the kernel maintainers are strongly against RAII. While I guess it's possible Asahi Lina is unaware of this, or is stubbornly including it anyway because she believes it's such a large benefit, it seems more likely that it isn't actually the dealbreaker.

In fairness, I'm hardly an expert on C++, Rust, or kernel development, so it's possible that I'm missing something.
Except arenas are well expressible in Rust. What's more, the fact that there are lifetime constraints makes it possible to have abstractions for them that are both safe and idiomatic.
I'm even less sure what to make of this article after developments over the past two weeks, and especially this past weekend: it seems to be mostly Rust FUD wrapped in a screed against RAII, implying that somehow Rust will harm the kernel, maybe. But in the past two weeks...

16 Nov, Linux 6.13 Introducing New Rust File Abstractions: https://www.phoronix.com/news/Linux-6.13-Rust-File-Abstract

26 Nov, 3K Lines Of New Rust Infrastructure Code Head Into Linux 6.13: https://www.phoronix.com/news/Linux-6.13-Rust

30 Nov, Linux 6.13 Hits A "Tipping Point" With More Rust Drivers Expected Soon: https://www.phoronix.com/news/Linux-6.13-char-misc-More-Rust

That last one is particularly interesting, with the following quote from Greg K-H: "rust misc driver bindings and other rust changes to make misc drivers actually possible. I think this is the tipping point, expect to see way more rust drivers going forward now that these bindings are present. Next merge window hopefully we will have pci and platform drivers working, which will fully enable almost all driver subsystems to start accepting (or at least getting) rust drivers. This is the end result of a lot of work from a lot of people, congrats to all of them for getting this far, you've proved many of us wrong in the best way possible, working code :)"
This is a very silly straw man.

RAII is a syntax feature that doesn't imply specific semantics for how big the "resource" is, or how it's allocated or freed. The only difference between Rust and Zig here is that Rust calls the deallocator automatically and invisibly, while Zig requires allocations to go through an allocator parameter and an explicit call to free. Neither forces you to allocate one object at a time, nor do they prevent batching or the use of arenas. To me it seems just as easy to allocate MyStruct[100] in either language.
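A small sketch of that "MyStruct[100]" point in Rust (the struct name is just a placeholder): one allocation, one deallocation, with RAII only deciding *when* the free happens, not how many there are.

```rust
#[derive(Clone, Copy, Default)]
struct MyStruct {
    x: u64,
    y: u64,
}

fn main() {
    // One heap allocation for the whole batch of 100 elements...
    let batch: Box<[MyStruct]> = vec![MyStruct::default(); 100].into_boxed_slice();

    let total: u64 = batch.iter().map(|s| s.x + s.y).sum();
    println!("{} items, sum {}", batch.len(), total);

    // ...and one deallocation when `batch` goes out of scope here.
}
```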
I'm more disappointed by someone who thinks that Rust macros are better than Zig comptime.

Rust macros are, IMO, one of the weakest parts of the language. Sure, you can do anything. But to do so, you have to break everything down into lexical tokens, rearrange them, and feed a new token stream back to the compiler. And to do that, you have to pull in a bunch of crates for syntax analysis that should really be a fundamental part of the compiler and that are welded to a specific Rust version. Bleargh!

Zig comptime has been a real breath of fresh air. It's great that it runs at compile time, and, if I need to debug it, I can generally force it to be runtime code. You write the same code you were going to write anyway, without creating weird meta-languages that have a magic expansion phase that is impossible to debug.
Beyond just performance, RAII (a cryptically named concept, BTW) introduces a lot of implicit, indirect behavior, making code much more complicated and harder to reason about. When do your constructor and destructor fire when a dynamic array grows? Did you use the copy-and-swap idiom properly? Did you define all the "rule of five" methods? It's bonkers.

I think a "defer" or "on scope exit" statement is just about ideal: it gets you the same benefits, but much more simply.
I am not sure what this article achieves except citing other people's opinions on using or not using RAII (and a few other mechanisms).

It even admits that using an arena allocator in Rust would address most (if not all) of their reservations towards RAII. And mentioning that there are badly written programs out there that take a while to shut down, while informative as an anecdote, is not at all firm evidence that the same will happen in the kernel.

I mean, it's the kernel, and the utmost attention should be given to any potential performance or stability concerns, of course!

But it does not seem like the people against it are willing to gather data. Which again puts the Rust devs on the back foot. It seems like constant impediment, akin to a bureaucracy that knows you'll eventually get what you came for but puts every possible obstacle in your path first. :(

Overall, the article ended just when I expected it to get a bit more serious and data-oriented, or at least to give a little more than opinions and feelings about the point of contention.
RAII is not unavoidable, though.

The whole point of being against RAII is that forcing it has downsides you cannot get away from.

But Rust has mem::forget, and it is not even unsafe (it is in std only currently, but I see no reason why the kernel cannot have its own version).

So to me this seems a pointless argument between:

* calling the equivalent of drop() manually every time, except in some cases

* calling mem::forget only when you need it, and having automatic drop() everywhere else

...Maybe mem::forget has not taken hold as a pattern? Still, these arguments do not look very technical to me; they look much more religious.
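A quick userspace sketch of the mem::forget point (the type here is made up): ownership is handed over and the destructor simply never runs, with no unsafe involved.

```rust
struct Loud(&'static str);

impl Drop for Loud {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let a = Loud("a");
    let b = Loud("b");

    // Opt out of the automatic drop for this one value; nothing unsafe.
    std::mem::forget(b);

    // Explicit drop also works, mirroring "calling drop() manually".
    drop(a);

    println!("end of main");
    // Output: only "dropping a"; `b`'s destructor was skipped.
}
```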
The author of this is the VP of Community for Zig; his only professional experience, according to LinkedIn, is in similar advocacy roles. It's an incredibly bad look for the Zig community then, in my opinion, for him to try, somewhat sarcastically and disrespectfully ("oh no!", "I already have to waste way too much time for other slow software to load"), to school Asahi Lina on "writing performance-oriented software" when she's one of the most impressive software engineers in the public eye right now.

This is just a FUD-y hit piece on RAII. It, frankly, smacks of Dunning-Kruger (I know, not statistically a real thing, but rhetorically useful): a writer way out of his depth hears someone praise RAII, and out of ignorance and professional obligation launches an attack by linking some canned "RAII Bad" sources, not realizing that the person he's attacking knows vastly more about, well, basically everything than he does.
I've always wondered: why not just fork Linux, add Rust there, keep it in sync with mainline, and maintain it for a long time to demonstrate that it's a viable strategy and that there are enough volunteers willing to actually spend their time on it, rather than turning the Rust part of the kernel into abandonware when so few developers are available, before trying to get it supported upstream?
Imagine this: we have a group of friends and we're having a good time. We all speak English. One or two guys invite some people who always speak Klingon. They invite more, and now a large part of the group wants equal support for Klingon in D&D and other activities. They insist it's a better language when it's really just a different language, not better or worse. The English speakers are trying to stop the Klingon speakers from changing their group into something new that they don't like (and that isn't provably better).

Rust is Klingon. C is working fine and is the language everyone is already familiar with. If Rust were actually better, it would flow into the group organically.

Being technically more memory safe doesn't make people like it more. Proper tooling with C can prevent memory issues, taking away the main argument for Rust.

In the end, we don't need any more of a reason to stop Rust other than "we don't like Rust".