I don't believe that Rust solves the right problems in the right ways. This is specifically with respect to the single-owner RAII/lifetime system; the rest of the language is imo pretty nice (aside from the error messages, which are an implementation problem).

For starters, ATS[1] and F*[2] both provide much stronger safety guarantees, so if you want the strongest possible guarantees that your low-level code is correct, you can't stop at Rust.

_____________________________________________
Beyond that, it's helpful to look at the bigger picture of what characteristics a program needs to have, and what characteristics a language can have to help facilitate that. I propose that there are broadly three program characteristics affected by a language's ownership/lifetime system: throughput, resource use, and ease of use/correctness. That is: how long does the code take to run, how much memory does it use, and how likely is it to do the right thing / how much work does it take to massage your code into something the compiler will accept. This last one is admittedly rather nebulous. It depends quite a lot on an individual's experience with a given language, as well as overall experience and attention to detail. Even leaving aside specific language experience, different individuals may rank different languages differently, simply due to different approaches and thinking styles. So I hope you will forgive my speaking a little loosely about ease of use/correctness.

The primary resource that programs need to manage is memory[3]. We have several strategies for managing memory:

(Note: implicit/explicit below refers to whether something is an explicit part of the type system, not whether it is explicit in user code.)

- implicitly managed global heap, as with malloc/free in C

- implicit stack-based RAII with automatically freed memory, as in C++, or C with alloca (though this is not usually a general-purpose solution, it can be[4]; more interestingly, it can be composed with other strategies)

- explicitly managed single-owner abstraction over the global heap and possibly the stack, as in Rust (contrasted with reference counting in the sketch below)

- explicit automatic reference counting as an abstraction over the global heap and possibly the stack, as in Swift

- implicit memory pools/regions

- explicit automatic tracing garbage collection as an abstraction over the global heap, possibly the stack, possibly memory regions (as in a nursery GC), possibly a compactor (as in a compacting GC), as in Java

- custom allocators, which may have arbitrarily complicated designs, be arbitrarily composed, be arbitrarily explicit, etc.; not possible to enumerate them all here

I mentioned before that there are three attributes relevant to a memory management scheme. But there is a separate axis along which we have to consider each one: worst case vs. average case. A tracing GC will usually have higher throughput than an automatic reference counter, but the automatic reference counter will usually have more consistent performance. On the other hand, an automatic reference counter is usually implemented on top of something like malloc. Garbage collectors generally need a bigger heap than malloc, but malloc has a pathological fragmentation problem which a compacting garbage collector is able to avoid.

This comment is getting very long already, and comparing all of the above systems would be out of scope.
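To make the single-owner and reference-counted strategies above concrete, here's a minimal Rust sketch of my own (illustrative only): the first value has exactly one owner and is freed when that owner goes out of scope; the second is shared through Rc handles and freed when the count reaches zero.

    use std::rc::Rc;

    fn main() {
        // Explicit single owner: the String has exactly one owner at a time.
        // Assigning it moves ownership; the old binding can no longer be used,
        // and the value is freed when the current owner goes out of scope.
        let owned = String::from("single owner");
        let moved = owned; // `owned` is unusable from here on
        println!("{}", moved);

        // Explicit automatic reference counting: ownership is shared, and the
        // value is freed when the last Rc handle is dropped.
        let shared = Rc::new(String::from("shared owner"));
        let also_shared = Rc::clone(&shared); // bumps the count, no deep copy
        println!("{} / {} (count = {})", shared, also_shared, Rc::strong_count(&shared));
    } // all handles dropped here; the count reaches zero and the String is freed

Nothing here is specific to Rust's borrow checker; Swift's ARC or C++'s shared_ptr would illustrate the refcounted half just as well.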
But I'll make a few specific observations and field further arguments as they come:

- Because of the fragmentation problem mentioned above, memory pools and special-purpose allocators will always outperform a malloc-based system in both resource usage and throughput (memory management is constant-time, plus better cache coherency); a toy pool sketch follows the summary below.

- Additionally, implicitly managed memory pools are usually easier to use than an implicitly managed global heap, because you don't have to think about the lifetime of each individual object.

- Implicit malloc/free in C should generally perform similarly to an explicit single-owner system like Rust's, because most of the allocation time is spent in malloc, and both have little (or no) runtime overhead on top of that. The implicit system may have a slight edge because it allows more flexible data structures; then again, the explicit single-owner system may have a slight edge because it has more opportunity to allocate locally defined objects directly on the stack if their ownership is never given away. But these are marginal gains either way.

- Naïve reference counting will involve a significant performance hit compared to any of the above systems. *However*, there is a heavy caveat. Consider what happens if you take your single-owner verified code, remove all the lifetime annotations, and give it to a reference-counting compiler. Assuming it has access to all your source code (which is a reasonable assumption; the single-owner compiler has that), then if it performs even *basic* optimizations (this isn't a sufficiently-smart-compiler[5]-type case), it will elide all the reference counting overhead. Granted, most reference-counted code isn't written like this, but it means that reference counting isn't a performance dead end, and it's not difficult to squeeze your rc code to remove some of the rc overhead if you have to (see the second sketch below).

- It's possible to have shared mutable references, but forbid sharing them across threads (also in the second sketch below).

- The flexibility gains from having shared mutable references are not trivial, and can significantly improve ease of use.

- Correctness improvements from strictly defined lifetimes are a myth. Lifetimes aren't an inherent part of any algorithm; they're an artifact of the fact that computers have limited memory and need to reuse it.

To summarize:

- When maximum performance is needed, pools or special-purpose allocators will always beat single-owner systems.

- For all other cases, the performance cap on reference counting is identical to single-owner systems, while the flexibility cap is much higher.
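As a rough illustration of the pool point above (my own toy sketch with made-up names, not a real allocator), here's what a per-frame pool can look like: allocation is effectively a bump into contiguous storage, and everything is released at once, so there's no per-object free and no interaction with the global heap's fragmentation.

    // Toy per-frame pool: hypothetical, illustrative only.
    struct Particle { x: f32, y: f32 }

    struct Pool {
        items: Vec<Particle>,
    }

    impl Pool {
        fn new(capacity: usize) -> Self {
            Pool { items: Vec::with_capacity(capacity) }
        }

        // "Allocate" by handing back an index into contiguous storage.
        fn alloc(&mut self, p: Particle) -> usize {
            self.items.push(p);
            self.items.len() - 1
        }

        // Release everything at once; no per-object lifetimes to track.
        fn reset(&mut self) {
            self.items.clear();
        }
    }

    fn main() {
        let mut pool = Pool::new(1024);
        for i in 0..10 {
            let _ = pool.alloc(Particle { x: i as f32, y: 0.0 });
        }
        // ... use the particles for this frame ...
        let sum: f32 = pool.items.iter().map(|p| p.x + p.y).sum();
        println!("{}", sum);
        pool.reset(); // all of their lifetimes end together
    }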
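And a minimal sketch of the two reference-counting points, using Rust's own Rc/RefCell as a stand-in for a refcounted language (the names are mine): shared mutable handles that the compiler confines to one thread, and a hot loop that borrows a plain reference so it does no refcount traffic at all.

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Shared, mutable, single-threaded: Rc hands out as many owners as you
        // like, RefCell checks borrows at runtime, and Rc is not Send, so the
        // compiler already forbids sharing these handles across threads.
        let counter = Rc::new(RefCell::new(0u64));
        let handle = Rc::clone(&counter); // one refcount bump, up front

        // Squeezing out rc overhead: borrow once for the hot loop instead of
        // touching the Rc on every iteration, so the loop itself does no
        // reference-count traffic at all.
        {
            let mut n = handle.borrow_mut();
            for _ in 0..1_000_000 {
                *n += 1;
            }
        }

        println!("{}", *counter.borrow());

        // This would not compile, because Rc<RefCell<u64>> is not Send:
        // std::thread::spawn(move || { *counter.borrow_mut() += 1; });
    }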
_____________________________________________

1. http://www.ats-lang.org/

2. https://fstar-lang.org/

3. File handles and mutex locks also come up, but those require different strategies. Happy to talk about those too, but tl;dr: file handles should be avoided where possible and refcounted where not; mutexes should also be avoided where possible, and scoped where not.

4. https://degaz.io/blog/632020/post.html

5. https://wiki.c2.com/?SufficientlySmartCompiler