<i>"But the biggest potential is in ability to fearlessly parallelize majority of Rust code, even when the equivalent C code would be too risky to parallelize. In this aspect Rust is a much more mature language than C."</i><p>Yes. Today, I integrated two parts of a 3D graphics program. One refreshes the screen and lets you move the viewpoint around. The other loads new objects into the scene. Until today, all the objects were loaded, then the graphics window went live. Today, I made those operations run in parallel, so the window comes up with just the sky and ground, and over the next few seconds, the scene loads, visibly, without reducing the frame rate.<p>This took about 10 lines of code changes in Rust. It worked the first time it compiled.
> C libraries typically return opaque pointers to their data structures, to hide implementation details and ensure there's only one copy of each instance of the struct. This costs heap allocations and pointer indirections. Rust's built-in privacy, unique ownership rules, and coding conventions let libraries expose their objects by value<p>The primary reason C libraries do this is not for safety, but to maintain ABI compatibility. Rust eschews dynamic linking, which is why it doesn't bother. Common Lisp, for instance, does the same thing as C, for similar reasons: the layout of structures may change, and existing code in the image has to be able to deal with it.<p>> Rust by default can inline functions from the standard library, dependencies, and other compilation units. In C I'm sometimes reluctant to split files or use libraries, because it affects inlining<p>This is again because C is conventionally dynamically linked and Rust statically linked. If you use LTO, cross-module inlining will happen.
> "Clever" memory use is frowned upon in Rust. In C, anything goes. For example, in C I'd be tempted to reuse a buffer allocated for one purpose for another purpose later (a technique known as HEARTBLEED).<p>This made me laugh
> computed goto<p>I did a deep dive into this topic lately when exploring whether to add a language feature to zig for this purpose. I found that, although finicky, LLVM is able to generate the desired machine code if you give it a simple enough while loop continue expression[1]. So I think it's reasonable to not have a computed goto language feature.<p>More details here, with lots of fun godbolt links: <a href="https://github.com/ziglang/zig/issues/8220" rel="nofollow">https://github.com/ziglang/zig/issues/8220</a><p>[1]: <a href="https://godbolt.org/z/T3v881" rel="nofollow">https://godbolt.org/z/T3v881</a>
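Rust is in the same boat: there is no computed goto, so interpreter-style dispatch is written as a plain loop plus a match, and you rely on LLVM to lower the match to a jump table (and, with luck, into threaded dispatch). A toy sketch of that shape, not taken from the linked issue:<p><pre><code>// Toy bytecode interpreter: the hot loop is a single `match`, and the
// hope is that LLVM lowers it to a jump table, which is what computed
// goto gives you by hand in GNU C. Whether each arm's tail gets
// duplicated into threaded dispatch is up to the optimizer.
fn run(code: &[u8]) -> i64 {
    let mut acc: i64 = 0;
    let mut pc = 0usize;
    loop {
        match code.get(pc).copied() {
            Some(0) => acc += 1,
            Some(1) => acc -= 1,
            Some(2) => acc *= 2,
            _ => return acc, // unknown opcode or end of program
        }
        pc += 1;
    }
}

fn main() {
    println!("{}", run(&[0, 0, 2, 1])); // ((0 + 1 + 1) * 2) - 1 = 3
}
</code></pre>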
As an observation, performance-optimized code is almost always effectively single-threaded these days, even when using all the cores on a CPU to process workloads very efficiently. Given this, it is not clear to me that Rust actually buys much when it comes to parallel programming for performance. Is there another reason to focus on parallelism aside from performance?<p>This reminds me of when I used to write supercomputing codes. Lots of programming-language nerds would wonder why we didn’t use functional models to simplify concurrency and parallelism. Our code was typically old-school C++ (FORTRAN was already falling out of use). The truth was that 1) the software architecture was explicitly single-threaded — some of the first modern thread-per-core designs — to maximize performance, obviating any concerns about mutability and concurrency, and 2) the primary performance bottlenecks tended to be memory bandwidth, of which functional programming paradigms tend to be relatively wasteful compared to something like C++. Consequently, C++ was actually simpler and higher performance for massively parallel computation, counterintuitively.
We've implemented network drivers in C and Rust and did a performance comparison. Interestingly, the C-to-Rust-transpiled code ended up being faster than the original C implementation: <a href="https://github.com/ixy-languages/ixy-languages/blob/master/Rust-vs-C-performance.md" rel="nofollow">https://github.com/ixy-languages/ixy-languages/blob/master/R...</a>
I completely agree with the points made here, it matches my experience as a C coder who went all-in on Rust.<p>>"Clever" memory use is frowned upon in Rust. In C, anything goes. For example, in C I'd be tempted to reuse a buffer allocated for one purpose for another purpose later (a technique known as HEARTBLEED).<p>Ha!<p>>It's convenient to have fixed-size buffers for variable-size data (e.g. PATH_MAX) to avoid (re)allocation of growing buffers. Idiomatic Rust still gives a lot of control over memory allocation, and can do basics like memory pools, combining multiple allocations into one, preallocating space, etc., but in general it steers users towards "boring" use of memory.<p>Since I write a lot of memory-constrained embedded code this actually annoyed me a bit with Rust, but then I discovered the smallvec crate: <a href="https://docs.rs/smallvec/1.5.0/smallvec/" rel="nofollow">https://docs.rs/smallvec/1.5.0/smallvec/</a><p>Basically it lets you give your vectors a fixed inline (not on the heap) capacity, and it will automatically reallocate on the heap if the vector grows beyond that bound. It's the best of both worlds in my opinion: it removes a whole lot of small useless allocs but you still have all the convenience and API of a normal Vec. It might also help slightly with performance by removing useless indirections.<p>Unfortunately this doesn't help with Strings since they're a distinct type. There is a smallstring crate which uses the same optimization technique, but it hasn't been updated in 4 years so I haven't dared use it.
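A tiny sketch of what that looks like in practice (assuming <code>smallvec = "1"</code> in Cargo.toml): the first N elements live inline, and pushing past the inline capacity transparently spills to the heap.<p><pre><code>use smallvec::SmallVec;

fn main() {
    // Up to 8 bytes live inline, with no heap allocation.
    let mut buf: SmallVec<[u8; 8]> = SmallVec::new();
    for b in 0..8u8 {
        buf.push(b);
    }
    assert!(!buf.spilled());

    // The 9th element forces a spill onto the heap; the Vec-like API
    // keeps working exactly as before.
    buf.push(8);
    assert!(buf.spilled());
    assert_eq!(buf.len(), 9);
}
</code></pre>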
This entire article is nonsense. To a first approximation, the speed of your program in 2021 is determined by locality of memory access and overhead with regard to allocation and deallocation. C allows you to do bulk memory operations, Rust does not (unless you turn off the things about Rust that everyone says are good). Thus C is tremendously faster.<p>There is this habit in both academia and industry where people say "as fast as C" and justify this by comparing to a tremendously slow C program, but don't even know they are doing it. It's the blind leading the blind.<p>The question you should be asking yourself is, "If all these claims I keep seeing about X being as fast as Y are true, then why does software keep getting slower over time?"<p>(If you don't get what I am saying here, it might help to know that performance programmers consider malloc to be tremendously slow and don't use it except at startup or in cases when it is amortized by a factor of 1000 or more).
A comparison between Rust and modern C++ would be more interesting in my opinion. It seems that those languages are closer in the design goal space than either is to C.
What a well-written and interesting piece that gets to the point!<p>Compared to all the religious texts I've read about Rust, this is a huge breath of fresh air.<p>Thanks for sharing! Bookmarking this.
> Rust can't count on OSes having Rust's standard library built-in, so Rust executables bundle bits of Rust's standard library (300KB or more). Fortunately, it's a one-time overhead.<p>No, it's not, especially if you have multiple binaries. There are hacks, like using a multi-call single binary (forget about file-based privilege separation), or using an unmaintained fork of cargo to build a Rust toolchain capable of dynamically linking libstd. See:
<a href="https://users.rust-lang.org/t/link-the-rust-standard-library-dynamically/29175/4" rel="nofollow">https://users.rust-lang.org/t/link-the-rust-standard-library...</a> and <a href="https://github.com/johnthagen/min-sized-rust" rel="nofollow">https://github.com/johnthagen/min-sized-rust</a><p>I'd be interested in any up-to-date trick to do better than this.
FTR, there are some efforts to integrate GCC & Rust:<p><a href="https://github.com/antoyo/rustc_codegen_gcc" rel="nofollow">https://github.com/antoyo/rustc_codegen_gcc</a>
<a href="https://github.com/Rust-GCC/gccrs" rel="nofollow">https://github.com/Rust-GCC/gccrs</a>
<a href="https://github.com/sapir/gcc-rust/" rel="nofollow">https://github.com/sapir/gcc-rust/</a>
> alloca and C99 variable-length arrays<p>I remember making an argument on a mailing list against using alloca on the grounds that there's usually a stack-blowing bug hiding behind it. As I revisited the few examples I remembered of it being used correctly, I strengthened my argument by finding more stack-blowing bugs hiding behind uses of alloca.
> Both are "portable assemblers"<p>I don't tend to think of Rust as "portable assembly", and this is indeed one of the points where I think it differs the most from C. I think of "portable assembly" as being applicable to C, because it is some version of a "minimal" level of abstraction for a high-level language. Rust is very much a tool for abstraction, and one of the USPs of rust is that the compiler abstracts away the low-level details of memory management in a way which is not as costly as other automatic memory management strategies.<p>Maybe it's due to lack of experience, but with C code it's fairly easy to look at a block of code and imagine approximately which assembly would be generated. With highly abstract Rust code, like with template-heavy C++ code, I don't feel like that at all.
Code 'bloat' is a bizarre metric to use for anything unless you're on a platform with incredibly constrained executable memory like an embedded device.<p>The fact that Rust specialises its generic code according to the type it's used with is not some inherent disadvantage of generics. That's what they're <i>supposed</i> to do. By choosing not to specialise, you're actively making the decision to make your code <i>slower</i>. Rust has mechanisms for avoiding generic specialisation. They're called trait objects and they work brilliantly.<p>When you use void* in your data structures in C, you're not winning anything compared to Rust. You're just producing slower code that mimics the behaviour of Rust's trait objects, but more dangerously.<p>Code 'bloat' (otherwise known as 'specialising your code correctly to make it run faster') is not a reason to not use Rust in 2021, so please stop pretending that it is.
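For readers who haven't seen the two options side by side, here is a minimal sketch: the generic version is monomorphized (one specialized copy per concrete type, fast but "bloaty"), while the trait-object version is a single copy dispatched through a vtable, roughly what void* plus function pointers buys you in C, but type-checked.<p><pre><code>use std::fmt::Display;

// Monomorphized: the compiler emits a specialized copy for each T it is
// instantiated with, which enables inlining but duplicates code.
fn print_all_generic<T: Display>(items: &[T]) {
    for item in items {
        println!("{}", item);
    }
}

// Trait object: one copy of the function, dispatched through a vtable
// at runtime.
fn print_all_dyn(items: &[&dyn Display]) {
    for item in items {
        println!("{}", item);
    }
}

fn main() {
    print_all_generic(&[1, 2, 3]);   // instantiated for i32
    print_all_generic(&["a", "b"]);  // instantiated again for &str

    let a = 1;
    let b = String::from("two");
    let mixed: [&dyn Display; 2] = [&a, &b];
    print_all_dyn(&mixed);           // one function, any Display type
}
</code></pre>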
> For example, in C I'd be tempted to reuse a buffer allocated for one purpose for another purpose later (a technique known as HEARTBLEED).<p>You can do that in Java (with byte arrays) or in Common Lisp, so what is the point here? It is not common practice in Java, Lisp, C or C++.<p>> It's convenient to have fixed-size buffers for variable-size data (e.g. PATH_MAX) to avoid (re)allocation of growing buffers<p>This is because the OS/kernel/filesystem guarantees a maximum path size.<p>> Idiomatic Rust still gives a lot of control over memory allocation, and can do basics like memory pools, ... but in general it steers users towards "boring" use of memory.<p>The same is done by sane C libraries (e.g. glib).<p>> Every operating system ships some built-in standard C library that is ~30MB of code that C executables get for "free", e.g. a "Hello World" C executable can't actually print anything, it only calls the printf shipped with the OS.<p>printf is not shipped with the OS, but with the libc runtime. It doesn't have to be a shared runtime (the author should look into why libc is usually a shared library rather than the usual statically linked library), and you can use minimal implementations (musl) if you want static binaries with minimal size.<p>So are you saying Rust doesn't call (g)libc at all and makes kernel syscalls directly? Sure, you can avoid this print "overhead" in C with 3-4 lines of inline assembly, but why?<p>> Rust by default can inline functions from the standard library, dependencies, and other compilation units.<p>So can the C compiler.<p>> In C I'm sometimes reluctant to split files or use libraries, because it affects inlining and requires micromanagement of headers and symbol visibility.<p>Functions don't have to be in headers to be inlined.<p>> C libraries typically return opaque pointers to their data structures, to hide implementation details and ensure there's only one copy of each instance of the struct. This costs heap allocations and pointer indirections. Rust's built-in privacy, unique ownership rules, and coding conventions let libraries expose their objects by value, so that library users decide whether to put them on the heap or on the stack. Objects on the stack can be optimized very aggressively, and even optimized out entirely.<p>WTF? Stopped reading after this.<p>I find this post random nonsense and I'd urge the author to read a serious C book.
Human-friendliness and bug prevention are very important. Of course, everything in Rust can be created in C or assembler or in machine code, but the question is how feasible it is for a typical human to do it. Rust has a lot of potential, I think.
To practise Rust, I rewrote my small C99 library in it [1]. Performance is more or less the same, I only had to use unchecked array access in one small hot loop (details in README.md). I haven't ported multithreading yet, but I expect Rust's Rayon parallel iterators will likewise be comparable to OpenMP.<p>[1] <a href="https://github.com/GreatAttractor/libskry_r" rel="nofollow">https://github.com/GreatAttractor/libskry_r</a>
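For context, "unchecked array access" here means the standard <code>get_unchecked</code> escape hatch; a hedged sketch of the pattern (not the library's actual code):<p><pre><code>// Inside a hot loop whose indices are already known to be in range,
// `get_unchecked` skips the per-access bounds check at the cost of an
// `unsafe` block with a documented invariant.
fn sum_window(data: &[f32], start: usize, len: usize) -> f32 {
    assert!(start + len <= data.len()); // check the invariant once, up front
    let mut sum = 0.0;
    for i in start..start + len {
        // SAFETY: the assert above guarantees i < data.len()
        sum += unsafe { *data.get_unchecked(i) };
    }
    sum
}

fn main() {
    let v = vec![1.0f32; 1024];
    println!("{}", sum_window(&v, 16, 64));
}
</code></pre><p>Often an iterator or a sliced subrange lets the optimizer elide the checks without any unsafe at all, but in small hot loops the explicit version is sometimes the pragmatic fix.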
> There are other kinds of concurrency bugs, such as poor use of locking primitives causing higher-level logical race conditions or deadlocks, and Rust can't eliminate them, but they're usually easier to diagnose and fix.<p>Which is why so many people are creating formal verification languages and spending years in research to fix those... That just isn't true. It's a very complex problem that shows up everywhere from hardware (cache-coherency protocols) to the OS (atomics, locks) to higher-level constructs (commit-rollback in databases).<p>Consequently<p>> But the biggest potential is in ability to fearlessly parallelize majority of Rust code, even when the equivalent C code would be too risky to parallelize. In this aspect Rust is a much more mature language than C.<p>This couldn't be more wrong either. Rust doesn't help you write synchronization logic safely, because it doesn't reason about how you use locks, condition variables or atomics. You need formal verification to be fearless.
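As a concrete illustration of what the type system does and doesn't catch, here is a minimal, hypothetical sketch (not from the article or the parent): it compiles without warnings, has no data races, and can still deadlock because the two threads take the locks in opposite order.<p><pre><code>use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let t = thread::spawn(move || {
        let _x = a2.lock().unwrap();              // thread 1: A then B
        thread::sleep(Duration::from_millis(10));
        let _y = b2.lock().unwrap();              // waits for main thread
    });

    let _y = b.lock().unwrap();                   // main: B then A
    thread::sleep(Duration::from_millis(10));
    let _x = a.lock().unwrap();                   // waits for thread 1: deadlock
    t.join().unwrap();                            // never reached if both block
}
</code></pre>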
Shouldn’t this be Rust vs. C++? C++ has a lot more parallels to Rust. Both are big, complex, and safe languages that can be tuned for high performance. In fact, I would like to see more comparisons of Rust and C++ in the future.
I'm a Rust evangelist, but the article is titled "Speed of Rust vs. C" and doesn't seem to contain even one benchmark.<p>For fuck's sake.
> For example, in C I'd be tempted to reuse a buffer allocated for one purpose for another purpose later (a technique known as HEARTBLEED)<p>Pahaha
I prefer to have the great ideas in Rust ported over to C instead of rewriting everything in Rust. This approach would benefit all the existing software written in C, which I think is much larger than Rust in terms of both impact and code size.<p>Am I in the minority for holding this opinion?
It's just amusing that in this thread everyone who is critical or skeptical gets downvoted, even when they express themselves moderately. It shows what zealots Rust fanboys have become.
For parallelism, modern tooling like TSAN can close the gap somewhat. If you are planning to introduce threads, not testing with TSAN is silly at best.
A very biased comparison, without actual sources or numbers to back things up.<p>Even more surprising that it got to the front page.<p>Do people really have such low standards of quality on Hacker News too?
<a href="https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/rust.html" rel="nofollow">https://benchmarksgame-team.pages.debian.net/benchmarksgame/...</a> shows C is generally better
> While C is good for writing minimal code on byte-by-byte pointer-by-pointer level,<p>Billions of cars with multi-billion ECUs, practically every device running an OS, and several NASA rovers disagree.
The article stays at far too high a level and reads like marketing copy, even though the title sounds technical. For example:<p>"Rust enforces thread-safety of all code and data, even in 3rd party libraries, even if authors of that code didn't pay attention to thread safety. Everything either upholds specific thread-safety guarantees, or won't be allowed to be used across threads."
My experience is that languages survive not because of a particular feature, but because they are USEFUL in practice for producing software.<p>The fact that C is used in so many places speaks for itself about its usefulness. And this is done by the majority of C programmers writing software, instead of jumping on every forum to attack other languages or writing extended blog posts just to convince people that they "should" switch to the language they like.<p>Also, if you believe bounds checking is the most difficult thing in software development, it just means you haven't dealt with a sufficiently complex system yet, or you just pretend you have.<p>Something similar applies to locking: if you think naively putting pthread_mutex_lock and unlock around a data structure is hard, it just means you haven't touched the scenarios where C programmers resort to non-trivial locking mechanisms.
I appreciate the article, but it would be really nice if the author could add a timestamp to his blog posts. Without timestamps, it's impossible to know if any issue described in the article body still exists.<p>I didn't read it, because it might present outdated knowledge.
> "Clever" memory use is frowned upon in Rust. In C, anything goes.<p>No, it does not. If Rust programmers don't have discipline in C, other people have.<p>And don't drag out some random CVE numbers again. These are about a <i>fraction</i> of existing C projects, many of them were started 1980-2000.<p>It is an entirely different story if a project is started with sanitizers, Valgrind and best practices.<p>I'm not against Rust, except that they managed to take OCaml syntax and make it significantly worse. It's just ugly and looks like design by committee.<p>But the evangelism is exhausting. I also wonder why corporations are pushing Rust. Is it another method to take over C projects that they haven't assimilated yet?
A graph would be good. Any graph. Preferably multiple. Otherwise, this is all anecdotal. Show me why Rust wins, and how. Telling me "doubly-linked lists are slow" is not useful as a developer considering one of these two languages.