Nicholas Nethercote didn't just speed up Rust. He went in & did the dirty work of dredging through Firefox profiling<p>> It’s rare that a single micro-optimization is a big deal, but dozens and dozens of them are. Persistence is key<p>Persistence is work. Mozilla is cutting the people who put in the work of staving off bitrot
> The improvements I did are mostly what could be described as “bottom-up micro-optimizations”.<p>> I also did two larger “architectural” or “top-down” changes<p>My summer intern started doing profiling work on compile times with clang: <a href="https://lists.llvm.org/pipermail/llvm-dev/2020-July/143012.html" rel="nofollow">https://lists.llvm.org/pipermail/llvm-dev/2020-July/143012.h...</a><p>Some things we found:<p>* For a large C codebase like the Linux kernel, we're spending far more time in the front end (clang) than the back end (LLVM). This was surprising given rustc's experience with LLVM. Experimental patches simplifying header-inclusion dependencies in the kernel's sources can potentially cut build times by ~30% with EITHER gcc or clang.<p>* There's a fair amount of low-hanging fruit that stands out from bottom-up profiling. We've just started fixing these; the most immediate was that 13% of a Linux kernel build was spent recomputing target information for every inline-assembly statement, in a way that was accidentally quadratic and not memoized when it could have been (in fact, my intern wrote patches to compute these at compile time, even). Fixed in clang-11. That was just the first issue found and fixed, but we have a good list of what to look at next. The only real samples showing up in the llvm namespace (vs. clang) are llvm's StringMap bucket lookups, but those come from clang's preprocessor.<p>* GCC beats the crap out of Clang in compile times of the Linux kernel; we need to start looking for top-down optimizations that do less work overall. I suspect we may be able to get some wins out of lazy parsing, at the cost of missing diagnostics (warnings and errors) in dead code.<p>* Don't speculate on what could be slow; profiles <i>will</i> surprise you.<p>> Using instruction counts to compare the performance of two entirely different programs (e.g. 
GCC vs clang) would be foolish, but it’s reasonable to use them to compare the performance of two almost-identical programs<p>Agree. We prefer cycle counts via LBR, but only for comparing diffs of the same program, as you describe.
> I was surprised by how many people said they enjoyed reading this blog post series. The appetite for “I squeezed some more blood from this stone” tales is high.<p>There's something satisfying about seeing code get cleaned up and optimized. I also enjoyed following the LibreOffice commits back when they were in their "heavy cleanup" phase after it became clear OpenOffice was dead (which meant they didn't have to worry about diverging from the upstream anymore).
> Contrary to what you might expect, instruction counts have proven much better than wall times when it comes to detecting performance changes on CI, because instruction counts are much less variable than wall times (e.g. ±0.1% vs ±3%; the former is highly useful, the latter is barely useful). Using instruction counts to compare the performance of two entirely different programs (e.g. GCC vs clang) would be foolish, but it’s reasonable to use them to compare the performance of two almost-identical programs (e.g. rustc before PR #12345 and rustc after PR #12345). It’s rare for instruction count changes to not match wall time changes in that situation. If the parallel version of the rustc front-end ever becomes the default, it will be interesting to see if instruction counts continue to be effective in this manner.<p>This is a supremely surprising conclusion, especially in 2020. Is instruction count really still that tightly tied to wall-clock time? I would have thought that some instructions are slower than others (especially on x86), so that a sequence of more, individually faster instructions could beat a single slower one. Similarly, cache effects and data dependencies can make more instructions run faster than fewer.<p>I <i>think</i> what the author is saying is that when evaluating micro-optimizations, instruction counts are still pretty valuable, because you're making a small, intentional change and evaluating its impact, and <i>usually</i> the correlation holds. The dashboard clearly still measures wall-clock time, since comparing instruction counts alone over time would be misleading.<p>I'm curious whether the Rust team has evaluated Stabilizer to be more robust about the optimizations they choose: <a href="https://emeryberger.com/research/stabilizer/" rel="nofollow">https://emeryberger.com/research/stabilizer/</a>
It's sad to see your rustc contributions stop, nnethercote. I guess rustc now has to run an experiment on how quickly performance improves without you :(.<p>IMO compiler speed still remains the main ergonomics hurdle in developing Rust software.
Nnethercote writes the best blog on performance profiling that I've ever seen. Congrats on your huge skill set; Firefox, Chromium, and programming languages in general need more people like you.
Thank you for your excellent work over the years! Your efforts have gone a long way toward making Rust enjoyable to write =)<p>If there's a smart Rust-using company out there, it should definitely hire nnethercote to continue this excellent work!
> ... Perhaps this relates to the high level of interest in Rust ...<p>I would have loved these blog posts regardless of what code was actually being optimised.<p>They offer a fascinating glimpse into a workflow that requires expertise, experimentation and creativity.<p>Sadly something that most developers can't engage in very often, due to the nature of their work or time constraints.
This is a fascinating blog series. I've been dabbling in Rust lately and really appreciate how powerful and helpful the compiler is even to beginners.<p>> Due to recent changes at Mozilla my time working on the Rust compiler is drawing to a close.<p>This sort of statement makes me a bit worried though. I don't mean to echo what a lot of the community has said over the past month, but I really hope that development on Rust doesn't stagnate because of the layoffs.