
How to speed up the Rust compiler one last time

319 points by nnethercote, over 4 years ago

14 comments

__s over 4 years ago
Nicholas Nethercote didn't just speed up Rust. He went in and did the dirty work of dredging through Firefox profiling.

> It’s rare that a single micro-optimization is a big deal, but dozens and dozens of them are. Persistence is key

Persistence is work. Mozilla is cutting the people who put in the work of staving off bitrot.
ndesaulniers over 4 years ago
> The improvements I did are mostly what could be described as “bottom-up micro-optimizations”.

> I also did two larger “architectural” or “top-down” changes

My summer intern started doing profiling work on compile times with clang: https://lists.llvm.org/pipermail/llvm-dev/2020-July/143012.html

Some things we found:

* For a large C codebase like the Linux kernel, we're spending way more time in the front end (clang) than the back end (llvm). This was surprising based on rustc's experience with llvm. Experimental patches simplifying header-inclusion dependencies in the kernel's sources can potentially cut build times by ~30% with EITHER gcc or clang.

* There's a fair amount of low-hanging fruit that stands out from bottom-up profiling. We've just started fixing these, but the most immediate was 13% of a Linux kernel build spent recomputing target information for every inline assembly statement, in a way that was accidentally quadratic and not memoized when it could be (in fact, my intern wrote patches to compute these at compile time, even). Fixed in clang-11. That was just the first found and fixed, but we have a good list of what to look at next. The only real samples showing up in the llvm namespace (vs clang) are llvm's StringMap bucket lookups, but those come from clang's preprocessor.

* GCC beats the crap out of Clang in compile times of the Linux kernel; we need to start looking for top-down optimizations to do less work overall. I suspect we may be able to get some wins out of lazy parsing, at the cost of missing diagnostics (warnings and errors) in dead code.

* Don't speculate on what could be slow; profiles *will* surprise you.

> Using instruction counts to compare the performance of two entirely different programs (e.g. GCC vs clang) would be foolish, but it’s reasonable to use them to compare the performance of two almost-identical programs

Agree. We prefer cycle counts via LBR, but only for comparing diffs of the same program, as you describe.
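To illustrate the memoization fix described above, here is a minimal Rust sketch of the pattern: cache an expensive per-statement computation behind a key so it runs once per distinct input rather than once per inline-asm statement. All names are hypothetical; this is not the actual clang patch, just the general idea.

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for the expensive per-statement computation
/// (in the clang case, target/register info for inline assembly).
fn compute_target_info(triple: &str) -> Vec<String> {
    // Pretend this walks large tables; the point is that it is costly.
    (0..1000).map(|i| format!("{}-reg{}", triple, i)).collect()
}

/// Memoized wrapper: the result is computed once per distinct key and
/// reused for every subsequent statement with the same key.
struct TargetInfoCache {
    cache: HashMap<String, Vec<String>>,
}

impl TargetInfoCache {
    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    fn get(&mut self, triple: &str) -> &Vec<String> {
        self.cache
            .entry(triple.to_string())
            .or_insert_with(|| compute_target_info(triple))
    }
}

fn main() {
    let mut cache = TargetInfoCache::new();
    // Thousands of inline-asm statements, one target: the expensive
    // computation now runs once instead of once per statement.
    for _ in 0..10_000 {
        let _info = cache.get("x86_64-unknown-linux-gnu");
    }
    println!("cached entries: {}", cache.cache.len());
}
```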
cesarb over 4 years ago
> I was surprised by how many people said they enjoyed reading this blog post series. The appetite for “I squeezed some more blood from this stone” tales is high.

There's something satisfying about seeing code get cleaned up and optimized. I also enjoyed following the LibreOffice commits back when they were in their "heavy cleanup" phase after it became clear OpenOffice was dead (which meant they didn't have to worry about diverging from the upstream anymore).
vlovich123 over 4 years ago
> Contrary to what you might expect, instruction counts have proven much better than wall times when it comes to detecting performance changes on CI, because instruction counts are much less variable than wall times (e.g. ±0.1% vs ±3%; the former is highly useful, the latter is barely useful). Using instruction counts to compare the performance of two entirely different programs (e.g. GCC vs clang) would be foolish, but it’s reasonable to use them to compare the performance of two almost-identical programs (e.g. rustc before PR #12345 and rustc after PR #12345). It’s rare for instruction count changes to not match wall time changes in that situation. If the parallel version of the rustc front-end ever becomes the default, it will be interesting to see if instruction counts continue to be effective in this manner.

This is a supremely surprising conclusion, especially in 2020. Is instruction count really still that closely tied to wall-clock time? I would have thought that some instructions can be slower than others (especially on x86), so that several individually faster instructions could beat one slower instruction. Similarly, cache effects and data dependencies can make more instructions run faster than fewer instructions.

I *think* what the author is trying to say is that when evaluating micro-optimizations, cycle counts are still pretty valuable, because you're making a small, intentional change and evaluating its impact, and *usually* the correlation holds. The dashboard clearly still measures wall clock, since just comparing instruction counts over time would be misleading.

I'm curious whether the Rust team has evaluated Stabilizer to be more robust about the optimizations they choose: https://emeryberger.com/research/stabilizer/
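To make the quoted workflow concrete, here is a rough Rust sketch that shells out to Linux `perf stat` to compare user-mode instruction counts for two near-identical binaries running the same job, in the spirit of the before/after comparisons described above. It assumes `perf` is installed; the binary names and arguments are placeholders, not the actual rustc-perf tooling.

```rust
// Rough sketch: compare instruction counts of two near-identical builds.
use std::process::Command;

fn count_instructions(binary: &str, args: &[&str]) -> Option<u64> {
    // `-x,` -> CSV output, `-e instructions:u` -> user-mode instructions.
    // perf stat writes its report to stderr.
    let out = Command::new("perf")
        .args(["stat", "-x,", "-e", "instructions:u", "--", binary])
        .args(args)
        .output()
        .ok()?;
    let report = String::from_utf8_lossy(&out.stderr);
    let line = report.lines().find(|l| l.contains("instructions"))?;
    line.split(',').next()?.trim().parse().ok()
}

fn main() {
    // Hypothetical "before" and "after" compiler builds running the same job.
    let before = count_instructions("./rustc-before", &["--edition=2018", "lib.rs"]);
    let after = count_instructions("./rustc-after", &["--edition=2018", "lib.rs"]);
    if let (Some(b), Some(a)) = (before, after) {
        let delta = (a as f64 - b as f64) / b as f64 * 100.0;
        // Instruction counts are stable enough (~±0.1% run-to-run) that even a
        // fraction-of-a-percent delta usually reflects a real change.
        println!("before: {}, after: {}, delta: {:+.2}%", b, a, delta);
    } else {
        eprintln!("perf stat failed or produced no instruction count");
    }
}
```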
est31 over 4 years ago
It's sad to see your rustc contributions stop, nnethercote. I guess rustc now has to run an experiment on how quickly performance improves without you :(

IMO compiler speed still remains the main ergonomics hurdle in developing Rust software.
steveklabnik over 4 years ago
Thanks for all you've done over the years here. I'm sad you won't be able to do more of it.
The_rationalist over 4 years ago
nnethercote runs the best blog on performance profiling I've ever seen. Congrats on your huge skill set; Firefox, Chromium, and programming languages need more people like you.
Ar-Curunir over 4 years ago
Thank you for your excellent work over the years! Your efforts have gone a long way toward making Rust enjoyable to write =)

If there's any smart Rust-using company out there, it should definitely hire nnethercote to continue this excellent work!
alex_reg over 4 years ago
> ... Perhaps this relates to the high level of interest in Rust ...

I would have loved these blog posts regardless of what code was actually being optimised.

They offer a fascinating glimpse into a workflow that requires expertise, experimentation and creativity.

Sadly, it's something most developers can't engage in very often, due to the nature of their work or time constraints.
oshea64bit over 4 years ago
This is a fascinating blog series. I've been dabbling in Rust lately and really appreciate how powerful and helpful the compiler is, even to beginners.

> Due to recent changes at Mozilla my time working on the Rust compiler is drawing to a close.

This sort of statement makes me a bit worried, though. I don't mean to echo what a lot of the community has said over the past month, but I really hope that development on Rust doesn't stagnate because of the layoffs.
jimbob45 over 4 years ago
How hasn’t Google taken over and hired the Rust team? Weren’t they practically funding them by funding their parent, Mozilla?
stackzero over 4 years ago
gg man. Really enjoyed your posts since I started Rust.
xiaodai over 4 years ago
Hmmm... Rust needs a lot more of this, given its reputation for slowness.
k__ over 4 years ago
The title made it sound like the Rust compiler is at its performance limit and they're doing the last possible optimization.