
How to speed up the Rust compiler in 2022

174 points by nnethercote, about 3 years ago

9 comments

nicoburns, about 3 years ago
I would like to add, for anyone who doesn't use Rust, that these performance improvements in the Rust compiler are not just well written up, but are very meaningful in the real world. Subjectively (I've not measured scientifically), I'd say Rust compile times are roughly half what they were a few years ago. Between that, incremental compilation, LLD, and a new machine, my experience of Rust compile times has been completely revolutionised. There's still more work to be done, but a lot has been achieved here.
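For anyone wanting to try the linker part of this, below is a minimal sketch of opting into LLD through a Cargo config. The target triple and the clang/lld toolchain are assumptions, not something the comment specifies.

```toml
# .cargo/config.toml — illustrative sketch: drive linking through clang
# and ask it to use LLD instead of the default system linker.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```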
brutal_chaos_, about 3 years ago
I tried... and it didn't work.

Thank you. I appreciate the description of what was tried and didn't work, or didn't work as well as hoped. We need more of this so I don't waste time (and neither do you).
twic, about 3 years ago
> #93066: The Decoder trait used for metadata decoding was fallible, using Result throughout. But decoding failures should only happen if something highly unexpected happens (e.g. metadata is corrupted) and on failure the calling code would just abort. This PR changed Decoder to be infallible throughout—panicking immediately instead of panicking slightly later—thus avoiding lots of pointless Result propagation, for wins across many benchmarks of up to 2%.

Interesting insight into the cost of Result-based error handling.
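To make the pattern concrete, here is an illustrative sketch (not rustc's actual Decoder trait; the function names and error type are invented) of the same read path written fallibly versus infallibly:

```rust
// Fallible: every caller must propagate the error with `?`, even though
// the only realistic response to corrupt metadata is to give up.
fn read_u32_fallible(buf: &[u8], pos: &mut usize) -> Result<u32, String> {
    let bytes = buf
        .get(*pos..*pos + 4)
        .ok_or_else(|| "unexpected end of metadata".to_string())?;
    *pos += 4;
    Ok(u32::from_le_bytes(bytes.try_into().unwrap()))
}

// Infallible: panic immediately on corruption. Callers stay simple, and no
// Result has to be constructed, matched, and propagated on the hot path.
fn read_u32_infallible(buf: &[u8], pos: &mut usize) -> u32 {
    let bytes = buf
        .get(*pos..*pos + 4)
        .expect("metadata corrupted: unexpected end of input");
    *pos += 4;
    u32::from_le_bytes(bytes.try_into().unwrap())
}

fn main() {
    let buf = 42u32.to_le_bytes();
    let mut pos = 0;
    assert_eq!(read_u32_fallible(&buf, &mut pos), Ok(42));
    pos = 0;
    assert_eq!(read_u32_infallible(&buf, &mut pos), 42);
}
```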
tentacleuno, about 3 years ago
I really wonder how much these changes, especially seeing the numbers (4%, 5%), add up at scale. For example, from reading a comment on Hacker News[0], I'm told that small changes can save companies millions and millions on servers. It saves power, too.

[0]: https://news.ycombinator.com/item?id=30461201
ncmncm, about 3 years ago
Anybody interested in speeding up the Rust compiler should be looking into generating a new JITted parser after each macro definition, and jumping into it to parse the remaining (and any other affected) code.

The time to compile a new parser ought to be much less than feeding literally all tokens into a runtime macro-definition interpreter.

Similar infrastructure might help with generics type calculus.
tiddles, about 3 years ago
These posts are always a pleasure to read. It's great to see continuous work on improving compiler performance, especially with such good results.
mrich, about 3 years ago
Recently I integrated the mold linker into a Rust project, and linking of the main executable saw a 3.35x speedup. This is pretty helpful for incremental builds, where you typically edit one file and then build, so you have one compile and then the link. There is no build-job parallelism at that point, so every second counts :)
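For anyone wanting to try the same thing, here is a minimal sketch of wiring mold in via Cargo config; the target triple is an assumption, and the speedup will of course vary by project:

```toml
# .cargo/config.toml — illustrative: use clang as the linker driver and
# have it invoke mold (needs a clang new enough to accept -fuse-ld=mold).
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

mold also ships a `mold -run <command>` wrapper that intercepts the linker invocation without touching the Cargo config.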
mamcx, about 3 years ago
One thing I wish were at least explored is how to make Rust simpler.

I bet Rust's syntax and some of its complex, divergent approaches to typing have major implications for how Rust compiles (you can see why when comparing to Pascal). Modules and macros must also be, IMHO, major culprits here.

The other thing: Syn, Quote, Serde? That stuff should be brought in-house; at minimum, the bare traits and some basic machinery.

Another thing: the orphan rule means there are a lot of places with redundant code tying types together, which causes extra compilation effort (because n crates must pull in n crates just to impl traits!).
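To illustrate the orphan-rule point: a crate may only implement a trait for a type if it defines at least one of the two, so downstream crates end up writing newtype wrappers and duplicate glue impls. A toy sketch follows; the `Pretty` and `Matrix` names are invented, and in a real project the two modules would be separate third-party crates:

```rust
// Stand-ins for two third-party crates we don't control. In this single
// file they are just modules, but across real crate boundaries the
// commented-out impl below would be rejected by the orphan rule (E0117).
mod trait_crate {
    pub trait Pretty {
        fn pretty(&self) -> String;
    }
}

mod type_crate {
    pub struct Matrix(pub Vec<f64>);
}

use trait_crate::Pretty;
use type_crate::Matrix;

// If `Pretty` and `Matrix` really came from other crates, this would not
// compile, because neither the trait nor the type is local:
// impl Pretty for Matrix { ... }

// The usual workaround: a local newtype wrapper. Every downstream crate
// that needs the same glue declares and compiles something like this again.
struct MyMatrix(Matrix);

impl Pretty for MyMatrix {
    fn pretty(&self) -> String {
        let Matrix(data) = &self.0;
        format!("matrix with {} elements", data.len())
    }
}

fn main() {
    let m = MyMatrix(Matrix(vec![1.0, 2.0, 3.0]));
    println!("{}", m.pretty());
}
```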
ZeroGravitas, about 3 years ago
Does anyone do this kind of analysis across well-used libraries for a language?

Similar to how this article compiled a whole bunch of Rust projects and flagged the areas that were slow to compile for specific crates, does anyone do analysis showing that a particular dependency shows up in a lot of hot paths when running benchmarks or test code, and then get someone to apply this kind of systems-level thinking to it?

It seems at least one fix was for a hashing library dependency and may have wider benefits, but surely that applies to more things too?