
How to speed up the Rust compiler in 2022

174 points, by nnethercote, about 3 years ago

9 comments

nicoburns, about 3 years ago
I would like to add for anyone who doesn't use Rust, that these performance improvements in the Rust compiler are not just well written up, but are very meaningful in the real world. Subjectively (I've not measured scientifically), I'd say Rust compile times are ~1/2 what they were a few years ago. With that, incremental compilation, LLD, and a new machine my experience of Rust compile times has been completely revolutionised. There's still more work to be done, but a lot has been achieved here.
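For readers who haven't tried the setup this comment describes, a minimal sketch of opting into incremental compilation and the LLD linker for a Cargo project might look like the following (the flags are standard rustc/cargo options, but the exact linker setup depends on your platform and toolchain):

```shell
# Illustrative dev-build setup; assumes clang and lld are installed.
export CARGO_INCREMENTAL=1                    # incremental compilation (already the default for dev profiles)
export RUSTFLAGS="-C link-arg=-fuse-ld=lld"   # ask the linker driver to use LLD
cargo build
```

The same can be made permanent per-target in `.cargo/config.toml` rather than via environment variables.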
brutal_chaos_, about 3 years ago
"I tried... and it didn't work."

Thank you. I appreciate the description of what was tried and didn't work, or didn't work as well as hoped. We need more of this, so I don't waste time (and neither do you).
twic, about 3 years ago
> #93066: The Decoder trait used for metadata decoding was fallible, using Result throughout. But decoding failures should only happen if something highly unexpected happens (e.g. metadata is corrupted) and on failure the calling code would just abort. This PR changed Decoder to be infallible throughout, panicking immediately instead of panicking slightly later, thus avoiding lots of pointless Result propagation, for wins across many benchmarks of up to 2%.

Interesting insight into the cost of Result-based error handling.
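A minimal sketch of the two styles being compared. The types here are hypothetical, not rustc's actual Decoder trait; the point is that when a failure is unrecoverable anyway, panicking at the fault site lets every call return the value directly instead of threading a Result through each caller:

```rust
// Hypothetical metadata reader illustrating fallible vs. infallible decoding.
struct Reader<'a> {
    data: &'a [u8],
    pos: usize,
}

impl<'a> Reader<'a> {
    // Fallible style: every call returns a Result the caller must propagate.
    fn read_u8_fallible(&mut self) -> Result<u8, String> {
        let b = *self
            .data
            .get(self.pos)
            .ok_or_else(|| "metadata corrupted".to_string())?;
        self.pos += 1;
        Ok(b)
    }

    // Infallible style: corruption is unrecoverable, so panic at the point
    // of failure and return the value directly.
    fn read_u8(&mut self) -> u8 {
        let b = self.data[self.pos]; // panics on out-of-bounds access
        self.pos += 1;
        b
    }
}

fn main() {
    let mut r = Reader { data: &[1, 2, 3], pos: 0 };
    assert_eq!(r.read_u8(), 1);
    assert_eq!(r.read_u8_fallible(), Ok(2));
}
```

The infallible version also shrinks the return type from `Result<u8, String>` to `u8`, which is where the "pointless Result propagation" overhead disappears.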
tentacleuno, about 3 years ago
I really wonder how much these changes, especially seeing the numbers (4%, 5%), add up at scale. For example, from reading a comment on Hacker News[0], I'm told that small changes can save companies millions and millions on servers. It saves power, too.

[0]: https://news.ycombinator.com/item?id=30461201
ncmncm, about 3 years ago
Anybody interested in speeding up the Rust compiler should be looking into generating a new JITted parser after each macro definition, and jumping into it to parse the remaining (and any other affected) code.

The time to compile a new parser ought to be much less than feeding literally all tokens into a runtime macro-definition interpreter.

Similar infrastructure might help with the generics type calculus.
tiddles, about 3 years ago
These posts are always a pleasure to read. It's great to see continuous work on improving compiler performance, especially with such good results.
mrich, about 3 years ago
Recently I integrated the mold linker into a Rust project and linking of the main executable saw a speedup of 3.35x. This is pretty helpful for incremental builds where you typically edit one file and then build, so you have one compile and then the link. There is no build-job parallelism at that point so every second counts :)
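For anyone wanting to try the same, here is one common way to wire mold into a Cargo build; the comment doesn't say how the integration was done, so this is a sketch, and the target triple is illustrative (Linux with clang and mold installed):

```shell
# Option 1: wrap a single build so mold intercepts all linker invocations
# (`mold -run` is part of mold's documented CLI).
mold -run cargo build

# Option 2: make it permanent in .cargo/config.toml:
#   [target.x86_64-unknown-linux-gnu]
#   linker = "clang"
#   rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

Option 2 applies to every build without changing how you invoke cargo, which suits the edit-compile-link loop the comment describes.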
mamcx, about 3 years ago
One thing I wish were at least explored is how to make Rust simpler.

I bet Rust's syntax and some of its complex, divergent typing features have major implications for how Rust compiles (you can see why when comparing to Pascal). Modules and macros must also, IMHO, be major culprits here.

Another thing: Syn, Quote, Serde? That stuff should be brought into the language itself, at minimum the bare traits and some basic machinery.

And the orphan rule means there are lots of places with redundant code tying types together, which causes extra compilation effort (because n crates must pull in n crates just to impl traits!).
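A minimal sketch of the orphan-rule situation the last paragraph alludes to: a crate cannot implement a foreign trait for a foreign type, so each downstream crate ends up writing its own newtype wrapper (the `Millis` wrapper here is hypothetical, chosen only to illustrate the pattern):

```rust
use std::fmt;
use std::time::Duration;

// We cannot write `impl fmt::Display for Duration` here: both the trait and
// the type are foreign to this crate, which the orphan rule forbids.
// The standard workaround is a local newtype.
struct Millis(Duration);

impl fmt::Display for Millis {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} ms", self.0.as_millis())
    }
}

fn main() {
    let d = Millis(Duration::from_millis(1500));
    println!("{d}"); // prints "1500 ms"
}
```

Every crate that wants a `Display` for `Duration` must repeat a wrapper like this, which is the redundant glue code the comment says adds up to extra compilation effort.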
ZeroGravitas, about 3 years ago
Does anyone do this kind of analysis across widely used libraries for a language?

Similar to how this article compiled a whole bunch of Rust projects and flagged the areas that were slow to compile for specific crates, does anyone do analysis showing that a given dependency shows up in a lot of hot paths when running benchmarks or test code, and then get someone to apply this kind of systems-level thinking to it?

It seems at least one fix was for a hashing library dependency and may have wider benefits, but surely that applies to more things too?