Cranelift code generation comes to Rust

444 points by ridruejo about 1 year ago

20 comments

chrisaycock about 1 year ago
This article provides an excellent overview of the latest in *speed of optimizer* vs. *quality of optimization*.

In particular, copy-and-patch compilation is still the fastest approach because it uses pre-compiled code, though it leaves little room for optimization.

Cranelift uses e-graphs to represent equivalences on the IR. This allows for more optimizations than the copy-and-patch approach.

Of course, the most optimized output is going to come from a more traditional compiler toolchain like LLVM or GCC. But for users who want "fast enough" output as quickly as possible, newer compiler techniques provide a promising alternative.
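Cranelift's e-graph machinery is internal to the compiler, but the standalone `egg` crate implements the same equality-saturation idea. A minimal sketch using `egg` (not Cranelift's own code) of how rewrites record equivalences instead of destructively replacing the IR:

```rust
use egg::{rewrite as rw, AstSize, Extractor, RecExpr, Rewrite, Runner, SymbolLang};

fn main() {
    // Each rule only records an equivalence; applying it never discards the
    // original form, it just merges both forms into one equivalence class.
    let rules: Vec<Rewrite<SymbolLang, ()>> = vec![
        rw!("commute-add"; "(+ ?x ?y)" => "(+ ?y ?x)"),
        rw!("add-zero";    "(+ ?x 0)"  => "?x"),
        rw!("mul-one";     "(* ?x 1)"  => "?x"),
    ];

    // Start from (0 + (a * 1)) and run rewrites until saturation.
    let expr: RecExpr<SymbolLang> = "(+ 0 (* a 1))".parse().unwrap();
    let runner = Runner::default().with_expr(&expr).run(&rules);

    // Extraction then picks the cheapest member of the root's class,
    // regardless of the order in which the rules happened to fire.
    let extractor = Extractor::new(&runner.egraph, AstSize);
    let (_cost, best) = extractor.find_best(runner.roots[0]);
    println!("best: {}", best); // prints "a"
}
```
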
cube2222 about 1 year ago
Slightly off-topic, but if you fancy writing compilers in your free time, Cranelift has a great Rust library[0] for doing code generation - it’s a pleasure to use!

[0]: https://docs.rs/cranelift-frontend/0.105.3/cranelift_frontend/index.html
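For a taste of the API, here is a rough sketch based on the crate's documented example (item paths can shift between versions, so treat this as illustrative rather than exact) that builds the Cranelift IR for a function computing `x + 1`:

```rust
use cranelift_codegen::ir::types::I32;
use cranelift_codegen::ir::{AbiParam, Function, InstBuilder, Signature, UserFuncName};
use cranelift_codegen::isa::CallConv;
use cranelift_frontend::{FunctionBuilder, FunctionBuilderContext};

fn main() {
    // Declare the signature: fn(i32) -> i32.
    let mut sig = Signature::new(CallConv::SystemV);
    sig.params.push(AbiParam::new(I32));
    sig.returns.push(AbiParam::new(I32));

    let mut func = Function::with_name_signature(UserFuncName::user(0, 0), sig);
    let mut fb_ctx = FunctionBuilderContext::new();
    let mut builder = FunctionBuilder::new(&mut func, &mut fb_ctx);

    // A single basic block that receives the function's parameters.
    let block = builder.create_block();
    builder.append_block_params_for_function_params(block);
    builder.switch_to_block(block);
    builder.seal_block(block);

    // Emit x + 1 and return it.
    let x = builder.block_params(block)[0];
    let one = builder.ins().iconst(I32, 1);
    let sum = builder.ins().iadd(x, one);
    builder.ins().return_(&[sum]);
    builder.finalize();

    // Print the textual form of the generated IR.
    println!("{}", func.display());
}
```
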
ad-ops about 1 year ago
I see that there are many comments on full debug builds, but for me the most important difference is incremental build times when making minor changes. In my opinion this is what speeds up development iterations.

Here are my build times when making a trivial change to a print statement in a root function, comparing nightly dev vs. adding Cranelift + mold, for rust-analyzer[0] (347_290 LoC) and gleam[1] (76_335 LoC):

    $ time cargo build
       Compiling rust-analyzer v0.0.0 (/home/user/repos/rust-analyzer/crates/rust-analyzer)

    # nightly
        Finished `dev` profile [unoptimized] target(s) in 6.60s
    cargo build  4.18s user 2.51s system 100% cpu 6.650 total

    # cranelift+mold
        Finished `dev` profile [unoptimized] target(s) in 2.25s
    cargo build  1.77s user 0.36s system 92% cpu 2.305 total

       Compiling gleam v1.0.0 (/home/user/repos/gleam/compiler-cli)

    # nightly
        Finished `dev` profile [unoptimized + debuginfo] target(s) in 4.69s
    cargo build --bin gleam  3.02s user 1.74s system 100% cpu 4.743 total

    # cranelift+mold
        Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.99s
    cargo build --bin gleam  0.71s user 0.20s system 88% cpu 1.033 total

For me this is the most important metric, and it shows a huge improvement. If I compare it to Go building Terraform[2] (371_594 LoC), it is looking promising. The comparison is a bit unfair, since for Go it is the release build, which is really nice in CI/CD. I love Go compilation times, and I thought it would be nice to compare with another language to show the huge improvements that Rust has made.

    $ time go build
    go build  3.62s user 0.76s system 171% cpu 2.545 total

I was looking forward to the parallel front-end[3], but I have not seen any improvement for these small changes.

[0] https://github.com/rust-lang/rust-analyzer

[1] https://github.com/gleam-lang/gleam

[2] https://github.com/hashicorp/terraform

[3] https://blog.rust-lang.org/2023/11/09/parallel-rustc.html

*edit: code comments & links + making it easier to see the differences
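For anyone wanting to reproduce a setup like this, the Cranelift backend and mold are usually wired up through cargo configuration roughly as follows. This is a sketch, not an exact recipe: the `codegen-backend` keys are nightly-only, the toolchain also needs the `rustc-codegen-cranelift-preview` rustup component, and the linker stanza assumes an x86_64 Linux target.

```toml
# .cargo/config.toml (sketch, nightly toolchain assumed)
[unstable]
codegen-backend = true

[profile.dev]
codegen-backend = "cranelift"

# Use mold as the linker, driven through clang.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```
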
diggan about 1 year ago
Tried out the instructions from the article on a tiny Bevy project, and compared it to a "normal" build:

> cargo build --release  23.93s user 22.85s system 66% cpu 1:09.88 total

> cargo +nightly build -Zcodegen-backend  23.52s user 21.98s system 68% cpu 1:06.86 total

Seems just marginally faster than a normal release build. I wonder if there is something particular about Bevy that makes this so? The author of the article mentions a 40% difference in build speed, but I'm not seeing anything near that.

Edit: I just realized I'm caching my release builds with sccache and a local NAS, hence the release builds being as fast as the Cranelift + debug builds. Trying it again with just debug builds and without any caching:

> cargo +nightly build  1997.35s user 200.38s system 1878% cpu 1:57.02 total

> cargo +nightly build -Zcodegen-backend  280.96s user 73.06s system 657% cpu 53.850 total

Definitely an improvement once I realized what I did wrong: about half the time spent compiling now :) Neat!
CodesInChaos about 1 year ago
You can use different backends and optimization levels for different crates. It often makes sense to use optimized LLVM builds for dependencies, and debug LLVM or even Cranelift for your own code.

See https://www.reddit.com/r/rust/comments/1bhpfeb/vastly_improved_recompile_times_in_rust_with/
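In Cargo terms that split is expressed with profile overrides; a sketch (the `codegen-backend` profile key is nightly-only and needs the corresponding `cargo-features` opt-in):

```toml
# Cargo.toml (sketch): unoptimized Cranelift for the workspace's own code,
# optimized LLVM for all dependencies.
cargo-features = ["codegen-backend"]

[profile.dev]
codegen-backend = "cranelift"
opt-level = 0

[profile.dev.package."*"]
codegen-backend = "llvm"
opt-level = 3
```
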
metadat about 1 year ago
The Equality Graphs link [0] led me to discover ESC/Java [1] [2]. Has anyone actually tried or had any success with ESC/Java? It's piqued my curiosity to compare it with SpotBugs (formerly known as FindBugs).

[0] https://en.wikipedia.org/wiki/E-graph

[1] https://en.wikipedia.org/wiki/ESC/Java

[2] https://www.kindsoftware.com/products/opensource/escjava2/
Tehnix about 1 year ago
Very excited for Cranelift for debug builds to speed up development iteration, in particular for WASM/frontend Rust, where iteration speed is competing with the new era of Rust tooling for JS, which sometimes lands builds in under a second (iteration speed in frontend work is crucial).

Sadly, it does not yet support ARM macOS, so us M1-M3 users will have to wait a bit :/
Deukhoofd about 1 year ago
Does anyone by chance have benchmarks of runtime (so not the compile time) when using Cranelift? I'm seeing a mention of "twice as slow" in the article, but that's based on data from 2020. Wondering if it has substantially improved since then.
mmoskal about 1 year ago
> JIT compilers often use techniques, such as speculative optimizations, that make it difficult to reuse the compiler outside its original context, since they encode so many assumptions about the specific language for which they were designed.

> The developers of Cranelift chose to use a more generic architecture, which means that Cranelift is usable outside of the confines of WebAssembly.

One would think this has more to do with Wasm being the source language: as it's fairly generic (compared to JS or Python), there are no language-specific assumptions to encode.

Great article though. It's quite interesting to see e-matching used in compilers; it took me down memory lane (and I found myself cited on the Wikipedia page for e-graphs).
posix_monad about 1 year ago
Can anyone explain why Cranelift is expected to be faster than LLVM? And why those improvements can't also be applied to LLVM?
namuol about 1 year ago
Is there no native support for M1-M3 Macs currently, and no Windows support either?

It's unclear what the roadmap is there, as this update from the most active contributor is inconclusive:

> Windows support has been omitted for now. And macOS currently only supports x86_64, as Apple invented their own calling convention for arm64, for which variadic functions can't easily be implemented as a hack. If you are using an M1 processor, you could try installing the x86_64 version of rustc and then using Rosetta 2. Rosetta 2 will hurt performance though, so you will need to try whether it is faster than the LLVM backend with arm64 rustc.

The source is from Oct 2023, so this could easily be outdated, but I found nothing in the original article: https://bjorn3.github.io/2023/10/31/progress-report-oct-2023.html
Someone about 1 year ago
FTA: *"Because optimizations run on an E-graph only add information in the form of new annotations, the order of the optimizations does not change the result. As long as the compiler continues running optimizations until they no longer have any new matches (a process known as equality saturation), the E-graph will contain the representation that would have been produced by the optimal ordering of an equivalent sequence of traditional optimization passes [...] In practice, Cranelift sets a limit on how many operations are performed on the graph to prevent it from becoming too large."*

So, in practice, the order of optimizations *can* change the result? How easy is it to hit that limit?
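Equality-saturation engines generally expose that budget as node and iteration limits, and report whether saturation was actually reached. As a rough illustration of the concept (using the standalone `egg` crate, not Cranelift's internal implementation):

```rust
use egg::{rewrite as rw, RecExpr, Rewrite, Runner, StopReason, SymbolLang};

fn main() {
    // Associativity and commutativity keep producing new equivalent shapes,
    // so on larger inputs a budget is needed to keep the e-graph from
    // blowing up before saturation is reached.
    let rules: Vec<Rewrite<SymbolLang, ()>> = vec![
        rw!("assoc"; "(+ (+ ?a ?b) ?c)" => "(+ ?a (+ ?b ?c))"),
        rw!("comm";  "(+ ?a ?b)"        => "(+ ?b ?a)"),
    ];

    let expr: RecExpr<SymbolLang> = "(+ (+ (+ a b) c) d)".parse().unwrap();
    let runner = Runner::default()
        .with_node_limit(1_000) // cap on e-graph size
        .with_iter_limit(10)    // cap on rewrite iterations
        .with_expr(&expr)
        .run(&rules);

    // Saturated means the order of rule applications cannot matter;
    // hitting a limit means extraction works from a partial exploration.
    match runner.stop_reason {
        Some(StopReason::Saturated) => println!("fully saturated"),
        other => println!("stopped early: {:?}", other),
    }
}
```
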
k_bx about 1 year ago
Any fresh compilation time benchmarks and comparisons to LLVM?
rayiner about 1 year ago
Very interesting article. I had not heard of equality graphs before. Here's some pretty good background reading on the subject: https://inst.eecs.berkeley.edu/~cs294-260/sp24/2024-03-04-eqsat-paper
rishav_sharan about 1 year ago
It sucks that there is no way to use Cranelift from outside of Rust to create your own toy language. I would have loved to use Cranelift in a toy compiler, but I am not ready to pay the Rust price of complexity.
Dowwie about 1 year ago
Would it be naive to assume a general compile-time reduction of 20% for all Rust projects by swapping LLVM for Cranelift?
makemake_kbo about 1 year ago
IMO Rust debug builds are fast enough, but it's nice to see things are going to get even faster! Hopefully this will eventually make `rust-analyzer` faster and more efficient.
Ericson2314 about 1 year ago
Really looking forward to the death of non-e-graph-based compilation :)
pjmlp about 1 year ago
Finally, looking forward to wider adoption.
tsegratis about 1 year ago
I feel like I'm reading an advertising blurb when reading that article.

I wish them every success, but I hope for a more balanced overview of pros and cons rather than gushing praise at every step...