© 2025 科技回声. All rights reserved.

Scalable self-improvement for compiler optimization

88 points | by matt_d | 7 months ago

2 comments

vanderZwan | 7 months ago
I find it a little annoying that in the paper [0] they show various graphs of megabytes *saved*, but, as far as I can tell, no actual *sizes* of the binaries that these policies are applied to.

So when they say the inlining policies end up saving 20 MiB on the training data, but only a few megabytes on a different binary not in the training data, I lack the context to judge what that says. Is the other binary much smaller? The same size? What if it's *bigger*, and therefore hides a much smaller relative saving?

Only at the very end of the paper do they mention one binary size: they save about 3 MB on the Chrome on Android binary, which is 213.32 MB after implementing the policy. A solid 1%, which probably makes an enormous difference at Google scale, especially for their main Android browser, so I hope it's obvious that I'm not trying to diminish the achievement of these people. But I find the other benchmarks kind of hard to interpret.

[0] https://arxiv.org/pdf/2403.19462
[Comment #42054989 not loaded]
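The ~1% figure quoted above can be checked with a quick back-of-the-envelope calculation. Note the comment is ambiguous about whether 213.32 MB is the size before or after the 3 MB saving, so this sketch computes both readings:

```python
# Relative saving for the one binary whose size is reported:
# Chrome on Android, 213.32 MB "after implementing the policy", ~3 MB saved.
saved = 3.0       # MB saved, per the comment
reported = 213.32  # MB, the only size the paper reportedly gives

vs_reported = saved / reported            # if 213.32 MB is the original size
vs_original = saved / (reported + saved)  # if 213.32 MB is the shrunken size

print(f"{vs_reported:.2%}")  # 1.41%
print(f"{vs_original:.2%}")  # 1.39%
```

Either way the result is a bit over 1%, consistent with the "solid 1%" characterization.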
pvillano | 7 months ago
Someone once said the most fruitful research in AI is making models scale to larger compute/data.

I think the same could become true for compilers, and I think equality saturation is the key. AI + equality saturation could scale the optimization of a single program to an entire data center.
[Comment #42054194 not loaded]
[Comment #42053355 not loaded]
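The equality-saturation idea mentioned above can be illustrated with a naive sketch. The rewrite rules and cost model here are hypothetical toys; real systems (e.g. the egg library) use e-graphs to share subterms and apply rules at every position, whereas this flat-set version only rewrites whole terms. The core shape is the same: grow the set of provably-equal forms to a fixpoint, then extract the cheapest one.

```python
# Naive equality saturation on a toy expression language.
# Terms are nested tuples, e.g. ("*", ("x",), ("const", 2)).
X = ("x",)

def rewrites(t):
    """Yield terms equal to t under a few algebraic identities."""
    if t[0] == "*":
        _, a, b = t
        yield ("*", b, a)                  # commutativity
        if b == ("const", 2):
            yield ("<<", a, ("const", 1))  # strength reduction: x*2 -> x<<1
    if t[0] == "<<":
        _, a, b = t
        if b == ("const", 1):
            yield ("*", a, ("const", 2))   # inverse direction is kept too

def saturate(t, max_iters=10):
    """Grow the set of equivalent terms until no rule adds anything new."""
    seen = {t}
    for _ in range(max_iters):
        new = set()
        for u in seen:
            new.update(rewrites(u))
        if new <= seen:
            break  # fixpoint: saturated
        seen |= new
    return seen

def cost(t):
    """Toy cost model: multiplies are expensive, shifts are cheap."""
    if t[0] == "*":
        return 4 + cost(t[1]) + cost(t[2])
    if t[0] == "<<":
        return 1 + cost(t[1]) + cost(t[2])
    return 1  # variables and constants

best = min(saturate(("*", X, ("const", 2))), key=cost)
print(best)  # ('<<', ('x',), ('const', 1))
```

Because saturation keeps every equivalent form (including "worse" ones) and only commits at extraction time, it avoids the phase-ordering problem of applying destructive rewrites greedily, which is what makes it attractive for scaling optimization searches.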