
Beyond malloc efficiency to fleet efficiency

40 points, by LeegleechN, almost 4 years ago

3 comments

dragontamer, almost 4 years ago
> As an example of the benefits of this approach, one service increased its time in TCMalloc from 2.7% to 3.5%, an apparent regression, but reaped improvements of 3.4% more requests-per-second, a 1.7% latency reduction, and a 6.5% reduction in peak memory usage!

This is the stuff of performance nightmares. Anyone thinking about optimization can easily get single-tracked on the apparent regression there and miss the improved overall performance (requests per second).
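A back-of-the-envelope sketch of the point being made: the allocator's *share* of CPU can rise while the CPU spent *per request* falls, because the rest of the application got faster. The percentages come from the quote above; the baseline throughput figure and the assumption of a fixed total CPU budget are ours, purely to make the arithmetic concrete.

```python
# Sketch: why a larger share of time in the allocator can still be a net win.
# The 2.7% / 3.5% allocator shares and the +3.4% RPS are from the quote above;
# the baseline RPS and fixed machine CPU budget are illustrative assumptions.

MACHINE_CPU = 1.0  # normalized CPU-seconds available per second

def per_request_cpu(alloc_share, requests_per_sec):
    """Return (total, allocator, application) CPU-seconds spent per request."""
    total = MACHINE_CPU / requests_per_sec
    return total, total * alloc_share, total * (1 - alloc_share)

baseline_rps = 1000.0                                  # hypothetical baseline
before = per_request_cpu(0.027, baseline_rps)          # 2.7% in TCMalloc
after = per_request_cpu(0.035, baseline_rps * 1.034)   # 3.5% in TCMalloc, +3.4% RPS

for label, (total, alloc, app) in (("before", before), ("after", after)):
    print(f"{label}: total={total:.6f}  alloc={alloc:.6f}  app={app:.6f}  CPU-s/request")

# Allocator CPU per request goes up, but application CPU per request drops
# enough that the total per-request cost falls by roughly 3%.
print(f"per-request CPU change: {(after[0] / before[0] - 1) * 100:+.1f}%")
```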
romesmoke, almost 4 years ago
> In Google’s data centers, this improvement reduced TLB stalls by 6% and memory fragmentation by 26%.

Yet after Ctrl+F-ing the paper for the term, I have yet to find a precise definition of "fragmentation". Keeping in mind that fragmentation is an allocator's major enemy, it bugs me to realize that there is no universally agreed-upon formulation yet.

Does anyone more knowledgeable have a more informed opinion on the matter?
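For what it's worth, two formulations show up often in practice, even if neither is *the* definition the paper uses: external fragmentation measured against the largest contiguous free block, and "slack" measured as memory reserved from the OS that the application is not actually using. The snapshot values in this sketch are made up for illustration.

```python
# Two common ways "fragmentation" gets quantified; definitions vary by paper
# and allocator, and the numbers below are invented for illustration only.

free_blocks = [64, 128, 32, 4096, 16, 256]  # sizes of free regions, in bytes
reserved_from_os = 16384                     # bytes the allocator holds from the OS
live_application_bytes = 11000               # bytes actually in use by the app

# (1) External fragmentation: how far free space is from being one contiguous
#     block. 0 means all free memory is contiguous.
total_free = sum(free_blocks)
external_frag = 1 - max(free_blocks) / total_free

# (2) Slack / allocator overhead: memory held from the OS but not used by the
#     application, which is closer to what fleet-level accounting tends to track.
slack_frag = 1 - live_application_bytes / reserved_from_os

print(f"external fragmentation:   {external_frag:.2%}")
print(f"slack vs. OS reservation: {slack_frag:.2%}")
```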
PoignardAzur, almost 4 years ago
Seems like a bit of a tragedy of the commons. Individual containers benefit from a faster malloc, but the whole fleet benefits from one doing more work.