
Go GC: Latency Problem Solved [pdf]

101 points by xkarga00, almost 10 years ago

6 comments

Animats, almost 10 years ago
The LISP community went through this in the 1980s. They had to; the original Symbolics LISP machine had 45-minute garbage collections, as the GC fought with the virtual memory. There's a long list of tricks. This one is to write-protect data memory during the GC's marking phase, so marking and computation can proceed simultaneously. When the code stores into a write-protected page, the store is trapped and that pointer is logged for GC attention later. This works as long as the GC's marker is faster than the application's pointer changing. There are programs for which this approach is a lose. A large sort of a tree, where pointers are being retargeted with little computation between changes, is such a program.

If they're getting 3ms stalls on a 500MB heap, they're doing pretty well. That the stall time doesn't increase with heap size is impressive.

Re "avoid fragmentation to begin with by storing objects of the same size in the same memory span." That's easy today, because we have so much memory and address space. The simplest version of that is to allocate memory in units of powers of 2, with each MMU page containing only one size of block. The size round-up wastes memory, of course. But you can use any exponent in the range 1..2, and have, for example, block sizes every 20%. This approach is popular with conservative garbage collectors (ones that don't know what's a pointer and what's just data that looks like a pointer) because the size of a block can be determined from the pointer alone.
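For concreteness, the size-class scheme in that last paragraph can be sketched in a few lines of Go. The page size, the 20% growth factor, and the `pageClass` side table below are illustrative assumptions for the sketch, not the layout of any real runtime.

```go
// A minimal sketch of the size-class idea described above: block sizes grow
// by roughly 20% per class, and every page holds blocks of exactly one class,
// so a block's size can be recovered from its address alone.
package main

import "fmt"

const pageSize = 8192 // assumed page size for this sketch

// buildSizeClasses generates block sizes from minSize up to maxSize,
// growing each class by roughly 20%.
func buildSizeClasses(minSize, maxSize int) []int {
	var classes []int
	for s := minSize; s <= maxSize; s = s + s/5 { // +20% per step
		classes = append(classes, s)
	}
	return classes
}

// pageClass maps a page's start address to the one block size stored there.
// In a real collector this would be a side table maintained by the allocator.
var pageClass = map[uintptr]int{}

// sizeOfBlock recovers a block's size from its address alone: round the
// pointer down to its page and look up that page's single size class.
func sizeOfBlock(p uintptr) int {
	return pageClass[p&^uintptr(pageSize-1)]
}

func main() {
	fmt.Println("size classes:", buildSizeClasses(16, 1024))

	// Pretend the page at 0x20000 holds only 48-byte blocks.
	pageClass[0x20000] = 48
	fmt.Println("block size at 0x20350:", sizeOfBlock(0x20350)) // 48
}
```

Go's real allocator uses a fixed table of size classes and per-span metadata rather than a map, but the pointer-to-size lookup rests on the same principle.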
rgbrenner, almost 10 years ago
This page adds some context to the slides: https://sourcegraph.com/blog/live/gophercon2015/123574706480

It was posted here 10 days ago: https://news.ycombinator.com/item?id=9854408
joosters, almost 10 years ago
Garbage collection seems to get solved in each new release of Go and Java, apparently.
jnordwick, almost 10 years ago
Still slowish. Far, far from "solved." The charts they zoom in on only go to about 500MB in heap, showing 2 ms pause times. It makes me suspicious that the nice linear trend he's showing doesn't hold up under more reasonable values -- my IDE takes up 500 MB and my web browser over a GB.

So by his possibly rosy calculations, a basic 3GB heap is still pausing 6 ms. God forbid I use a 500 GB heap and now we're into the one-second range again. This is assuming the linear relationship holds up, but given his choice of graph domain, I have a suspicion that there are issues to the right.

This seems typical of Google technology. They say they care about performance, but I have yet to see a piece of Google tech that is actually useful if you care about performance. People automatically assume Google is synonymous with performance, but it definitely isn't.

Remember, he says this improved GC pause time is going to come at the expense of Go top-line speed. Your Go will get slower, and you still will have second-long pauses with any serious work.
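To make the extrapolation in that comment concrete, here is a minimal sketch. The roughly 2 ms of pause per GB of heap is an assumption read off the figures quoted in the comment, not a number taken from the slides.

```go
// A minimal sketch of the linear extrapolation above: scale an assumed
// pause-per-GB slope up to larger heaps and see where it lands.
package main

import "fmt"

func main() {
	const pauseMsPerGB = 2.0 // assumed linear slope, per the comment's figures

	for _, heapGB := range []float64{0.5, 3, 500} {
		fmt.Printf("%6.1f GB heap -> ~%.0f ms pause (if linear)\n",
			heapGB, heapGB*pauseMsPerGB)
	}
}
```

Under that assumption a 3 GB heap extrapolates to about 6 ms and a 500 GB heap to about a second, which is the commenter's point; whether the relationship stays linear at those sizes is exactly what the slides' chosen graph range leaves open.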
jmount, almost 10 years ago
I thought the issue with Go garbage collectors wasn't so much speed as correctness (the Go team historically has gotten GC speed by sacrificing correctness; or is correctness a goal past version 1.3?).
shmerl, almost 10 years ago
I prefer the RAII approach to GC.