The LISP community went through this in the 1980s. They had to; the original Symbolics LISP machine had 45-minute garbage collections, as the GC fought with the virtual memory. There's a long list of tricks. This one is to write-protect data memory during the GC's marking phase, so marking and computation can proceed simultaneously. When the code stores into a write-protected page, the store is trapped and that pointer is logged for GC attention later. This works as long as the GC's marker is faster than the application changes pointers. There are programs for which this approach is a loss. A large sort of a tree, where pointers are being retargeted with little computation between changes, is such a program.<p>If they're getting 3ms stalls on a 500MB heap, they're doing pretty well. That the stall time doesn't increase with heap size is impressive.<p>Re <i>"avoid fragmentation to begin with by storing objects of the same size in the same memory span."</i> That's easy today, because we have so much memory and address space. The simplest version of that is to allocate memory in units of powers of 2, with each MMU page containing only one size of block. The size round-up wastes memory, of course. But you can use any growth factor between 1 and 2, and have, for example, block sizes every 20%. This approach is popular with conservative garbage collectors (ones that don't know what's a pointer and what's just data that looks like a pointer) because the size of a block can be determined from the pointer alone.
This page adds some context to the slides:
<a href="https://sourcegraph.com/blog/live/gophercon2015/123574706480" rel="nofollow">https://sourcegraph.com/blog/live/gophercon2015/123574706480</a><p>It was posted here 10 days ago:
<a href="https://news.ycombinator.com/item?id=9854408" rel="nofollow">https://news.ycombinator.com/item?id=9854408</a>
Still slowish. Far, far from "solved." The charts they zoom in on only go to about 500MB in heap, showing 2 ms pause times. It makes me suspicious that the nice linear trend he's showing doesn't hold up under more reasonable values -- my IDE takes up 500 MB and my web browser over a GB.<p>So by his possibly rosy calculations, a basic 3GB heap is still pausing 6 ms. God forbid I use a 500 GB heap and now we're into the one-second range again. This is assuming the linear relationship holds up, but given his choice of graph domain, I have a suspicion that there are issues to the right.<p>This seems typical of Google technology. They say they care about performance, but I have yet to see a piece of Google tech that is actually useful if you care about performance. People automatically assume Google is synonymous with performance, but it definitely isn't.<p>Remember, he says this improved GC pause time is going to come at the expense of Go top-line speed. Your Go will get slower, and you still will have second-long pauses with any serious work.
I thought the issue with Go's garbage collector wasn't so much speed as correctness (as the Go team historically has gotten GC speed by sacrificing correctness; or is correctness a goal past version 1.3?).