I've worked with garbage-collected languages (Ruby, Java, Objective-C in the bad old days), automatic reference counting (Objective-C and Swift), the Rust ownership model, and manual reference counting/ownership in C over about 15 years now.

Having thought about this a lot, I just don't really understand why people continue to work on garbage collection. Non-deterministic lifecycles of objects/resources, non-deterministic pauses, huge complexity, and significant memory and CPU overhead just aren't worth the benefits.

All you have to do with ARC in Swift and Objective-C is type 'weak' once in a while (which effectively keeps the strong references as a directed acyclic graph). With Rust you can get away with just structuring your code in accordance with its conventions.

I'm sure this won't resonate with everyone, but I think it's time to walk away from GC. I'm curious: is there something I'm missing? The only true benefit I can think of is reducing heap fragmentation, and there must be a better way to address that.
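
For anyone who hasn't used ARC, here is a minimal Swift sketch of what "type 'weak' once in a while" means in practice (Parent/Child are made-up names, but this is the standard pattern for breaking a retain cycle):

    // Parent owns its children strongly; the back-pointer is weak,
    // so the strong references form a DAG and ARC can free everything
    // deterministically, with no cycle collector or GC involved.
    final class Parent {
        var children: [Child] = []   // strong: Parent -> Child
    }

    final class Child {
        weak var parent: Parent?     // weak: no cycle of strong references
    }

    let parent = Parent()
    let child = Child()
    child.parent = parent
    parent.children.append(child)
    // When `parent` and `child` fall out of scope, both objects are
    // deallocated immediately; the weak back-pointer never keeps
    // anything alive, so there is nothing for ARC to leak.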
> ...there has been a virtuous cycle between software and hardware development. CPU hardware improves, which enables faster software to be written, which in turn...

This is the exact opposite of the experience I've had with (most) software. A new CPU with a higher clock speed makes existing software faster, but most new software written for the new CPU will burn all of the extra CPU cycles on more layers of abstraction or poorly written code until it runs at the same speed that old code ran on the old CPU. I'm impressed that hardware designers and compiler authors can do their jobs well enough to make this sort of bloated software (e.g. multiple gigabytes for a word processor or image editor) succeed in spite of itself.

There are of course CPU advancements that make a huge performance difference when used properly (e.g. SSE, multiple cores in consumer machines), and some applications will use them to great effect, but these seem to be few and far between.
Better images of the plots:

https://pbs.twimg.com/media/CJatKFQUkAE5qcR.png:large

https://pbs.twimg.com/media/CJavrIAUMAAIaq8.png:large
I am a bit surprised by most of the discussion here so far. Garbage collection has, first of all, one fundamental advantage: correctness. You are guaranteed never to have a pointer to a freed object, and that any unreachable object does get freed. For almost all programs that get written, correctness should win over speed.

And speaking of speed, unless you require hard real-time behavior, garbage collection can be quite beneficial. A generational GC offers faster allocation than any malloc-based allocator, and collection of the nursery generation is effectively instantaneous in most cases. ARC has the overhead of a count update every time a reference is created or destroyed, and while it may be predictable about kicking in when dropping a reference frees memory, the time required to free a given object depends entirely on how many other objects consequently get freed.

Furthermore, garbage collection helps you write clean code, as it is safe (and usually cheap) to allocate memory during a function call and return results that reference it.

Of course, badly written programs might perform badly with a GC - but without one, the same kind of programs would just be a disaster. And most strategies for efficient memory usage in non-GC languages (e.g. memory pools for certain kinds of objects) can and should equally be used in GC languages.
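
To make the ARC-counting overhead above concrete, here is a rough Swift sketch (the retain/release traffic described in the comments is conceptual: the compiler inserts the real operations and often optimizes many of them away):

    final class Node {
        var payload: Int
        init(payload: Int) { self.payload = payload }
    }

    func lastNode(_ nodes: [Node]) -> Node? {
        var current: Node? = nil
        for node in nodes {
            // Conceptually, every assignment to a strong reference retains
            // the new object and releases the old one: two atomic
            // reference-count updates per pointer write.
            current = node
        }
        return current   // ownership of the result is handed to the caller
    }

    let nodes = (0..<3).map { Node(payload: $0) }
    print(lastNode(nodes)?.payload ?? -1)   // prints 2

A tracing GC typically pays little or nothing at assignment time (a write barrier at most); the cost is deferred to allocation and collection, which is exactly the trade-off described above.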
Erlang's per-process (Erlang process, not Unix process) GC is pretty good from this point of view. I'm surprised they didn't mention it as something to think about.
"Go programs will get a little bit slower in exchange for ensuring lower GC latencies."<p>How much slower are Go1.5 programs compared to their Go1.4 version? Is this relevant for web apps?