TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Practical Garbage Collection - Part 1: Introduction

181 points | by cgbystrom | over 13 years ago

6 comments

bitops | over 13 years ago
A very good writeup, but one thing always confuses me.

What is meant specifically by the "heap" and "stack"? I know what a stack is, but "heap" gets thrown around in many different contexts and I've yet to find any explanation that made it clear.

If anyone has a good explanation or good links for those two terms in this context, I'd be very grateful. Thanks!

[EDIT: thanks everyone for the answers so far!]
Comment #3397103 not loaded
Comment #3396205 not loaded
Comment #3396512 not loaded
Comment #3396150 not loaded
Comment #3396143 not loaded
Comment #3396113 not loaded
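As a first approximation of the distinction the question asks about, a minimal Java sketch (class and method names are invented for illustration; the stack/heap split below is the simplified model, ignoring JIT optimizations like escape analysis): local primitives and references live in a thread's stack frame, which disappears on return, while every object created with `new` lives on the garbage-collected heap and survives until it becomes unreachable.

```java
public class StackVsHeap {
    static int sum(int a, int b) {   // a and b live in this call's stack frame
        int total = a + b;           // primitive local: also on the stack
        return total;                // the frame (and its locals) vanish on return
    }

    public static void main(String[] args) {
        int x = sum(1, 2);                   // x: a stack slot in main's frame
        int[] data = new int[]{1, 2, 3};     // the array object: heap;
                                             // the 'data' reference itself: stack
        // When no reference (stack slot, field, etc.) points at the array
        // anymore, it becomes unreachable and the GC may reclaim it.
        System.out.println(x + data.length); // prints 6
    }
}
```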
pixie_ | over 13 years ago
The complexity of generational garbage collection vs. the speed of manual collection makes me feel like the happy medium of speed and simplicity is reference counting, like that found in Objective-C. iPhone apps are fast, but take a bit longer to design, develop, and debug due to memory management issues. Though with experience these can be minimized.

It probably isn't possible without a ton of modification, but I wish the JVM/CLR had an option to garbage collect through reference counting.
Comment #3395942 not loaded
Comment #3396836 not loaded
Comment #3396794 not loaded
Comment #3395925 not loaded
Comment #3396813 not loaded
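A toy sketch of the retain/release discipline the comment refers to (the class and its methods are invented for illustration, not a real API): every object carries a count, ownership is taken with a retain and given up with a release, and the object frees itself the moment the count hits zero.

```java
public class RefCounted {
    private int count = 1;       // the creator holds the initial reference
    private boolean freed = false;

    public void retain() {       // a new owner takes a reference
        count++;
    }

    public void release() {      // an owner gives its reference up
        if (--count == 0) {
            freed = true;        // stand-in for actually returning the memory
        }
    }

    public boolean isFreed() {
        return freed;
    }

    public static void main(String[] args) {
        RefCounted obj = new RefCounted(); // count = 1
        obj.retain();                      // second owner, count = 2
        obj.release();                     // count = 1, still alive
        obj.release();                     // count = 0 -> reclaimed immediately
        System.out.println(obj.isFreed()); // prints true
    }
}
```

The deterministic reclamation is the appeal; the known weakness is that two objects retaining each other never reach zero, which is why reference-counting systems need cycle detection or weak references on top.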
stcredzero | over 13 years ago
What are the long-term implications of the increasing amount of parallel computing resources available to programmers? Are we getting to a point where there is enough excess CPU available that the extra instruction on every assignment for reference counting is no big deal? (In certain contexts. In some contexts extra instructions are always a big deal, but these don't span all of computing.) Combine that with incremental algorithms for cycle reclamation, and you'd have great low-latency GC.
Comment #3395691 not loaded
Comment #3395669 not loaded
ruggeri | over 13 years ago
I enjoyed this basic introduction to GC; upvoted. What I'd look forward to is further discussion of incremental and concurrent GC algorithms.

Until then, does anyone know what makes concurrent GC non-trivial? It seems like it shouldn't be too hard to trace concurrently with program execution. And if you don't compact, collection seems to just involve updating some structure tracking free blocks. I'd imagine it's possible to write a thread-safe version of that structure where every "free" request doesn't need to block every "malloc" request. But I must have missed something.

I'd also be interested to read how compaction works; how are references remapped from the old address to the new one? Is it possible that a reference value is a pointer to a reference "object" which contains the pointer to the data, which needs to be updated? Then you only need to update a single pointer when moving data, but every dereference incurs an extra layer of indirection.
Comment #3401857 not loaded
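The double-indirection scheme the comment describes is essentially a handle table, which some early VMs did use before switching to direct references for exactly the dereference cost mentioned. A toy sketch with invented names: a reference is a stable handle (an index into a table of locations), so compaction rewrites only the table entry for a moved object, never the outstanding references.

```java
import java.util.ArrayList;
import java.util.List;

public class HandleTable {
    // table.get(handle) -> current index of that object in 'heap'
    private final List<Integer> table = new ArrayList<>();
    private final List<String> heap = new ArrayList<>(); // toy heap of objects

    public int allocate(String obj) {
        heap.add(obj);
        table.add(heap.size() - 1);
        return table.size() - 1;            // the handle, stable across moves
    }

    public String deref(int handle) {       // the extra hop on every access
        return heap.get(table.get(handle));
    }

    public void free(int handle) {          // the slot becomes a gap
        heap.set(table.get(handle), null);
    }

    // Compaction step: slide the object into a gap and fix up ONLY its
    // table entry; no other reference in the program needs to be visited.
    public void moveInto(int handle, int gapIndex) {
        int oldIndex = table.get(handle);
        heap.set(gapIndex, heap.get(oldIndex));
        heap.set(oldIndex, null);
        table.set(handle, gapIndex);
    }

    public static void main(String[] args) {
        HandleTable h = new HandleTable();
        int a = h.allocate("a");
        int b = h.allocate("b");
        h.free(a);                     // heap: [null, "b"]
        h.moveInto(b, 0);              // heap: ["b", null]; only table[b] changed
        System.out.println(h.deref(b)); // prints b
    }
}
```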
valyala | over 13 years ago
While the article is interesting, it skips important things that have a high influence on practical GC speed: write barriers and finalizers. The following ancient article from Microsoft has better coverage of GC internals: http://msdn.microsoft.com/en-us/library/ms973837.aspx (somewhat biased toward .NET :) ).
rubashov | over 13 years ago
> The default choice of garbage collector in Hotspot is the throughput collector, which is ... entirely optimized for throughput

I just want to confirm this is true? Say you're doing a long-running simulation. You don't care about pauses at all. You just want it to finish fast. The default GC with no particular options is the way to go?
Comment #3396668 not loaded
Comment #3396935 not loaded
Comment #3398308 not loaded
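For the batch-simulation case the question describes, the HotSpot flags of that era let you pin the choice explicitly rather than rely on the default (which varied with machine class); `MySimulation` and `MyServer` below are placeholder class names, and heap sizes are illustrative:

```shell
# Throughput (parallel) collector: maximizes total work done per unit time,
# accepting longer individual pauses -- the fit for batch jobs/simulations.
java -XX:+UseParallelGC -Xmx4g MySimulation

# The era's low-pause alternative: concurrent mark-sweep, which trades
# some throughput for shorter pauses -- the fit for interactive services.
java -XX:+UseConcMarkSweepGC -Xmx4g MyServer

# See which collector is actually in use and watch its behavior:
java -verbose:gc -XX:+PrintGCDetails -XX:+UseParallelGC MySimulation
```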