Practical Garbage Collection - Part 1: Introduction

181 points, by cgbystrom, over 13 years ago

6 comments

bitops, over 13 years ago
A very good writeup, but one thing always confuses me.

What is meant specifically by the "heap" and "stack"? I know what a stack is, but "heap" gets thrown around in many different contexts and I've yet to find any explanation that made it clear.

If anyone has a good explanation or good links for those two terms in this context, I'd be very grateful. Thanks!

[EDIT: thanks everyone for the answers so far!]
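To picture the distinction bitops is asking about, here is a minimal Java sketch, not a formal definition: local variables and object references live in a thread's stack frame, while the objects created with new live on the garbage-collected heap.

```java
// Minimal sketch: stack frames hold local variables and object references;
// the objects those references point at live on the garbage-collected heap.
public class StackVsHeap {
    public static void main(String[] args) {
        int count = 42;                // primitive local: lives in main's stack frame
        int[] data = new int[1024];    // the reference lives on the stack,
                                       // the array object itself lives on the heap
        fill(data, count);
    }                                  // main's frame disappears here; the array becomes
                                       // unreachable and is eventually reclaimed by the GC

    static void fill(int[] values, int n) {      // parameters are new stack slots that
        for (int i = 0; i < values.length; i++) { // point at the same heap object
            values[i] = n;
        }
    }
}
```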
pixie_, over 13 years ago
The complexity of generational garbage collection vs. the speed of manual collection makes me feel like the happy medium of speed and simplicity is reference counting, like that found in Objective-C. iPhone apps are fast, but take a bit longer to design, develop, and debug due to memory management issues, though with experience these can be minimized.

It probably isn't possible without a ton of modification, but I wish the JVM/CLR had an option to garbage collect through reference counting.
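For readers unfamiliar with the scheme pixie_ describes, here is a hypothetical Java sketch of manual reference counting in the retain/release style of pre-ARC Objective-C. The class and method names are illustrative; the JVM offers nothing like this out of the box.

```java
// Hypothetical reference-counted resource, roughly in the retain/release style
// the comment refers to. Names are illustrative only.
final class RefCounted {
    private int refCount = 1;          // the creator holds the initial reference
    private final byte[] payload;

    RefCounted(int size) { this.payload = new byte[size]; }

    void retain() { refCount++; }      // called whenever a new owner keeps a reference

    void release() {                   // called whenever an owner drops its reference
        if (--refCount == 0) {
            free();                    // deterministic reclamation, no collector pause
        }
    }

    private void free() {
        // In C or Objective-C this would hand memory back to the allocator;
        // here it only marks the point at which the object dies.
        System.out.println("payload of " + payload.length + " bytes freed");
    }
}
```

The price is the bookkeeping on every ownership change, plus the fact that reference cycles are never reclaimed without extra machinery such as a cycle detector.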
stcredzero, over 13 years ago
What are the long-term implications of the increasing amount of parallel computing resources available to programmers? Are we getting to a point where there is enough excess CPU available that the extra instruction on every assignment for reference counting is no big deal? (In certain contexts. In some contexts extra instructions are always a big deal, but those don't span all of computing.) Combine that with incremental algorithms for cycle reclamation, and you'd have great low-latency GC.
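The "extra instruction on every assignment" is easiest to see as the write path a compiler would conceptually have to emit for every reference store. This sketch reuses the hypothetical RefCounted class from the previous example.

```java
// Illustrative only: what every reference assignment costs under naive
// reference counting (reusing the hypothetical RefCounted class above).
final class Slot {
    private RefCounted value;

    // Conceptually, "this.value = next" expands to:
    void assign(RefCounted next) {
        if (next != null) next.retain();   // bump the new target's count...
        RefCounted old = value;
        value = next;
        if (old != null) old.release();    // ...drop the old one, possibly freeing it right here
    }
}
```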
ruggeri, over 13 years ago
I enjoyed this basic introduction to GC; upvoted. What I'd look forward to is further discussion of incremental and concurrent GC algorithms.

Until then, does anyone know what makes concurrent GC non-trivial? It seems like it shouldn't be too hard to trace concurrently with program execution. And if you don't compact, collection seems to just involve updating some structure tracking free blocks. I'd imagine it's possible to write a thread-safe version of that structure where every "free" request doesn't need to block every "malloc" request. But I must have missed something.

I'd also be interested to read how compaction works: how are references remapped from the old address to the new one? Is it possible that a reference value is a pointer to a reference "object" which contains the pointer to the data, which needs to be updated? Then you only need to update a single pointer when moving data, but every dereference incurs an extra layer of indirection.
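One way to picture the indirection ruggeri speculates about is a handle table: program code holds handles, the table holds the real locations, and compaction only has to patch the table. This is a toy Java sketch under that assumption, not how HotSpot actually implements moving collection.

```java
// Toy sketch of handle-based indirection: references are indices into a table,
// so moving an object during compaction only rewrites one table entry.
// Illustrative only.
final class HandleHeap {
    private final byte[] heap = new byte[1 << 16];  // pretend address space
    private final int[] table = new int[256];       // handle -> offset into heap
    private int nextHandle = 0;
    private int top = 0;                             // bump-pointer allocation

    int allocate(int size) {
        int handle = nextHandle++;
        table[handle] = top;
        top += size;
        return handle;                               // callers keep the handle, never the offset
    }

    byte read(int handle, int index) {
        return heap[table[handle] + index];          // every access pays one extra lookup
    }

    // Compaction step for one object: copy the bytes, patch a single table slot.
    void move(int handle, int size, int newOffset) {
        System.arraycopy(heap, table[handle], heap, newOffset, size);
        table[handle] = newOffset;                   // all outstanding handles remain valid
    }
}
```

The trade-off is exactly the one the comment anticipates: moving an object becomes a single-pointer update, but every dereference pays an extra hop, which is why most modern collectors update references directly instead.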
valyala, over 13 years ago
While the article is interesting, it skips important things that have a big influence on practical GC speed: write barriers and finalizers. The following ancient article from Microsoft has better coverage of GC internals (somewhat biased to .NET :) ): http://msdn.microsoft.com/en-us/library/ms973837.aspx
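For readers who have not met the term, a write barrier is a small piece of code the runtime runs on every reference store. The sketch below shows a card-marking flavor in Java-like form; the sizes and names are illustrative, not taken from any particular VM.

```java
// Minimal sketch of a card-marking write barrier: every reference store marks
// the "card" covering the written object as dirty, so a young-generation
// collection only scans dirty cards for old-to-young pointers.
// Sizes and names are illustrative.
final class CardTable {
    private static final int CARD_SHIFT = 9;         // 512-byte cards, a common choice
    private final byte[] cards;

    CardTable(int heapBytes) {
        this.cards = new byte[(heapBytes >> CARD_SHIFT) + 1];
    }

    // Conceptually invoked after every "object.field = reference" store.
    void onReferenceStore(long objectAddress) {
        cards[(int) (objectAddress >>> CARD_SHIFT)] = 1;   // a couple of instructions per store
    }

    boolean isDirty(long address) {
        return cards[(int) (address >>> CARD_SHIFT)] != 0;
    }
}
```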
rubashov, over 13 years ago
> The default choice of garbage collector in Hotspot is the throughput collector, which is ... entirely optimized for throughput

I just want to confirm this is true? Say you're doing a long-running simulation. You don't care about pauses at all. You just want it to finish fast. The default GC with no particular options is the way to go?