I'd be curious to hear performance numbers (assuming a reasonable front-end server to this library). I get that the replicated in-memory caching part is valuable. But (from painful experiences with Java) I also fear that a GC-based memory management system is anti-optimal for an in-memory cache of small objects, especially as the size of the heap grows beyond a couple of GB (*).<p>(*) <a href="http://cdn.parleys.com/p/5148922a0364bc17fc56c60f/GarbageCollection.pdf" rel="nofollow">http://cdn.parleys.com/p/5148922a0364bc17fc56c60f/GarbageCol...</a>
I really wouldn't call this an alternative. If you are running memcached, it's very unlikely you can switch to Groupcache.<p>Parts of your application may rely on the expiration feature. But the biggest change is the inability to overwrite a current cache key. Every application I've used does this constantly (object updates).<p>Groupcache in its current form is useful for a very narrow set of applications.
I want to use this, but since the keys are immutable, how can I store data like sessions which can change and would sometimes have to be invalidated from the server side (i.e. you can't simply change the session ID in the cookie and use a new cache entry, because bad-guy could still be holding on to an old stolen session ID)?<p>In general, how can one learn to think in an immutable fashion to effectively exploit this?
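One pattern that fits an immutable cache (this is a general technique, not groupcache's own API, and all the names below are hypothetical): keep a per-user "generation" counter in your authoritative store and embed it in the cache key. Revoking a user's sessions just bumps the counter, which makes every previously cached entry unreachable, including ones a bad guy with a stolen session ID would otherwise hit. A minimal sketch:

```go
package main

import "fmt"

// sessionGen is the per-user generation counter. In practice this lives in
// your authoritative store (database), not in the immutable cache itself.
var sessionGen = map[string]int{}

// sessionCacheKey builds an immutable cache key that embeds the current
// generation, so revocation changes which key future lookups ask for.
func sessionCacheKey(userID, sessionID string) string {
	return fmt.Sprintf("session:%s:%s:g%d", userID, sessionID, sessionGen[userID])
}

// revoke invalidates all of a user's sessions server-side without ever
// touching or overwriting a cache entry: old entries simply stop being
// requested and age out of the LRU.
func revoke(userID string) {
	sessionGen[userID]++
}

func main() {
	before := sessionCacheKey("alice", "abc123")
	revoke("alice")
	after := sessionCacheKey("alice", "abc123")
	fmt.Println(before != after) // true: the old cached entry is now unreachable
}
```

The general trick for "thinking immutably" is to move mutability out of the cache and into a small piece of authoritative state that feeds into the key.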
Am I reading this as a distributed, immutable, weak hash table rather than what one would consider a 'cache'?<p>Mind you, doing so avoids the hardest parts of caching (and especially distributed caching, which otherwise begins to underperform around ≥ 5-7 nodes), so I can see significant upside. No surprise stales, distribution update clogging, etc.
I noticed this when he talked about speeding up the Google download servers. Very interesting :)<p>It's an alternative to memcache but not a direct replacement. I hope he adds CAS etc.<p>I hope they start using the kernel's buffer cache as the backing store, or explain why it's not a good idea: <a href="http://williamedwardscoder.tumblr.com/post/13363076806/buffcacher" rel="nofollow">http://williamedwardscoder.tumblr.com/post/13363076806/buffc...</a>
<a href="http://talks.golang.org/2013/oscon-dl.slide#46" rel="nofollow">http://talks.golang.org/2013/oscon-dl.slide#46</a><p>> 64 MB max per-node memory usage<p>So this is best used as an LRU cache of hot items.<p>It doesn't compete with or replace memcache comprehensively, but it does attack the use of memcache as a relief for hot items.<p>I can see myself mixing my Go programs with both groupcache and memcache.<p>Edit: I have glanced through the code and cannot see where the 64 MB per-node limit comes in. Anyone see that?
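For what it's worth, the limit doesn't appear to be hardcoded in the library: it's the cacheBytes size you pass when creating a group, and the slide's 64 MB is just what that deployment chose. The accounting works roughly like an LRU that evicts by total byte size rather than entry count. A simplified stdlib-only sketch of that mechanism (not groupcache's actual code):

```go
package main

import (
	"container/list"
	"fmt"
)

// byteLRU evicts by total byte size rather than entry count, the same kind
// of accounting groupcache does against its per-group byte budget.
type byteLRU struct {
	maxBytes int64
	nbytes   int64
	ll       *list.List               // front = most recently used
	cache    map[string]*list.Element // key -> list element
}

type entry struct {
	key   string
	value []byte
}

func newByteLRU(maxBytes int64) *byteLRU {
	return &byteLRU{maxBytes: maxBytes, ll: list.New(), cache: make(map[string]*list.Element)}
}

func (c *byteLRU) Add(key string, value []byte) {
	el := c.ll.PushFront(&entry{key, value})
	c.cache[key] = el
	c.nbytes += int64(len(key)) + int64(len(value))
	// Evict least-recently-used entries until we are back under budget.
	for c.nbytes > c.maxBytes && c.ll.Len() > 0 {
		oldest := c.ll.Back()
		en := oldest.Value.(*entry)
		c.ll.Remove(oldest)
		delete(c.cache, en.key)
		c.nbytes -= int64(len(en.key)) + int64(len(en.value))
	}
}

func (c *byteLRU) Get(key string) ([]byte, bool) {
	if el, ok := c.cache[key]; ok {
		c.ll.MoveToFront(el)
		return el.Value.(*entry).value, true
	}
	return nil, false
}

func main() {
	c := newByteLRU(20)
	c.Add("a", []byte("0123456789")) // 11 bytes in budget
	c.Add("b", []byte("0123456789")) // 22 bytes total, so "a" is evicted
	_, ok := c.Get("a")
	fmt.Println(ok) // false
}
```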
How does its sharding-by-key algorithm work, and how does it handle adding new peers? I was looking for it in the source, but couldn't find anything related to it.
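The technique is consistent hashing: each peer gets many virtual points on a hash ring, and a key goes to the peer owning the first point at or after the key's hash. Adding a peer only remaps the keys that land on its new points, so most of the cache survives membership changes. A minimal stdlib-only sketch of the idea (peer names are made up; this is an illustration, not the library's code):

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strconv"
)

// Ring is a minimal consistent-hash ring with virtual replicas per peer.
type Ring struct {
	replicas int
	keys     []int          // sorted hashes forming the ring
	hashMap  map[int]string // hash -> peer name
}

func NewRing(replicas int) *Ring {
	return &Ring{replicas: replicas, hashMap: make(map[int]string)}
}

// Add places `replicas` virtual points on the ring for each peer.
func (r *Ring) Add(peers ...string) {
	for _, p := range peers {
		for i := 0; i < r.replicas; i++ {
			h := int(crc32.ChecksumIEEE([]byte(strconv.Itoa(i) + p)))
			r.keys = append(r.keys, h)
			r.hashMap[h] = p
		}
	}
	sort.Ints(r.keys)
}

// Get returns the peer responsible for key: the owner of the first ring
// point clockwise from the key's hash (wrapping around at the end).
func (r *Ring) Get(key string) string {
	if len(r.keys) == 0 {
		return ""
	}
	h := int(crc32.ChecksumIEEE([]byte(key)))
	i := sort.Search(len(r.keys), func(i int) bool { return r.keys[i] >= h })
	if i == len(r.keys) {
		i = 0
	}
	return r.hashMap[r.keys[i]]
}

func main() {
	ring := NewRing(50)
	ring.Add("peer1:8080", "peer2:8080", "peer3:8080")
	fmt.Println(ring.Get("some-key")) // deterministically one of the peers
}
```

Every node computes the same mapping locally, so there's no central directory: given a key, any node knows which peer to ask.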
<a href="https://github.com/golang/groupcache/blob/master/sinks.go#L59" rel="nofollow">https://github.com/golang/groupcache/blob/master/sinks.go#L5...</a><p>Uh oh. The dreaded cast we see here <a href="http://how-bazaar.blogspot.co.nz/2013/07/stunned-by-go.html" rel="nofollow">http://how-bazaar.blogspot.co.nz/2013/07/stunned-by-go.html</a>
I'm a bit confused about how you use this system with immutable keys. At face value it's a great idea, but I need a simple example of how it is used to, say, retrieve a piece of data and then later update it to a new value.<p>Is this anything like how vector clocks are used, where the client uses the clocks to figure out which is the right state in a distributed system?
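It's simpler than vector clocks: since a key's value can never change, you "update" by changing which key you ask for. A common pattern (sketched here with hypothetical names, not the library's API) is to embed a version number in the cache key and keep the current version in your authoritative store:

```go
package main

import "fmt"

// currentVersion tracks each object's latest version. In practice this is a
// column in your database, not something kept in the immutable cache.
var currentVersion = map[string]int{}

// cacheKey builds the immutable key for the object's current version.
// The value stored under any given key never changes; updates mint new keys.
func cacheKey(id string) string {
	return fmt.Sprintf("user:%s:v%d", id, currentVersion[id])
}

// update bumps the version, so all future lookups use a fresh key.
// Old entries are never overwritten; they just age out of the LRU.
func update(id string) {
	currentVersion[id]++
}

func main() {
	fmt.Println(cacheKey("42")) // user:42:v0
	update("42")
	fmt.Println(cacheKey("42")) // user:42:v1
}
```

The read path becomes: look up the current version (cheap, authoritative), then fetch the immutable blob for that version through the cache, which can be replicated freely because it can never be stale for its key.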
I like the idea, but it seems like it would make deployment a pain. How do I spin up a new server without rebalancing and/or restarting the world? Not to mention that now when I <i>do</i> need to restart the world, I can't do so without also clearing my cache.
How does this compare to Hazelcast[1]? Seems like the same idea, but with far fewer features?<p>[1] <a href="http://www.hazelcast.com/" rel="nofollow">http://www.hazelcast.com/</a>