
Hammerspace: Persistent, Concurrent, Off-heap Storage

73 points by lennysan over 11 years ago

12 comments

noelwelsh over 11 years ago
Talk about an inaccurate title! The improvement is a combination of off-heap storage and sharing storage amongst processes. I'm surprised they didn't look at Redis for this problem.

These tricks have been used for a while in the JVM world. Here's a JVM equivalent of Hammerspace: http://www.mapdb.org/ And here's some slides concerning off-heap optimisations in Cassandra: http://www.slideshare.net/jbellis/dealing-with-jvm-limitations-in-apache-cassandra-fosdem-2012

On the JVM, GC time is usually only an issue when the heap gets over 2GB or so. MRI's GC is not in the same league as the JVM's, but even so, 80MB should be easily handled. As such I'm guessing the memory consumption of multiple processes is causing the main issue, which would be solved if Ruby had real threads. JRuby has real threads, and many other language runtimes do as well. It seems like a lot of engineering effort is going into working around the deficiencies of MRI, a problem that can be easily solved by switching to something better.
joevandyk over 11 years ago
I wonder if they would need this if they used a single ruby process with many threads (instead of many ruby processes).

Their problems are mainly a result of needing to access 80 megabytes of slowly changing translation data. Since they run many ruby processes and have memory growth issues, this translation data was taking a while to load.

If they had a single stable ruby process running on each box, possibly they wouldn't have had these issues.
jblow over 11 years ago
These guys never heard of shared memory, apparently?

Does Ruby not provide a facility to use shared memory? I guess you don't get it by default in a GC'd language because the GC thinks it owns the world.
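(For readers who haven't used the facility this comment alludes to, here is a minimal sketch of POSIX shared memory in C rather than Ruby. The segment name "/translations" and the 80 MB size are purely illustrative, and error handling is reduced to exits; on older glibc you may need to link with -lrt.)

```c
/* Minimal sketch: any process that shm_open()s the same name and mmap()s it
 * with MAP_SHARED sees the same bytes, so large read-mostly data can be kept
 * in one place instead of once per process. Names and sizes are made up. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (80 * 1024 * 1024)  /* roughly the 80 MB of translation data */

int main(void) {
    int fd = shm_open("/translations", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); exit(1); }
    if (ftruncate(fd, REGION_SIZE) < 0) { perror("ftruncate"); exit(1); }

    char *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); exit(1); }

    /* One writer process could fill the region; readers would map it read-only. */
    strcpy(region, "home.title=Welcome");
    printf("%s\n", region);

    munmap(region, REGION_SIZE);
    close(fd);
    return 0;
}
```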
babs474 over 11 years ago
Man there is a lot of negative snark at the top of this thread.

I'm not sure if this system is a good idea or not but I wish some commenters would spend more time comparing their proposed solutions (shared mem, local db, memmap...) to Hammerspace rather than contentless dismissal.
georgemcbay over 11 years ago
Original HN thread topic ("How Airbnb Improved Response Time by 17% By Moving Objects From Memory To Disk") is misleading compared to actual article contents, but speaking to the topic subject rather than the article, I do find it pretty common for many developers to blanket assume that memory-based caching is always the way to go, because, well, memory is fast and disks are slow.

This sort of thinking ignores the fact that filesystems already have their own (often very well-tuned) caching systems and in some cases (eg. sendfile(2) in Linux) the kernel can do zero-copy writes from files to the network that (along with decent fs caching) will easily outperform app-level memory-caching. Of course, this only applies for data that will remain relatively static, but often your best option is to mostly get out of the way and let the OS do the heavy lifting unless you've measured actual loads and are sure your solution is better.
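(A minimal sketch of the sendfile(2) path described above, assuming an already-connected client socket. The helper name and the single-call send are simplifications for illustration; a real server would loop on short writes.)

```c
/* sendfile(2) copies bytes from a file descriptor to a socket inside the
 * kernel, so the page cache is used directly and the data never passes
 * through userspace buffers. */
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

ssize_t serve_static_file(int client_fd, const char *path) {
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0) return -1;

    struct stat st;
    if (fstat(file_fd, &st) < 0) { close(file_fd); return -1; }

    off_t offset = 0;
    ssize_t sent = sendfile(client_fd, file_fd, &offset, st.st_size);

    close(file_fd);
    return sent;  /* may be a short write; loop until st.st_size bytes are sent */
}
```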
joshwa over 11 years ago
Armchair quarterbacking:

* Dedicate ruby processes to a particular subset of locales
* Parallelize your memcache queries
* Break up locale files into MRU/LRU strings to reduce size
* Denormalize locales (in memory, cache, whatever) into single values for most common pages. (use with MRU/LRU above)

As an aside, still don't understand how process->kernelspace driver->platter is faster than process->kernelspace socket->process->RAM? Especially for random access patterns. I suspect a memcache misconfiguration?
toddh over 11 years ago
You can dynamically load/unload shared libraries so the data is only shared once between all processes. A win is you can also optimize the memory layout of the translation tables (can be in C), for which a hash is probably not optimal. This can all be automated in the build process using the database as a source. During software upgrades processes must be aware enough to know when to reload. And since all shared memory schemes use virtual memory you still have potential latency issues because of paging. Not sure if a .so can be pinned. Another win is it is read only so you don't have to worry about corruption.
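(Roughly what this suggestion looks like at the lowest level: a data-only shared object loaded with dlopen(3), so its read-only pages are shared by every process that maps it. The library name and the exported "translations" symbol are invented for illustration; link with -ldl.)

```c
/* Hypothetical: a build step generates C for the translation table, compiles
 * it into libtranslations_en.so, and each process dlopen()s it. Reloading on
 * upgrade would be dlclose() of the old handle and dlopen() of the new file. */
#include <dlfcn.h>
#include <stdio.h>

struct translation { const char *key; const char *value; };

int main(void) {
    void *handle = dlopen("./libtranslations_en.so", RTLD_NOW);
    if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    /* "translations" is an array exported by the generated shared object. */
    struct translation *table = dlsym(handle, "translations");
    if (table) printf("%s -> %s\n", table[0].key, table[0].value);

    dlclose(handle);
    return 0;
}
```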
jcampbell1 over 11 years ago
Sounds like they re-invented .mo files from gettext.

https://www.gnu.org/software/gettext/manual/html_node/MO-Files.html
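(For comparison, a sketch of the standard gettext flow that compiled .mo catalogs support, where libintl maps the on-disk catalog and looks messages up per call. The text domain "myapp" and the locale directory are placeholders.)

```c
#include <libintl.h>
#include <locale.h>
#include <stdio.h>

int main(void) {
    setlocale(LC_ALL, "");                         /* pick the locale from the environment */
    bindtextdomain("myapp", "/usr/share/locale");  /* where <locale>/LC_MESSAGES/myapp.mo lives */
    textdomain("myapp");

    /* Prints the translated string if a matching catalog exists, else the original. */
    printf("%s\n", gettext("Welcome to the homepage"));
    return 0;
}
```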
northisup over 11 years ago
The article does not address why this outsourced heap is better than other outsourced heaps.
ashayh over 11 years ago
I don't get it.

Did they actually benchmark all possible options, like shared memory or SQLite or the MySQL memory engine (periodically backed)?

They say memcache (or redis) would have been slower because of network latency even over localhost. But did they benchmark it?
rnbrady over 11 years ago
Pretty graphs! What were they drawn with?
gustaf over 11 years ago
Awesome work!