Am I the only one wondering why Facebook hasn't implemented a compression backend in memcache, much like Reiser4 and ZFS have done?<p>They've made it very clear that they're RAM limited (in particular with respect to capacity), so why not just have the CPU compress and decompress memcache values on the way in and out with a fast, relatively light compression algorithm?<p>It's not even like you couldn't tune the algorithm to detect duplicate/similar data and create atomic globs of data that represent multiple informational objects.<p>It seems like their big cost is putting together machines with tons of RAM for their memcache clusters, so why not bring that cost down?
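As a rough sketch of how this could be approximated today from the client side (assuming the pecl "memcached" extension; the server address, helper names, keys, and compression level are illustrative, not anything Facebook actually runs):

```php
<?php
// Sketch only: approximating a compression backend from the client side.
// Assumes the pecl "memcached" extension; all names/values are illustrative.

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Easiest path: let the client library compress large values transparently.
$mc->setOption(Memcached::OPT_COMPRESSION, true);

// More explicit path: compress by hand with zlib so the CPU/RAM trade-off
// is visible and tunable.
function cache_set_compressed(Memcached $mc, $key, $value, $ttl = 0) {
    // Level 1 favors speed over ratio, which is the point for a hot cache.
    $blob = gzcompress(serialize($value), 1);
    return $mc->set($key, $blob, $ttl);
}

function cache_get_compressed(Memcached $mc, $key) {
    $blob = $mc->get($key);
    return $blob === false ? false : unserialize(gzuncompress($blob));
}

// Usage (hypothetical key and payload):
cache_set_compressed($mc, 'user:42:profile', array('name' => 'Alice', 'friends' => range(1, 500)));
$profile = cache_get_compressed($mc, 'user:42:profile');
```

In practice you'd pick one of the two approaches, not both; either way it trades web-tier CPU for memcache capacity, which is the same trade a server-side compression backend would make, just on the other side of the wire.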
<i>If you are making an argument to recode your entire site from PHP to some other language, the answer is you just lost that argument.</i><p>This only holds if execution time was a major part of the argument, and the site meets the conditions for benefiting from HipHop that the article discusses.
I was afraid the usefulness of HipHop would be limited like this, since it's no easy feat to create a PHP-to-C++ compiler that handles C library dependencies (of which PHP has a lot!) well.<p>BTW, it was the second time in a week that a product created incredible buzz in the HN community without anyone being able to try it out (the other was the iPad, of course), and I was amazed at the amount of well-informed opinion based on so little information.<p>PS. This blog is good reading if you are interested in Facebook architecture, scaling, and design issues in general.
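To make the dependency point concrete, here are a few illustrative lines of everyday PHP (the URL and database credentials are placeholders) that already lean on three separate C extensions; a PHP-to-C++ compiler can't just translate the PHP syntax, it also has to ship, bind, or reimplement each of the underlying C libraries before code like this can run:

```php
<?php
// Illustrative only: ordinary PHP that depends on three C extensions.
// The URL and database credentials below are placeholders.

$ch = curl_init('http://example.com/avatar.jpg');      // ext/curl   -> libcurl
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$img = imagecreatefromstring(curl_exec($ch));           // ext/gd     -> libgd
curl_close($ch);

$db = new mysqli('localhost', 'user', 'pass', 'app');   // ext/mysqli -> libmysqlclient
$res = $db->query('SELECT 1');
```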
Terry's earlier article about the future of PHP (linked in the first paragraph of this one) is also very good reading:<p><a href="http://phpadvent.org/2009/1500-lines-of-code-by-terry-chay" rel="nofollow">http://phpadvent.org/2009/1500-lines-of-code-by-terry-chay</a>