<i>So what happens with Squid's elaborate memory management is that it gets into fights with the kernel's elaborate memory management, and like any civil war, that never gets anything done.</i><p>This quote, much like the pop quantum-mechanics quotes beloved by laymen, keeps haunting honest systems programmers because people with a little bit of knowledge read it, misinterpret (or misunderstand) it, and then share it.<p>Look, I don't know how Squid is designed, but most database systems use this strategy and it <i>does not</i> get into wars with the kernel, for a whole slew of reasons that aren't addressed in the article. I know, because we've done a ton of sophisticated benchmarking comparing custom, use-case-specific cache performance to general-purpose page cache performance. Here are a few of the many, many reasons why this quote cannot be applied to sensibly designed systems software:<p>1. If the database/proxy/whatever server is designed correctly, it'll always use just enough RAM that it won't go into swap. That means the kernel won't magically page out its memory and prevent it from doing its job.<p>2. In fact, kernels provide mechanisms (such as mlock) to <i>guarantee</i> this.<p>3. Also, if your process misbehaves, modern kernels will just deploy the OOM killer (depending on how things are configured), so you can't get into fights with the page cache without being sniped.<p>4. Of course, you have to be smart and read from the file in a way that bypasses the page cache (via O_DIRECT). Yes, it complicates things greatly for systems programmers (all sorts of alignment issues, journaling-filesystem issues, etc.), but if you want high performance, especially on SSDs, and have special use cases to warrant it, it's worth it.<p>5. 
If you really know what you're doing, a custom cache can be significantly more efficient than the general-purpose kernel cache, which in turn can have a significant impact on bottom-line performance. For example, a b-tree-aware caching scheme does less bookkeeping and has more information to base its decisions on than the kernel's general-purpose LRU cache.<p>In fact, it is absolutely astounding how many 1975 abstractions translate wonderfully into the world of 2012. Architecturally, almost everything that worked back then <i>still</i> works now, including OS research, PL research, algorithms research, and software engineering research -- the four pillars that are holding up the modern software world. Some things are obsolete, perhaps, but far, far fewer than one might think.<p>Incidentally, this is also one of the reasons I cringe when people say "the world is changing so fast, it's getting harder and harder to keep up". In matters of fashion, perhaps, but as far as core principles go (in computer science, mathematics, human emotions/interaction, and pretty much everything else of consequence), the world is moving at a glacial pace. Shakespeare might be clunky to read these days because the language is out of style, but what Hamlet had to say in 1600 is, amazingly, just as relevant today (and likely much more useful, because instead of actually reading Hamlet, most people read things like The Purple Cow, The 22 Immutable Laws of Marketing, The 99 Immutable Laws of Leadership, etc.)