This rant is a mixed bag of decent advice and dismal advice.<p>Let's begin with Squid. I can't speak to the performance of Squid. Never measured it. Don't care. I do care about how it is described in this article.<p>We're told that you tell Squid how much RAM it may use, and how much disk, and that it honors those constraints.<p>Then we're told that, as a result, Squid page-thrashes and so it performs badly.<p>A well-written program described that way page-thrashes <i>because it is running on a poorly administered system</i>. The entire point of writing a program like Squid so that you can say how much RAM to use is that you then run Squid on a machine where you can ensure it will never need to page.<p>Some programs (and this was as true in the very earliest days of virtual memory as it is today) are purposefully designed so that they will run correctly even if they page, but will run very fast if they do not page. Server software is often in this category, because server software often runs on dedicated hardware where the hardware budget is large enough.<p>If you look at a server program (like Squid) running on some dedicated server and find that it is paging a significant amount, you don't just decide "that program is poorly written" --- you must also consider the possibility that the machine is poorly configured.<p>When writing a program, you've a choice: manage your own working set and write assuming you won't be page-thrashed? Or punt to the underlying OS. Which is better? The answer really depends on what you know about the memory-use patterns of your program. If you know nothing, consider leaving it to the OS. If you know the OS's paging policies are a good fit, leave it to the OS. If you can beat the OS's policies and can count on not being page-thrashed (and the performance is worth the work) --- then write like it's 1975, for goodness' sake.