PHK's post, which inspired this, assumes that the process is swapping. It describes writing a page out to disk to free it up, then reading back in the anonymous page of data needed for the write() system call the process uses to manually cache its data to disk. For the stuff that I use and work on, if the system is swapping anonymous pages, the situation is dire and it's time to kill (processes).

Let me back up and try to explain a bit:

While OS kernel developers have put a huge amount of effort into virtual memory management and paging, which was and is a good and necessary thing, the definition of "interactive" and "low latency" has changed. Long ago, half-second latency at a virtual terminal connected to a mainframe shared by hundreds or thousands of users was fantastic, compared with dropping off your stack of punch cards and coming back 12 hours later.

For most of the software I use and work on today, I want consistently low, sub-second latency. That's often only achievable with reasonably direct control over what is in memory and what is on disk. If I click a menu in a GUI program that I haven't clicked in weeks, I don't want to wait half a second for a few scattered pages to be read back in from swap. The same goes for requests to web or API servers - I don't want less-common requests to take half a second longer than the typical 50ms or so. For desktop environments, GUIs, databases, caches, services: no swap.

Certainly, *data* - multimedia files, dictionaries, etc. - will need to be read from disk. Processes can arrange for separate threads to do that. We can have responsive progress bars, cancel buttons, priorities, and timeouts before hitting an alternative data source - but only if the process itself is in RAM, not in swap.

Now that desktop and server systems measure DRAM in tens of gigabytes, this really should not be hard to achieve!

I've struggled with swap and out-of-memory situations on Linux many times. The Linux kernel never seems to OOM-kill processes fast enough for me. With no swap, when memory pressure sets in, the kernel struggles to shrink buffers, practically freezing most processes for *a few minutes* before finally killing the obvious culprit. (I've also tried memory-limiting containers, and they suffer the same problem - they freeze up for a few minutes instead of killing immediately on OOM.) I used to enable plenty of swap, more than RAM, because that was the common wisdom, but it causes the same problem: when the system comes under memory pressure, everything freezes for a few minutes. And it has an additional problem: despite setting swappiness to 1 or 0, some strange services/applications will cause the kernel to put anonymous pages in swap even when there's *plenty* of free physical memory. I never want that! I have to periodically swapoff and swapon to correct it.

So, at each company I work for, I end up writing a bash script, run by cron each minute, that checks for low system memory, looks among the application services for an obvious culprit, and sends it SIGTERM (roughly sketched below). In practice, this solves the problem pretty much every time, in the most graceful way. It's extremely rare that a critical system process is the problem, or looks like the problem. (Except dockerd a couple of times ;)
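I won't paste a real one, but the shape is roughly this - a minimal sketch, where the 512MB threshold, the protected-name regex, and "largest RSS wins" are all placeholder choices standing in for the real culprit-picking logic:

    #!/usr/bin/env bash
    # Sketch of the cron-every-minute low-memory reaper described above.
    # Threshold, protected list, and "largest RSS wins" are placeholders;
    # the real scripts know the application services by name.
    set -euo pipefail

    MIN_AVAIL_MB=512                               # placeholder panic threshold
    PROTECTED='^(systemd|init|sshd|cron|kthreadd)'

    # MemAvailable is the kernel's estimate of memory obtainable without
    # swapping; /proc/meminfo reports it in kB.
    avail_mb=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)

    (( avail_mb < MIN_AVAIL_MB )) || exit 0        # plenty of memory; nothing to do

    # Pick the largest-RSS process whose name doesn't match the protected list.
    culprit=$(ps -eo pid=,comm= --sort=-rss \
        | awk -v prot="$PROTECTED" '!found && $2 !~ prot {print; found=1}')

    if [[ -n "$culprit" ]]; then
        read -r pid comm <<<"$culprit"
        logger -t low-mem-reaper "MemAvailable=${avail_mb}MB, sending SIGTERM to $comm (pid $pid)"
        kill -TERM "$pid"
    fi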
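And the periodic "correction" for the swappiness problem above is just this pair of commands - only safe when there's enough free RAM to absorb everything coming back out of swap:

    sudo sysctl vm.swappiness=1          # or 0 - it's a hint, and evidently not a guarantee
    sudo swapoff -a && sudo swapon -a    # forces every swapped page back into physical memory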
(This is not to bash Linux in particular; Windows and macOS use way more RAM and swap in general. I've heard the BSDs have been good at particular things at particular times, but driver support has always been more of a struggle. Besides the swap/OOM behavior, I'm pretty happy with Linux.)

Letting the OS manage disk and RAM makes perfect sense for bulk data processing - Hadoop, Spark, or other map-reduce or stream-processing workloads where a few seconds' pause here and there is no problem as long as throughput is maximized. But I personally don't work much on those things - and I'm not a rare case.