<a href="https://serverfault.com/a/684800/2321" rel="nofollow">https://serverfault.com/a/684800/2321</a> indicates it is "not recommended" (at least on Linux), and I have always set up at least <i>some</i> swap on every machine I have had since moving off DOS in the early 90s.<p>Is there ever a good reason to not enable it (on any OS)?
Most people think of swap as "emergency memory in case I run out of memory" and while it's true that it can get used in this way, it usually serves a much more critical purpose in your OS's ability to reason about and use memory.<p>For a good article on why this is true for Linux: <a href="https://chrisdown.name/2018/01/02/in-defence-of-swap.html" rel="nofollow">https://chrisdown.name/2018/01/02/in-defence-of-swap.html</a><p>I believe that most operating systems are going to make use of memory in a similar manner.<p>With that said, I'll turn off swap on devices that have unreliable storage. (Anything using an SD card)
For a couple of decades I have run my desktop without swap, for the simple reason that all it does in practice is slow the system to a crawl when a rogue process is gobbling up all the RAM. For my taste, a straightforward failure is better than an unresponsive system. Not that it happens very often, anyway.
If you're running a Raspberry Pi-type machine with storage on an SD card or similar, it's too easy to kill the card with heavy writes.<p>Also on dedicated systems where you have full control of the stack and the apps, and have resource limits in place anyway. If you know your embedded system will never need more than X GB, you can use that as a resource limit of last resort. (And hopefully let the watchdog reboot it after the required app fails to check in)
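For reference, a minimal sketch of what turning it off looks like in practice (assuming Raspberry Pi OS and its dphys-swapfile service; any other distro can use the generic swapoff/fstab route):<p><pre><code># Raspberry Pi OS: stop and disable the managed swap file
sudo dphys-swapfile swapoff
sudo systemctl disable --now dphys-swapfile
# Generic Linux: turn swap off now, and comment out the fstab entry so it stays off
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
</code></pre>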
Nowadays it is the inverse. Is there ever a good reason to enable it? Most devices run on TLC or other fast-wearing flash, and swapping there is expensive in terms of durability, as well as still much slower than just having enough RAM.<p>I think my only device with swap is my Mac laptop, and it is relatively conservative about when it swaps, unlike Linux with default settings.
Bear in mind that executable code on Linux is mapped in from disk, either from an executable or from a shared library. So every application's performance on Linux is heavily dependent on the disk cache.<p>If you have no swap, anonymous pages (stacks, heaps) cannot be evicted to disk, and the thrashing is forced onto the disk cache. So the hard lock-up occurs earlier.<p>If you want to delay the lock-up as much as possible, enable swap and set swappiness high.
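A minimal sketch of that tuning (the exact value is a judgment call; 0-100 is the classic range, and higher values make the kernel more willing to swap anonymous pages instead of dropping page cache):<p><pre><code># takes effect immediately
sudo sysctl vm.swappiness=100
# persists across reboots
echo 'vm.swappiness = 100' | sudo tee /etc/sysctl.d/99-swappiness.conf
</code></pre>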
Kubernetes insists that you disable swap. The reason is that swap messes with the accounting of memory usage. Kubernetes expects a very direct relationship between a host's memory reservations and its actual usage. Containers have a memory request; if they are over this request when memory pressure hits, they will be killed. Because the swapped-out pages are not included in the memory accounting[1] you can end up in states where nothing appears to be over its request, even though you theoretically haven't overcommitted. This requires falling back to effectively random killing, which is to be avoided.<p>It's also based on observations of performance - for many moons, the performance hit of swapping was bad enough that it was never worth it to run two jobs concurrently that didn't both fit into RAM together. The exceptions weren't even exceptions. Disk thrashing was a serious impediment and a sure way to slow your whole fleet down.<p>Now with fast flash being so common, swap is probably actually a good thing for many workloads again, but only "many", and SREs would prefer you make that explicit by using memory-mapped files for data that's of random utility, so that the OS can manage that pressure for you, understanding that those files don't need to be fully resident.<p>[1] This is an oversimplification that I don't remember the real truth of off the top of my head
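For context, the request/limit accounting this is trying to protect looks roughly like the sketch below (an illustrative pod; the names and sizes are made up). Containers running over their request are the preferred eviction candidates under node memory pressure, and swapped-out pages would hide exactly the signal that decision relies on. By default the kubelet's failSwapOn setting makes it refuse to start while swap is active, though newer releases have started relaxing that behind the NodeSwap feature gate.<p><pre><code># illustrative pod with a memory request and limit
kubectl apply -f - &lt;&lt;'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "512Mi"
EOF
</code></pre>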
In a previous cloud hosting provider experience, swap was disabled everywhere<p>Every instance was designed to:<p><pre><code> - have ~1GB for basic server requirements
 - have X GB for whatever the server was hosting: database? web server? proxy? All of these have known memory consumption
 - if required, have some extra GB for IO cache
</code></pre>
So we had some known requirements (the "app" line) and some variable requirements (IO cache and "basic server requirements")<p>Some extra information:<p><pre><code> - one instance = one service (this is the way to handle technical debt, all management issues, but also risk management and security-related stuff)
 - storage was backed by hardware delivering half a million effective IOPS
</code></pre>
No OOM and no waste
It allows the "fail fast" philosophy, where things break quickly and noticeably (in this case, when you run out of RAM), rather than risking a silent degradation in performance.
Without swap you have two modes of operation: working fine, and not working at all (killed by the OOM killer). With swap, you introduce a third mode: working very slowly. The more modes of operation, the harder to reason about.
I have a special Linux ISO I use to boot a secure enclave. It also runs the whole file system entirely in RAM so nothing persists across reboots. I disabled swap as part of this strategy. However, I must make sure it's run on systems with enough physical memory for all of this.<p>That said, I run a swap partition in the encrypted portion of this laptop, which I think obviates the problem the original ServerFault poster was trying to solve.
I generally want my applications to be oom killed when they run out of physical memory, not when they run out of swap. The latter is much slower and more painful.
I’ve seen the sentiment expressed widely that disabling swap enables machines to fail fast when the workload runs out of memory. I would be partial to such behaviour as well, but in my experience, without swap, I’ve more often seen systems run into livelock, with the OOM killer unbearably slow in reaping its victims while I can neither ssh in nor view logs, and such situations basically disappear on systems where I do have swap enabled.<p>Of course, with or without swap, I can obtain the fail-fast behaviour by using systemd-oomd, oomd, or earlyoom, but I wonder why the reputation of swap differs so much from my experience (for context, I’ve mostly run systems for small and medium businesses, with machines mostly under twenty gigabytes of RAM).
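If anyone wants that fail-fast behaviour without giving up swap, this is roughly what I mean (package and unit names vary slightly by distro):<p><pre><code># systemd-based distros that ship it: the userspace OOM daemon
sudo systemctl enable --now systemd-oomd
# or the simpler earlyoom daemon (packaged by Debian/Ubuntu/Fedora)
sudo apt install earlyoom          # dnf install earlyoom on Fedora
sudo systemctl enable --now earlyoom
</code></pre>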
Yes, I had swap fully disabled on Windows since there was 64GB of RAM and the system was still paging to disk by default, which created a lot of unnecessary disk activity and writes. In modern Windows (10+) the paging behaviour is much more sane and that doesn't happen as much, but if you have a ton of RAM you can safely turn off the page file and force the OS to manage memory more aggressively.
Fedora changed to zram instead of swap by default back in 2021 with Fedora 33[1]. So no more swap on disk for one of the biggest Linux distributions. There was no major wailing, nor gnashing of teeth. Most users didn't even notice.<p>For <i>most</i> users, swap is unnecessary and zram does the job better. For users with 8GiB or less RAM, swap is more likely to be useful (except for embedded systems running from SD cards like Raspberry Pis where swap will kill the card).<p>[1] <a href="https://fedoraproject.org/wiki/Changes/SwapOnZRAM#Benefit_to_Fedora" rel="nofollow">https://fedoraproject.org/wiki/Changes/SwapOnZRAM#Benefit_to...</a>
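For anyone on another distro who wants the same setup, a rough sketch of what Fedora ships (assuming the zram-generator package is installed; the size expression mirrors Fedora's default of half of RAM, capped at 4GiB):<p><pre><code>cat &lt;&lt;'EOF' | sudo tee /etc/systemd/zram-generator.conf
[zram0]
zram-size = min(ram / 2, 4096)
EOF
sudo systemctl daemon-reload
sudo systemctl start systemd-zram-setup@zram0.service
swapon --show    # should now list /dev/zram0 as swap
</code></pre>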
I'm old school. I usually maintain swap at 2x the size of RAM, unless RAM exceeds 16GiB, in which case I keep swap somewhere between 1x and 2x of RAM. This is particularly the case for mission-critical servers. Swap gives me half a chance if something hoses memory.
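For anyone setting this up the same way, the usual swap-file route looks like this (sizes illustrative; fallocate doesn't work for swap on every filesystem, in which case dd is the fallback):<p><pre><code>sudo fallocate -l 16G /swapfile    # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=16384
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
</code></pre>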
Where there is no disk. For example, computer with no HDD booted from read-only USB stick. RAM disk embedded in kernel for rootfs. Writable directories mounted as tmpfs. Everything runs from memory.
I've seen so many good reasons to leave it enabled, and they're correct and well reasoned and many times I agree, enable swap ...<p>... but on my personal computer I'm not optimizing for long term stability or aggregate behavior. I'm optimizing for "it is fast when I use it, or it throws identifiable issues I can fix to get back to fast".<p>In that context, most software I use does not thrash with no swap (definitely not all! Java is particularly thrashy). It simply runs fine until it OOMs, which I can immediately see and address (and go kill a Docker I forgot about, 95% of the time).<p>I've gone back and forth quite a few times, and no-swap consistently gets me MUCH closer to the behavior I want. I enable it and occasionally I get thrashing that takes time to notice, fight with slow UI, and fix, which I very much dislike while I'm doing all this. I disable it and I get crashes, say "oh right" and fix it without losing my focus, and <i>almost never</i> get visible slowdown.<p>It's not <i>just</i> "emergency memory", there are definitely benefits I'm losing by doing this. But "emergency memory" is something it <i>allows</i>, and avoiding that behavior is worth losing everything else for what I want.
On audio workstations, swapping can lead to an interruption of the audio stream, which can manifest as anything from garbled sound up to a very loud noise, not unlike an explosion in a game, and which is potentially damaging to the ear depending on the listening level. So: limiters on all busses, and swap off. If your samples don't fit in memory, try another route.
Do you really have to "disable" it to get rid of it?<p>I can't remember when I dabbled with a swap partition for the last time.<p>Now that I saw this post, I ran "swapon -s" on my laptop and on my servers. It comes back empty everywhere.<p>Why would I ever create a swap partition or file in the first place? Isn't it something from the past when RAM was scarce?
I’ve always disabled swap on my servers. There’s a severe performance penalty when swap is used for a real workload, and that’s far more difficult to diagnose than OOM messages from the kernel. And in every case, swap or not, the resolution is the same. You either get a bigger box, or fix the application.
A few times in my career I've had a narrow use case for disabling swap on some systems. The most notable was trying to limit interactions between garbage-collected runtimes and the latency introduced by page faults.<p>The TL;DR is that GC pauses were high enough to be causing problems in a particular service; at the time the swap was on rotating media, so latencies on a page fault were high, and this was especially a problem during mark-and-sweep operations, leading to long pauses. Given the other GC tuning and the overall IO workload, disabling swap made sense for those particular servers.<p>But if I had to do it today the situation might be very different: I might have NVMe drives to put swap on with much better access latencies, or be able to use mlock to lock critical pages and prevent swapping, etc.<p>Also, there are some very clear problems introduced by disabling swap, especially on NUMA systems. Again, in the particular cases where I disabled swap, we were able to lock processes onto their relevant NUMA nodes and do other tuning to make it worthwhile.<p>So as a general rule, especially with modern hardware, I would agree that it isn't recommended. However, you can probably find narrow use cases where, amongst a number of other tunings, it makes sense to drop swap. There are also plenty of other things you likely want to tune before removing swap.
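The NUMA pinning was along these lines (numactl is the stock tool for it; the node number and jar name here are purely illustrative):<p><pre><code># bind both CPU scheduling and memory allocation to NUMA node 0 so the GC
# never has to walk remote memory; node number and service name are made up
numactl --cpunodebind=0 --membind=0 java -jar service.jar
</code></pre>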
Not really. If there’s no swap, then at some point programs will just die when you run out of RAM. Once things start going to swap regularly you might notice a slowdown and have a chance to do something about it, rather than having a process killed unexpectedly.<p>For server workloads, you probably never want to actually use the swap, but it’s safest to have it enabled because you don’t know which process is going to get killed when memory runs out. You can monitor for swap usage and tweak your settings appropriately.
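Monitoring it is cheap, too; something like this is enough to tell whether swap is merely holding cold pages or is actually being churned (the churn is what hurts, not the occupancy):<p><pre><code>free -h      # how much swap is occupied right now
vmstat 1 5   # si/so columns are pages swapped in/out per second;
             # sustained non-zero values mean the working set no longer fits in RAM
</code></pre>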
I tend to leave a small swap volume (1-4GB) on my systems where RAM is much, much bigger (32-64GB, I mean), more out of habit than for any reasoned technicality. Normally it is 100% unused 99+% of the time.
Only if you have bursts of requests that need to be answered with constant latency, and you therefore want to avoid any risk of having to read back swapped-out memory.<p>It's a very rare corner case.
I usually turn it off to save disk space. I dunno, with 32GB of RAM I just don't see how the OS should run out of memory, and I don't really want Windows burning out my SSD.
Yes, if you absolutely crave CPU performance for tasks that don’t fill all the RAM.<p>You can set swappiness to 0 on Linux so you’ll only swap in emergencies. Better than crashing.