
Ask HN: Which Linux filesystem produces the least wear and tear on SSD NAND?

28 points by programatico about 5 years ago
Linux filesystems usually do a lot of reads and writes on storage, e.g. HDD and SSD. On the other hand, SSDs are getting cheaper but offer less TBW per unit of capacity. So which Linux FS does the fewest writes (the least write amplification) on an SSD: ext3, ext4, XFS, Btrfs, F2FS...? This is Linux for desktop use!

6 comments

rwha about 5 years ago
I have two laptops with SSDs running only Linux, and they both have mostly been powered on for two years or more (XFS and BTRFS). Both are still operating normally, and smartctl shows minimal wear.

I would focus on mount options that limit writing (e.g., relatime/noatime) or putting ~/.cache on tmpfs.

In my experience ~/.cache gets the most frequent writing during normal desktop usage. A lot of applications ignore the XDG standards and create their own snowflake folder directly in $HOME. You might want to watch for the ones making a lot of writes and replace their folders with symlinks to where they belong. (This quickly became a frustrating battle that I lost.)
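A minimal /etc/fstab sketch of those two ideas (the UUID and the home path are placeholders; the size and options are examples, not recommendations):

    # mount root with noatime so reads don't trigger access-time writes
    UUID=<root-uuid>  /  xfs  defaults,noatime  0  1

    # keep ~/.cache in RAM; it is rebuilt after each reboot, at the cost of RAM
    tmpfs  /home/<user>/.cache  tmpfs  noatime,nodev,nosuid,size=2G  0  0

To check wear the same way, smartctl -A on an NVMe drive reports fields like "Percentage Used" and "Data Units Written" (exact names vary by drive).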
tyingq about 5 years ago
I think you would get more mileage out of tracking where all the writes are, and then making whatever changes are needed to reduce them.

Auditd can, for example, track every write. Track it over a good sample period of typical use, then make whatever changes are needed. That might be database tuning, moving specific files to tmpfs, changing the way you do backups, reducing writes to syslog, changing fs mount options, etc.

Auditd is a little complex, but it's fairly easy to find write-ups on how to monitor writes and generate usage reports.
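A minimal sketch of that workflow, assuming auditd is installed and running (the watched path and key name are arbitrary examples):

    # watch a directory tree for write accesses, tagged with a key
    auditctl -w /home/<user> -p w -k ssd-writes

    # ...use the machine normally for a representative period, then:
    ausearch -k ssd-writes -i | less    # inspect the raw write events
    aureport -f -i --summary            # summary report, grouped by file

The per-file summary is usually enough to spot the handful of paths responsible for most of the write traffic.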
moviuro about 5 years ago
Rule of thumb: your preferred FS will be OK. Limiting writes is *not* a goal in and of itself.

See https://wiki.archlinux.org/index.php/Improving_performance#Reduce_disk_reads/writes
loser777 about 5 years ago
Is there likely to be any meaningful difference? With most worthwhile SSDs incorporating a sizable DRAM cache, and OS file-system caching on top of that, would day-to-day journaling and other overhead be expected to make a dent in SSD longevity?
cmurf about 5 years ago
Ext3, ext4, and XFS have a journal that's constantly being overwritten. On Btrfs, the file system is the journal. It does have a bit of a wandering-trees problem [1], whereas F2FS is expressly designed to reduce the wandering-tree problem. [2] There are also approaches that don't involve filesystems at all. [3]

But I think you have to assess the crash resistance and repairability of filesystems, not just worry about write amplification. I think too much is made of SSD wear. The exception is the consumer class of SD cards and USB flash: those are junk to depend on for persistent usage, best suited for occasional use, and all eventually fail. If you're using such flash, e.g. in an embedded device, you probably want to go with industrial-quality flash to substantially improve reliability.

Consider putting swap on zram [4] or using zswap [5]. I've used both, typically with a small pool of less than 1/2 of RAM. I have no metric for clearly deciding a winner; either is an improvement over conventional swap. Perhaps hypothetically zswap should be better, because it's explicitly designed for this use case, whereas zram is a compressed RAM disk on which you could put anything, including swap. But in practice I can't tell a difference performance-wise.

[1] https://arxiv.org/abs/1707.08514

[2] https://lwn.net/Articles/520829/

[3] https://www.usenix.org/conference/fast13/technical-sessions/presentation/lu_youyou

[4] https://www.kernel.org/doc/Documentation/blockdev/zram.txt ; https://github.com/systemd/zram-generator

[5] https://www.kernel.org/doc/Documentation/vm/zswap.txt
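A sketch of the zram option, assuming systemd's zram-generator is in use (the size expression and algorithm are examples; zswap is enabled via kernel boot parameters instead):

    # /etc/systemd/zram-generator.conf -- swap on a compressed RAM device
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd

    # the zswap alternative: kernel boot parameters, e.g. on the GRUB cmdline
    #   zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=20

Either way, the hottest swapped pages stay compressed in RAM, so far less swap traffic ever reaches the SSD.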
kasabali about 5 years ago
Have a look at this paper. [0]

While there are big gaps in write amplification for metadata writes, on macro benchmarks all the filesystems have similar results.

Btrfs has the biggest WAF (write amplification factor), but you can enable compression globally, and I suspect that difference alone will make it come out ahead of the others.

[0] Analyzing IO Amplification in Linux File Systems, https://arxiv.org/abs/1707.08514
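For example, a minimal way to turn on Btrfs compression globally (the UUID is a placeholder; zstd at its default level is just one reasonable choice):

    # /etc/fstab -- transparently compress all newly written data
    UUID=<btrfs-uuid>  /  btrfs  defaults,noatime,compress=zstd  0  0

    # optionally recompress existing files in place (this itself costs writes)
    btrfs filesystem defragment -r -czstd /

Compression reduces the number of bytes that actually reach the NAND for compressible data, which directly offsets write amplification.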