Five Years of Btrfs

214 points by vordoo over 5 years ago

26 comments

InTheArena over 5 years ago
I went on a quest a few years ago, thinking it would be good for the industry to standardize on a single next-generation filesystem for UNIX. I started with ZFS on Linux since that seemed to have the most vocal advocates. That lasted about half a year, until a bug in the code resulted in a completely corrupt disk, and I had to spend a month restoring 4TB of data from offsite backups. That, plus the licensing confusion around ZFS, has made it impossible for ZFS to be the de facto choice.

I went down the BTRFS path, despite its dodgy reputation, when Netgear announced their little embedded NASes, and switched my server over to it. The experience was solid enough that I bought a high-end Synology and have had zero problems with it.
derefr over 5 years ago
A question for HN: what filesystem and/or block-device abstraction layer would you use on a database server, if you wanted to perform scheduled incremental backups using filesystem-level consistent snapshotting and differential snapshot shipping to object storage, *instead of* using the DBMS's own replication layer to achieve this effect? (I.e. you want disaster recovery, not high availability.)

Or, to put that another way: what are AWS and GCP using in their SANs (EBS; GCE PD) that allows them to take on-demand incremental snapshots of SAN volumes, and then ship those snapshots away from the origin node into safer out-of-cluster replicated storage (e.g. object storage)? Is it proprietary, or is it just several FOSS technologies glued together?

My naive guess would be that the cloud hosts are either using ZFS volumes, or LVM LVs (which *do* have incremental snapshot capability, if the disk is created in a thin pool) under iSCSI. (Or they're relying on whatever point-solution VMware et al sold them.)

If you control the filesystem layer (i.e. you don't need to be filesystem-agnostic), would Btrfs snapshots be better for this same use-case?
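For illustration, a minimal sketch of the kind of snapshot-and-ship workflow derefr is asking about, done at the Btrfs level. Everything here is an assumption rather than anything from the article: a database subvolume at /srv/db, a snapshot directory /srv/.snaps, and an S3-style bucket reached through the AWS CLI.

    SNAPDIR=/srv/.snaps
    NOW=$(date +%Y%m%d-%H%M)

    # Crash-consistent, read-only snapshot of the database subvolume.
    btrfs subvolume snapshot -r /srv/db "$SNAPDIR/db-$NOW"

    # First run: ship the full snapshot stream to object storage.
    btrfs send "$SNAPDIR/db-$NOW" | aws s3 cp - "s3://example-bucket/db-$NOW.full"

    # Later runs: a differential stream against the previous snapshot
    # (placeholder name), usually a small fraction of the full size.
    btrfs send -p "$SNAPDIR/db-20200101-0000" "$SNAPDIR/db-$NOW" \
        | aws s3 cp - "s3://example-bucket/db-$NOW.incr"

Restoring means replaying the full stream and then each incremental with `btrfs receive` onto a scratch filesystem; ZFS `zfs send -i` or LVM thin snapshots can play the same role at the dataset or block level.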
gravypod over 5 years ago
I've seen a lot of the hacker community focusing on btrfs and zfs but very little focusing on Ceph. I think Ceph has a lot of the features that we want in a filesystem, and some things that aren't even possible on traditional filesystems (per-file redundancy settings), with very few downsides. The setup is a little more complex, involving a few daemons to manage disks, balance, monitor, etc. I wish there was something similar to FreeNAS for Ceph that focused only on making the experience seamless, because I think if it became more popular in the home-lab space we'd see lots of cool tools pop up for it.
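As a concrete example of the per-file (really per-pool, per-directory) redundancy gravypod mentions, CephFS lets you point different directories at pools with their own replication or erasure-coding settings. A rough sketch, assuming a running cluster with a CephFS filesystem named "cephfs" mounted at /mnt/cephfs; the pool names are made up:

    # A 3-replica pool and a more space-efficient erasure-coded pool.
    ceph osd pool create cephfs_rep3 64
    ceph osd pool set cephfs_rep3 size 3
    ceph osd pool create cephfs_ec 64 erasure
    ceph osd pool set cephfs_ec allow_ec_overwrites true

    # Make both pools usable as CephFS data pools.
    ceph fs add_data_pool cephfs cephfs_rep3
    ceph fs add_data_pool cephfs cephfs_ec

    # Route directories to different pools via layout attributes.
    setfattr -n ceph.dir.layout.pool -v cephfs_rep3 /mnt/cephfs/important
    setfattr -n ceph.dir.layout.pool -v cephfs_ec   /mnt/cephfs/bulk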
tezzer over 5 years ago
I've had one issue with btrfs that took it off my radar completely. A customer had a runaway process that filled a btrfs device with unimportant things. We found the errant process and killed it, but apparently if a btrfs device is completely full, you can't delete anything to free up space: file removal requires some amount of free space. Bricked the device, annoyed a customer, back to ext4.
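For what it's worth, the workaround usually suggested for that trap (not from the article, and no guarantee it would have saved this particular device) is to give the filesystem a sliver of temporary space so deletions can proceed; paths and device names below are placeholders:

    # See how space is actually allocated (data vs. metadata chunks).
    btrfs filesystem usage /mnt/full

    # Option 1: try to reclaim slack from nearly-empty chunks.
    btrfs balance start -dusage=5 /mnt/full

    # Option 2: temporarily add any spare block device (even a loop device),
    # delete the offending files, then remove the device again.
    btrfs device add /dev/sdx /mnt/full
    rm -rf /mnt/full/runaway-logs
    btrfs device remove /dev/sdx /mnt/full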
pojntfx over 5 years ago
Love using Btrfs; there is no better filesystem nowadays, now that its reliability issues have been fixed.
kiney over 5 years ago
I've been using BTRFS on several devices for years. The tooling is a bit rough, but no major problems. Just recently data checksumming saved me: in December I replaced an old 2TB drive in my RAID1 (2+4+4+4) with an 8TB drive. The new drive had checksum errors after a few weeks, which BTRFS handled gracefully. With "classical" RAID I might only have noticed when it's too late. (I RMAed the bad drive.)

    [/dev/mapper/h4_crypt].write_io_errs     0
    [/dev/mapper/h4_crypt].read_io_errs      0
    [/dev/mapper/h4_crypt].flush_io_errs     0
    [/dev/mapper/h4_crypt].corruption_errs   0
    [/dev/mapper/h4_crypt].generation_errs   0
    [/dev/mapper/h2_crypt].write_io_errs     0
    [/dev/mapper/h2_crypt].read_io_errs      30
    [/dev/mapper/h2_crypt].flush_io_errs     0
    [/dev/mapper/h2_crypt].corruption_errs   0
    [/dev/mapper/h2_crypt].generation_errs   0
    [/dev/mapper/h1_crypt].write_io_errs     0
    [/dev/mapper/h1_crypt].read_io_errs      0
    [/dev/mapper/h1_crypt].flush_io_errs     0
    [/dev/mapper/h1_crypt].corruption_errs   0
    [/dev/mapper/h1_crypt].generation_errs   0
    [/dev/mapper/h3_crypt].write_io_errs     0
    [/dev/mapper/h3_crypt].read_io_errs      0
    [/dev/mapper/h3_crypt].flush_io_errs     0
    [/dev/mapper/h3_crypt].corruption_errs   0
    [/dev/mapper/h3_crypt].generation_errs   0
    [/dev/mapper/luks-e120f41e-9c8a-4808-876f-fa6665ee8bb8].write_io_errs     0
    [/dev/mapper/luks-e120f41e-9c8a-4808-876f-fa6665ee8bb8].read_io_errs      16
    [/dev/mapper/luks-e120f41e-9c8a-4808-876f-fa6665ee8bb8].flush_io_errs     0
    [/dev/mapper/luks-e120f41e-9c8a-4808-876f-fa6665ee8bb8].corruption_errs   20619
    [/dev/mapper/luks-e120f41e-9c8a-4808-876f-fa6665ee8bb8].generation_errs   0

edit: formatting
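For context, counters like the ones above come from `btrfs device stats`, and it's a periodic scrub that actually re-reads the data and repairs bad copies from the other mirror. Roughly (the mount point is a placeholder):

    # Verify all data and metadata checksums; with RAID1 profiles,
    # bad copies are rewritten from the good mirror automatically.
    btrfs scrub start /mnt/pool
    btrfs scrub status /mnt/pool

    # Per-device error counters like the ones quoted above.
    btrfs device stats /mnt/pool

    # Reset the counters after replacing the bad drive.
    btrfs device stats --reset /mnt/pool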
epx over 5 years ago
I have been using btrfs on my "NAS"/personal server for 3 years. I've changed the disk configuration a couple of times, I take snapshots every hour and prune them using a Fibonacci-like timeline, and no problems yet.
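The hourly-snapshot half of that setup needs nothing more than cron and read-only snapshots. A bare-bones sketch (paths and the retention count are assumptions, and epx's Fibonacci-style thinning would replace the simple keep-the-newest-N rule used here):

    #!/bin/sh
    # e.g. /etc/cron.hourly/snap-data -- snapshot /data, keep the newest 720.
    set -eu
    SNAPDIR=/data/.snapshots

    # Timestamped read-only snapshot; names sort chronologically.
    btrfs subvolume snapshot -r /data "$SNAPDIR/$(date +%Y-%m-%d_%H%M)"

    # Prune everything except the newest 720 snapshots (30 days of hourlies).
    ls -1d "$SNAPDIR"/*/ | sort | head -n -720 \
        | xargs -r -n1 btrfs subvolume delete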
alyandon over 5 years ago
I use btrfs in raid1 mode and the ability to shrink/grow/add/remove devices at will without data loss or extended downtime led me to choose btrfs over zfs on my home servers.
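For readers who haven't tried it, the reshaping alyandon describes is a couple of commands against a mounted filesystem; a sketch with placeholder device names and mount point:

    # Grow: add a disk, then rebalance to spread existing chunks onto it.
    btrfs device add /dev/sdc /mnt/pool
    btrfs balance start /mnt/pool

    # Convert profiles on the fly, e.g. single -> raid1 after adding a second disk.
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

    # Shrink: migrate data off a device and detach it, all while mounted.
    btrfs device remove /dev/sdb /mnt/pool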
Shalle135 over 5 years ago
Is there any specific reason to run btrfs over, for example, ext4? You can create/shrink/grow pools, create encrypted volumes etc. by using LVM.

It all depends on the application, but in the majority of cases the IO performance of btrfs is worse than the alternatives.

Red Hat, for example, chose to deprecate btrfs for unknown reasons, while SUSE made it its default. The future of it seems uncertain, which may cause a lot of headaches in major environments if implemented there.
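For comparison, the LVM route being described looks roughly like this; device names, sizes and volume names are placeholders:

    # Pool two disks into one volume group.
    pvcreate /dev/sdb /dev/sdc
    vgcreate vg0 /dev/sdb /dev/sdc

    # A plain logical volume with ext4, grown later in one step.
    lvcreate -L 500G -n data vg0
    mkfs.ext4 /dev/vg0/data
    lvextend -r -L +200G /dev/vg0/data   # -r also grows the ext4 filesystem

    # An encrypted volume on a second LV.
    lvcreate -L 100G -n secure vg0
    cryptsetup luksFormat /dev/vg0/secure
    cryptsetup open /dev/vg0/secure secure
    mkfs.ext4 /dev/mapper/secure

The counterargument raised elsewhere in this thread is data checksumming and scrubbing, which neither ext4 nor LVM provides.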
zielmicha over 5 years ago
fsync is still a bit slow on BTRFS (on ZFS too, but to a smaller degree). For example, I just did a quick benchmark on Linux 5.3.0 - installing Emacs in a fresh Ubuntu 18.04 chroot (dpkg calls fsync after every installed package).

ext4 - 33s, ZFS - 50s, btrfs - 74s

(The test was run on a Vultr.com 2GB virtual machine; the backing disk was allocated using "fallocate --length 10G" on an ext4 filesystem. The results are very consistent.)
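A rough reconstruction of that benchmark setup, for anyone who wants to repeat the comparison. The paths, the loop-mounted image, and the debootstrap step are assumptions filled in around the commenter's description, not their exact script:

    # Back the filesystem under test with a 10G file on the host's ext4.
    fallocate --length 10G /var/tmp/test.img
    mkfs.btrfs -f /var/tmp/test.img        # or mkfs.ext4, or a ZFS pool on a loop device
    mount -o loop /var/tmp/test.img /mnt/test

    # Unpack a minimal Ubuntu 18.04 root there and time a package install
    # that pulls in many dependencies; dpkg fsyncs after every package.
    debootstrap bionic /mnt/test
    time chroot /mnt/test apt-get install -y emacs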
lousken over 5 years ago
Did anyone have the courage to use btrfs in production? Any stories to share?
pQd over 5 years ago
I've been using BTRFS since 2014 to store backups. There is a noticeable performance penalty when rsync'ing hundreds of thousands of files to a spinning-rust disk connected to a USB-SATA dock when BTRFS is used instead of EXT4. I'm accepting it in exchange for the ability to run scheduled scrubs of the data to detect potential bitrot.

Since 2017 I'm also using BTRFS to host MySQL replication slaves. Every 15 min, 1h, 12h, crash-consistent snapshots of the running database files are taken and kept for a couple of days. There's consensus that - due to its COW nature - BTRFS is not well suited for hosting VMs, databases or any other type of files that change frequently. Performance is significantly worse compared to EXT4, and this can lead to slave lag, but slave lag can be mitigated by using NVMe drives and relaxing the durability of the MySQL InnoDB engine. I've used those snapshots a few times each year - it has worked fine so far. Snapshots should never be the main backup strategy; independently of them there's a full database backup done daily from the masters using mysqldump. Snapshots are useful whenever you need to very quickly access the state of the production data from a few minutes or hours ago - for instance after fat-fingering some live data.

During those years I've seen kernel crashes most likely due to BTRFS, but I did not lose data as long as the underlying drives were healthy.
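The "relaxed durability" part is plain MySQL configuration rather than anything btrfs-specific; on a replica that can always be rebuilt from the master, the usual knobs look something like this (the config file path is an assumption, and this is not something to copy onto a primary):

    # Drop a replica-only durability override into MySQL's include directory.
    cat > /etc/mysql/conf.d/replica-durability.cnf <<'EOF'
    [mysqld]
    # Flush the InnoDB log to disk once per second instead of at every commit.
    innodb_flush_log_at_trx_commit = 2
    # Let the OS decide when to sync the binlog.
    sync_binlog = 0
    EOF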
izacus over 5 years ago
It's also worth noting that Synology uses btrfs as an option to do checksumming and snapshots on their NAS devices.

They're still using their own RAID layer though.
cmurf over 5 years ago
Kernel 5.5 was released Sunday. Btrfs now has raid1c3 and raid1c4 profiles for 3- and 4-copy raid1, and adds new checksum algorithms: xxhash, blake2b, sha256.

Async discard is coming in 5.6: https://lore.kernel.org/linux-btrfs/cover.1580142284.git.dsterba@suse.com/T/#u
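On the command line, the new profiles and checksums show up as ordinary mkfs/balance options once both btrfs-progs and the kernel are at 5.5; for example (device names are placeholders):

    # Three copies of data and metadata across four devices,
    # with xxhash checksums instead of the default crc32c.
    mkfs.btrfs --csum xxhash -d raid1c3 -m raid1c3 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # Or convert an existing filesystem's metadata to the 3-copy profile.
    btrfs balance start -mconvert=raid1c3 /mnt/pool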
abotsis over 5 years ago
It's worth noting that much of the premise of the article (wanting flexibility) is outdated. ZFS has support for removing top-level raid 0/1 vdevs now, so you can take a raid10 pool and remove a top-level mirror vdev completely. Note that this doesn't work for raid5/6 vdevs, but as the author points out, those are becoming less and less used because of rebuild time and performance.

In addition to the slew of other features Btrfs is missing (send/recv, dedup, etc.), ZFS allows you to dedicate something like an Intel Optane (or other similar high-write-endurance, low-latency SSD) to act as stable storage for sync writes, and a different device (typically MLC or TLC flash) to extend the read cache.
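For reference, the two ZFS features abotsis is pointing at look like this on the command line; pool and device names are made up:

    # Remove a top-level mirror vdev from a striped-mirror pool (ZoL 0.8+);
    # its data is migrated to the remaining vdevs first.
    zpool remove tank mirror-1

    # Dedicated SLOG for synchronous writes, plus a separate read cache device.
    zpool add tank log mirror nvme0n1 nvme1n1
    zpool add tank cache sdc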
geophertz over 5 years ago
Is using btrfs on a personal machine something people do? It seems that all the comments, as well as articles about it, just assume you're running it on a server.

The ability to add and remove disks on a desktop machine is very tempting.
mdip over 5 years ago
I've been a `btrfs` user for the better part of 4 years despite, at the time, a very vocal group providing advice against it[0].

I'll be the first to say that it isn't a silver bullet for everything. But then, what filesystem really is? Filesystems are such a critical part of a running OS that we expect perfection for every use case; filesystem bugs or quirks[1] result in data loss, which is usually *Really Bad*(tm).

That said, for the last two years, I've been running Linux on a Thinkpad with a Windows 10 VM in KVM/qemu -- both are running all the time. When I first configured my Windows 10 VM, performance was *brutal*; there were times when writes would stall the mouse cursor, and the issue was directly related to `btrfs`. I didn't ditch the filesystem; I switched to a raw volume for my VM and adjusted some settings that affected how `btrfs` interacted with it. I discovered similar things happened when running a `balance` on the filesystem and, after a bit of research, found that changing the IO scheduler to one more commonly used on spindle HDDs made everything more stable.

So why use something that requires so much grief to get it working? Because those settings changes are a minor inconvenience compared against the things "I don't have to mess with" to cover a bigger problem that I frequently encountered: OS recovery. An out-of-the-box OpenSUSE Tumbleweed installation uses `btrfs` on root. Every time software is added/modified, or `yast` (the user-friendly administrative tool) is run, a snapshot is taken automatically. When I or my OS screws something up, I have a boot menu that lets me "go back" to prior to the modification. It Just Works(tm). In the last two years, I've had around 4-5 cases where my OS was wrecked by keeping things up to date, or tweaking configuration. In the past, I'd be re-installing. Now, I reboot after applying updates and if things are messed up, I reboot again, restore from a read-only snapshot, and I'm back. I have no use for RAID or much else[2], which is one of the oft-repeated "issues" people identify with `btrfs`.

It fits for my use-case, along with many of the other use-cases I encounter frequently. It's not perfect, but neither is *any* filesystem. I won't even argue that other people with the *same use case* will come to the same conclusion. But as far as I'm concerned, *damn* it works well.

[0] I want to say that an installation of openSUSE ended up causing me to switch to `btrfs`, but I can't remember for sure -- that's all I run, personally, and it is the default for a new installation's root drive.

[1] Bug: a specific feature (i.e. RAID) just doesn't work. Quirk: the filesystem has multiple concepts of "free space" that don't necessarily line up with what running applications understand.

[2] My servers all have LSI or other hardware RAID controllers and present the array as a single disk to the OS; I'm not relying on my filesystem to manage that. My laptop has a single SSD.
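The "settings" tweak commonly paired with a raw VM image on btrfs is disabling copy-on-write for the image directory. A minimal sketch of that approach (the libvirt path and image name are examples, and note that the +C flag only affects files created after it is set, and disables checksumming and compression for them):

    # Mark the directory so new files inside it are created NOCOW.
    mkdir -p /var/lib/libvirt/images
    chattr +C /var/lib/libvirt/images

    # Create the raw disk image *after* setting the flag.
    qemu-img create -f raw /var/lib/libvirt/images/win10.img 80G
    lsattr /var/lib/libvirt/images/win10.img   # should show the 'C' attribute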
nickik over 5 years ago
Being 'The Dude' of filesystems is literally the opposite of what I want. When looking at ZFS talks and the incredible complexity of some of those operations that Btrfs seems to think are 'no big deal', I will simply not trust that. Especially because it has been proven over and over again that Btrfs claims it's 'stable' and then a new series of issues shows up. Or it's 'stable' but not if you use 'XY feature', or if the disk is 'too full', or whatever.

I remember using it after I had heard it was 'stable', and it ate my data not long after (not using crazy features or anything). I certainly will not use it again. A FS should be stable from the beginning, a stable core that you can then build features around, rather than a system with lots of features that promises to be stable in a couple of years (and then wasn't, years after being in the kernel already).

Using ZFS for me has been nothing but joy in comparison. Growing the ZFS pool has been no issue at all; I never saw a reason why I would want to reconfigure my pool. I went from 4TB to 16TB+ so far in multiple iterations.

Overall, not having ZFS in Linux is a huge failure of the Linux world. I think it's much more NIMBY than a license issue.
curt15 over 5 years ago
BTRFS is well known for being ill-suited to VMs or databases. How come ZFS doesn't have that reputation?
c0ffe over 5 years ago
I have a small Nextcloud instance at home that uses BTRFS (on HDD, with the noatime option) for file storage, and XFS (on SSD) for the database.

I started it just for testing, and it has been running for up to two years with no problems so far.
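The noatime part is just a mount option; a hypothetical /etc/fstab sketch of that split (UUIDs and mount points are placeholders):

    # Btrfs file store on the HDD, without atime updates (less metadata churn).
    UUID=aaaaaaaa-0000-0000-0000-000000000001  /srv/nextcloud  btrfs  noatime  0  0
    # XFS on the SSD for the database.
    UUID=bbbbbbbb-0000-0000-0000-000000000002  /var/lib/mysql  xfs    noatime  0  0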
shmerl over 5 years ago
I'm using Btrfs currently, but I'm waiting for Bcachefs to replace it.
e40 over 5 years ago
I've heard a lot of people say they won't use Btrfs due to reliability. Would have been nice to see that addressed.
cyphar over 5 years ago
This article makes a few mistakes with regards to ZFS. Some are understandable (the author presumably last looked at the state of ZFS 5 years ago), but some were not true even 5 years ago:

> If you want to grow the pool, you basically have two recommended options: *add a new identical vdev*, or replace both devices in the existing vdev with higher capacity devices.

You can add vdevs to a pool which are different types or have different parities. It's not really recommended because it means that you're making it harder to know how many failures your pool can survive, but it's definitely something you can do -- and it's just as easy as adding any other vdev to your pool:

    % zpool add <pool> <vdev> <devices...>

This has always been possible with ZFS, as far as I'm aware.

> So let's say you had no writes for a month and continual reads. Those two new disks would go 100% unused. Only when you started writing data would they start to see utilization

This part is accurate...

> and only for the newly written files.

... but this part is not. Modifying an existing file will almost certainly result in data being copied to the newer vdev -- because ZFS will send more writes to drives that are less utilised (and if most of the data is on the older vdevs, then most reads are to the older vdevs, and thus the newer vdevs get more writes).

> It's likely that for the life of that pool, you'd always have a heavier load on your oldest vdevs. Not the end of the world, but it definitely kills some performance advantages of striping data.

This is also half-true -- it's definitely not ideal that ZFS doesn't have a defrag feature, but the above-mentioned characteristic means that eventually your pool will not be so unbalanced.

> Want to break a pool into smaller pools? Can't do it. So let's say you built your 2x8 + 2x8 pool. Then a few years from now 40 TB disks are available and you want to go back to a simple two disk mirror. There's no way to shrink to just 2x40.

This is now possible. ZoL 0.8 and later support top-level mirror vdev removal.

> Got a 4-disk raidz2 pool and want to add a disk? Can't do it.

It is true that this is not possible at the moment, but in the interest of fairness I'd like to mention that it is currently being worked on[1].

> For most fundamental changes, the answer is simple: start over. To be fair, that's not always a terrible idea, but it does require some maintenance down time.

This is true, but I believe that the author makes it sound much harder than it actually is (it does have some maintenance downtime, but because you can snapshot the filesystem the downtime can be as little as a minute):

    # Assuming you've already created the new pool $new_pool.
    % zfs snapshot -r $old_pool/ROOT@base_snapshot
    % zfs send $old_pool/ROOT@base_snapshot | zfs recv $new_pool/ROOT
    # The base copy is done -- no downtime. Now we take some downtime by
    # stopping all use of the pool.
    % take_offline $old_pool           # or do whatever it takes for your particular system
    % zfs mount -o ro $old_pool/ROOT   # optional
    % zfs snapshot -r $old_pool/ROOT@last_snapshot
    % zfs send -i @base_snapshot $old_pool/ROOT@last_snapshot | zfs recv $new_pool/ROOT
    # Finally, get rid of the old pool and add our new pool.
    % zpool export $old_pool
    % zpool import $new_pool $old_pool
    % zfs mount -a                     # probably optional

[1]: https://www.youtube.com/watch?v=Njt82e_3qVo
lazylizard over 5 years ago
¯\_(ツ)_/¯

Raidz2+spares, compression, snapshots and send/receive are very useful. And ZIL and cache are easier than lvmcache..
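Spelled out, that kind of layout is a single zpool create plus a property or two; a sketch with placeholder device and host names:

    # 6-disk raidz2 with two hot spares, an NVMe SLOG and an SSD read cache.
    zpool create tank raidz2 sda sdb sdc sdd sde sdf \
        spare sdg sdh \
        log nvme0n1 \
        cache sdi
    zfs set compression=lz4 tank

    # Snapshots and replication to another box.
    zfs snapshot -r tank@nightly
    zfs send -R tank@nightly | ssh backuphost zfs recv -u backup/tank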
zozbot234 over 5 years ago
I'm so sorry teacher, Btrfs ate my homework.
gitgudnubs over 5 years ago
Storage Spaces is probably the best software RAID available today. Unfortunately, it comes with Windows.

It supports heterogeneous drives, safe rebalancing (create a third copy, THEN delete the old copy), fault domains (3-way mirror, but no 2 copies can be on the same disk/enclosure/server/whatever), erasure coding, hierarchical storage based on disk type (e.g., use NVMe for the log, SSD for the cache), and clustering (Paxos, probably). Then you toss ReFS on top, and you're done.

The only compelling reasons to buy Windows Server are to run third-party software or a Storage Spaces/ReFS file share.