Was I supposed to be maintaining my btrfs partition all this time? I just formatted my disk as btrfs when I bought my laptop and haven't thought about it since.
I have happily used BTRFS in RAID 1 on my Plex server for a good while now.
I originally had a shucked 8TB WD drive and started dreading having to rebuild my library if it failed. So I got an 8TB Seagate disk and created a RAID 1 setup with one drive missing. I then copied the data over and added the old disk to the new array. It took a good while to balance itself, but it's been problem-free since. I might use ZFS if I switch to Ubuntu for my home server, but I'm on Fedora now and want to stick to what is natively supported.
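Roughly the sequence, with placeholder device names (this shows the common add-then-convert route rather than a truly degraded array):

    # Example devices: /dev/sdb = new Seagate, /dev/sda = old WD.
    mkfs.btrfs /dev/sdb                    # start single-device on the new disk
    mount /dev/sdb /mnt/pool
    cp -a /mnt/old/. /mnt/pool/            # copy the library over
    btrfs device add /dev/sda /mnt/pool    # add the old disk to the filesystem
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
    btrfs filesystem usage /mnt/pool       # confirm both data copies exist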
There is no reason to ever use btrfs, IMO.

btrfs is the result of Oracle trying, badly, to clone ZFS. When they bought Sun, they discontinued btrfs development entirely, since they (thought they) owned ZFS. Nobody sponsors btrfs development anymore and it has stagnated, while ZFS, under the OpenZFS project, keeps accelerating and absolutely dominates the enterprise mission-critical filesystem market.

Just use the real deal: use ZFS.

Due to the massive data-loss issues with btrfs, for example, Red Hat removed btrfs support entirely in RHEL 8, after the tech preview in RHEL 7 bombed. They default to, and highly recommend, XFS (as do I; it's a good filesystem).
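For the curious, a mirrored pool is only a few commands; pool and device names below are just examples:

    # Example pool/device names; "mirror" is ZFS's RAID 1 equivalent.
    zpool create tank mirror /dev/sdb /dev/sdc
    zfs set compression=lz4 tank     # cheap and generally worth enabling
    zpool scrub tank                 # run periodically to verify checksums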
I just scrub my ZFS pools once in a while and that's it. OpenIndiana's built-in autosnap handles the rest, and Monit checks pool health and alerts me automatically.
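The Monit piece is tiny. A sketch, with an example script path, relying on zpool status -x printing "all pools are healthy" when everything is fine:

    #!/bin/sh
    # /usr/local/bin/zpool-health.sh (example path): exits non-zero on trouble
    zpool status -x | grep -q 'all pools are healthy'

    # monitrc snippet:
    check program zpool_health with path "/usr/local/bin/zpool-health.sh"
        if status != 0 then alert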
This does not exactly inspire confidence in BTRFS, if I need a script to maintain it.

I only use it for one thing: read-only compressed lower layers. It's great for that, because the layer can be written at setup time with standard tools, as opposed to the extra image-build step that truly read-only compressed filesystems like squashfs require. But I wouldn't use it for anything that gets written in normal use.
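A sketch of that kind of setup, with made-up paths and devices, and an overlayfs mount on top:

    # Build a compressed lower layer with ordinary tools (example paths).
    mkfs.btrfs /dev/vdb
    mount -o compress-force=zstd /dev/vdb /mnt/lower
    cp -a /srv/base/. /mnt/lower/
    mount -o remount,ro /mnt/lower         # from here on, read-only

    # Writable view via overlayfs:
    mount -t overlay overlay \
        -o lowerdir=/mnt/lower,upperdir=/mnt/upper,workdir=/mnt/work /mnt/merged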
Why is nobody mentioning:

    btrfs fi defrag -r /

(Though beware: on a filesystem with snapshots, defrag unshares extents and can balloon disk usage.)
This is a recovery that ZFS cannot make; ZFS has no online defragmentation.

https://www.usenix.org/system/files/login/articles/login_summer17_02_conway.pdf