I've used ZoL since it was created, and zfs-fuse before that. I ran ZFS on my workstation for a few years (a 4x750GB RAID-Z, which is ZFS's RAID-5 equivalent, with root on ext3 over a 2x400GB mdadm RAID 1), then switched to a 2x2TB BTRFS native RAID 1 (BTRFS being Oracle's ZFS competitor, which seems largely abandoned now, although I still see commits for it in the kernel changelog periodically), and now I'm back on ZFS on a dedicated file server: 2x128GB Crucial M550 SSDs plus 2x2TB hard drives, set up with the first 16GB of each SSD as mdadm RAID 1 + XFS for root[2], 256MB of each SSD for the ZIL[1], the rest of each SSD as L2ARC[3], and the 2x2TB drives as a ZFS mirror. (Rough commands for this layout, plus ways to watch the ZIL and L2ARC in action, are sketched at the end of this comment.) I honestly see no reason to use any other FS for a storage pool, and if I could reliably use ZFS as root on Debian, I wouldn't even need the XFS root partition.

All of that said, I get RAID 0-like SSD performance with very high data reliability, without having to shell out for 2TB of SSD. And before someone says "what about bcache/flashcache/etc.": ZFS had SSD caching before those existed, and in my opinion it does it better, because the cache is covered by all of ZFS's strict data-integrity features.

[1]: ZFS treats multiple ZIL devices as a round-robin set: RAID 0-like speed, but without a single device failure taking down the whole set the way it would with an actual RAID 0. You need to write multiple files concurrently to get the full RAID 0-like performance out of it, because ZIL writes for a single file are serialized: no more than one is in flight per file at a time. The ZIL is only used for O_SYNC writes, and ZFS writes to the ZIL and the storage pool concurrently; in other words, the ZIL is not a write-through cache but a true journal.

The failure of a ZIL device is only "fatal" if the machine also dies before ZFS can write the data to the storage pool, and even that failure mode cannot leave the filesystem in an inconsistent state. ZFS does not support raidz for ZIL devices (you can mirror them), and it is not recommended to hijack mdadm to force anything fancier. The separate ZIL exists purely to make O_SYNC writes run at SSD speeds.

[2]: /tank and /home are on ZFS; the rest of the OS takes up about 2GB of that 16GB root partition. I oversized it a tad, I think. If I ever rebuild the system, I'm going with 4GB.

[3]: L2ARC is second-level storage for ZFS's in-memory cache, the ARC. The ARC (Adaptive Replacement Cache) is a far smarter cache than the OS's ordinary page cache: it aggressively keeps frequently and recently used data resident instead of being a blind page cache like the OS's usual one, and it is maintained independently of the OS's cache. L2ARC is sort of like a write-through cache, except it is really a persistent, on-disk extension of the ARC that survives reboots and can be much larger than system memory. L2ARC is implicitly round-robin across devices (like the ZIL above), and it survives the loss of any L2ARC device with zero issues: the device is simply dropped, since no unwritten data lives there. L2ARC also does not suffer from the non-concurrent writing limitation that the ZIL "suffers" from (by design).
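For anyone curious, here's roughly what a layout like this looks like in zpool terms. This is a sketch, not my exact commands: the device names are placeholders, and it assumes the SSD partitions (root, ZIL, L2ARC) already exist.

  # 2x2TB drives as the mirrored storage pool:
  zpool create tank mirror /dev/disk/by-id/ata-HDD_A /dev/disk/by-id/ata-HDD_B

  # 256MB partition on each SSD as separate intent log devices (round-robin, not mirrored):
  zpool add tank log /dev/disk/by-id/ata-SSD_A-part2 /dev/disk/by-id/ata-SSD_B-part2

  # The rest of each SSD as L2ARC cache devices:
  zpool add tank cache /dev/disk/by-id/ata-SSD_A-part3 /dev/disk/by-id/ata-SSD_B-part3

  # A dataset for /home, then check the layout:
  zfs create -o mountpoint=/home tank/home
  zpool status tank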
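If you want to convince yourself that the ZIL only sees synchronous writes, watch the per-vdev I/O while forcing sync on and off. Another sketch; the paths are placeholders, and the dd flags are plain GNU coreutils:

  # Watch per-vdev activity (log and cache devices get their own rows), refreshing every second:
  zpool iostat -v tank 1

  # In another shell: buffered writes barely touch the log devices...
  dd if=/dev/zero of=/tank/test.buffered bs=4k count=10000

  # ...while O_SYNC writes land on the ZIL first:
  dd if=/dev/zero of=/tank/test.sync bs=4k count=10000 oflag=sync

  # Sync behavior can also be forced or relaxed per dataset:
  zfs set sync=always tank     # every write goes through the ZIL
  zfs set sync=standard tank   # default: only O_SYNC/fsync writes do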
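And for the ARC/L2ARC side, ZoL exposes the cache counters under /proc, so you can see hit rates and how much the cache devices are holding. The grep pattern below just picks out a few of the stats I find useful:

  # ARC and L2ARC sizes plus hit/miss counters:
  grep -E '^(hits|misses|size|c_max|l2_hits|l2_misses|l2_size)' /proc/spl/kstat/zfs/arcstats

  # Cache devices appear under their own heading here:
  zpool iostat -v tank 1

  # Removing a cache device is harmless -- only clean, re-readable copies of pool data live on it:
  zpool remove tank /dev/disk/by-id/ata-SSD_A-part3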