This rambles a bit. Here's the summary:

btrfs is currently optimized for normal applications that do open("foo", O_RDWR). With this mode, the integrity semantics POSIX requires are quite loose.

Because VMs emulate physical hardware with strong integrity semantics, they usually do either open("foo", O_DIRECT) or open("foo", O_SYNC).

btrfs sucks for O_SYNC. It's not just VMs; databases also tend to make heavy use of O_SYNC.
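To make the distinction concrete, here is a minimal C sketch (not from the thread) of the three open() modes being discussed; the file name, buffer size, and omitted error handling are placeholders for illustration only.

    /* Sketch of the three open() modes discussed above; error handling omitted. */
    #define _GNU_SOURCE          /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* 1. Ordinary buffered I/O: writes land in the page cache and reach
         *    disk whenever the kernel decides; POSIX promises little here. */
        int fd_buffered = open("disk.img", O_RDWR | O_CREAT, 0644);

        /* 2. O_SYNC: each write() returns only after data and metadata are
         *    on stable storage -- the mode the thread says btrfs handles badly. */
        int fd_sync = open("disk.img", O_RDWR | O_SYNC);

        /* 3. O_DIRECT: bypass the page cache; buffers and offsets must be
         *    block-aligned. */
        int fd_direct = open("disk.img", O_RDWR | O_DIRECT);
        void *buf;
        posix_memalign(&buf, 4096, 4096);   /* alignment required by O_DIRECT */
        memset(buf, 0, 4096);
        pwrite(fd_direct, buf, 4096, 0);

        free(buf);
        close(fd_buffered);
        close(fd_sync);
        close(fd_direct);
        return 0;
    }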
So BTRFS is very efficient for big sequential reads (which you generally don't care about much, because they're pretty fast in any case) and dies when subjected to small random reads (which are the bane of platters in the first place)... isn't that dumb for a general-purpose FS?
Huh, that's funny. I've been running a VM out of a btrfs partition for months and haven't seen these problems. It's not blindingly fast, but (a) the partition is encrypted, and (b) the VM is running Windows with antivirus software, so there are a couple of things other than btrfs slowing down the write path. But I certainly haven't seen freezes such as those described in this post.
I'm glad there are about a dozen different file systems that don't suck for VM work, and quite relieved that BtrFS developers are actively working on improving the case that hurts VM performance.

Having said that, I'd love to know if there are automated tests within the kernel that could verify the integrity/correctness/performance of things like filesystem drivers in a simple, automated way. Something like that could prevent surprising performance regressions like this one and provide a better mapping between what you want to do and how you should do it.
I think the problem also depends on what kind of virtual disk you end up using. Let me elaborate:

I don't think the problem is purely buffered vs. unbuffered IO. The guest operating system will have performed some block coalescing anyway, so the block requests will often NOT be 4K chunks but should have slightly larger granularity. However, if you use a COW-based virtual disk layout like QCOW2, which I guess is standard in KVM, you may see additional scattered IO.

I think it is weird to be using COW virtual disk layouts on a file system that natively supports COW, as BTRFS does. I would be curious to see how the performance of raw sparse files on BTRFS compares to qcow2 etc., as in the sketch below.
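For anyone who wants to try the raw-sparse-file route, here is a minimal C sketch, not from the thread, that creates a sparse raw image and marks it NOCOW via the FS_IOC_SETFLAGS ioctl (what chattr +C does); the file name and size are made up, and on btrfs the flag only takes effect if it is set while the file is still empty.

    /* Minimal sketch: create a raw sparse disk image and mark it NOCOW so the
     * filesystem's copy-on-write doesn't stack on top of the guest's.
     * File name and size are placeholders; error checking is abbreviated. */
    #define _FILE_OFFSET_BITS 64
    #include <fcntl.h>
    #include <linux/fs.h>     /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_NOCOW_FL */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("vm-disk.raw", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* NOCOW must be set while the file is still empty to have any effect. */
        int flags = 0;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0) {
            flags |= FS_NOCOW_FL;
            ioctl(fd, FS_IOC_SETFLAGS, &flags);
        }

        /* Allocate a 20 GiB sparse image: no blocks are written until used. */
        if (ftruncate(fd, 20LL * 1024 * 1024 * 1024) != 0)
            perror("ftruncate");

        close(fd);
        return 0;
    }

The same effect is usually achieved by running chattr +C on the directory that will hold the images before creating them, so new files inherit the attribute.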
It sounds as though the same issues that make it perform suboptimally on VM hypervisors would also make it perform suboptimally for OLTP databases -- in both cases, the I/O patterns generally involve high numbers of small writes.
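As a rough illustration (not taken from the thread) of the kind of pattern meant here, this C sketch issues many small writes at scattered offsets and forces each one to stable storage, roughly what a database commit path or a guest flush does; the file name, block size, and counts are arbitrary.

    /* Rough illustration: many small writes at scattered offsets, each pushed
     * to stable storage, the way a database commit or a guest flush would be.
     * File name, block size, and counts are arbitrary. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("oltp-test.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;

        char block[4096];
        memset(block, 'x', sizeof block);

        /* 10000 x 4 KiB writes at random offsets within a 1 GiB range,
         * each followed by fdatasync(). */
        for (int i = 0; i < 10000; i++) {
            off_t offset = ((off_t)rand() % (1 << 18)) * 4096;
            pwrite(fd, block, sizeof block, offset);
            fdatasync(fd);
        }

        close(fd);
        return 0;
    }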
This problem is similar to (and exacerbated by) the IO bottlenecks VMs experience when using traditional hard disk drives, due to high levels of random IO operations. For this reason, many newer virtualized setups use solid state drives, which have no mechanical seek time. That keeps the high level of random IO operations from significantly impacting performance.
The quoted text is painful to read. I don't know why so many mailing list pages have to look like this. At the very least, could the line breaks be taken out?
Observing the dynamics of the list, I have to ask: who is JB, and why is he/she so worried about VM performance under BtrFS?

Fedora is not a Linux you recommend to someone who doesn't know what they're doing, and if you know VM performance sucks with BtrFS, then please, by all means, add another partition and use ext4 (or 3, or 2, or XFS, or anything you think may offer better performance).