> It already supports a number of obscure options (you can make QEMU claim to support a CPU feature regardless of whether the host CPU supports it, really?), so adding one more would fit in just fine.<p>> Nope. “there are no plans to address it further or fix it in an upcoming release”.<p><<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1408810" rel="nofollow noreferrer">https://bugzilla.redhat.com/show_bug.cgi?id=1408810</a>><p>I could see that being the response of an individual open-source developer working for free. But that was IBM saying that, and people pay big bucks to IBM to fix things like this.
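Tangentially, for anyone wondering about the CPU-feature override mentioned in passing: libvirt exposes it through the CPU model configuration, where policy='force' tells the guest a feature is present regardless of host support. A minimal sketch of a domain XML fragment (the CPU model and feature name are only illustrative):

```xml
<!-- The virtual CPU advertises the forced feature to the guest even if the
     host CPU lacks it; 'avx512f' is just an example feature name. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Skylake-Client</model>
  <feature policy='force' name='avx512f'/>
</cpu>
```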
> So if you wish to have more than 14 PCIe slots in your VM, you’ll have to use QEMU directly.<p>No need: libvirt can pass arbitrary options straight through to QEMU.<p><a href="https://libvirt.org/kbase/qemu-passthrough-security.html" rel="nofollow noreferrer">https://libvirt.org/kbase/qemu-passthrough-security.html</a>
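Concretely, the mechanism the linked page describes is to declare the QEMU XML namespace on the domain element and then append raw arguments. A rough sketch, where the extra pcie-root-port is just a placeholder for whatever device arguments you actually need:

```xml
<!-- Declaring the qemu namespace is what makes libvirt accept <qemu:commandline>. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... the rest of the usual domain definition ... -->

  <!-- Each <qemu:arg> is appended verbatim to the QEMU command line. -->
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='pcie-root-port,id=extra-port,bus=pcie.0,chassis=20,addr=0x8'/>
  </qemu:commandline>
</domain>
```

Keep in mind libvirt can't validate or manage devices added this way (the domain gets marked tainted), which is partly what the linked security page is about.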
I'm curious to know more about the VM host machine that they plugged 15 e1000 cards into to test this limitation. And even more curious about the non-test environment in which somebody actually ran into it.<p>I can only imagine trying to pass through 20 NVMe devices to a guest, but that seems like a very weird configuration.
If I'm not wrong, the pre-allocation of I/O ranges in PCIe bridges is needed only if you intend to hot-plug devices that were not present at the first enumeration... but in a VM the hardware is known from the start, so enumeration could assign I/O ranges only to the devices that actually need them. Is there a reason why hot-plugging is needed in VMs?
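In case it helps anyone experimenting with this: assuming a reasonably recent libvirt, hot-plug can be switched off per root port, which at least removes the hot-plug requirement that the window pre-allocation is meant to serve. A sketch of the controller definition:

```xml
<!-- pcie-root-port with hot-plug disabled; nothing can be plugged into it
     after boot, so only the cold-plugged device behind it matters. -->
<controller type='pci' index='1' model='pcie-root-port'>
  <target chassis='1' port='0x10' hotplug='off'/>
</controller>
```

If I remember right, QEMU's pcie-root-port also exposes io-reserve/mem-reserve hints for the firmware, so the reservation itself can be tuned when driving QEMU directly.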
I ran into this on FreeNAS, which uses bhyve. Not sure if it's FreeNAS' way of doing things, but adding a virtual disk using VirtIO creates a separate SATA controller.<p>I tried passing through four NVMe drives and couldn't get it working until I discovered I was hitting this limitation between the existing disks and the VirtIO network card.