That's very impressive!

You may want to do the same for NVMe: creating several namespaces is not supported on most consumer drives, and laptops rarely have room for more than one NVMe drive (same problem as with the GPUs: passthrough requires having two of them).

Being able to split the NVMe drive not by partition but by namespace would let each OS see a "full drive".
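For anyone who wants to check whether their drive can do this: on Linux, nvme-cli exposes namespace management. A rough sketch, assuming the controller actually advertises namespace-management support (the device path and sizes below are placeholders, not tested against any particular drive):

    # does the controller support namespace management (OACS), and how many namespaces does it allow (nn)?
    nvme id-ctrl /dev/nvme0 | grep -E '^(oacs|nn)'
    # create a second namespace and attach it to controller 0 (sizes are in blocks and purely illustrative)
    nvme create-ns /dev/nvme0 --nsze=244190646 --ncap=244190646 --flbas=0
    nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0
    nvme ns-rescan /dev/nvme0

On most consumer drives the create-ns step simply errors out, which is exactly the limitation above.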
This is insanely impressive. Having tried to set up GPU passthrough in Proxmox a few years ago, it was an absolute disaster. I would love to see this kind of approach more widely supported by other hypervisors!

It's a real shame consumer GPUs are arbitrarily locked down when the enterprise counterparts (often with the exact same chip) have much better support for virtualization.
Really hoping AMD eventually does the right thing here. Not that it particularly matters, seeing as how decent AMD video cards have been unpurchaseable for 18 months now.

Consumers should have the ability to use their hardware well too. Selling the same thing at twice the price, differentiated only by virtualization capabilities, is not a moral path.

> *We remain hopeful that AMD will recognize forthcoming changes in GPU virtualization with the creation of open standards such as Auxiliary Domains (AUX Domains), Mdev (VFIO-Mdev developed by Nvidia, RedHat, and Intel), and Alternative Routing-ID Interpretation (ARI) especially in light of Intel's market entrance with their ARC line of GPUs supporting Intel Graphics Virtualization Technology (GVT-g).*

Really cool to hear there are a bunch of vGPU-related efforts underway! That's so great.
I'm familiar with Linux virtualization, GPU passthrough, etc. I've never heard of arcd, and they've made no attempt in this doc or on their git to explain what it is or why it exists as, I assume, a replacement (or wrapper?) for qemu.

My past experience with looking-glass is that it falls on its face at anything > 1440p@60Hz. I'm interested in vGPU for my Linux VMs (spice is slow and the sdl/gtk display is flaky), but for gaming I don't want looking-glass and prefer to just do the passthrough thing with a KVM switch.
Great! This still requires vGPU support, though, and the last time I tried, the merged-driver approach wouldn't support CUDA on the host (I was probably the first one to try the merged-driver thing with vgpu_unlock?).

Looking forward to someone writing a Vulkan driver on Windows that just shuttles calls down to the Linux host. virgl used to be a promising project ...
This seems awesome! I have a passthrough setup with a very old card and a much newer one for games in a Windows VM. It'll be nice to look into getting this set up so I can reduce the power draw on my system, which was causing some problems...

Is this something that could work with, or be integrated into, libvirt for easy configuration? It'd be neat to set it up with my current install, although it's not at all a real problem.
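For what it's worth, libvirt can already attach a mediated device (mdev) to a guest as a hostdev, so in principle a vGPU created this way could be wired into an existing domain. A rough sketch of the XML, assuming the mdev has already been created and you know its UUID (the UUID here is a placeholder):

    <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='off'>
      <source>
        <address uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'/>
      </source>
    </hostdev>

Dropping that into the domain's <devices> section, or attaching the file with virsh attach-device <domain> <file> --config, should be all libvirt needs; whether the tooling from the article is happy to share the devices with libvirt is another question.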
I tried doing this years ago, and never quite got it to work.

Some of the software involved in that article simply didn't exist yet, and GPUs weren't shipping with SR-IOV support yet (instead, I used the Intel iGPU for the Linux fbcon and fed a real AMD GPU directly to the Windows VM with PCIe passthrough). In the end, I bailed on that dream, moved the Linux install to its own smaller machine, and ran Windows bare on the big machine.

The problem was that if the GPU locked up hard (and GPUs back then would not respond to a PCI device reset), and it wasn't something that merely re-initializing it on VM restart would fix... I had to restart the *entire* machine, thus defeating the purpose of having Windows in the VM in the first place!

All my long-lived processes now run on the stand-alone Linux machine, and anything that is free to explode runs on my Windows machine. Windows gets wonky? Restart, ssh back into my screen sessions, reopen the browser, restart a bunch of cloud-slaved apps, tada.
Very cool approach. It's going to be a fight, I suspect; vendors lock things up specifically so they can have price differentials in different markets. It may end up being like the fight between workstations and PCs back in the '90s.
Very, very cool!

I've set up a dual-GPU system in the past using two Nvidia GPUs, and whilst I found the trek towards PCI passthrough to other virtual machines rewarding when it finally worked, I also found the arrangement to be inconvenient.

What you've achieved here seems the ideal. Well done :)

I will either patiently wait for an Arch Linux version of the install, or I'll eventually get impatient and see if I can rustle up something - an install script is an install script, so it should be just a matter (famous last words) of altering the install script/procedures to suit.
Isn't this natively supported by Nvidia?

i.e. you have a vGPU card (or a consumer card you can map to the equivalent vGPU card), and Nvidia's drivers and tools let you partition it into X GB slices (all partitions being the same X), and then you just GPU-passthrough the newly created device that maps to a single partition into the VM?

The whole trick being the ability to fool Nvidia's drivers into thinking that the consumer GPU is really the server model, but otherwise it becomes just normal Nvidia usage?
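For reference, on a card the vGPU host driver actually supports, that partitioning is exposed through the VFIO-mdev sysfs interface, roughly like this (the PCI address and profile name are placeholders; consumer cards only get here after the unlock tricks):

    # list the vGPU profiles the card advertises and inspect one of them
    ls /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/
    cat /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/nvidia-63/name
    # create one instance of that profile; the UUID becomes the device handed to the VM
    uuidgen | sudo tee /sys/bus/pci/devices/0000:01:00.0/mdev_supported_types/nvidia-63/create

The guest then gets something like -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<uuid> instead of a whole passed-through PCI function.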
This article almost got my hopes up too much! I'm curious whether Intel's "Arc" GPUs will support the same, or if they'll go the same way as Nvidia in locking down virtual function support.
Just to clarify, because I've failed at doing pretty much the setup you're describing with my 1080 Ti: does this still require the vgpu_unlock changes for Nvidia cards, or is this something that bypasses the need for them entirely?
What's the story with SR-IOV for consumer AMD GPUs on Linux? Last time I looked into it, it was impossible to use, or rather, AMD didn't want to support it.
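For context, the way I'd check on any given card is the standard SR-IOV sysfs interface, which only does anything if the vendor driver actually wires SR-IOV up (the PCI address below is a placeholder):

    # how many virtual functions does the physical function advertise?
    cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs
    # if that's non-zero, VFs can be created like this
    echo 2 | sudo tee /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

On the consumer Radeon cards I've seen discussed, sriov_totalvfs is missing or reads 0, which is the "impossible to use" part in practice.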