This summary page is absolutely terrible.

It appears to be an open-source implementation of nvidia-cli for managing mdevs, which is certainly nice, but it's not clear to end users what that means.

Pre-Ampere GPUs (Ampere and later use MIG) could use the mediated device (mdev) subsystem to partition cards into time slices. It's similar to SR-IOV, except that you can specify the size of the partition with more granularity than "give me a new virtual device".

Intel has GVT-g, which is reasonably widely supported on Gen11 and earlier (edit: I originally wrote Gen12 and above -- thanks to my123 for the correction). NVIDIA had vGPU/mdev, and newer generations (Ampere and later) use MIG. It's unclear whether this project supports MIG at all. AMD uses MxGPU, and they've never really cared about or pursued anything in this space, probably because their datacenter penetration is about 1%.

MxGPU is only supported on some FirePro cards. mdev was largely on GRID cards (mostly Teslas, some Quadros). MIG is on the A-series datacenter cards (e.g. A100, A30).

It's unclear why anyone should use this over mdevctl, which already supports GVT-g, and it's also unclear whether this is tied to the (very much "don't use in production") open-source NVIDIA drivers.

For end users, GVT-g, a cheap older GRID card, or Looking Glass with full-GPU passthrough are all more reasonable options.

This effort is great, but the readme is appallingly short on information, even for someone who knows the problem domain.
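For anyone unfamiliar with what "managing mdevs" actually involves: below is a minimal sketch of the kernel's mediated-device sysfs ABI, which is what tools like mdevctl (and presumably this project) wrap. It lists the vGPU types a parent device advertises and creates one by writing a UUID to the type's `create` node. The PCI address is a placeholder, and this assumes a driver that exposes mdev types (GVT-g on an Intel iGPU, or NVIDIA's vGPU driver on a GRID card) is already loaded.

  #!/usr/bin/env python3
  """Sketch of the kernel mdev sysfs interface (needs root to create)."""
  import os
  import uuid

  MDEV_BUS = "/sys/class/mdev_bus"

  def list_mdev_types(parent: str) -> dict[str, str]:
      """Return {type_id: human-readable name} for one parent device."""
      types_dir = os.path.join(MDEV_BUS, parent, "mdev_supported_types")
      result = {}
      for type_id in os.listdir(types_dir):
          name_path = os.path.join(types_dir, type_id, "name")
          try:
              with open(name_path) as f:
                  result[type_id] = f.read().strip()
          except FileNotFoundError:
              result[type_id] = type_id  # some drivers omit the "name" node
      return result

  def create_mdev(parent: str, type_id: str) -> str:
      """Create a mediated device by writing a fresh UUID to 'create'."""
      dev_uuid = str(uuid.uuid4())
      create_path = os.path.join(
          MDEV_BUS, parent, "mdev_supported_types", type_id, "create")
      with open(create_path, "w") as f:
          f.write(dev_uuid)
      # The new device then appears under /sys/bus/mdev/devices/<uuid>
      return dev_uuid

  if __name__ == "__main__":
      parent = "0000:00:02.0"  # placeholder: PCI address of the GPU
      for tid, name in list_mdev_types(parent).items():
          print(tid, "->", name)

The created device can then be handed to a VM via VFIO. None of this is NVIDIA-specific, which is exactly why it's unclear what this project offers over mdevctl.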
No AMD, nothing for Intel, and it doesn't cover 90% of the NVIDIA products most people will have (unless they own and run a datacenter)... which leads me to ask...

why even bother?
This seems to be at the wrong level of abstraction.

I want a virtual Vulkan device, one per VM. I think I read that somebody was working on that, or something like it. That way it works on any GPU, not just NVIDIA or NVIDIA/Intel.