The real reason for these projects (Virtlet, KubeVirt, RancherVM) is that OpenStack is too damn hard. Even with all this focus on containers these days, people still need an on-prem VM solution. But the requirements are really pretty simple for most use cases, and the cost and complexity of OpenStack are not justified.<p>Edit: disclaimer: my company does RancherVM.
Interesting to see how this <i>really</i> compares to KubeVirt, which seems to be doing the same thing. As far as I understand, KubeVirt isn't "just" for pets either. [disclaimer: I've been very peripherally involved with KubeVirt because they asked me about integrating virt-v2v support].
I'm using KubeVirt for this. My use case is to allow preconfigured Windows VMs to boot up inside Kubernetes, so that both Windows VMs and Linux containers are managed through the same API. It works very well!
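A minimal sketch of what that can look like (not the commenter's actual setup): a KubeVirt VirtualMachine is just another Kubernetes custom resource, so a preconfigured Windows image, here assumed to live on a PVC named "win10-disk", can be declared and started through the same API used for containers, e.g. via the official Python client. The names and sizes below are illustrative assumptions.

```python
# Sketch: create a KubeVirt VirtualMachine through the ordinary Kubernetes API.
# Assumes KubeVirt is installed and a PVC "win10-disk" holds the Windows image.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "win10-vm"},
    "spec": {
        "running": True,  # start the VM as soon as the object is created
        "template": {
            "spec": {
                "domain": {
                    "cpu": {"cores": 2},
                    "resources": {"requests": {"memory": "4Gi"}},
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                },
                # The preinstalled Windows image lives on an existing PVC.
                "volumes": [
                    {"name": "rootdisk",
                     "persistentVolumeClaim": {"claimName": "win10-disk"}}
                ],
            }
        },
    },
}

# VirtualMachine is a custom resource, so it goes through CustomObjectsApi
# exactly like any other CRD-backed object: same auth, same RBAC, same tooling.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm_manifest,
)
```

Once created, the VM is listed, scheduled, and deleted like any other Kubernetes workload (e.g. "kubectl get vms"), which is the point of managing Windows VMs and Linux containers through one API.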
What's the key difference between Virtlet/KubeVirt and just running VMs alongside your Kubernetes pods? Is the main goal to centralize management? That seems reasonable; I'm just wondering.<p>Also, how does a Kubernetes-managed VM compare to a plain VM from, say, AWS EC2? I imagine it's a little less efficient, since the setup I'm imagining involves a host VM running a Kubernetes pod, which in turn runs the managed VM, but I may have this all wrong.
Reminds me of the "vmlets" term from the Virtual Virtual Machines project: <a href="https://pages.lip6.fr/vvm/publications/0008SBAC.pdf" rel="nofollow">https://pages.lip6.fr/vvm/publications/0008SBAC.pdf</a>