I’ve been asked to write a blog post about “The PITA of managing containers and VMs.”

It’s meant to be a rant listicle (with explanations as appropriate). What should I be sure to include?
Back in the day I had 2 Mbps ADSL and couldn’t install anything with Docker because it didn’t properly support caching or resuming failed downloads.

Allegedly you can run the Danbooru image-server software by just typing “docker compose”, but I tried it and got a huge number of errors and no clear explanation that I was running the wrong version of Docker Compose. I guess I could try installing an old version of Compose, or I could figure out how to translate the files to the new version. Either one seems like an unpleasant adventure that makes me think “maybe I can turn my RSS reader into an image sorter instead.”

It also bothers me that people have so little control over their tools. Back in 2005 I ran servers hosting 300+ different web sites and could deploy a new instance in five minutes with scripts, because I was disciplined with configuration files.

Allegedly containers help you stay in control of things, but I worked at a place where the data scientists were always finding defective Pythons to build into images, like the Python that had Hungarian as its default charset.
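The boring fix for the defective-Python problem is the same discipline as the config files: pin everything in the Dockerfile. A minimal sketch, assuming the official python image (the tag is just an example; pin an exact digest in practice):

    # Pin the interpreter and the locale so nobody inherits a surprise
    # default charset. python:3.12-slim is an example tag; use a digest
    # for real reproducibility.
    FROM python:3.12-slim
    ENV LANG=C.UTF-8 \
        LC_ALL=C.UTF-8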
Consistency. Small discrepancies between environments will bite you.

Abstraction. Not assuming so much that you code yourself into a corner, but also not abstracting away so much that the code is difficult to work with.

Industry-standard tools with strongly opinionated built-in paradigms (I would include Terraform, Ansible, etc.). I feel like the tools of the future ought to be general-purpose programming-language frameworks, just to avoid this: too many times I either can’t do something, or have to hack together something utterly convoluted, when it could easily have been done if the tool were a framework and I could have just thrown in some Go or Python.
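Pulumi is the closest existing thing to that “framework” idea. A minimal sketch using its Python SDK (the provider and resource names are arbitrary examples; this runs under `pulumi up`, not as a plain script), just to show ordinary control flow where a DSL would need hacks:

    import pulumi
    from pulumi_aws import s3  # pip install pulumi pulumi-aws

    # Plain Python loops and conditionals instead of tool-specific DSL tricks.
    for env in ("staging", "prod"):
        bucket = s3.Bucket(
            f"assets-{env}",
            tags={"environment": env},
        )
        pulumi.export(f"assets_bucket_{env}", bucket.id)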
One of the goals of containers is to unify the development and deployment environments. I hate developing and testing code in containers, so I develop and test outside them, then package and test again in a container.

Containerized apps need a lot of special boilerplate just to work out how much CPU and memory they’re actually allowed to use (first sketch below). Resource limits are much easier to reason about with virtual machines, because all of the machine’s resources are dedicated to the application.

Orchestration of multiple containers for dev environments is just short of feature complete. With Compose, it’s hard to bring down specific services and their dependencies so you can rebuild and rerun them. I end up writing Ansible playbooks to start and stop components that have to run in a particular sequence, and even then, Ansible makes it awkward to start a container detached, wait a specified time, and check that it’s still running (second sketch below). If Compose grew first-class support for shutting down and restarting individual containers, I could drop Ansible entirely.

Services like Kafka that query the host name and broadcast it are difficult to containerize, since the host name inside the container doesn’t match the external one. That requires manual overrides, which are hard to specify at run time because the orchestrators don’t make it easy to pass the external host name into the container (third sketch below). (This is more of a Kafka issue, though.)
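First sketch, the resource-limit boilerplate: reading the container’s actual limits from the cgroup filesystem. This assumes cgroup v2 mounted at /sys/fs/cgroup; v1 systems expose different paths.

    from pathlib import Path

    # cgroup v2 exposes limits as plain files; "max" means unlimited.
    def memory_limit_bytes() -> int | None:
        raw = Path("/sys/fs/cgroup/memory.max").read_text().strip()
        return None if raw == "max" else int(raw)

    def cpu_limit() -> float | None:
        # cpu.max is "<quota> <period>" in microseconds,
        # e.g. "200000 100000" means 2 CPUs.
        quota, period = Path("/sys/fs/cgroup/cpu.max").read_text().split()
        return None if quota == "max" else int(quota) / int(period)

    print(memory_limit_bytes(), cpu_limit())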
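Second sketch, the start-wait-verify dance in Ansible, using the community.docker modules (names, image, and timing are examples; the point is how many tasks the dance takes):

    - name: Start the broker container detached
      community.docker.docker_container:
        name: broker
        image: example/broker:1.0
        state: started
        detach: true

    - name: Give it time to come up
      ansible.builtin.pause:
        seconds: 15

    - name: Inspect the container
      community.docker.docker_container_info:
        name: broker
      register: broker_info

    - name: Fail if it died
      ansible.builtin.assert:
        that: broker_info.container.State.Running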
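Third sketch, the Kafka manual override as a Compose fragment (not a complete Kafka service definition). The confluentinc images map env vars like this onto broker settings, here advertised.listeners; EXTERNAL_HOSTNAME is a variable whoever runs `docker compose up` has to export, which is exactly the awkward part:

    services:
      kafka:
        image: confluentinc/cp-kafka
        environment:
          # Advertise the host's name, not the container's.
          KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://${EXTERNAL_HOSTNAME}:9092"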
I still can’t run a rootless Kubernetes installation, even though I can run rootless containers as an unprivileged user with podman.

Kubernetes assumes it can take the whole node for itself (there’s a kubelet config fragment below that claws some of it back).

Storage in Kubernetes is messy.

Networking in Kubernetes is largely developed on the assumption that you only have a single NIC.

On-prem Kubernetes feels like a second-class citizen. Too many Helm charts assume you’re in either AWS or GCP.
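On the whole-node point: a minimal KubeletConfiguration sketch that reserves capacity for the OS and for the Kubernetes daemons themselves, so the scheduler sees allocatable = capacity minus reserved. The field names are real kubelet config; the values are placeholders to size for your own nodes.

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Reserve capacity for system daemons (sshd, journald, ...)
    systemReserved:
      cpu: "1"
      memory: 2Gi
    # ...and for Kubernetes components (kubelet, container runtime)
    kubeReserved:
      cpu: 500m
      memory: 1Gi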
For containers, we have Kubernetes, which, OK, can be a pain in its own right, but at least we’re almost all in it together. For VMs, we have lots of choices. But how do you manage them both with one pane of glass or one set of APIs? Aye, there’s the rub.