I would argue that (1) Kubernetes isn't that complicated, and (2) you're paying a one-time cost in complication that, when managed correctly, gives you an operationally much simpler substrate to run apps.<p>To explain, consider the situation with bare VMs, managed with something like Puppet/Ansible/Salt/Chef, with SSH access, iptables, Nginx, etc. -- a classic stack where you address individual nodes, which you may add/remove somewhat dynamically, but where node identity does matter a bit because you have to think about it. You need monitoring, you need logging, you need some deployment system to clone apps onto the nodes and restart them, and so on. Whatever you choose, it's going to be something of a mish-mash of solutions. Most of your config goes into the configuration management engine (Puppet or whatever), which has a data model that maps a static configuration to a dynamic environment -- a model that, after using it for 10+ years, I'd argue is rather awkward. You have to jump through all sorts of ugly hoops to make a Unix system truly declarative and reactive. It wasn't made for it. Unix isn't stateless. For example, many adventures in package management have shown that deploying an app -- whether it uses RubyGems, NPM, PIP, Go packages or whatever -- in a consistent, encapsulated form with all its dependencies is nigh impossible without building it once and then distributing that "image" to the servers. You <i>don't</i> want to run NPM on <i>n</i> boxes on each deploy. Not only is it inefficient, there's also no guarantee that it produces the same build every time on every node, or even that it will work (since NPM, in this example, uses the network and can fail). Just this problem alone demands something like Docker. Then there's the next step of how you run the damn app and make sure that it keeps running on node failure.<p>Kubernetes <i>is</i> a dynamic environment. You tell it what to run, and it figures out how.
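To make "you tell it what to run" concrete, here's a minimal sketch of a Deployment manifest -- the app name, image, and port are made up for illustration. You declare the desired state (3 replicas of a once-built image), and Kubernetes figures out placement, restarts, and rescheduling:

```yaml
# Hypothetical app. Kubernetes keeps 3 replicas of this image running
# somewhere in the cluster, rescheduling them if a node goes away.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # Built once in CI, pulled identically on every node --
          # the "build once, distribute the image" point above.
          image: registry.example.com/my-app:1.2.3
          ports:
            - containerPort: 8080
```

Note there's nothing here about <i>which</i> nodes run the app -- that's exactly the node-identity bookkeeping the Puppet-style stack forces on you.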
It's a beautiful sight to behold when you accidentally take a node down and see Kubernetes automatically spread the affected apps over the remaining set of nodes. It's also beautiful to see the pod autoscaler automatically start/stop instances of your app as its load goes up and down. It also feels amazing to bring up a parallel version of an app, built from a different branch, that only receives test traffic because you're not ready to deploy it to production quite yet. It's super nice to create a dedicated nodepool, then start 100 processing jobs that will queue up and execute as the nodepool has enough resources to run the next one. Kubernetes turns your cluster into LEGO blocks that can constantly shift around with little oversight. I'm never going back to a basic VM, not even if I'm running a single node.<p>Now, if your choice is not between Kubernetes and "classical VMs" but between Kubernetes and some other Docker-based solution, then... I would still choose Kubernetes. There are so many advantages, not least the ease with which you can transfer an entire orchestration environment to your developers' laptops -- Kubernetes runs fine locally, and all you need to replicate the same stack is a bit of templating. (We use Helm here.) The competition just isn't as good.
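The autoscaling behavior is also just declared, not scripted. As a sketch (assuming the autoscaling/v2 API and a hypothetical Deployment named my-app), a HorizontalPodAutoscaler that starts/stops instances with load looks like:

```yaml
# Hypothetical autoscaler: scale my-app between 2 and 10 replicas,
# targeting ~70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Again, no node addresses, no cron jobs polling load averages -- you state the target and the control loop does the shifting.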