I work in on-prem infrastructure software. The reasoning our customers give us is that they care about security. We have to deploy to customers who don't air gap their systems, but have fun access control policies. For example, we regularly support customers over a web session when they have issues. We can't control their computers, but we can ask them to type in (or copy and paste) anything we'd like. Oftentimes the commands we ask them to run look like: [K || K = {kv, [navstar, _]} <- ets:tab2list(kv)]. Do you know what that does? I can almost assure you that our customers do not. We ship compiled binaries to the customer, and although we would never ever send them code that could hurt them, we link against a dozen libraries (that we ship), and who knows who could have poisoned those?

We also have a requirement that time be synced for our software to work. We recently started actually checking whether customers' clocks were synced (using the requisite syscalls; a rough sketch of such a check is at the end of this comment), and for a while it was our #1 support issue. Customer environments are far too uncontrolled to simply be tamed by a system like Kubernetes.

I think that if you can avoid on-prem software and use XaaS, you should. You probably don't need to run your own datacenters, databases, etc., because it's unlikely that you're better at it than GOOG, AMZN, Rackspace, DigitalOcean, and others. It's very unlikely that your application runs at sufficient economies of scale to benefit from the kind of work Amazon and others have done to run datacenters efficiently. Not only that, Google has figured out how to run millions of servers.

Although GOOG / EC2 tend to make hardware available (NVMe, SSDs, etc.) well after the market releases it, compared to enterprise hardware cycles it's lightning fast. We still have customers who run our system on 15K SAS disks and prefer that over SSDs, even though we recommend SSDs. On the other hand, if you control the environment, rather than spending days, if not months, making your application more I/O efficient, you can simply spend a few more cents an hour and get 1,000 more IOPS.

I have a technique (Checmate) to make containers 20-30% faster and expose other significant features. Unfortunately, we have customers who run Docker 1.12 and the latest version of our software, want to use containers, and yet insist on a kernel that's at least a third of a decade old. I have literally spent months making code work on old kernels, at significant performance and morale cost. If we controlled the kernel, this wouldn't be a problem, and it's yet another problem that K8s cannot solve.

Lastly, networking on-prem tends to be an afterthought. IPv6 would make many container networking woes go away nearly immediately. Full bisection bandwidth could allow for location-oblivious scheduling, making the likes of K8s and Mesos significantly simpler. BGP-enabled ToRs give you unparalleled control. Unfortunately, I have yet to see a customer environment with any of these features.

I really hope the world doesn't become more "on-prem".
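
For the time-sync check mentioned above, here is a minimal sketch of one way to ask the Linux kernel whether the clock is NTP-synchronized, via the adjtimex(2) syscall. This is only an illustration of that kind of check under my own assumptions, not our actual shipping code:

    /* Hypothetical example: query clock sync state from the kernel.
     * modes = 0 means "read state only", don't adjust anything. */
    #include <stdio.h>
    #include <sys/timex.h>

    int main(void)
    {
        struct timex tx = { .modes = 0 };
        int state = adjtimex(&tx);

        if (state == -1) {
            perror("adjtimex");
            return 2;
        }
        if (state == TIME_ERROR || (tx.status & STA_UNSYNC)) {
            printf("clock is NOT synchronized\n");
            return 1;
        }
        printf("clock is synchronized (est. max error %ld us)\n", tx.maxerror);
        return 0;
    }

(glibc also exposes the same read-only query as ntp_adjtime(); either way, a TIME_ERROR return or the STA_UNSYNC status flag is the "not synced" signal.)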