Kubernetes is making amazing progress. A lot of people are involved and even more people are watching closely. But who actually walks the walk and runs production software in a Kubernetes cluster?
I do! The hardware layer consists of physical machines running XenServer. The networking layer is 1Gbit WAN interfaces and 10Gbit LAN interfaces on "virtual switches", all wired together with pfSense. Gitlab-CI takes care of deploying just about all of the fabric on top of that, including the images and app / system components / resources. Ingress is currently being overhauled, but right now it's essentially HAProxy (pfSense) exposed on top of dedicated, HA ingress VMs. Oh, and it's all CoreOS. We're running three sites on it, each with dev environments. Maybe a couple of random APIs, too. I haven't looked through all of the namespaces in a bit.<p>Edit: Gitlab-CI runners run on Kubernetes as well, using the dind images. Ingress nodes will soon be given public IPs; public IPs are currently on CARP failover. After the gitlab-ci-multi-runner 1.1.1 release (allowing shared artifacts) and Kubernetes Deployment resources (providing a far easier deployment workflow and orchestration of pods), CI/CD is a breeze. We have dedicated nodes for MySQL (PXC) and ZooKeeper because these don't play well in the Kubernetes network environment - don't ask me to look at the examples ;) Currently running with Flannel for the overlay, but we're evaluating Calico and waiting on new Docker features before pulling the trigger on something else... Multicast, isolated namespaces, and VLANs would be awesome :)<p>Edit2: I don't know why I keep saying "we" ... I've built and run this thing solo on top of programming... Not enough hours in a day...
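For anyone who hasn't tried the Deployment resource mentioned above, here's a minimal sketch of what one looks like (names, image, and port are hypothetical; the API group is `extensions/v1beta1`, which is where Deployments lived as of Kubernetes 1.2):

```yaml
apiVersion: extensions/v1beta1   # Deployment API group in Kubernetes 1.2
kind: Deployment
metadata:
  name: web-frontend             # hypothetical app name
spec:
  replicas: 3                    # desired number of pods
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web-frontend
        image: registry.example.com/web-frontend:v1.2.3  # tag pushed by the CI pipeline
        ports:
        - containerPort: 8080
```

Point the CI job at this manifest and update the image tag on each release; the Deployment controller then rolls the pods over to the new version for you, which is what replaces the hand-rolled orchestration.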
Kubernetes in Production in The New York Times newsroom
<a href="https://kubeconeurope2016.sched.org/event/67f2/kubernetes-in-production-in-the-new-york-times-newsroom" rel="nofollow">https://kubeconeurope2016.sched.org/event/67f2/kubernetes-in...</a>
500 node clusters EVERY darn day. Glad they removed the 500-node limit in kube 1.2 to allow larger clusters. We run primarily on Google Compute, but we also run smaller clusters on Amazon.
SoundCloud are moving to Kubernetes too: <a href="https://www.youtube.com/watch?v=5378N5iLb2Q" rel="nofollow">https://www.youtube.com/watch?v=5378N5iLb2Q</a>
We are running it in production.<p>AWS with Kubernetes 1.2.1 and Calico as the overlay network. We have all our web apps in Kubernetes and are working on our background job apps next.
We're running many small (10s of nodes) clusters on bare metal with CoreOS. Networking is some in-house stuff we've purpose-built so we can get public IPs on the pods.<p>Internal adoption seems to be going well, so hopefully this grows.