I've used both Mesosphere and Kube now (in production) and I feel I can safely comment on this.<p>Kube is winning for the same reason React/Redux (and now MobX) is winning and why Rails was winning at the time: community.<p>The community for Kube is awesome and the work people are doing in the field is being noticed all over the place.<p>I've seen people (myself included) move production clusters from Mesos to Kube just because of the activity of the development and how secure they feel with the community and where the project is going.<p>React and Rails (at the time) had the same sort of community pull, and that's why a lot of people on-boarded.<p>Golang is most likely a factor here too. I feel most people find Golang friendlier than Scala/Java. That's why Kube has many more contributors: the hurdle for contributing is easier to jump.
For a DevOps fan like me, k8s has been a godsend, and what I like in particular is their three-month release schedule. There are still some hiccups, like no good documentation (or really a tutorial) on setting up shared writable storage, or on how to handle databases and, more importantly, replication.<p>The k8s team is very responsive and I'm sure these will be ironed out in the near future so we can all adore each other's cattle :)
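For anyone hitting the same wall: the shared-storage story seems to go through PersistentVolumeClaims. A minimal sketch of what I mean, assuming an NFS (or similar) backing volume that supports ReadWriteMany; the claim name and size here are hypothetical:<p><pre><code>
# Sketch: claim shared writable storage via a PersistentVolumeClaim.
# ReadWriteMany needs a backing volume type that supports it (e.g. NFS).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # hypothetical name
spec:
  accessModes:
  - ReadWriteMany            # writable from many pods at once
  resources:
    requests:
      storage: 10Gi
</code></pre>
Pods then mount the claim by name, so the storage survives any individual pod.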
Disclaimer: I work for Mesosphere (champions of Apache Mesos and DC/OS).<p>We have total respect for K8s, but I don't think you can claim victory just based on community and stars.<p>OpenStack has a much larger community of developers and advocates, but it still hasn't reached its potential despite many years and incredible effort, seminars, and summits.<p>Also, most of these next-gen infrastructure projects (DC/OS, which is powered by Mesos, K8s, Docker Swarm) are converging feature-wise, but they also have their strengths and weaknesses; some of these are temporary, some are structural by design.<p>Mesos (and DC/OS) were not just designed for scale, but also for extensibility and different workloads, which is why you can run Cassandra, Kafka, Spark, etc. in production. None of these workloads run as traditional containers; with DC/OS they have their own operational logic, such as honoring placement, simple installation, and upgrades, which is a core design function of the two-level scheduler.<p>People usually complain that standing up Mesos is hard, which is why we built DC/OS and open sourced it: to give you the power of Mesos and its awesome capabilities in managing containers and data workloads without going through all the effort to stitch everything together yourself. Check it out at DCOS.io, I am sure you guys will be blown away.
I know it's young still, but I think Nomad is going to get a share of this market with little effort.<p>I played with Mesos & k8s and I picked Nomad instead. Now, I'm not managing a huge fleet of servers I want to abstract away so much as I wanted a simple, consistent framework for allocating resources to containerized tasks across a small group of nodes. I don't think that use case is anything to sneeze at, and for a new user there just isn't anything out there as easy as Nomad, IMO.<p><a href="https://www.nomadproject.io/" rel="nofollow">https://www.nomadproject.io/</a>
It purports to compare Kubernetes, Apache Mesos, and Docker Swarm. But the article only says Kubernetes has a lot of stars on GitHub (it doesn't compare that number to Docker or Mesos), and the same goes for Slack/Stack Overflow activity and the number of CVs mentioning the tech... I will pass on InfoWorld opinions from now on.
I can bring up an app on Linux or Windows from bare metal in minutes by hand. But the way it's supposed to be done now is something like this, right:<p><pre><code> 1) Use one of Chef/Puppet/Salt/Ansible to orchestrate
2) One of those in item 1 will use Vagrant, which
3) Uses Docker or Kubernetes to
4) Set up a container which will
5) finally run the app
</code></pre>
Really?
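(To be fair, steps 3-5 at least collapse into a single definition file once you're on Kubernetes. A minimal sketch; the image name here is hypothetical:)<p><pre><code>
# Sketch: one pod definition covers "set up a container" and "run the app".
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: example/my-app:latest   # hypothetical image
    ports:
    - containerPort: 8080          # port the app listens on
</code></pre>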
<i>It's all about knowing how to build an open source community</i><p>This. Engineering excellence is secondary. You can get away with complete craptitude in your tech if you can build community. (I won't name examples.) Of course, it's better if you also have technical excellence. On the other hand, you can have technical excellence, but it will come to naught if you have community-destroying anti-patterns.
In my experience, I haven't been coming to k8s because I particularly like the developer experience (despite their efforts to focus heavily on it), but because it cleanly supports some things that I need.<p>For instance, with k8s, out of the box every running container in a clustered system is discoverable and has its own IP. If you're writing distributed applications, and you're using containers principally as a tool to make your life easier (and not as part of an internal PaaS, or for handling data pipelines, or some other use case), having that sort of discovery available out of the box is great.
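Concretely: on top of per-pod IPs, a Service gives a set of pods a stable, DNS-discoverable name. A minimal sketch, with hypothetical names and ports:<p><pre><code>
# Sketch: any pod matching the selector becomes reachable at a stable
# DNS name (my-backend), regardless of pod restarts or IP churn.
apiVersion: v1
kind: Service
metadata:
  name: my-backend        # hypothetical service name
spec:
  selector:
    app: my-backend       # matches the pods' labels
  ports:
  - port: 80              # port clients connect to
    targetPort: 8080      # port the pods listen on
</code></pre>
Other pods in the cluster can then just talk to "my-backend" without knowing any individual pod IPs.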
One thing I've found extremely difficult to handle is the ZooKeeper cluster model of containers, where, when a thing dies, a thing has to come back and be able to be referred to as "zookeeper-1" forever. The way to do this currently is to use a service in front of a replication controller with one pod. This feels wrong all over. Supposedly they have a thing called Pet Sets [1] coming to solve this, but it's been in the works for an eternity. Also, we've started to outgrow the load-balancing simplicity that the k8s load balancer gives you, and I have not seen a nice migration path to something like HAProxy in Kubernetes. All that said, we like Kubernetes a lot.<p>[1] To distinguish from cattle. If you have a red, blue, and green goldfish, and your red goldfish dies, you can replace it with another red fish and not really notice, but if it's purple, the others won't play with it.
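For the curious, the workaround I mean looks roughly like this: one single-pod replication controller plus one fixed-name service per ZooKeeper node (image, labels, and ports here are illustrative):<p><pre><code>
# Sketch: "zookeeper-1" survives pod restarts because the service name
# is stable even though the pod behind it is replaced.
apiVersion: v1
kind: ReplicationController
metadata:
  name: zookeeper-1
spec:
  replicas: 1               # exactly one pod per "pet"
  selector:
    zk-node: "1"
  template:
    metadata:
      labels:
        zk-node: "1"
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.4   # illustrative image
        ports:
        - containerPort: 2181
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1         # the stable name peers refer to
spec:
  selector:
    zk-node: "1"
  ports:
  - port: 2181
</code></pre>
And then you repeat that pair for zookeeper-2, zookeeper-3, and so on, which is exactly the boilerplate Pet Sets are supposed to eliminate.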
One thing that I think Kube (and DC/OS) are missing is what Chef is working on right now: the application definition should live within the app and be consumed by the scheduler.<p>Chef's product is called Habitat <a href="https://www.habitat.sh/" rel="nofollow">https://www.habitat.sh/</a> and it has some VERY interesting concepts that, if Kube implements them, will make it much more interesting (to a lot of people).<p>Right now, the deployment and configuration of an application are supposed to be separated, but I feel they need to be just a bit more coupled. An engineer who develops the application will define a few things like "domain" and how you connect to the application, and this will be consumed by the scheduler.<p>Right now, DC/OS and Mesos are really geared toward the DevOps people, and I feel that the first to crack the "batteries included" approach will win the fight by a knockout.<p>Imagine something like Heroku on your own infrastructure: if an engineer can launch and deploy a micro-service with the same ease they deploy and access a Heroku application, that will be awesome.
Just as an outside observer developing on a platform, I see my fellow DevOps team members working in Kubernetes and it's been a shitstorm for the most part: containers disappear randomly, stuff breaks, and they are on call on the weekends not having fun. I have my own clients on other projects using AWS where I just upload and click buttons like a dumb monkey, and I end up looking more competent even though I'm completely not. I've consequently not been motivated to dive into these DIY deployments just yet.
I haven't dived into Kubernetes yet, but I set up Rancher for our new application and it has been nothing short of amazing so far. I can't express how happy we've been with it.<p>I previously tried the Mesos/Marathon route (with Mesosphere and then again with Mantl), and that was nothing but a huge waste of time due to all the maintenance necessary for all the required servers. With Rancher, you just spin up a container on each host with a single command and you're done.
From the subtitle:<p><i>It's all about knowing how to build an open source community -- plus experience running applications in Linux containers, which Google invented</i>
The setup to get k8s running isn't great, but once it's running and you understand its config files it makes things so much easier. We're getting ready to deploy k8s at work soon and will begin moving more there as we can.<p>From what I understand (and this is completely absent from the article), Mesos is designed for a scale that most start-ups (and even established companies) can't afford or justify. K8s is simpler but still robust: better than just Fleet or Compose, and clearly still better than Swarm (based on posts read here on HN).
I use Mesos and K8s heavily as well as contribute back to the projects, and while I do agree this article leans towards fanfare, there is a bit of truth to it.<p>Community is a big deal; people tend to underestimate this. Putting aside newer companies, when a larger enterprise ventures out to open source, they do take community as a major factor, as you have to consider that the tool you build on will have to last 5 years, maybe 10, maybe more.<p>To delve further into the Docker side of things, I personally wish the company would focus on its core business instead of stretching itself with extra things. I get the need to improve the UX, which they do very well considering how far we have come from LXC.<p>I feel Mesosphere is starting to go down the feature-creep route as well, but I wish them all the best, as I have loved Mesos since the beginning all those years ago.
Why is no one mentioning Docker's response in version 1.12, the new built-in orchestration called swarm mode (different from just Swarm):
<a href="http://blog.nigelpoulton.com/docker-launches-kubernetes-killer/" rel="nofollow">http://blog.nigelpoulton.com/docker-launches-kubernetes-kill...</a><p>Granted, the title is fanboyish, but it really seems to be a significant response to Kubernetes.
IMHO nobody is winning anything that matters right now, because the current transition is a transition to an additional level of abstraction which is <i>definitely</i> not properly met by any of the tools available.<p>What we now need is <i>tools that allow architectural fences around components, and reliability guarantees around subsystems</i>... versus not only technical but also business-level risks (including state-level actors), often across borders, including for example exchange-rate risk. This is based on business-level risk models, not "some engineer feels X or Y" type reasoning, which is (often very well, but broad-picture uselessly) based on technical know-how.<p>I prototyped such a system pretty successfully; you can read the design @ <a href="http://stani.sh/walter/cims/" rel="nofollow">http://stani.sh/walter/cims/</a> .. it's incomplete (critically, it's hard to explain the investment utility to non-tech business types) but at least infrastructure agnostic and practically informed.<p>NB. To be non-humble, by way of demonstration: I am a guy who has been called in to conceive/design/set up/manage from-scratch datacenters for extremely visible businesses with highly motivated attackers and USD$Ms at stake for penetration. Systems I personally designed run millions of USD/month and have many times that in investment. And it's obvious that both Google ("compete with Amazon on cloud business... high-level desperate for non-advertising income!") and Docker ("VC! growth! growth! growth!") have their own agendas here... we should trust no one. It's early days. Bring on the ideas, build the future. Hack, hack, hack. We the little people own the future. It's ideas.
Looking through all of these comments, we should be glad, as a community, that there are a number of players in this burgeoning orchestration space. Nobody has won and nobody should win. They each have their strengths and weaknesses, and there is no one size that fits all.
Sad to see Mesos losing steam. My understanding was that Mesos subsumes the functionality of Kubernetes thanks to its Aurora scheduler, but it also has many more customized schedulers, for different purposes, that might make it more efficient to run complicated pieces of software.<p>For instance, it certainly is possible to run a Cassandra cluster by having each instance run in its own Docker container. My understanding is that it would be much more efficient to run this cluster with a dedicated Cassandra scheduler instead.<p>Is this right? Or are the performance benefits of running a dedicated Cassandra scheduler on Mesos negligible compared to running Cassandra in containers?
It would be nice if open source projects (especially popular databases) came with k8s definition files so that you wouldn't have to write the YAML yourself.
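For illustration, this is the sort of YAML you currently end up writing by hand for, say, a single Redis instance (a hypothetical sketch, not something shipped by the Redis project):<p><pre><code>
# Sketch: hand-written deployment for a single-instance Redis.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:3.2
        ports:
        - containerPort: 6379
</code></pre>
Multiply that by a service, persistent storage, and replication settings, and it's easy to see why shipped definitions would help.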
I wish Kubernetes had more examples. For example, their vSphere volume driver has almost zero documentation/tutorials on how to set it up.<p><a href="http://kubernetes.io/docs/user-guide/volumes/#vsphere-vmdk-example-configuration" rel="nofollow">http://kubernetes.io/docs/user-guide/volumes/#vsphere-vmdk-e...</a><p>I believe it is inadequate.
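As best I can piece together from that page, a pod using a vSphere VMDK volume looks something like this; note that the datastore and disk path are placeholders you have to provision yourself, which is exactly the undocumented part:<p><pre><code>
# Sketch: mount a pre-created vSphere VMDK into a pod.
apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - mountPath: /test-vmdk
      name: test-volume
  volumes:
  - name: test-volume
    vsphereVolume:
      volumePath: "[DatastoreName] volumes/myDisk"   # placeholder VMDK path
      fsType: ext4
</code></pre>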
What is the best resource to learn Kubernetes like a boss? I like ebooks, but will take anything, as long as it's up-to-date and easy to follow without being a long-term sysadmin.
These articles never mention the elephant in the room: AWS.<p>How many containers are running on Elastic Beanstalk and ECS?<p>I'd wager orders of magnitude more are running containers by reading the docs and clicking around than by mastering the cutting-edge challenges of getting running on Mesos, Kube, or Swarm.<p>Another blind spot is Heroku. Every day new businesses spin up containers with a 'git push heroku master' and don't even know about it.<p>All these providers, platforms, and tools have their place.<p>I simply don't think the "winning the war" talk is accurate or constructive.<p>Disclaimer: I worked on the container systems at Heroku and now manage hundreds of production ECS clusters at Convox.
Innovation is what makes this industry so exciting to be a part of. I joined the tech world, via Holberton School[1], a mere nine months ago, and already so much has changed. It makes it challenging to keep up, but that's the fun part.<p>The future of technology is held in the hands of open source projects.<p>[1] <a href="https://www.holbertonschool.com" rel="nofollow">https://www.holbertonschool.com</a>