Hi all. A few clarifications.<p>- The meme that we are adding more and more features into the docker binary is unfounded. Please, please, I ask that before repeating it you do your homework and ask for actual examples. For example, 1.4 is coming out next week: it has 500+ commits and basically no new features. It's all bugfixes, refactoring and a general focus on quality. That's going to be the trend from now on.<p>- Swarm and Machine are separate binaries. They are not part of the core docker runtime. You can use them in a completely orthogonal way.<p>- Swarm follows the "batteries included but removable" principle. We are not doing all things to all people! There is a default scheduling backend but we want to make it swappable. In fact we had Mesosphere on stage today as part of a partnership to make Mesos a first-class backend.<p>- There is an ongoing proposal to merge Compose into the docker binary. I want to let the design discussion play out, but at the moment I'm leaning towards keeping it separate. Now's the time to comment if you care about this - that's how open design works :)<p>Yes, our blog post is buzzwordy and enterprise-sounding. I am torn on this. On the one hand, it helps make the project credible in IT departments, which associate that kind of language with seriousness. We may find that strange, but if it helps with the adoption of Docker, then it benefits every Docker user and that's ok with me. On the other hand, it is definitely not popular on HN and has the opposite connotation of dorky pencil holder suit douchiness. Being from that tribe, I share that instinctive reaction. But I also realize it's mostly psychological. I care less about the specific choice of words than about the substance. And the substance here is that we launched a lot of new stuff today, and put a lot of effort into keeping the focus on a small, reliable runtime, composable tools which do one thing well, pluggability, open APIs, and playing nice with the ecosystem.
Basically everything the community has been worrying about recently.
I think the community really ought to take a good minute to consider, beyond technical reasons, whether it really makes sense to tie the future of computing so tightly to a single for-profit company's quickly enlarging platform.<p>Someone below compared this to systemd - it's really more like your entire containerization operating system. And since you run everything via containers, it effectively is your operating system/platform.<p>So, clearly they (and CoreOS, etc.) will want to monetize their container operating systems/platforms. But is it really a good idea to build the entire industry's concept and implementation of containers on the back of a single company's implementation, when we know a healthy ecosystem would see a number of companies with competing container OS implementations with varying degrees of compatibility, and hopefully, eventually, open standards?<p>I really am beginning to see the CoreOS guys' point here - if Docker could have just stuck to running containers and doing that awesomely, there would have been space for other companies to build out the ecosystem around that shared, interoperable container format. But if Docker is now set on tightly bundling a toolchain for the container operating system around their format, suddenly it looks a lot more like they took a Microsoft-style embrace-extend-extinguish approach to LXC.<p>And thus the need for Rocket.
I'm a little bit afraid of the fragmentation occurring in the container world right now. In the beginning I felt like I could rely on Docker being focused on containers, really making that a stable building block, and utilise tools around it provided by industry leaders. Now Docker has thrown its own hat into the ring, creating a monopoly for itself. Do you choose Docker and its whole ecosystem? Do you pick something else off the shelf? How about Amazon's ECS container service, or CoreOS with its array of tools?<p>I don't feel like I can depend on any of these things, so I stick with the absolute bare minimum of what will build me a container. Which of these technologies will stay? Which will go? What will change as time passes? What will be deprecated?<p>In all honesty, with Kubernetes talking about supporting Rocket and probably any other container technology that creeps up in the next few years, I'm leaning towards using that as the point of stability: something I can deploy anywhere and know that I get the exact same API. With Google, the leader in cluster management, writing open-source orchestration technology, I think that's where I'll keep my focus.
Wonderful. We just containerized all of our apps and are in the process of choosing our approach for running and deploying them in a cluster.<p>Now what?<p>Flynn? Deis? Kubernetes? Mesos? Shipyard? Pure Fig instead? CoreOS, Serf, Maestro? Or rather stay on AWS with Elastic Beanstalk or the new Docker service?<p>Welcome to the party, Swarm and Compose. By now we are not even sure anymore if Docker itself is still the way to go, now that Rocket and LXD have arrived. I don't even have the time to compare all these options, let alone take a deeper look into the architectural considerations.<p>What to decide by? Company backing? Whether it's good or bad? GitHub stars? Deis for self-announcing its 1.0, even if it's based on pre-1.0 components, or Flynn for being honest that they're still in beta?<p>Honestly, I've rarely been as tired of new technologies as I am right now. I could just as well roll a die. If you have a good and reasonable choice for me, let me know (I'm actually serious).
The two examples in Docker Swarm were Redis and MySQL.<p>From the announcement: "Docker Swarm provides high-availability and failover. Docker Swarm continuously health-checks the Docker daemon’s hosts and, should one suffer an outage, automatically rebalances by moving and re-starting the Docker containers from the failed host to a new one."<p>Does anyone know how they'll handle the data? Both Redis and MySQL have their own ways to deal with high availability, e.g. Redis Sentinel, MySQL master/slave, or MySQL multi-master with Galera.
I think this sheds a little more light on the reasons CoreOS decided to start building the Rocket container runtime[1], and not tie its destiny to being paired with Docker.<p>[1] <a href="https://coreos.com/blog/rocket/" rel="nofollow">https://coreos.com/blog/rocket/</a>
GitHub repo for Machine: <a href="https://github.com/docker/machine" rel="nofollow">https://github.com/docker/machine</a><p>GitHub repo for Swarm: <a href="https://github.com/docker/swarm" rel="nofollow">https://github.com/docker/swarm</a><p>Compose is still being designed in the open. If you want to have a say about how it works, check out the proposal: <a href="https://github.com/docker/docker/issues/9459" rel="nofollow">https://github.com/docker/docker/issues/9459</a>
Talking about Docker as we are:<p>Does anyone agree it's still too hard for dumb-dumb developers like me? I'm on Windows (boo, hiss), so in the past I've tried to use boot2docker, but you can't just point your webserver container at a place on your local file system and say "serve that, please".<p>You have to bring in some crazy storage file container which will serve it all via Samba or something, and then you need to figure out linking those containers together, and then how the hell do you tell a web server "hey you, document root is over here on another container"?<p>At this point I'm usually like "fuck it, we'll use some bad-idea .exe web stack" and develop as normal.
I like the idea of containers - quicker and smaller than VMs, nice file-system history going on - but in practice it isn't easy enough, in my opinion.
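For what it's worth, the volume-mount case can be close to a one-liner, assuming a recent boot2docker that auto-shares C:\Users into the VM as /c/Users (the paths, container names and images below are illustrative):

```shell
# serve a host directory straight from an nginx container,
# no separate storage/samba container required
docker run -d --name web -p 8080:80 \
  -v /c/Users/me/mysite:/usr/share/nginx/html:ro \
  nginx

# a second container can then reach it by name via a link,
# instead of hand-wiring document roots across containers
docker run -d --name app --link web:web myorg/app
```

Whether the share actually works depends on the boot2docker version; older releases needed the folder shared into the VM by hand, which is where much of the Windows pain comes from.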
Maybe I'm just thick in the head, but one of the things that continues to disappoint me about Docker is the size of the images. Wouldn't it be good if we could build the container a single time, and then ship that top-level changeset around? For example: if I build a 200MB binary on top of `ubuntu:latest`, I would like to be able to just ship that 200MB around, instead of 200MB + ubuntu:latest (another ~167MB?). If you colocate many services on a single machine (say 10-12), the network cost of grabbing those tarballs makes Docker less appealing.<p>edit: Also, it's inefficient to build this Dockerfile every single time on every single host, which is why I'm talking about shipping tars. You could have 30 hosts with these 12 containers running on each one.<p>Any plans on dealing with something like this in the future?
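For what it's worth, the registry protocol already deduplicates by layer, which covers part of this: a host that has the base image only downloads the new layers on pull. A sketch of the build-once, pull-everywhere flow (the registry host and tag are illustrative):

```shell
# build once, on one machine, and push to a registry
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# on each of the 30 hosts: only layers not already present are fetched,
# so if ubuntu:latest is already there, just the ~200MB of app layers
# come over the wire
docker pull registry.example.com/myapp:1.0
```

This doesn't help the first pull on a fresh host, but it avoids both rebuilding the Dockerfile per host and re-shipping the base image on every update.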
I'm excited to see Docker continue to progress so quickly, but I'll admit to being more and more confused over how many components and services you have to contend with now. I'm sure I could sort out all of these names if I spent more time playing, but it's getting a little confusing to me.<p>There's a lot to be said for making something only do one thing and doing it well, but it starts getting tough to keep track of when you've got a bunch of somethings.
These aren't new projects... just rebranded versions of half-baked feature proposals that I thought were still being reviewed/discussed. I guess somewhere a decision was made to move forward regardless of community concerns?<p>Baking these features into Docker is the beginning of the end of Docker's Enterprise story. Moving forward with these proposals guarantees the rise of Rocket and other Enterprise focused containers. Docker is forking its own community here.
I'm excited to watch this battle between CoreOS and Docker heat up. I recently took a CoreOS/Docker-based system into production on AWS and there are definitely still some missing pieces. Swarm appears to be a slightly higher-level version of fleetd. Compose is something CoreOS is missing though.
They sound like a closed-source vendor at this point. I'm surprised to see an open-source project mention "ecosystem partners":<p><pre><code> Each one is implemented with a “batteries included, but
removable” approach which, thanks to our orchestration
APIs, means they may be swapped-out for alternative
implementations from ecosystem partners designed for
particular use cases.
</code></pre>
So if I have a startup working on an orchestration solution, what is the process to become an approved 'ecosystem partner'? Do I need to sign an NDA and pay for an approval process to get my stuff merged in?
With this amount of buzzwords needed to install and run an app, I see a bright future for Go and its "all compiled in one, web server included, ready to go" executable structure. I mean, sure: you get automation and repeatability of installs, but at what cost? You have to maintain all the buzzword hoops that your app needs to be wrapped in - which amounts to a full new job in a medium-sized software company. And you still need the sysadmin to actually make the servers work.
Mesos 0.20.0 adds support for launching tasks that contain Docker images, with a subset of Docker options supported; we plan on adding more in the future.<p>Users can either launch a Docker image as a Task or as an Executor. The Docker Containerizer translates Task/Executor Launch and Destroy calls into Docker CLI commands.
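As a rough illustration, the Docker-relevant fragment of a TaskInfo looks like this (a JSON rendering of the Mesos protobufs; field names follow mesos.proto as of 0.20, the image tag is just an example, and the usual resources/slave_id fields are omitted):

```json
{
  "name": "redis",
  "task_id": { "value": "redis-task-1" },
  "container": {
    "type": "DOCKER",
    "docker": { "image": "redis:2.8" }
  }
}
```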
I haven't been reading Docker related news lately. Is there anything I should know if I already have my own working continuous deployment system made with Ansible, Jenkins and Docker? For example, it seems like I don't need Docker Machine if I already have my own Ansible recipes for provisioning.