I was ready to come in swinging to defend Docker, but I found myself agreeing with most of the points after spending a lot of time with Docker over the last month (ported my Rails application's provisioning from Ansible to it).<p>I would add to the list that it is currently hard to even find decent images of popular services that you would trust deploying to production (e.g. Postgres). I see with the launch of Docker Hub that they have some flagged as "official" now, but for example the Postgres one is a black box (no Dockerfile source available - not a "trusted build") so I can't trust it.[1] I've had to spend time porting my Ansible playbooks over to Dockerfiles due to this.<p>I think part of the problem is that composition of images is strict "subclass"-type inheritance, so they don't compose easily. So it's hard to maintain a "vanilla" image rather than a monolithic one that has everything you need to run your particular service - so people just keep their images to themselves. For example, if I want a Ruby image that has the latest nginx and git installed, I can't just blend three images together. I have to pick a base image, and then manually add the rest myself via Dockerfile.<p>Also, although Vagrant 1.6 added Docker syntax, you really have to understand how Docker works in order to use it. If you're learning Docker I'd stick with the vanilla `docker` syntax first when building your images, maybe using something like fig[2] to start with.<p>At the end of the day it's another abstraction to manage. It does bring great benefits in my opinion, but the learning curve and time investment aren't cheap, so if you already have a suite of provisioning scripts it may not be worth it to make the leap yet.<p>[1]: <a href="https://registry.hub.docker.com/_/postgres/" rel="nofollow">https://registry.hub.docker.com/_/postgres/</a><p>[2]: <a href="http://orchardup.github.io/fig/" rel="nofollow">http://orchardup.github.io/fig/</a>
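To make the composition problem concrete, here is a minimal sketch (the base image tag and package names are my own illustrations, not from the comment above): wanting Ruby plus nginx plus git means picking a single base and hand-installing everything else, since there is no way to merge three images.

```dockerfile
# Sketch: the official ruby, nginx, and git images cannot be blended,
# so pick one base and re-install the rest by hand via RUN.
FROM ruby:2.1
RUN apt-get update && \
    apt-get install -y nginx git && \
    rm -rf /var/lib/apt/lists/*
```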
There are 2 other misconceptions I would really like to be less prevalent:<p>1) Linux containers are not an entirely new concept. If you didn't know about BSD jails or Solaris zones, you were missing out. If you still don't know the differences, I highly recommend you broaden your horizons. Even if you use Linux anyway, just knowing about what's out there will help you be smarter.<p>2) Docker is not a drop-in replacement for every current VM use case. It's just not. To HN's credit, I really haven't seen people here who seem to think that, but it's on my list nonetheless.
I'm so glad someone finally said this publicly. Everyone always writes posts about how magical Docker is, but the reality is that Docker is more of a PR engine than a usable production virtualization solution.
I think you're going to need serious ops fu for "orchestrating deploys & roll backs with zero downtime" whether you use containers or not. It seems like people with complex environments are flocking to Docker despite its supposed complexity, but maybe that's the echo chamber talking.
I'd be curious to see how CoreOS might help simplify some of the issues you mention. I'm starting to dig deep into both and the learning curve is definitely a bit high.
Original comment and posting: <a href="https://news.ycombinator.com/item?id=7869831" rel="nofollow">https://news.ycombinator.com/item?id=7869831</a>
Interesting article! There are some misconceptions I disagree with, but I believe I agree with the spirit.<p>What Docker does is allow those who are best qualified to make the decisions mentioned (the ops guys!) to have a clear separation of concerns from application developers.<p>It doesn't magically solve these hard problems in and of itself.
Great article.
My understanding of Docker is quite new, so take my remarks with a grain of salt.<p>One thing I would emphasise in the first paragraph is that you still need other provisioning/configuration tools to set up the servers where the Docker containers will be deployed. I know it would be obvious to most, but you still need to start/stop those machines with a correct Docker install, firewall rules, and probably more. The VM layer may seem to have been flushed out, but it still exists and needs attention.<p>After spending more than a week looking into Docker, my one frustration is that I have not found a way to develop with it comfortably. Most of the docs I see are about deploying established apps, but I would love to see tutorials about how to develop and start from scratch with it. Does Docker stand in your way when developing, or does it make it easier? Maybe a solution is to create Docker wrappers for our favourite frameworks that would abstract Docker away. Anyways, I'd love to see more on this.
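On the development question, one pattern I've seen (sketched here with made-up names; exact keys may vary between fig versions) is a fig.yml that mounts the source tree into the container, so edits on the host show up immediately without an image rebuild:

```yaml
# Hypothetical fig.yml for developing a Rails app: the volume mount
# makes host edits visible in the container without rebuilding.
web:
  build: .
  command: bundle exec rails server -b 0.0.0.0
  volumes:
    - .:/app
  ports:
    - "3000:3000"
  links:
    - db
db:
  image: postgres
```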
I'm always wondering how people who pin versions manage software updates (security or otherwise).<p>If you grow to a few hundred (or thousand, or hundreds of thousands of) systems, it seems pretty hard to test and install all the combinations, and even if it's a single combination with regular updates, you still need very well-oiled and consistent automated testing.
<i>a huge leap forward for advanced systems administration.</i> - this is the one great misconception.<p>How does a user-level virtualization technology from the mainframe era, reincarnated years ago as FreeBSD jails, solve the problems of, say, optimizing a "slowed down under heavy load" RDBMS server via re-partitioning, profiling, reindexing, or optimizing the client's code, which, in my opinion, is what "advanced administration" actually involves?<p>But OK, nowadays we have new meaningless memes: "orchestration", "provisioning", "containerization".<p>What punks cannot grasp is that it absolutely doesn't matter how quickly, or by which tool, you install some set of deb or rpm packages into some virtualized environment and run it. Real administration has nothing to do with this. It is all about adaptation to reality, data-flow analysis, and fine tuning.
Docker is one of those things I keep installing and uninstalling. I simply can never quite find a use case where it works for me.<p>My current commitment is to try looking at raw LXC again, specifically because it's VM-oriented (and also because its unprivileged containers look more like what I'd want to target).
The only thing I'd add to the "You don't need to Dockerize everything" section is: be careful of dependencies and what the electric power companies call black-start capability.<p>However tempting it might be, don't virtualize all your deep infrastructure, like your LDAP, DHCP, or DNS servers, or you'll inevitably find yourself after a power failure unable to light up the virtualization system containing your Kerberos servers because every single KDC is down. It's happened...<p>Most virtualization seems to push for the customer-facing "app side" rather than infrastructure anyway. But it's something to keep in mind.
Only thing I took issue with was "Instead, explicitly define your servers in your configurations for as long as you can."<p>Maybe you can get away with this for a small-scale deployment, but "as long as you can" sends a bit too mixed a message. You should only defer a service-discovery implementation until you get to the point where you have more than one read path in your stack.<p>IMO, as soon as you go from load balancer -> app -> cache -> database to having a services layer, you should start thinking about service discovery.<p>The simplest bandaid is to leverage DNS as much as possible until you get even bigger, and use static CNAMEs.
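The static-CNAME bandaid might look like this in a zone file (all names and hosts are made up for illustration): apps point at stable service names, and moving a service to a new box is a one-line record change rather than a redeploy.

```
; Stable service names point at whichever host currently runs each role.
db.internal.example.com.     IN  CNAME  host-07.example.com.
cache.internal.example.com.  IN  CNAME  host-02.example.com.
```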
I'd like to hear what the consensus on using the Phusion base-image is. It seems to "fix" some pretty important issues with an Ubuntu base image, but I'm not sure they are really even "problems".<p>I use Phusion's base image almost all the time, especially since I tend to group services together (nginx+php-fpm, for example).<p><a href="https://github.com/phusion/baseimage-docker" rel="nofollow">https://github.com/phusion/baseimage-docker</a>
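For reference, grouping services on top of phusion/baseimage looks roughly like this sketch (package names and the run scripts are illustrative and assumed to exist in the build context; the runit directories under /etc/service and /sbin/my_init as PID 1 are the conventions the baseimage README describes):

```dockerfile
# Sketch: my_init supervises both runit services, so nginx and
# php-fpm can run side by side in a single container.
FROM phusion/baseimage
RUN apt-get update && apt-get install -y nginx php5-fpm
RUN mkdir -p /etc/service/nginx /etc/service/php-fpm
ADD nginx.run /etc/service/nginx/run
ADD php-fpm.run /etc/service/php-fpm/run
CMD ["/sbin/my_init"]
```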