TL;DR: A frank retraction of the sentiment expressed sarcastically in the previous post is summarized under <i>Real problems solved</i>. However, each of these points is dubious...<p>1. <i>Up until now, we’ve been running our service-oriented architectures on AWS and Heroku and other IaaSes and PaaSes that lack any real tools for managing service-oriented architectures. Kubernetes and Swarm manage and orchestrate these services.</i><p>While some options for managing large groups of services on one type of infrastructure do now exist, and this is one step further in automation and therefore a good-thing(tm), it is by no means the end-game, and at this stage it may not even be desirable. In effect it simply shifts the scope of infrastructure comprehension and management from a single service to a group of services, and the unit of deployment from a host to a cluster, while making certain (and not safely universal) assumptions about how the service(s) will need to be managed in future.<p>2. <i>Up until now, we have used entire operating systems to deploy our applications, with all of the security footprint that they entail, rather than the absolute minimal thing which we could deploy. Containers allow you to expose a very minimal application, with only the ports you need, which can even be as small as a single static binary.</i><p>Yes, but this rarely happens in practice. It's like saying "now we use Linux, we get the benefits of the NSA's SELinux". No, you don't. You have to put a lot of effort in to get that far, and that effort is rarely made. So this is basically a moot point right now.<p>3. <i>Up until now, we have been fiddling with machines after they went live, either using “configuration management” tools or by redeploying an application to the same machine multiple times. Since containers are scaled up and down by orchestration frameworks, only immutable images are started, and running machines are never reused, removing potential points of failure.</i><p>Yes, immutable infrastructure is good, but we have 100 ways to do this without docker. Docker is like an overpriced gardener who comes to your door, knocks around the garden for half an hour, flashes a thousand-dollar smile (i.e. puts a cute process convention over the top of what's there already) and tells you all smells sweet in the rose garden (PS: here's your fat invoice). Never trust a workman with an invoice, and never trust abstraction to solve a fundamental problem.<p>4. <i>Up until now, we have been using languages and frameworks that are largely designed for single applications on a single machine. The equivalent of Rails’ routes for service-oriented architectures hasn’t really existed before. Now Kubernetes and Compose allow you to specify topologies that cross services.</i><p>Well, that's cute, but actually bullshit. We've had TCP/IP and DNS for decades. To "specify topologies that cross services" you just go <i>host</i>:<i>port</i>, as in the sketch below. What's more, the standard approach and protocols have decades of deployment behind them, are documented, and are known to work pretty well on real-world infrastructure. Their drawbacks are known. Now, I'm not saying there's zero improvement to be made, but the way this is phrased is ridiculous.
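<p>To make the <i>host</i>:<i>port</i> point concrete, here is a minimal sketch in Python of wiring one service to another with nothing but DNS and a TCP socket. The hostname, port and payload handling are hypothetical stand-ins for whatever your own zone and protocol look like; treat it as an illustration of the long-standing approach, not anybody's production code.<p><pre><code>import socket

# Hypothetical service coordinates; in practice they come from your own
# DNS zone, /etc/hosts or a config file, not from an orchestration layer.
SERVICE_HOST = "billing.internal.example.com"
SERVICE_PORT = 8080

def call_service(payload: bytes) -> bytes:
    # getaddrinfo does the A/AAAA lookup, so the "topology" is just DNS plus a port.
    family, socktype, proto, _, sockaddr = socket.getaddrinfo(
        SERVICE_HOST, SERVICE_PORT, type=socket.SOCK_STREAM)[0]
    with socket.socket(family, socktype, proto) as sock:
        sock.connect(sockaddr)
        sock.sendall(payload)
        return sock.recv(4096)
</code></pre><p>Change the record behind that name and you have "specified a topology that crosses services" with tooling that has existed since the 1980s.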
5. <i>Up until now, we’ve been deploying heavy-weight virtualized servers in sizes that AWS provides. We couldn’t say “I want 0.1 of a CPU and 200MB of RAM”. We’ve been wasting both virtualization overhead as well as using more resources than our applications need. Containers can be deployed with much smaller requirements, and do a better job of sharing.</i><p>Sure, we've known for decades that container-based virtualization is far more efficient than paravirtualization. Docker has not actually provided either technology, nor has it made it measurably easier to mix and match them as required, so this claim seems bogus.<p>6. <i>Up until now, we’ve been deploying applications and services using multi-user operating systems. Unix was built to have dozens of users running on it simultaneously, sharing binaries and databases and filesystems and services. This is a complete mismatch for what we do when we build web services. Again, containers can hold just simple binaries instead of entire OSes, which results in a lot less to think about in your application or service.</i><p>What kool-aid is this? The implication is that unix and its security model are going to go away as a basis for service deployment because... docker. What? Frankly, I would assert that many application programmers can barely <i>chmod</i> their <i>htdocs/</i> if pushed, let alone understand a process security model including socket properties, process state, threads, resource limits and so forth. Basically, the current system exists because <i>it is simple enough to mostly work most of the time</i>. While it may not be perfect, it's a whole lot better than throwing the baby out with the bathwater and attempting to rewrite every goddamn tool to use a new security model. The mystical single-binary services that docker enthusiasts seem to hold up as their <i>raison d'être</i> are therefore likely either to be huge, complex, existing processes allowing almost anything (like scripting-language interpreter VMs) or nonexistent. By contrast, the 'previous' unix model of multi-process services with disparate per-process UIDs/GIDs, filesystem and resource limits seems positively elegant (see the sketch at the end of this comment).<p>All in all, this post's argument doesn't hold that much water in my view. However, I applaud CircleCI for working on workflow processes ... I think ultimately these are the bigger picture, and docker is merely one step in that direction.
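<p>As a footnote to point 6, here is a rough sketch of what that 'previous' unix model looks like in practice: a dedicated service account, a chroot for filesystem containment, and per-process resource limits, all applied at startup. It assumes Linux, root privileges at launch, and a hypothetical <i>svc-billing</i> user and <i>/srv/billing</i> directory; it is an illustration of the existing primitives, not a hardening guide.<p><pre><code>import os
import pwd
import resource

def drop_privileges(user="svc-billing", jail="/srv/billing"):
    """Confine the current process using plain unix primitives."""
    pw = pwd.getpwnam(user)        # hypothetical unprivileged service account
    os.chroot(jail)                # restrict the filesystem view
    os.chdir("/")
    os.setgroups([])               # drop supplementary groups
    os.setgid(pw.pw_gid)           # switch group first, then user (order matters)
    os.setuid(pw.pw_uid)
    # per-process resource limits: open file descriptors and address space
    resource.setrlimit(resource.RLIMIT_NOFILE, (1024, 1024))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, 512 * 1024 * 1024))
</code></pre><p>None of that requires a daemon, an image format, or a new security model; these calls have been in the man pages since long before docker existed.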