This kinda makes the mistake that nearly everyone makes with regard to containers: They view the container as a black box into which complexity disappears.<p>A container doesn't consume complexity and emit order. The complexity is still in there; you still have to build your containers in a way that is replicable, reliable, and automatable. I'm not necessarily saying configuration management is the only way to address that complexity, but it <i>does</i> have to be addressed in a container-based environment.<p>Now, I understand that in many cases some of the complexity is now being outsourced to others who maintain the containers (in the same way we've been outsourcing package management to operating system vendors for a couple decades), and so maybe some of the complexity has been pushed outside of your organization, but you have <i>something</i> being deployed that is internal to your org, so you have to deal with that complexity. Container management tools just don't do things at that level.<p>There's always a point where what you're doing is unique to your environment, and you have to be able to manage/reproduce/upgrade it somehow.
This article reads as very developer-centric, is my first thought. When comparing Docker and Configuration Management, he uses deploying code and scripts as his comparison, which covers only a small slice of all the changes that happen on a system.<p>Even as a former operations engineer himself, the author doesn't touch on a whole suite of other day-to-day complexities: how to handle emergency change management and track it (most configuration management tools can revert to baseline if they detect a change they don't expect), how to keep systems patched, etc.<p>While not a bad article, I feel like we've only talked about deployment, and quite frankly there are a zillion ways to automate that.
I see some parallels between J2EE and containers. In both cases it is a platform that runs a "bundled application" with the hope of interoperability and ease of use. Download and deploy the .war (and edit five .xml config files) and you're done. Pull the image and run CMD in it, and you're done.<p>No matter how good the "easy app runner" platform is, one still has to specify the network between apps, logical dependencies, and --volume dependencies. And also "runtime config" to the individual apps. I really like the docker compose approach, but does a docker compose recipe work on every docker hosting provider? Also, the 12factor app approach is good, but is just a bunch of ENV vars sufficient for every config one might want to do for an app?
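To make that concrete: even a minimal (entirely hypothetical) docker-compose file still has to spell out the inter-app network links, the volumes, and the ENV-based runtime config by hand, and whether this exact file runs unchanged on every hosting provider is a separate question. Image names and variables below are made up:

    version: "2"
    services:
      app:
        image: example/app:1.0        # hypothetical application image
        environment:
          DB_HOST: db                 # 12factor-style config via ENV vars
          DB_NAME: app
        volumes:
          - app-data:/var/lib/app     # the --volume dependency, declared explicitly
        depends_on:
          - db
      db:
        image: postgres:9.6
    volumes:
      app-data: {}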
When facing this duality a few years ago, I had to look at the facts that favour Config Management over a Dockerfile approach:<p>* Dockerfiles do not work for configuring host systems; only Cfg Management is applicable there<p>* Configuration management systems usually have a very declarative approach, easier to extend and maintain than Dockerfiles and bash scripts<p>* Dockerfiles contain too many arbitrary choices that do not work for everyone, starting with the choice of distribution and OS version: companies like standardizing around one distribution and would have to fork Dockerfiles based on other distributions, every single time.<p>The best solution I see at the moment is to set up containers using configuration management (sketched below).
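Roughly along these lines, for what it's worth; this is only a sketch, assuming Ansible and an Ubuntu base image, and the playbook path and app command are made up:

    FROM ubuntu:16.04
    # Install the configuration management tool inside the image build...
    RUN apt-get update && apt-get install -y ansible && rm -rf /var/lib/apt/lists/*
    COPY ansible/ /tmp/ansible/
    # ...then let the same playbooks/roles used on plain hosts provision the image,
    # run against localhost with a local connection.
    RUN ansible-playbook -i localhost, -c local /tmp/ansible/app.yml
    CMD ["/usr/local/bin/app"]

The same idea works with Chef (chef-solo) or Puppet (puppet apply) as the provisioner; the Dockerfile shrinks to a thin wrapper and the real logic stays in one declarative place.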
The author seems to think configuration management is about installing packages. That's something the package subsystem does, with reasonably mature handling of dependencies too. Seen that way, configuration management does look like a glorified shell script. (Which may be a case of "when all you've got is a hammer, everything looks like a nail".)<p>But configuration management concerns centralized management (hence "management") of decentralized systems. It can answer questions such as "why is this scheduled job running on nodes of type x but not y, and who put it there?", and provide guarantees such as "applications w and z are always in lock step concerning parameter p".<p>I've seen organizations move to containers, and they all inevitably end up with more and more containers and increasing complexity. Centralized configuration management is more important in that environment, not less. Modern tools such as Ansible and Puppet have grown up in a devops world and have good support for managing containers (even if they are a bit of a moving target), and there is no reason to be scared of them.
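For example, a rough Ansible sketch (the group name and the job itself are hypothetical): the answer to "who put this scheduled job on nodes of type x, and why isn't it on type y?" becomes a few lines in version control rather than archaeology on individual hosts.

    # applied only to the "type_x" inventory group, so type_y never gets it
    - hosts: type_x
      become: true
      tasks:
        - name: nightly cleanup job
          cron:
            name: nightly-cleanup
            minute: "0"
            hour: "3"
            job: /usr/local/bin/cleanup.sh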
> All of the logic that used to live in your cookbooks/playbooks/manifests/etc now lives in a Dockerfile that resides directly in the repository for the application it is designed to build. This means that things are much more organized and the person managing the git repo can also modify and test the automation for the app.<p>It sounds like the author didn't have experience with larger systems - or maybe did, but my experience contradicts this.<p>Let's say you put everything you can in the containers. Now you want to deploy test and production environments. How do containers know which environment they're running in? Or specifically, things like what's the database user/password, what queue to connect to, where to find the local endpoint for service X?<p>That still needs to live outside of containers. And at some point etcd and similar solutions have the problem of "what if I don't want to share all the data with all the infrastructure all the time?" Well... you can solve that with a config management service. Edit: just noticed etcd actually gained ACLs last year - good for them. But how do you configure/deploy etcd in the first place?
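The usual pattern is to keep the image identical across environments and inject the environment-specific values at run time, e.g. via an env file in docker-compose; but that file still has to be placed on the host and kept correct by something outside the containers (paths and names below are hypothetical):

    version: "2"
    services:
      app:
        image: example/app:1.0
        env_file:
          # written and maintained by config management, a deploy script,
          # a secret store, etc.; it does not live in the Dockerfile
          - /etc/myapp/current-environment.env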
> All of the dependencies of the application are bundled with the container which means no need to build on the fly on every server during deployment. This results in much faster deployments and rollbacks. ... Not having to pull from git on every deployment eliminates the risk of a github outage keeping you from being able to deploy.<p>Obviously there are other benefits, but it's funny how much of the motivation for containers comes from "now we don't have to do git clone && npm install!", which was always the case.
><i>Instead you can write a block of code with Chef, for example, that abstracts away the differences between distributions. You can execute this same Chef recipe and it will work anywhere that a libxml2 package exists.</i><p>But this doesn't really work. What if the package is differently named on different distributions? What if one distribution's version of the package isn't compatible with your use of it? Besides, how often do you switch between distributions on your servers?
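In practice the "abstraction" tends to look like the sketch below (assuming Chef's package resource and the platform_family attribute); the per-distribution differences don't disappear, you just end up encoding them yourself:

    # hypothetical recipe fragment
    libxml2_pkg = case node['platform_family']
                  when 'debian' then 'libxml2-dev'    # Debian/Ubuntu naming
                  when 'rhel'   then 'libxml2-devel'  # RHEL/CentOS naming
                  else 'libxml2'
                  end

    package libxml2_pkg

And if one distribution ships an incompatible version of the library, no amount of recipe code fixes that for you.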
I decided to use docker since a particular client I am working with mandated that we use bare metal servers they own. A problem that I haven't quite solved yet is that I have several distinct hosts running docker daemons, with certain apps designated for certain hosts. I wish there were a way for docker compose to know about multiple docker hosts. Docker swarm could maybe help, but the client specifically wants certain things to be redundant across two specific hosts.
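Not a full answer, but docker-compose does honour the DOCKER_HOST environment variable, so one (admittedly manual) approach is a compose file per host, pointing the CLI at each daemon in turn; for the services that must be redundant, the same file can be applied to both hosts. Hostnames and file names here are made up:

    DOCKER_HOST=tcp://host-a.internal:2376 docker-compose -f host-a.yml up -d
    DOCKER_HOST=tcp://host-b.internal:2376 docker-compose -f host-b.yml up -d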
No mention of how to debug code in containers... or shared containers created so developers can share libraries... or a multitude of other things which do happen when you start letting developers directly push things to production.<p>If you are using docker, you need to ensure the docker container author takes the pager for the services he provides :)
> Not having to pull from git on every deployment eliminates the risk of a github outage keeping you from being able to deploy. Of course it’s still possible to run into this issue if you rely on DockerHub or another hosted image registry.<p>lol
Link is obfuscated and shortened. Should be changed to target URL: <a href="https://blog.containership.io/containers-vs-config-management-e64cbb744a94" rel="nofollow">https://blog.containership.io/containers-vs-config-managemen...</a>