I don't care for the tone of this article, but there are a lot of valid points here.

The NATted networking is problematic, the filesystem is slow (particularly with many image layers), and to get good performance you have to give up most of the isolation features.

To get good network performance, you have to use --net=host; to get good disk performance, you have to mount and write to host volumes; to increase visibility, you have to share the host's PID namespace.

I have a lot of hope that Docker will get more performance to go with the awesome isolation. It's a useful tool in the proper circumstances, but it requires a lot of forethought and information to use well.
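For concreteness, a rough sketch of what that trade-off looks like on the command line (the image name and paths are made up, and availability of --pid=host depends on your Docker version):

    # share the host network stack (no NAT bridge) and the host PID namespace,
    # and bind-mount a host directory so writes bypass the storage driver
    docker run -d --net=host --pid=host -v /srv/app-data:/data myorg/app:latest

You get close-to-native network and disk performance that way, but the container is no longer meaningfully isolated from the host.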
A lot of the (performance) problems are probably caused by the devicemapper backend being used on CentOS; I have no performance issues at all running in production on Ubuntu 14.04 using aufs.

> The slowness of Docker is a big pain. Build and deployment procedures are not predictable.

I have the exact opposite experience. My build & deployment has become very predictable, and I build pretty much all the images I use myself.

> Data belongs onto the filesystem but not into a container that can neither be cloned in an easy way nor incrementally backed up in a reasonable way. Containers are for software, not for data.

Volumes are for data. And sure they can be backed up or cloned: just launch a container with --volumes-from pointing at whatever you want to back up, and have it archive the mounted volumes. Containers should be stateless, and yes, volumes need more attention.

This whole thing smells like "I tried this tool, I don't fully understand it, so I'll write a rant blog about it." Containers force you to think differently about software deployment. I started using Docker very early on, back in the 0.6x days, and I have never regretted putting it in production.
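The backup pattern I mean is roughly this (container name, paths, and the tarball name are illustrative):

    # throwaway container that mounts the app container's volumes
    # and tars them onto a bind-mounted host directory
    docker run --rm --volumes-from myapp \
        -v $(pwd)/backups:/backup \
        ubuntu tar czf /backup/myapp-volumes.tar.gz /data

Run that on a schedule and you have regular backups of everything the app actually persists, without the container itself ever holding state.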