Ignoring the obviously opinionated cruft and hyper-aggressive uber-geek disdain, which appears to make up about 70% of this post, there are still one or two actual statements worth examining. FWIW I run a small site, fifteen or so instances, and we've been using Docker in our deployment for about a year now.<p>> Lets say you want to build multiple images of a single repo, for example a second image which contains debugging tools, but both using the same base requirements. Docker does not support this.<p>Of course it does. It appears that it doesn't support it the way you think it should, but to say that you can't do it is misleading. A base image, plus two images that build FROM it with their different requirements, solves the problem. You apparently don't like that solution, but that is not the same thing as not having a solution.<p>> there is no ability to extend a Dockerfile<p>Yeah, this would be nice. Maybe they will add it. But it is hardly... not even close to... a make-or-break feature. Honestly I think you might just need to refactor your stuff, or perhaps Docker just isn't a fit for what you're doing.<p>> using sub directories will break build context and prevent you using ADD/COPY<p>You mean if you include a bunch of stuff in subdirectories that you don't want uploaded to the daemon. Again, man, not even close to make or break; that's largely what .dockerignore is for. You really need to log gigabytes to a subdirectory in your build context? There's _no other way_ you could set that up? We create gigs of logs too, but most of them are events that go to logstash and get indexed into ES. Our file-based logs go to mount points outside the container. We do have images we build using context, where we ADD or COPY multi-gigabyte static data files. Seems to work fine.<p>> and you cannot use env vars at build time to conditionally change instructions<p>No, you can't. I'm not sure I would want to.
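<p>To make the base-image approach concrete, here's a minimal sketch (the image names and packages are invented, not from our actual setup) — two Dockerfiles, the second inheriting everything from the first:<p>

```dockerfile
# base/Dockerfile -- the shared requirements
FROM python:2.7
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt

# debug/Dockerfile -- same base, plus debugging tools
FROM mysite/base
RUN apt-get update && apt-get install -y strace gdb
```

<p>Build the base once (`docker build -t mysite/base base/`), then the debug image picks up the base requirements for free. Not the single-file layout the author wants, but it absolutely covers the use case.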
I like the fact that the Dockerfile is a declarative and static description of the dependencies for a deployment. I don't think I want to have to debug conditional evaluation at build time. There are other ways to solve those problems, like refactoring your images.<p>> Our hacky workaround was to create a base image, two environment specific images and some Makefile automation which involved renaming and sed replacement. There are also some unexpected "features" which lead to env $HOME disappearing, resulting in unhelpful error messages. Absolutely disgusting.<p>First of all, what exactly is hacky about having a base image and two environment-specific images? I don't know what sort of makefile automation you're talking about, but we do some environment-specific sed manipulation of configs at build time, and in some cases at container launch time. Sometimes that makes more sense than having two different versions of the container just to have a very slight change to the config.<p>Secondly... absolutely disgusting? Is that the sort of language you regularly use in technical writing? Oh, hey, look at the third paragraph: "If you expect anything positive from Docker, or its maintainers, then you're shit outta luck." I guess it is. The strike-out font was a nice touch, man. "I don't really mean this, but you can't help reading it!" Nobody's ever done that before.<p>> These problems are caused by the poor architectural design of Docker as a whole, enforcing linear instruction execution even in situations where it is entirely inappropriate<p>You're not talking about linear instruction execution. You're talking about grouping instructions into committed layers. I would much prefer the proposed LAYER command to conditional execution or branching, which is what I assume you mean by non-linear in your comment. But I don't find this to be a serious problem either.
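<p>The launch-time sed approach looks roughly like this — a templated config plus a small entrypoint that fills in environment-specific values. The filenames, the `@DB_HOST@` placeholder, and the `DB_HOST` variable are all hypothetical, just a sketch of the pattern:<p>

```shell
#!/bin/sh
# entrypoint.sh sketch -- substitute environment-specific values into a
# config template at container start, instead of baking a per-env image.
set -e

: "${DB_HOST:=localhost}"   # default if the env var is unset

# Stand-in template; in a real image this would be COPY'd in at build time.
printf 'db_host = @DB_HOST@\n' > /tmp/app.conf.tmpl

sed "s/@DB_HOST@/${DB_HOST}/g" /tmp/app.conf.tmpl > /tmp/app.conf

# exec "$@"   # a real entrypoint would hand off to the container's command here
```

<p>One image, and `docker run -e DB_HOST=db.prod ...` versus `docker run -e DB_HOST=db.staging ...` is the whole difference between environments.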
That seems to be a pattern with this post: in a year of using Docker to containerize all our services - in-house python code, Django, redis, logstash, elasticsearch, postgresql - I haven't run into any of the issues that are deal breakers for you. Again, you might want to try to refactor and simplify some of your image builds. It's better to have a few simpler containers talking to each other than to try to cram a complex multi-service deployment into one. But then, I don't know what you're doing, and maybe it's just not suited for containers. You seem to have a strong preference for VMs anyway, so do that.<p>> However the Docker Hub implementation is flawed for several reasons. Dockerfile does not support multiple FROM instructions (per #3378, #5714 and #5726), meaning you can only inherit from a single image.<p>This whole post is like a laundry list of Absolutely Critical Things Nobody Ever Needed. I can't imagine a situation in which you'd absolutely have to be able to inherit from multiple images. If you do have that situation, I would agree it's an indicator that Docker won't work the way you currently want to do things. I do agree with you about the occasional speed issues on the hub. But they're giving it to lots of people for free, and to me for a ridiculously low price. If I need better performance I can always run my own registry.<p>> There are some specific use cases in which containerisation is the correct approach, but unless you can explain precisely why in your use case, then you should probably be using a hypervisor instead.<p>There are some specific use cases in which virtualization is the correct approach, but unless you can explain precisely why in your use case, then you should probably be using containers instead.<p>See what I did there?<p>> If your development workflow is sane, then you will already understand that Docker is unnecessary.<p>I do like to read even-handed, unbiased reviews of technologies like Docker, even when I already use them.
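<p>And for what it's worth, "a few simpler containers talking to each other" can be as little as a compose file. Service names and ports here are made up, but the shape is the point — one container per concern, wired together by name:<p>

```yaml
# docker-compose.yml sketch -- one service per container
web:
  build: ./web
  ports:
    - "8000:8000"
  links:
    - redis
    - db
redis:
  image: redis
db:
  image: postgres
```

<p>Each piece stays a dumb, single-purpose image; the orchestration lives in one small file instead of in a multi-service monster Dockerfile.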
I like to have my world view challenged with an exposition of solid critical points. Maybe someone will write an article like that.