<i>"Say we want to deploy a web service as four containers running off the “httpd” image....This is simple to ask for but deceptively hard to actually make happen."</i><p>...and yet, would take almost no work to set up using a non-dockerized workflow. I don't understand why so many people are putting themselves through this. It's becoming common to hear people from dinky little startups going down dark rabbit holes trying to build their infrastructure like Google -- but it's a total distraction!<p>If you're running a service with a small number of machines, don't do any of this. Architect a sensible multi-AZ deployment (i.e. a cluster of 2-3 machines, an ELB, a VPN, and a firewall/bastion server), spin up instances by hand, and upgrade things as needed. Create AMIs for your machine classes, and get yourself used to working with a sensible upgrade schedule. Doing this for a small number of machines (e.g. N <= 25) won't take an appreciable amount of your time.<p>Once you start to have more machines than that, you'll probably also have the resources to get someone who knows what they're doing to set up more "magical" automated management schemes. Don't bury yourself in unnecessary complexity just because it's the hot new tech of the moment.
Our own experience with ECS has been similarly negative.
While this was 6 months ago, I am not aware of significant improvement.<p>In general, the whole thing feels rushed and duct taped together.
- The networking model (inherited from Docker) doesn't play nice with ELB.<p>- The built-in AWS tools for monitoring are not container aware.<p>- We've had multiple occurrences of the ECS daemon dying.<p>- Very little visibility into the progress of deploys. The API/console will report something as "running" when in fact it was still loading up.<p>If you watch their videos, they promise integration with Marathon, but if you look at the code, it's in "proof of concept" stage.<p>At this stage, GCE is significantly ahead of AWS.
Out of the box, you get a top notch container story, logging and monitoring.
>> All of this works without requiring that we install or operate our own container scheduler system like Mesos, Kubernetes, Docker Swarm or Core OS Fleet.<p>You can use kubernetes without installing and operating it yourself on Google Cloud Platform too. It's called Google Container Engine, but it's k8s under the hood.<p>My experience with ECS is very brief, as contrasted with several months working daily with k8s. My first impression of ECS was that it is cobbled together from a bunch of existing AWS services, and as such it requires you to get far more involved in the various APIs than kubernetes on GCP does.<p>Overall kubernetes feels more like a cohesive abstraction because that's what it is. ECS by comparison feels like a solution pieced together out of Amazon's existing proprietary parts because that's what it is. I'm sure they will be improving this.
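<p>To make the contrast concrete: the article's "four containers running off the httpd image" example is a single declarative manifest on kubernetes. A minimal sketch (current apps/v1 API; the names are illustrative):

```yaml
# Four replicas of the stock httpd image behind one Deployment.
# Apply with: kubectl apply -f httpd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 4
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
```

The scheduler keeps four copies running and replaces any that die; there's no separate task/service/cluster vocabulary to juggle.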
The biggest opportunity AWS missed with ECS was completely hiding the complexity and concerns of managing EC2 instances. If you use ECS you have to deal with the complexity of both and manage those resources. The dream being sold by the container industry is that you don't really care about the machines your containers are running on. ElasticBeanstalk gets this right because they hide that concern.<p>ECS is akin to selling git on top of SVN. It doesn't really make sense.<p>Bryan Cantrill gave a talk about the craziness of running containers on VMs at <a href="https://www.youtube.com/watch?v=coFIEH3vXPw" rel="nofollow">https://www.youtube.com/watch?v=coFIEH3vXPw</a>.
Author here.<p>I'd love to compare more notes with everyone about deploying to ECS.<p>If you want to play with an ECS cluster, `convox install` is a free and open source tool that sets everything up in minutes. Little to no AWS knowledge required. <a href="https://convox.com/docs/overview/" rel="nofollow">https://convox.com/docs/overview/</a>
ECS is an order of magnitude more complex than almost anything I have experience with (Marathon excepted).<p>I found ElasticBeanstalk to be MUCH simpler for Docker based deployments. It uses ECS behind the scenes for multi-container instances.<p>You essentially deploy a zip with a Dockerrun.json metadata file and it handles everything for you, including rolling deploys based on your auto scaling groups.<p>I never found the ECS definitions of Task and the rest really intuitive to work with.
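<p>For anyone curious what that looks like: the multi-container Dockerrun file is essentially an ECS container definition list plus a version marker. A minimal sketch (the image name and memory size are illustrative):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "httpd",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    }
  ]
}
```

You zip that up (plus any supporting files), upload it via the console or the eb CLI, and Elastic Beanstalk owns the instances, load balancing, and rolling updates from there.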
I've generally enjoyed my time with ECS and I think it's pretty likely it will be a strong contender in (or the winner of) the docker wars, because net-net if you're starting from scratch without experience and a skilled ops team it's the easiest to stand up.<p>But I definitely agree that at this stage there are some oddly rough edges. I'm glad to hear logging is a bit better, but I would love if that were just solved by default. Similarly build/deploy. I think a teeny bit more standardization and UI would make that a ton easier.<p>Overall though I'm still bullish. I put together a sample terraform config that is the skeleton for a basic rails app in ECS <a href="https://github.com/jdwyah/rails-docker-ecs-datadog-traceview-terraform" rel="nofollow">https://github.com/jdwyah/rails-docker-ecs-datadog-traceview...</a>
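<p>The ECS core of such a config is actually quite small. A rough sketch of the three central resources (resource names, the external JSON file, and the count are mine, not from the repo above):

```hcl
# Skeleton of the ECS half of a deployment: a cluster, a task
# definition, and a service that keeps two copies of the task running.
resource "aws_ecs_cluster" "main" {
  name = "rails-app"
}

resource "aws_ecs_task_definition" "web" {
  family                = "web"
  container_definitions = file("web-task.json")
}

resource "aws_ecs_service" "web" {
  name            = "web"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.web.arn
  desired_count   = 2
}
```

Most of the remaining boilerplate lives in the parts terraform can't hide: the EC2 instances or ASG joining the cluster, the load balancer, IAM roles, and logging.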
I found ECS to be quite good honestly (recent experience).<p>I have some multi-server deploys running with load balancers, rolling restarts, SSL, private registries and more.<p>In fact, I like it so much that I created an end to end course that teaches you all about using roughly half a dozen AWS resources to deploy and scale a web application with ECS.<p>Details can be found here (it's an online course taken at your own pace and costs $20):<p><a href="https://www.udemy.com/scaling-docker-on-aws/?couponCode=HN_20" rel="nofollow">https://www.udemy.com/scaling-docker-on-aws/?couponCode=HN_2...</a><p>The example application is a multi-service rails app that uses postgres, redis and sidekiq but you can follow along without needing any rails experience.
>All of this works without requiring that we install or operate our own container scheduler system like Mesos, Kubernetes, Docker Swarm or Core OS Fleet.<p>vs<p>>We need to bring and configure our own instances, load balancers, logging, monitoring and Docker registry. We probably also want some tools to build Docker images, create Task Definitions, and to create and update Tasks Services.<p>Doesn't sound like much of a win then. That sounds annoying. I just set up mesos/dcos on AWS, and it sounds like the same amount of effort, only now I've got a platform-independent solution with great UI and cli, along with load balancers and routing. Is ECS worth the effort?
The first time I heard/read about ECS, I thought it was going to be like Lambda for containers.<p>Instead, Amazon took managing instances and managing containers, selected the hardest-to-comprehend parts of each, and wrapped them in a cumbersome JSON definition.<p>Marathon takes a much simpler approach, but it lacks control over the underlying cluster.<p>ECS should be a combination of Marathon's simplicity with control over the underlying cluster. I deploy a container and I don't want to worry about anything else after that.<p>For now, I'd seriously focus on HTTP-facing services, with simple health check rules for scaling.
Don't rule out the Azure Container Service, which is based on the open source DC/OS[1] and is, arguably, the most robust and proven container system.<p>[1] <a href="http://dcos.io" rel="nofollow">http://dcos.io</a>