I've been using Docker for a couple of months, but we've only just begun experimenting with actual deployment in a test environment on EC2. Right now we use it primarily for configuration/dependency management. We're a small team, and so far it seems to make setup easier. Two examples. The first is a log sink container that runs Redis + Logstash. The container exposes the Redis and Elasticsearch/Kibana ports, and the run command maps them to the host instance, so setting up a new log server means launching an instance, then pulling and starting the container. The second is Elasticsearch: the container is set up to have cluster and host data injected into it by the run command, so we pull the container, start it, and it joins the designated cluster. What I like about this is the declarative specification of the dependencies and the ease of spinning up a new instance. As I say, we're just experimenting so far and I don't know how optimal any of this is yet, so I'd love feedback. (Rough sketches of both run commands are below.)

One last quick thought on internal discovery. A method we're playing with on EC2 is to use tags. On startup, a container can use a Python script with boto to pull the list of running instances within a region that have certain tags and tag values. So we can tag an instance as an ES cluster member, for example, and our indexer script can find all the running ES nodes and choose one to connect to; other tags can carry exposed ports and other information. Again, we're just messing around and still not sure of the optimal approach for our small group, but these are some interesting possibilities. (A sketch of the discovery script follows the run-command examples.)
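
For the log sink, the run command is basically just port mapping. Something like this — the image name is hypothetical, and the Kibana port depends on how you serve it (80 here is an assumption; Redis and ES are the usual defaults):

```sh
# Hypothetical image name. Launch instance, pull, run -- that's the whole setup.
docker pull example/logsink
docker run -d --name logsink \
    -p 6379:6379 \
    -p 9200:9200 \
    -p 80:80 \
    example/logsink
```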
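
For Elasticsearch, "injected by the run command" in our case means environment variables that a startup script inside the container reads before launching the node. Roughly like this — the image, variable names, and values are all made up for illustration:

```sh
# Hypothetical image and variable names; an entrypoint script in the container
# writes the cluster name and publish host into elasticsearch.yml, then starts ES.
docker run -d --name es-node \
    -p 9200:9200 \
    -p 9300:9300 \
    -e ES_CLUSTER_NAME=search-test \
    -e ES_PUBLISH_HOST=10.0.1.23 \
    example/elasticsearch
```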
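
And the tag-based discovery is just a small boto query. A minimal sketch, assuming a `role` tag marks ES cluster members and an `es_port` tag carries the exposed port (both tag names, and the region, are made up):

```python
# Sketch of tag-based discovery on EC2 using boto.
import random
import boto.ec2

def find_es_nodes(region='us-east-1'):
    conn = boto.ec2.connect_to_region(region)
    # Only running instances tagged as ES cluster members.
    reservations = conn.get_all_instances(filters={
        'tag:role': 'es-node',
        'instance-state-name': 'running',
    })
    return [i for r in reservations for i in r.instances]

if __name__ == '__main__':
    nodes = find_es_nodes()
    node = random.choice(nodes)          # pick one node to connect to
    port = node.tags.get('es_port', '9200')  # other tags can carry extra info
    print('connecting to %s:%s' % (node.private_ip_address, port))
```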