The discovery mechanism looks clumsy to me.<p>There's no way we'd rely on a public discovery service, for example. If we're going to hardcode configuration information - such as the URL of a discovery service that may or may not be up or reachable - we might as well hardcode the addresses of a few peers.<p>And running a second etcd cluster to bring up the main one seems pointless: either it's turtles all the way down, or you need to hardcode config information for the second cluster, in which case it serves little purpose.<p>I'd rather have a mechanism where each peer takes a list of possible peers and tries to connect, with a method for deciding when there is quorum to elect an initial leader and start allowing writes. That's easy enough: add a config option marking a peer as "blessed" to take part in the initial leadership election, plus a setting for how many blessed peers must be connected to have quorum. It just needs to be a majority of the blessed peers, which prevents more than one subset from electing a leader before they all manage to connect.<p>Am I missing something?
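To make the "blessed peer" idea concrete, here's a minimal sketch of the quorum check I have in mind. The function name and structure are made up for illustration; the point is just that requiring a majority of a fixed, statically configured set means two disjoint subsets can never both proceed.

```python
def has_bootstrap_quorum(blessed_peers, connected_peers):
    """Allow the initial leader election only once a majority of the
    statically configured "blessed" peers are reachable.  Two disjoint
    majorities of the same set cannot exist, so at most one partition
    of the cluster can ever pass this check."""
    blessed_connected = set(blessed_peers) & set(connected_peers)
    quorum = len(blessed_peers) // 2 + 1
    return len(blessed_connected) >= quorum

# 3 blessed peers, 2 reachable: 2 >= 2, so election may proceed.
print(has_bootstrap_quorum(
    ["10.0.0.1", "10.0.0.2", "10.0.0.3"],
    ["10.0.0.2", "10.0.0.3"]))  # True

# Only 1 of 3 blessed peers reachable: no quorum yet.
print(has_bootstrap_quorum(
    ["10.0.0.1", "10.0.0.2", "10.0.0.3"],
    ["10.0.0.3"]))  # False
```

Non-blessed peers would simply wait and join once a leader exists, so the blessed set only matters during bootstrap.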
Discovery looks interesting. Can it be used by a client to discover a cluster?<p>I've been digging into Docker linked containers a bit.<p>I'm not entirely comfortable with how they work, and I'm not really sure why. The only thing I can put my finger on is that I feel discovery is a separate concern from deployment. But at the same time they are so closely linked that I can understand why Docker needs to tackle it.<p>Is there a way etcd can work better with Docker links? Maybe it could automatically read/write the ENV variables Docker publishes, or something like that. Though I don't think that will quite work across physical machines without some additional work.
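Rough sketch of what I mean by reading Docker-published ENV variables: the variable-name pattern below follows Docker's documented link convention (e.g. REDIS_PORT_6379_TCP_ADDR), but the /services/ key layout is just something I made up; a real bridge would then publish these pairs into etcd.

```python
import re

# Docker link convention: <NAME>_PORT_<PORT>_TCP_ADDR holds the linked
# container's address.  The /services/ key scheme is hypothetical.
LINK_VAR = re.compile(r"^(?P<name>[A-Z0-9_]+)_PORT_(?P<port>\d+)_TCP_ADDR$")

def docker_links_to_keys(environ):
    """Map Docker-link environment variables to key/value pairs that
    could be written into etcd by a small sidecar process."""
    keys = {}
    for var, addr in environ.items():
        m = LINK_VAR.match(var)
        if m:
            key = "/services/%s/%s" % (m.group("name").lower(), m.group("port"))
            keys[key] = addr
    return keys

print(docker_links_to_keys({
    "REDIS_PORT_6379_TCP_ADDR": "172.17.0.5",
    "PATH": "/usr/bin",
}))  # {'/services/redis/6379': '172.17.0.5'}
```

Going the other direction (writing etcd values back out as ENV variables) is where it breaks down across physical machines, since there's no shared Docker daemon to inject them.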
I'm curious to know more about the garbage collection of stale peers.<p>- AFAIK, etcd is built on Raft, which relies on a 'joint consensus' method of transitioning cluster membership. Are there any issues forming agreement on what the new membership should be when it's unclear which nodes are still supposed to be part of the cluster?
- In the land of ZooKeeper, cluster configurations are typically very very stable, so tracking membership takes very little information. Is etcd targeted to more dynamic environments where the garbage generated by entering and leaving nodes is significant?
Has anyone checked out Serf (<a href="http://www.serfdom.io/" rel="nofollow">http://www.serfdom.io/</a>)? If so, what are the pros and cons of etcd vs. Serf?
This is a pure marketing story pushing a bad solution.<p>Hiera, a simple hierarchical property distribution system, using a Zookeeper backend plus Puppet or Chef, is far superior. Etcd is the PHP of configuration management.