Interesting timing. I was playing with Etcd just this morning. I'm glad there are more options in this space; I haven't been happy with any setup so far.<p>Doozer (<a href="https://github.com/ha/doozerd" rel="nofollow">https://github.com/ha/doozerd</a>) got
me excited. It's small, fast, and written in Go. Unfortunately, its development seems quiet and fragmented. Its
lack of TTL-style values made it a pain to build a distributed lock service without a sweeper to clean up dead locks.<p>Zookeeper (<a href="http://zookeeper.apache.org" rel="nofollow">http://zookeeper.apache.org</a>) is much more fully featured and
mature, but felt way too heavy compared to my nimble Go stack. Installing
and maintaining a JVM just for Zookeeper made me uncomfortable.<p>Etcd is interesting. It has TTLs, it's small and fast, easy to pick up, and it's under active
development (and it's tied to CoreOS and Docker, so it's bound to get some
reflected love).
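The TTL point is worth making concrete. Here's a toy in-memory sketch (not etcd's or Doozer's actual API; all names are made up) of why TTL-backed keys let dead locks expire on their own, instead of needing the sweeper process Doozer forces on you:

```python
import time

class TTLStore:
    """Toy in-memory key/value store with etcd-style TTL expiry.

    A lock key that carries a TTL simply disappears if its holder
    dies and stops refreshing it, so no separate sweeper is needed.
    """
    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        expires_at = time.monotonic() + ttl if ttl is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]  # lazy expiry on read
            return None
        return value

    def acquire_lock(self, key, owner, ttl):
        # Succeeds only if the key is absent or its TTL has lapsed.
        if self.get(key) is not None:
            return False
        self.set(key, owner, ttl)
        return True
```

Without TTLs, a crashed worker's lock key lives forever and something else has to notice and delete it; with them, a second worker just retries until the lease runs out.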
We reviewed the current options at one point (Zookeeper, Doozer, and something from Netflix) but ended up just using DNS (Route 53) for our config. This, however, looks great and supports nested keys (trees), an area where DNS comes up short, among others.
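What a tree-shaped keyspace buys you over flat DNS records is prefix enumeration: list everything under a directory. A minimal sketch (the keys and values here are hypothetical, and this is a flat dict standing in for a real store, not etcd's API):

```python
# Hypothetical config keys laid out as a tree of paths.
config = {
    "/apps/web/db_host": "10.0.0.5",
    "/apps/web/db_port": "5432",
    "/apps/worker/queue": "jobs",
}

def list_dir(prefix):
    """Return all key/value pairs under a directory prefix,
    the way a tree-structured store can answer 'what config
    does this app have?' in one query."""
    prefix = prefix.rstrip("/") + "/"
    return {k: v for k, v in config.items() if k.startswith(prefix)}
```

With DNS you'd have to know every record name in advance; TXT/SRV lookups can't ask "give me everything under /apps/web".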
How is 1000s of writes a second fast? Especially for a key/value store? Also, why is it returning redundant information?<p><pre><code> $ curl -L http://127.0.0.1:4002/v1/keys/foo
{"action":"GET","key":"/foo","value":"bar","index":5}
</code></pre>
You already know the action and the key; all it needs to return is "bar" and the index, though even the index might be unnecessary. While I'm at it, why is 'curl' returning JSON? I don't know a lot of Unix commands that take JSON input.<p>While I'm throwing around my valueless opinions, this thing is wholly uncomfortable and over-engineered for its supposed purpose. Further complications from unnecessary requirements like the Raft protocol (what the fuck does autonomous distribution of resource management have to do with sharing configuration?!? s'like building X.509 into Telnet) make this thing's hinges groan from feature bloat.<p>Yet more blather: why do you have to configure each host's address and a unique port? Isn't etcd supposed to support automatic service discovery? Zeroconf (among many, many others) has had this working for years, and it's not hard to use the existing open-source implementations. And why is HTTPS an advanced use?
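To put the redundancy complaint in concrete terms: a client that only wants the value still has to parse the whole envelope and throw most of it away. A sketch, using the response body quoted above verbatim:

```python
import json

# The etcd response quoted earlier, verbatim. Of the four fields,
# only "value" (and arguably "index", for compare-and-swap use)
# tells the client something it didn't already know: it chose the
# action and the key itself when it made the request.
body = '{"action":"GET","key":"/foo","value":"bar","index":5}'

doc = json.loads(body)
value = doc["value"]
```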
I do think this looks interesting, but the second bullet point makes me cringe (my emphasis):<p>- Secure: <i>optional</i> SSL client cert authentication<p>Reading the section on SSL in the guide makes me feel slightly better, although I'm worried that it doesn't mention anything about revoking certificates and/or online status checking.<p>Perhaps we need a (better?) library for TLS-PSK and/or TLS+Kerberos for these kinds of uses of HTTPS? That, or a compact stand-alone CA that simplifies certificate management and enrolment to the point where it is usable, deployable, and <i>reasonably</i> secure.<p>I'm imagining a compact "master CA" service that only deals with maintaining (optionally) off-line root certs, which only certify intermediary on-line CA(s), which in turn handle enrolment and revocation of service and client (i.e. "principal") certs.<p>Of course, at that point, you've pretty much created a Kerberos work-alike on top of TLS (for some extra spiffiness, set the intermediary CAs to issue certs with 10-minute lifetimes...) -- and I'm not sure such a system would really be better than just using Kerberos in the first place...<p>[edit: formatting]<p>Maybe the ease of interop with other REST/HTTP-based services and clients would be worth it -- maybe not.
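For what "non-optional" client-cert auth looks like on the server side, here's a minimal sketch using Python's stdlib `ssl` module (this is not etcd's configuration; the CA file path is hypothetical):

```python
import ssl

# Make client-cert authentication mandatory rather than optional:
# any client that doesn't present a cert signed by our CA is
# rejected during the handshake.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # CERT_NONE would be the "optional" posture

# Hypothetical path: the (intermediary) CA that signs client
# ("principal") certs. Uncomment with a real bundle in place.
# ctx.load_verify_locations(cafile="/etc/etcd/ca/clients.pem")

# Revocation is the part the guide doesn't cover; very short-lived
# certs (e.g. 10-minute lifetimes, as above) sidestep CRLs/OCSP
# entirely, at the cost of running an always-on issuing CA.
```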