Glad to see this point at the end:<p>"16. Have I 'got around' or 'beaten' the CAP theorem?<p>No. You might have designed a system that is not heavily affected by it. That's good."<p>Our thoughts on CAP and how we've dealt with it while building a distributed, truly ACID database might also be interesting to some: <a href="http://foundationdb.com/white-papers/the-cap-theorem/" rel="nofollow">http://foundationdb.com/white-papers/the-cap-theorem/</a>
Pardon my naivete, but why isn't this obvious?<p>Of course two systems can only be consistent if they can communicate, so you have to either sacrifice availability until the partition is resolved, or give up on consistency.
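Concretely, the choice might look something like this toy sketch (hypothetical Replica class and peer objects - just an illustration of the trade-off, not anyone's actual implementation):

    class Replica:
        # One replica deciding what to do when it may be cut off from its peers.
        def __init__(self, peers, prefer_consistency=True):
            self.peers = peers                    # other replicas we must agree with
            self.local_value = None               # possibly stale local copy
            self.prefer_consistency = prefer_consistency

        def can_reach_majority(self):
            reachable = sum(1 for p in self.peers if p.is_reachable())
            return reachable + 1 > (len(self.peers) + 1) / 2

        def read(self):
            if self.can_reach_majority():
                return self.local_value           # normal case: consistent and available
            if self.prefer_consistency:
                raise RuntimeError("partitioned: refusing to answer")  # give up availability
            return self.local_value               # answer anyway, possibly stale: give up consistency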
I'm sorry, but I can't resist: Isn't the CAP theorem irrelevant, because true network partitions never happen in the real world? If a link fails, an administrator will fix it eventually. With any system implementing ACK packets (TCP is one example), a link that fails but is then fixed is the same as a very slow link.
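A toy retransmit loop makes that concrete (send() and recv_ack() here are made-up placeholders): if the message is lost and the link is later repaired, the only thing the sender observes is extra latency before the ACK arrives.

    import time

    # send() and recv_ack() are hypothetical; recv_ack(t) blocks up to t seconds
    # waiting for an acknowledgement and returns True if one arrived.
    def send_reliably(send, recv_ack, msg, timeout_s=30.0, retry_interval_s=0.5):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            send(msg)
            if recv_ack(retry_interval_s):
                return True          # delivered: the outage was indistinguishable from delay
        return False                 # still no ACK when the caller's patience ran out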
Interesting FAQ... I like the idea of bringing this info together.<p>I've found there are lots of more-common things that cause partitions in practice than equipment-in-the-middle failures. Human errors are probably the biggest: network configuration changes, fresh bugs in your own software - or in your dependencies, etc.<p>Also, while a network might be asynchronous, there's usually a limit to how long a message can be delayed in practice... the limit might be how much memory you have to queue up messages, or perhaps how long your client-side software (or your end user) is willing to wait for a message when a dialog is more complex than request/response.<p>When designing distributed software, I've found that it's helpful to ask: <i>when</i> (not if) X process/server/cluster/data-center fails or becomes unreachable - temporarily or forever - how should the rest of my system respond?<p>So, perhaps the most important take-away from the FAQ for designers is #13: that C and A are "spectrums" that you tune to meet your own requirements when the various failure scenarios happen.
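As one concrete example of that tuning, here's a minimal Dynamo-style quorum sketch (my own illustration, with hypothetical replica objects exposing put/get): with N replicas, requiring R + W > N means reads overlap the latest write, while lowering R or W keeps you available when more replicas are unreachable, at the cost of staler reads.

    def quorum_write(replicas, key, value, w):
        acks = 0
        for r in replicas:
            try:
                r.put(key, value)            # may raise if this replica is unreachable
                acks += 1
            except ConnectionError:
                pass
        return acks >= w                     # the write "succeeds" once W replicas acked

    def quorum_read(replicas, key, r_quorum):
        results = []
        for r in replicas:
            try:
                results.append(r.get(key))   # assume each result is a (version, value) pair
            except ConnectionError:
                pass
        if len(results) < r_quorum:
            raise RuntimeError("not enough replicas reachable for this read")
        return max(results)                  # newest version wins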
"A partition is when the network fails to deliver some messages to one or more nodes by losing them (not by delaying them - eventual delivery is not a partition)."<p>That part is confusing to me. Doesn't the term partition have another meaning in distributed system design? For instance, consistent hashing "partitions" keys to multiple nodes. I haven't heard partition as a term describing dataloss.
Didn't Nathan Marz debunk the CAP theorem just last year? <a href="http://nathanmarz.com/blog/how-to-beat-the-cap-theorem.html" rel="nofollow">http://nathanmarz.com/blog/how-to-beat-the-cap-theorem.html</a>