Summary: systems designed to be consistent are bad, because if they break, they will be broken.

At least, that's how I'm parsing this. The author argues that because eventually consistent systems treat inconsistency as a normal event, they can recover from it. But this ignores the fact that large consistent systems (or at least the well-designed ones) also check for inconsistencies, so that failing or failed nodes can be killed (a sketch of that kind of check is below).

Maybe I'm missing something: did anyone see something profound in this article?
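
For concreteness, here's a minimal sketch in Python of the kind of inconsistency check I mean. Everything in it is hypothetical (the node names, hashing whole replica states); a real system would compare Merkle trees or version vectors incrementally instead of hashing entire replicas, but the principle is the same: replicas that disagree with the quorum get fenced instead of being left to serve stale reads.

    import hashlib
    from collections import Counter

    def digest(state: bytes) -> str:
        # Checksum a replica's state so copies can be compared cheaply.
        return hashlib.sha256(state).hexdigest()

    def find_divergent(replicas: dict[str, bytes]) -> list[str]:
        # Hash every replica, take the majority digest as the reference,
        # and report every node whose state disagrees with it.
        digests = {node: digest(state) for node, state in replicas.items()}
        majority, _ = Counter(digests.values()).most_common(1)[0]
        return [node for node, d in digests.items() if d != majority]

    # Hypothetical cluster: node-c never applied the latest write.
    replicas = {"node-a": b"v2", "node-b": b"v2", "node-c": b"v1"}
    print(find_divergent(replicas))  # ['node-c']: fence/kill this node

The point is just that detection isn't unique to eventually consistent designs: a strongly consistent system can run exactly this kind of audit and evict the divergent node before it violates any guarantees.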