><i>"G2 returns v0 to our client after the client had already written
v1 to G1. This is inconsistent."</i><p>This is absolutely true <i>IF AND ONLY IF</i> the client and servers (and the data itself) have no notion of <i>time</i>...<p>Suppose instead that the client and servers are synchronized to, say, a central clock, that every data update carries a timestamp, and that every response/"done" message carries a timestamp as well. If both sides validate those timestamps (the client should perform timestamp checking just as the servers do), then the client can compare the timestamps in the responses from the different servers it later connects to and see whether those servers were updated appropriately or not.<p>If a later-connected-to server should have been updated but wasn't, then the client (since client and server are exchanging both data and timestamp information) can decide how to proceed at that point.<p>If a later-connected-to server is determined to be inconsistent, then depending on how the client code was written, it could do many things to mitigate the problem.
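<p>As a rough sketch of that timestamp check (all names here are hypothetical, and it assumes perfectly synchronized clocks, which is itself a big assumption in practice):

```python
import time

# Toy sketch, not any real API: a client remembers the timestamp of its
# own last write and refuses to trust a replica that returns older data.

class Replica:
    """Toy in-memory replica storing (value, write_ts) per key."""
    def __init__(self):
        self.data = {}

    def write(self, key, value, ts):
        self.data[key] = (value, ts)

    def read(self, key):
        return self.data.get(key)  # (value, ts) or None

class Client:
    def __init__(self):
        self.last_write_ts = {}  # key -> timestamp of our own last write

    def write(self, replica, key, value):
        ts = time.time()  # assumes a shared, synchronized clock
        replica.write(key, value, ts)
        self.last_write_ts[key] = ts

    def read(self, replica, key):
        record = replica.read(key)
        expected = self.last_write_ts.get(key)
        if record is None or (expected is not None and record[1] < expected):
            # This replica missed an update we know we made: inconsistent.
            return None, "stale"
        return record[0], "ok"

# Usage: write v1 to g1, then read from g2, which never saw the update.
g1, g2 = Replica(), Replica()
client = Client()
client.write(g1, "x", "v1")
value, status = client.read(g2, "x")  # detected as stale
```

The point is only that the client can <i>detect</i> the G2-returns-v0 scenario from the article, rather than silently accept it.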
It could, for example, notify the other servers it knows about that the inconsistent server(s) hold stale data...<p>Hmm, now that I think about it, in any distributed database, every row should carry not one but TWO timestamp fields: one for the time the data was updated on the first server it reached, i.e., the one the client connected to, and a second for the time a given secondary distributed server received and applied that data...<p>Those two fields might be separated by mere nanoseconds (if the distributed servers are tightly coupled), or by weeks if a secondary server was knocked offline for long enough...<p>But see, if the client software is engineered properly and keeps the client's own history, then the client (or a main server, or a server-multiplexing proxy) can check the veracity of any arbitrary distributed server it connects to by asking it to "read me back this data" and comparing the returned timestamps with its local copies...<p>Couple all of that with a database that does not delete or overwrite old records when data is updated, but instead keeps a timestamped log of every update -- an "append-only", a.k.a. "immutable", database, such as InfluxDB, Apache Cassandra, CouchDB, or SQL Server Temporal Tables (and I think RethinkDB may have done something like that in the past) -- and at the very least the client can know whether a given server in a distributed system was updated properly (is consistent) or not...<p>Engineered properly, the client itself could determine what to do under such anomalous conditions...<p>It could even take it upon itself to update the stale server, <i>IF</i> it holds a copy of the correct, most up-to-date data <i>AND</i> has the authority to do so...
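<p>A minimal sketch of the two-timestamp, append-only idea (toy in-memory classes with illustrative names, not any real database's API):

```python
import time

# Each version of a row carries both the origin write time and the time
# this particular replica applied it; nothing is ever overwritten.

class AppendOnlyReplica:
    def __init__(self):
        self.log = {}  # key -> list of (value, origin_ts, applied_ts)

    def apply(self, key, value, origin_ts):
        # Never delete/overwrite: append a new version, recording our
        # own local apply time as the second timestamp.
        self.log.setdefault(key, []).append((value, origin_ts, time.time()))

    def read_back(self, key):
        # "Read me back this data": latest version by origin timestamp.
        versions = self.log.get(key, [])
        return max(versions, key=lambda v: v[1]) if versions else None

def is_consistent(replica, key, expected_origin_ts):
    """Client-side veracity check against the client's own history."""
    latest = replica.read_back(key)
    return latest is not None and latest[1] >= expected_origin_ts

# Usage: two updates arrive; the client verifies the replica has caught up.
r = AppendOnlyReplica()
r.apply("x", "v0", origin_ts=1.0)
r.apply("x", "v1", origin_ts=2.0)
ok = is_consistent(r, "x", expected_origin_ts=2.0)
```

The gap between `origin_ts` and `applied_ts` is exactly the nanoseconds-to-weeks window described above, and the append-only log is what makes the read-back check possible after the fact.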
That wouldn't be the case for some types of applications (e.g., banking, due to necessary security constraints), but it might be for others (e.g., a distributed social media app, where the client is posting to the user's own account and has full permission to update any of the user's data as needed)...<p>Maybe the better question to ask about the CAP theorem is not so much "is it true?" as "what kinds of distributed databases require consistency such that the CAP theorem's trade-offs matter?" (e.g., banking), and "what kinds don't?" (e.g., distributed social media spread over a plethora of servers)...<p>If CAP matters in only some specific database/application contexts, then perhaps those specific databases/applications/distributed systems should be separated out from the ones that don't require strict consistency...<p>This being said, I think Michael Stonebraker is a brilliant computer scientist, and I have tremendous respect for him and for the CAP theorem...
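<p>And a tiny sketch of that client-driven repair, assuming (hypothetically) that the client both holds the freshest copy and is authorized to write, as in the social-media case:

```python
# Illustrative only: a stale replica is repaired by the client itself,
# but only when an authorization flag permits it (it wouldn't in the
# banking case, per the discussion above).

def repair_if_authorized(replica_data, key, local_value, local_ts, authorized):
    """replica_data: dict key -> (value, ts). Returns True if repaired."""
    record = replica_data.get(key)
    stale = record is None or record[1] < local_ts
    if stale and authorized:
        replica_data[key] = (local_value, local_ts)
        return True
    return False

# Usage: the replica still holds v0; the client pushes its newer v1.
store = {"post:1": ("v0", 100.0)}
repaired = repair_if_authorized(store, "post:1", "v1", 200.0, authorized=True)
```

Without the `authorized` flag set, the same call would leave the replica untouched and the client would have to fall back to merely reporting the inconsistency.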