While I appreciate how thorough the article is, it's attacking a bit of a strawman. Pretty much everyone who uses a relational database in a professional capacity has to know which transaction isolation level they're running under, make their own choice about which one to use, and then do things like acquire explicit update locks or use optimistic locking (both sketched below) to ensure data integrity. That doesn't make the ACID properties useless; it means you have to think about a few more things than you'd like to. It's still a different world from trying to mimic ACID on top of a NoSQL database, and the weaker isolation levels still come with fairly hard guarantees. For example, under read committed or snapshot isolation I still have transactionality: if I issue a sequence of 10 updates in a single transaction and then commit it, any other single query sees either the results of all 10 or none of them. That's an important guarantee in many situations, and one I can rely on when deciding how to structure my application logic.

The author of the post basically treats any isolation level below serializable as some sort of sham perpetrated on the development community, and that's not the case: the weaker levels are still useful, and they're still something you can build your desired application-level guarantees on top of. The mere fact that pretty much every database vendor gives you a choice of isolation level should be a pretty obvious clue that there's no one-size-fits-all answer, so harping on the non-existence of a serializable isolation level somewhat misses the point.
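
To make the all-or-nothing point concrete, here's a minimal sketch using Python's built-in sqlite3. The `accounts` table and its values are my own invention, and SQLite's isolation is actually stronger than read committed, but the shape of the guarantee is the same one you'd rely on in, say, PostgreSQL at read committed:

    import sqlite3

    db = "demo.db"

    # Hypothetical setup: ten account rows, all with balance 0.
    setup = sqlite3.connect(db)
    setup.execute("CREATE TABLE IF NOT EXISTS accounts "
                  "(id INTEGER PRIMARY KEY, balance INTEGER)")
    setup.execute("DELETE FROM accounts")
    setup.executemany("INSERT INTO accounts VALUES (?, 0)",
                      [(i,) for i in range(10)])
    setup.commit()
    setup.close()

    # Writer: ten updates in one transaction. sqlite3 opens the
    # transaction implicitly on the first UPDATE; nothing is visible
    # to other connections until commit().
    writer = sqlite3.connect(db)
    for i in range(10):
        writer.execute("UPDATE accounts SET balance = 100 WHERE id = ?", (i,))
    writer.commit()
    writer.close()

    # Reader on a separate connection: a single query sees either all
    # ten updates or none of them, never a mix of old and new rows.
    reader = sqlite3.connect(db)
    balances = [row[0] for row in reader.execute("SELECT balance FROM accounts")]
    assert balances == [100] * 10
    reader.close()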
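
And here's the kind of optimistic locking I mean, again sketched against sqlite3 with an assumed `version` column: read the row along with its version, then make the write-back conditional on the version being unchanged:

    import sqlite3

    conn = sqlite3.connect("demo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS docs "
                 "(id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
    conn.execute("INSERT OR REPLACE INTO docs VALUES (1, 'draft', 1)")
    conn.commit()

    # Read the current state and remember the version we saw.
    body, version = conn.execute(
        "SELECT body, version FROM docs WHERE id = 1").fetchone()

    # Write back only if nobody bumped the version in the meantime.
    cur = conn.execute(
        "UPDATE docs SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        ("edited", 1, version))
    if cur.rowcount == 0:
        conn.rollback()  # lost the race: re-read and retry, or report a conflict
    else:
        conn.commit()    # our write won; the version is now incremented

If `rowcount` comes back 0, someone else modified the row first and you re-read and retry; that's the whole trick, and it works fine at read committed without holding any locks across the read.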