Non-deterministic transaction ordering is well reviewed in [2]. Summary: DBs may reorder transactions because acquiring read/write sets or locks, or waiting out I/O timeouts, will block some transactions while others can proceed. This way average throughput is higher.<p>Calvin [4], reviewed in the Morning Paper [3], builds on [1] and argues for a deterministic ordering of transactions. Its advantages are:<p>- no two-phase commit protocol, which is I/O heavy<p>- no logging of the physical layout of CRUD changes made to disk; only transaction inputs need be logged<p>- Calvin's sequencing and scheduling layers can be bolted onto any CRUD API over storage<p>I would draw the reader's attention to [4, pg 4]:<p>"Calvin divides time into 10-millisecond epochs during which every machine’s sequencer component collects transaction requests
from clients. At the end of each epoch, all requests that have arrived at a sequencer node are compiled into a batch. This is the
point at which replication of transactional inputs (discussed below) occurs."<p>So while it avoids those other problems, the implication is that no transaction can complete in less than the epoch time. That'd be too bad, because an all-in-memory DB or KV store can't be stalled by disk ... I had rather hoped Calvin could be considered for a low-latency KV store, given that I could bolt storage underneath it.<p>Did I miss something?<p>Note that unless one falls back to OCC, or goes the VoltDB route and supports transactions on single partitions only, there is no way to avoid locking in deterministic transaction scheduling, because that sequencing is single-core work.<p>[1] https://github.com/yaledb/calvin<p>[2] http://www.cs.umd.edu/~abadi/papers/determinism-vldb10.pdf<p>[3] https://blog.acolyer.org/2019/03/29/calvin-fast-distributed-transactions-for-partitioned-database-systems/<p>[4] http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf
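<p>To make the latency floor concrete, here is a toy sketch of the epoch batching described in the quote above. This is my own illustration, not code from the Calvin repo: a request that arrives just after an epoch opens cannot even begin replication until the epoch seals, so the epoch length is a lower bound on transaction latency.

```python
EPOCH_MS = 10  # Calvin's epoch length per [4]

def sequencer(requests, epoch_ms=EPOCH_MS):
    """Toy sequencer: collect requests into per-epoch batches.
    `requests` is a list of (arrival_time_ms, txn_id). Returns a list of
    (batch_seal_time_ms, [txn_ids]): a txn waits at least until the end
    of the epoch it arrived in before its batch can replicate at all."""
    batches = {}
    for arrival_ms, txn in requests:
        epoch = arrival_ms // epoch_ms
        batches.setdefault(epoch, []).append(txn)
    # each batch seals (and replication starts) only at the epoch boundary
    return [((e + 1) * epoch_ms, txns) for e, txns in sorted(batches.items())]
```

A txn arriving at t=1ms is stuck until t=10ms before replication even starts, regardless of how fast the (possibly all-in-memory) storage underneath is.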
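<p>On the locking point, a minimal sketch (my own illustration, not Calvin's actual lock manager) of why conflicting transactions must serialize in the pre-agreed sequence order: since a single thread hands out each key's lock in sequence order, any two txns touching the same key are ordered identically on every replica, and that lock-granting is inherently single-core work.

```python
from collections import defaultdict

def commit_order(txns):
    """txns: [(txn_id, keys)] in the global sequence order.
    Returns the forced ordering constraints: for each txn, the set of
    earlier conflicting txns it must wait behind. Because locks are
    granted in sequence order by one thread, this graph is the same on
    every replica and cannot deadlock."""
    last_holder = {}            # key -> most recent txn to lock it
    depends = defaultdict(set)  # txn -> earlier conflicting txns
    for txn_id, keys in txns:
        for k in keys:
            if k in last_holder:
                depends[txn_id].add(last_holder[k])
            last_holder[k] = txn_id
    return depends
```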