Watching this talk has so far (I'm halfway through, and now giving up) been very disappointing, primarily because many of the features and implementation details ascribed to "traditional databases" are not true of the common modern SQL databases, and almost none of them are true of PostgreSQL. As an initial, trivial example: many database systems allow you to store arrays. In the case of PostgreSQL, you can have quite complex data types, from dictionaries and trees to JSON, or whatever else you want to come up with, as it is a runtime-extensible system.

However, it goes much deeper than these surface details. As a much more bothersome example, one quite fundamental to the point he seems to be making with this talk: at about 15:30 he seriously says "in general, that is an update-in-place model", and then spends multiple slides on the problems of that storage model. Yet, *modern databases don't do this.* Even *MySQL* doesn't do this (anymore). Instead, modern databases use MVCC, which involves keeping all historical versions of the data for at least some time; in PostgreSQL, that could be a very long time (until a VACUUM occurs; if you want to keep things forever, that can be arranged ;P).

http://en.wikipedia.org/wiki/Multiversion_concurrency_control

This MVCC model directly solves one of the key problems he spends quite a bit of time at the beginning of his talk attempting to motivate: that multiple round-trips to the server cannot obtain a coherent view of the data. In actuality, you can easily get consistent state from multiple queries: within a single transaction (which, for the record, is very cheap under MVCC if you are only reading), almost all modern databases (Oracle, PostgreSQL, MySQL...) will give you an immutable snapshot of what the database looked like when the transaction started. The situation is only getting better and more efficient (I recommend looking at PostgreSQL 9.2's serializable snapshot isolation).

At ~20:00, he then describes the storage model he is proposing and keys in on how important storing time is in a database; the point is also made that storing a timestamp isn't enough: the goal should be to store a transaction identifier... but again, this is how PostgreSQL already stores its data: every row version (as, again, it doesn't delete data the way Rich believes it does) records the range of transactions for which it is valid. The only difference between existing SQL solutions and Rich's ideal is that this happens per row instead of per individual field (which could easily be modeled, and is simply less efficient).

Now, the point he makes at ~24:00 actually has some merit: you can't easily look up this historical information through the interfaces databases currently present. However, if I wanted to hack that feature into PostgreSQL, it would be quite simple, as the fundamental data model is already what he wants: so much so that the indexes still index the dead data, so I could not only provide a hacked-up feature to query the past, I could do so efficiently. Talking about transactions is already simple: you can get the identifier of the current transaction using txid_current() (and look up other running transactions, if you must, via the info tables; the aforementioned per-row transaction visibility range is even already accessible through the magic xmin and xmax columns on every table).
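To make the earlier data-type point concrete, here's a rough sketch (the table and column names are made up for illustration; hstore is a contrib extension, and the json column type needs 9.2+):

    -- hstore provides a key/value dictionary type
    CREATE EXTENSION IF NOT EXISTS hstore;

    CREATE TABLE talk_notes (
        id         serial PRIMARY KEY,
        tags       text[],      -- array type
        attributes hstore,      -- dictionary type
        body       json         -- JSON document
    );

    INSERT INTO talk_notes (tags, attributes, body)
    VALUES (ARRAY['databases', 'mvcc'],
            'speaker => "Rich Hickey"',
            '{"timestamp": "15:30", "claim": "update-in-place"}');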
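The snapshot behavior is just as easy to demonstrate: under REPEATABLE READ (which PostgreSQL implements as snapshot isolation), every query in the transaction sees the database as of a single snapshot, no matter how many round-trips you make or what other sessions commit in the meantime. Roughly (same hypothetical table):

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM talk_notes;    -- the first query establishes the snapshot
    -- ... other sessions may commit inserts/updates/deletes here ...
    SELECT * FROM talk_notes;           -- still evaluated against that same snapshot
    COMMIT;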
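And the per-row transaction metadata really is sitting there to be queried today; again against the hypothetical table:

    SELECT txid_current();                    -- identifier of the current transaction
    SELECT xmin, xmax, * FROM talk_notes;     -- each row carries its creating/expiring transaction ids
    UPDATE talk_notes SET tags = tags || 'edited' WHERE id = 1;
    SELECT xmin, xmax, * FROM talk_notes
     WHERE id = 1;                            -- xmin now shows the updating transaction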