<i>One of the things that Dan touched on was that MongoDB behaves well when the working set of data exceeds available physical RAM... Which is to say, query performance is excellent when the database is small enough to keep entirely in RAM, but when the data grows past RAM, performance plateaus, limited only by your disk I/O subsystem. This is preferable and familiar; there’s no massive drop (or crash) in performance, only a graceful plateau at the bound-by-disk level.</i><p>Just curious, but is anyone aware of which of the other document-based datastores (CouchDB, Tokyo, etc.) don't behave as sanely when the dataset size is greater than RAM size (and/or why)? Their "part one" article mentions that this type of behavior was a requirement for them, and that it eliminated some of these other datastores.
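For intuition about why the plateau (rather than a cliff) is the expected shape, here's a toy back-of-the-envelope model, not anything MongoDB-specific: if a fraction of uniformly random queries hit RAM and the rest go to disk, average latency smoothly approaches the disk latency as the dataset outgrows memory. The RAM size and latency numbers below are made-up assumptions for illustration.

```python
# Toy model: average query latency as a dataset outgrows RAM,
# assuming uniform random access and a simple RAM-or-disk cost.
RAM_GB = 16             # assumed memory available for the working set
RAM_LATENCY_MS = 0.1    # assumed cost of an in-memory lookup
DISK_LATENCY_MS = 10.0  # assumed cost of a disk-bound lookup

def avg_latency_ms(dataset_gb: float) -> float:
    # Fraction of queries served from RAM; 1.0 while everything fits.
    hit_rate = min(1.0, RAM_GB / dataset_gb)
    return hit_rate * RAM_LATENCY_MS + (1.0 - hit_rate) * DISK_LATENCY_MS

for size_gb in (8, 16, 32, 64, 128, 1024):
    print(f"{size_gb:5d} GB -> {avg_latency_ms(size_gb):6.2f} ms")
```

The printed latencies rise once the dataset passes 16 GB but never exceed the disk latency: the curve flattens out instead of falling off a cliff, which matches the "nice plateau" described in the quote.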