I still maintain that in-memory databases exist for two main reasons: scalability bottlenecks in GC, and storage latency falling behind network latency and staying there.

If general-purpose programming languages could store the data efficiently in main memory, the feature set of an in-memory database is small enough that you could roll your own incrementally. But your GC times are going to go nuts, and you'll go off the rails (the first sketch below makes this concrete).

If the speed of light governed data access, you'd keep your data local and let the operating system decide which hot pages to keep in memory versus storage (see the second sketch below).

The last time the network was faster than disk was the 1980s, and we got things like process migration systems (Sprite). Those evaporated once the pendulum swung back.
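
To make the GC point concrete, here's a minimal Go sketch (the struct, count, and sizes are hypothetical, chosen only to show the failure mode): fill the heap with tens of millions of pointer-bearing objects, the way a naive hand-rolled in-memory store would, and the cost of a collection grows with the live set.

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    // A small pointer-bearing record: the worst case for a tracing GC,
    // since every mark phase has to walk all of them.
    type node struct {
        key  int64
        next *node
    }

    func main() {
        const n = 20_000_000 // ~20M live objects, a modest "database"

        var head *node
        for i := 0; i < n; i++ {
            head = &node{key: int64(i), next: head}
        }

        // Force a full collection and time it; mark cost scales with
        // the number of live pointers, not with how hot the data is.
        start := time.Now()
        runtime.GC()
        fmt.Printf("GC over %d live objects: %v\n", n, time.Since(start))

        runtime.KeepAlive(head)
    }

The usual escape hatch is moving the data off the GC'd heap entirely (flat byte arrays, off-heap arenas), which is exactly the point where you've started rolling your own database after all.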
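And "let the OS decide" is, in practice, mmap plus the page cache. A Unix-only Go sketch ("data.bin" is a stand-in filename, not a real format):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // "data.bin" is a placeholder for whatever local file holds the data.
        f, err := os.Open("data.bin")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        info, err := f.Stat()
        if err != nil {
            panic(err)
        }

        // Map the file read-only; the kernel pages data in on first touch
        // and evicts cold pages under memory pressure. Hot-versus-cold
        // placement is the OS's problem, and no GC ever scans these bytes.
        data, err := syscall.Mmap(int(f.Fd()), 0, int(info.Size()),
            syscall.PROT_READ, syscall.MAP_SHARED)
        if err != nil {
            panic(err)
        }
        defer syscall.Munmap(data)

        fmt.Println("first byte:", data[0])
    }
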