Andy Pavlo talks about this in his class at CMU. You shouldn't expect to get better performance by running a disk-optimized storage engine on memory, because you're still paying all the overhead of locks and pages to work around the latency of disk, even though that latency no longer exists. Instead, you have to build a new, simpler storage engine that skips all the bookkeeping of a disk-oriented storage engine.

https://youtu.be/a70jRWLjQFk
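To illustrate the kind of bookkeeping he means (my own hypothetical sketch, not code from the lecture or any real engine): in a disk-oriented engine every record access goes through the buffer pool, which means hashing the page id, taking a latch, maybe doing eviction and a read, and pinning the page, even when the page is already sitting in RAM. A memory-optimized engine can just keep a direct pointer to the record.

    #include <cstdint>
    #include <mutex>
    #include <unordered_map>
    #include <vector>

    struct Page {
        std::vector<uint8_t> bytes = std::vector<uint8_t>(4096);
        int pin_count = 0;
    };

    struct BufferPool {
        std::mutex latch;                           // single global latch; real engines shard this
        std::unordered_map<uint64_t, Page*> table;  // page_id -> frame in the pool

        // Disk-oriented path: every record access pays hashing, latching and
        // pinning, even when the page is already resident in memory.
        uint8_t* get_record(uint64_t page_id, uint32_t offset) {
            std::lock_guard<std::mutex> guard(latch);
            Page*& frame = table[page_id];
            if (frame == nullptr)
                frame = fake_read_from_disk(page_id);  // miss: eviction + 4 KB read in a real engine
            ++frame->pin_count;                        // and the caller has to unpin later
            return frame->bytes.data() + offset;
        }

        static Page* fake_read_from_disk(uint64_t /*page_id*/) { return new Page(); }
    };

    // Memory-optimized path: the index just stores a direct pointer to the record,
    // so a lookup is a pointer dereference with no pool bookkeeping at all.
    inline uint8_t* get_record_in_memory(uint8_t* record_ptr) { return record_ptr; }

All of that latching and pinning is pure overhead once the latency it was designed to hide is gone.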
Jesus Christ, this is insane. Almost a terabyte of capacity with 12.6 Gbps reads? I have a bunch of geospatial entity resolution workloads that I could absolutely smash with this, for way cheaper than the fat memory-optimized instances.
Stupid question: what are the use cases for such massively fast write speeds?

If you are storing data to disk at that speed, you fill even the biggest Optane drives in a couple of minutes. So it would have to be an application where you need to overwrite a huge amount of data over and over again.
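Back-of-the-envelope check of the "couple of minutes" intuition, with made-up numbers: assuming a 1.5 TB drive and reading the 12.6 figure quoted upthread as GB/s (both assumptions on my part, not figures from the article):

    #include <cstdio>

    int main() {
        // Hypothetical numbers, just to sanity-check the fill-time claim.
        const double capacity_gb    = 1500.0;   // assumed 1.5 TB drive
        const double write_gb_per_s = 12.6;     // assumed sustained write rate in GB/s

        const double seconds = capacity_gb / write_gb_per_s;   // ~119 s
        std::printf("Time to fill the drive: %.0f s (~%.1f min)\n", seconds, seconds / 60.0);
        return 0;
    }

So yes, at anything near that rate you burn through the whole capacity in about two minutes, which is why sustained writes at that speed only make sense for workloads that keep overwriting the same space.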
You can efficiently read data at 256-byte granularity (4 cache lines) with Optane memory, due to its internal checksums. I think it makes much more sense to read/write fine-grained changes, for instance aligning pages to at least 64 or 256 bytes instead of 4 KB, because with a 4 KB page you often write far more data than you actually changed, and you pollute the caches with data you probably don't need. There's a paper about how to add cache-line-aligned mini-pages (16 cache lines): https://db.in.tum.de/~leis/papers/nvm.pdf
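A minimal sketch of what that fine-grained write looks like in practice (my own illustration, not code from the paper): assuming the destination pointer refers to persistent memory mapped into the address space (e.g. a DAX-mapped file) and a CPU with CLWB support, you copy one 256-byte, cache-line-aligned record and write back only the four lines you touched, instead of dirtying a whole 4 KB page.

    #include <immintrin.h>   // _mm_clwb, _mm_sfence (compile with -mclwb)
    #include <cstdint>
    #include <cstring>

    constexpr size_t kCacheLine  = 64;
    constexpr size_t kRecordSize = 256;   // 4 cache lines, matching Optane's internal granularity

    struct alignas(kRecordSize) Record {
        uint8_t bytes[kRecordSize];
    };

    // Copy one 256-byte record into persistent memory and flush only the four
    // cache lines that were actually modified.
    void persist_record(Record* pmem_dst, const Record& src) {
        std::memcpy(pmem_dst, &src, kRecordSize);
        uint8_t* p = reinterpret_cast<uint8_t*>(pmem_dst);
        for (size_t off = 0; off < kRecordSize; off += kCacheLine)
            _mm_clwb(p + off);          // write back each dirty line toward the DIMM
        _mm_sfence();                   // order the write-backs before any later commit marker
    }

With 4 KB pages the same logical update would flush 64 cache lines, most of them unchanged, which is exactly the write amplification and cache pollution the mini-page idea avoids.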