The IT industry as a whole still hasn't quite internalised that servers now have dramatically worse I/O performance than the endpoints they are serving.

For example, a project I'm working on right now is a small data warehouse (~100GB). The cloud VM it is running on provides only 5,000 IOPS with relatively high latency (>1ms).

The laptops that pull data from it all have M.2 drives with 200K IOPS, 0.05ms latency, and gigabytes per second of read bandwidth.

It's *dramatically* faster to just zip up the DB, download it, and then manipulate it locally. This includes the download time!

The cheapest cloud instance that even begins to outperform local compute is about $30K/month, and it would be blown out of the water by this new Samsung drive anyway. I don't know what it would cost to exceed 15GB/s of read bandwidth... but I'm guessing: "Call us".

Back in the Good Old Days, PCs and laptops would have a single 5400 RPM drive with *maybe* 200 IOPS, while servers would have a RAID at a minimum: typically many 10K or 15K RPM drives, often fronted by a memory or flash cache. The client-to-server performance ratio was at least 1-to-10 in the server's favour, and often far more lopsided. Now it's more like 10-to-1 in the client's favour, and sometimes as bad as 1000-to-1.
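To put rough numbers on "this includes the download time", here's a back-of-envelope sketch. The IOPS and DB-size figures are the ones above; the link speed (1 Gbit/s) and the size of the random-read workload (10 million reads) are assumptions I've picked purely for illustration:

    # Back-of-envelope comparison, not a benchmark. Link speed and the
    # "random reads per analysis pass" figure are assumed; IOPS and DB
    # size are the numbers quoted above.
    DB_SIZE_GB      = 100
    LINK_GBIT_PER_S = 1.0          # assumed connection speed
    CLOUD_IOPS      = 5_000
    LOCAL_IOPS      = 200_000
    RANDOM_READS    = 10_000_000   # assumed IOPS-bound analytical pass

    download_s = DB_SIZE_GB * 8 / LINK_GBIT_PER_S  # ~800 s  (~13 min)
    remote_s   = RANDOM_READS / CLOUD_IOPS         # ~2000 s (~33 min)
    local_s    = RANDOM_READS / LOCAL_IOPS         # ~50 s

    print(f"query remotely:        {remote_s / 60:5.1f} min")
    print(f"download + query here: {(download_s + local_s) / 60:5.1f} min")

And the download is a one-off cost: every pass after the first runs at local NVMe speed, and compressing the DB before pulling it usually shaves that ~13 minutes down further.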
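For what it's worth, the workflow itself is nothing fancy. A minimal sketch, assuming ssh access to the VM, zstd on both ends (streaming rather than literally zipping, same idea), and the warehouse being a single SQLite file at a made-up path; the host, path, and table name are all hypothetical:

    import sqlite3
    import subprocess

    REMOTE    = "me@cloud-vm"         # hypothetical host
    REMOTE_DB = "/data/warehouse.db"  # hypothetical path
    LOCAL_DB  = "warehouse.db"

    # Compress on the server, stream over ssh, decompress locally in one pass.
    with open(LOCAL_DB, "wb") as out:
        ssh = subprocess.Popen(["ssh", REMOTE, f"zstd -c {REMOTE_DB}"],
                               stdout=subprocess.PIPE)
        subprocess.run(["zstd", "-d", "-c"], stdin=ssh.stdout,
                       stdout=out, check=True)
        ssh.wait()

    # Every query from here on hits local NVMe instead of cloud block storage.
    con = sqlite3.connect(LOCAL_DB)
    print(con.execute("SELECT count(*) FROM facts").fetchone())  # hypothetical table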