[Knee jerk reaction]
Be careful now. Be very careful.<p>> Godbolt’s Law — if any single optimisation makes a routine run two or more times faster, then you’ve broken the code.<p><<a href="https://xania.org/200504/godbolt's-law" rel="nofollow noreferrer">https://xania.org/200504/godbolt's-law</a>>
Mark is one of the world's top experts on practical MySQL performance at scale, having spent a huge amount of time optimizing MySQL at Google and Facebook. There's a question in this thread about whether this has real world impact... yes, if Mark noticed it, yes, yes it does. This will materially improve many common workloads for InnoDB.
4x perf on inserts... These kinds of posts make me both scared and depressed about the state of our industry. I feel like we're all hitting rocks to make fire in a cave.
> I assume this problem was limited to InnoDB because I did not see problems with MyRocks.<p>This would certainly explain why InnoDB is insanely slow at loading mysqldumps compared to MyISAM - we hit a wall in some systems and were unable to switch because of it. There are tons of questions online about how to speed this up; people were generally aware of the problem but assumed it was because InnoDB is more reliable with the data (like with foreign keys), or something about how it structures the data on disk that couldn't be changed.
Is someone more experienced able to shed light on how these benchmarks compare to real-world use? Writes tend to be pretty resource intensive; is a 4x benchmark gain going to show up as 2-4x faster writes in production environments?
Inserts ought to be able to run at the storage medium's write speed.<p>I.e. if I insert 1 million records of 1 kilobyte each, and my SSD can do 1GB/s of writes, then I should be able to do it in 1 second.<p>How close are we to that?<p>Transactions shouldn't slow this down (it's possible to write all the data to new areas of disk, and then the final 'commit' is a metadata update to say those new areas are active).<p>Indexes might slow it down depending on the index design. But it's possible to design an index where updates are coalesced, and therefore the changed parts of the index are only written once as a single bulk write. Assuming the index size is only 10% of the data size, that's a 10% slowdown.
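For concreteness, the arithmetic above can be sketched as back-of-the-envelope math. This is only an illustration of the commenter's idealized model, not a measurement; the 1 GB/s write speed, 1 KB record size, and 10% index ratio are the assumptions stated in the comment:

```python
# Idealized bulk-insert time if writes run at raw sequential SSD speed.
# All figures are the illustrative assumptions from the comment above.
record_count = 1_000_000
record_size_bytes = 1_024                # 1 KB per record
ssd_write_bytes_per_s = 1_000_000_000    # assumed 1 GB/s sequential write

data_bytes = record_count * record_size_bytes        # ~1 GB of row data
ideal_seconds = data_bytes / ssd_write_bytes_per_s   # ~1 second

# If index pages are coalesced and written once as a bulk write, and the
# index is ~10% of the data size, write amplification adds roughly 10%.
index_ratio = 0.10
with_index_seconds = ideal_seconds * (1 + index_ratio)

print(f"{ideal_seconds:.3f}s ideal, {with_index_seconds:.3f}s with coalesced index")
```

Real engines fall short of this floor because of fsync-per-commit, redo/undo logging, B-tree page splits, and random index I/O, which is exactly the gap the benchmarks in the article are probing.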