Has anyone measured the "write amplification" rate of LevelDB? I've noticed that small writes cause a lot of disk writes (an issue on SSDs), but I haven't measured it with actual numbers yet.
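For a ballpark number, something like the rough sketch below would do. It is only an approximation: it assumes the plyvel binding and Linux's /proc/self/io write_bytes counter, the DB path has to sit on a real disk (not tmpfs), and compactions that would happen after close() are not counted.

    # Rough write-amplification probe: compare bytes handed to LevelDB
    # against bytes the process actually submits to the block layer.
    import os
    import plyvel

    def disk_bytes_written():
        # write_bytes in /proc/self/io counts bytes this process (including
        # LevelDB's background compaction threads) sent to storage.
        with open("/proc/self/io") as f:
            for line in f:
                if line.startswith("write_bytes:"):
                    return int(line.split()[1])
        return 0

    db = plyvel.DB("./wamp-test-db", create_if_missing=True)

    key_len, val_len, n = 16, 100, 200_000
    logical = 0
    before = disk_bytes_written()
    for _ in range(n):
        k = os.urandom(key_len)   # random keys force non-sequential inserts
        v = os.urandom(val_len)
        db.put(k, v)
        logical += key_len + val_len
    db.close()                    # flush the memtable before sampling again
    after = disk_bytes_written()

    print(f"logical bytes: {logical}")
    print(f"disk bytes:    {after - before}")
    print(f"amplification: {(after - before) / logical:.1f}x")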
Is it not possible to re-write a B-Tree once it doubles in size?

For example, suppose you start with a B-Tree holding 1000 values. Once it reaches 2000, re-write it; when it reaches 4000, re-write it again, then at 8000, and so on. This way, query performance stays good even in the presence of random inserts.

You also need not do the whole re-write on the 2000th or 4000th insert. Instead, you can start the re-writing process when the tree is within O(n/log n) inserts of the doubling point: for each of those last O(n/log n) inserts, copy O(log n) values from the old tree to the new one. By the time the size doubles, all the old values have been migrated, and the new B-Tree has been built from an almost sorted stream of values.
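To make the amortization concrete, here is a toy sketch. A SortedDict from sortedcontainers stands in for the B-Tree, and the class name, starting size, and migration budget are all illustrative assumptions, not a real on-disk implementation:

    import math
    from sortedcontainers import SortedDict

    class IncrementallyRebuiltTree:
        """Toy stand-in: SortedDict plays the role of the on-disk B-Tree."""
        def __init__(self):
            self.live = SortedDict()   # tree currently serving queries
            self.new = None            # replacement tree being built
            self.pending = None        # keys still to copy into the new tree
            self.target = 16           # next "doubling" size

        def insert(self, key, value):
            self.live[key] = value
            if self.new is not None:
                self.new[key] = value  # new data goes straight into both trees
                self._migrate_some()
            elif len(self.live) >= self._start_threshold():
                # Within ~n/log2(n) inserts of doubling: start the rebuild.
                self.new = SortedDict()
                self.pending = iter(list(self.live.keys()))

        def _start_threshold(self):
            n = self.target
            return n - max(1, int(n / math.log2(n)))

        def _migrate_some(self):
            # Copy O(log n) old values per insert, so the whole copy finishes
            # by the time the tree reaches its doubling size.
            for _ in range(max(1, int(math.log2(self.target)))):
                key = next(self.pending, None)
                if key is None:        # migration finished: swap trees
                    self.live, self.new = self.new, None
                    self.target *= 2
                    return
                self.new.setdefault(key, self.live[key])

A real B-Tree would scan the old tree lazily instead of snapshotting its keys up front, but the amortization argument is the same: n values copied over the last n/log n inserts works out to O(log n) extra work per insert.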