Some context: A few years ago, there was a paper from Google (<a href="https://dl.acm.org/doi/10.1145/3183713.3196909" rel="nofollow">https://dl.acm.org/doi/10.1145/3183713.3196909</a>) that made learned data structures popular for a while. They started from the idea that indexes such as B-trees approximate an increasing function with one-sided error. By using that perspective and allowing two-sided error, they were able to make the index very small (and consequently quite fast).<p>Many data structure researchers got interested in the idea and developed a number of improvements. The PGM-index is one of those. Its main idea is to use piecewise linear approximations (that can be built in a single quick pass over the data) instead of the machine learning black box the Google paper was using.
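To make the piecewise-linear idea concrete, here is a rough sketch of how a lookup with such an index could work (the `Segment` layout and the error-window handling below are illustrative assumptions, not the paper's actual API):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch: each segment approximates the positions of a sorted,
// contiguous key range with a straight line, and every prediction is assumed
// to be within +/- epsilon of the true position.
struct Segment {
    uint64_t first_key;  // smallest key covered by this segment
    double slope;        // positions gained per unit of key
    double intercept;    // predicted position of first_key
};

// Assumes `keys` is sorted and `segments` (sorted by first_key) covers `key`.
size_t approx_lookup(const std::vector<uint64_t>& keys,
                     const std::vector<Segment>& segments,
                     uint64_t key, size_t epsilon) {
    // Pick the segment responsible for this key.
    auto seg = std::prev(std::upper_bound(
        segments.begin(), segments.end(), key,
        [](uint64_t k, const Segment& s) { return k < s.first_key; }));

    // Predict the position, then correct it within a window of 2*epsilon keys.
    double predicted = seg->intercept + seg->slope * static_cast<double>(key - seg->first_key);
    size_t guess = predicted > 0 ? static_cast<size_t>(predicted) : 0;
    size_t lo = guess > epsilon ? guess - epsilon : 0;
    size_t hi = std::min(keys.size(), guess + epsilon + 1);
    return std::lower_bound(keys.begin() + lo, keys.begin() + hi, key) - keys.begin();
}
```

The index itself is just the array of segments, which can be far smaller than the data, and each query touches one segment plus a window of at most 2*epsilon keys.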
This is a major practical advance from the succinct data structure community.<p>This community has produced so many brilliant results in recent years, but they work in the shadows. Since the rise of interest in neural network methods, I've often described their work as "machine learning where epsilon goes to 0." It's not sexy, but it is extremely useful.<p>For instance, Ferragina previously helped to develop the FM-index, which enabled the sequence alignment algorithms used for the primary analysis of short genomic reads (100-250bp). These tools were simply transformative, because they reduced the amount of memory required by genome mappers by orders of magnitude, allowing the construction of full-text indexes of the genome on what was then (~2009) commodity hardware.
I don't get it. I've implemented B-trees. The majority of the space used by a B-tree is the data itself. Each N-ary leaf of the tree is basically a vector of data with maybe some bookkeeping at the ends. The leaves are more than half of the tree.<p>Sure, you can compress the data, but that depends on the data: completely random data can't be compressed, other data can be. But a point-blank 83x space claim seems bizarre - or it's comparing against a very inefficient implementation of a B-tree.<p>Edit: It seems the 83x claim is a product of the HN submission; I could not find it on the page. But even so, the page should say something like "a compressed index that allows full-speed look-up" (akin to succinct data structures), and then it would make sense.
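For what it's worth, this is the kind of layout I have in mind when I say the data dominates (a hypothetical textbook B-tree, not anything from the article):

```cpp
#include <cstdint>
#include <vector>

// In a typical B-tree, the leaves hold the actual keys and values, so they
// account for most of the structure's space; the internal nodes hold only
// routing information. A large space saving can therefore only refer to the
// internal/index part, not to the data sitting in the leaves.
struct LeafNode {
    std::vector<uint64_t> keys;    // the data itself lives here...
    std::vector<uint64_t> values;  // ...and here
    LeafNode* next = nullptr;      // some bookkeeping at the ends
};

struct InternalNode {
    std::vector<uint64_t> separators;  // routing keys only
    std::vector<void*> children;       // pointers to leaves or internal nodes
};
```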
Their slides <a href="https://pgm.di.unipi.it/slides-pgm-index-vldb.pdf" rel="nofollow">https://pgm.di.unipi.it/slides-pgm-index-vldb.pdf</a> about the PGM-index, page 21.<p>They stop at a page size of 1024 <i>bytes</i> - that indicates they tested an in-memory situation. And, what's worse, their compression-ratio advantage almost halves when the block size is doubled. So what about a B-tree with blocks of 16K or even 256K?<p>Also, what about log-structured merge trees, where bigger levels can use bigger pages and, quite importantly, can be constructed using a (partial) data scan? These bigger levels can (and should) be immutable, which enables simple byte slicing of keys and RLE compression.<p>So, where's the comparison with more or less contemporary data structures and algorithms? Why beat a half-century-old data structure using settings of said data structure that favor your approach?<p>My former colleague once said "give your baseline some love and it will surprise you". I see no love for B-trees in the PGM work.
Hello everyone. I'm Giorgio, the co-author of the PGM-index paper together with Paolo Ferragina.<p>First of all, I'd like to thank @hbrundage for sharing our work here and also all those interested in it. I'll do my best to answer any questions in this thread.<p>Also, I'd like to mention two other related papers:<p>- "Why are learned indexes so effective?", presented at ICML 20 and co-authored with Paolo Ferragina and Fabrizio Lillo.<p>PDF, slides and video: <a href="http://pages.di.unipi.it/vinciguerra/publication/learned-indexes-effectiveness/" rel="nofollow">http://pages.di.unipi.it/vinciguerra/publication/learned-ind...</a><p>TL;DR: In the VLDB 20 paper, we proved the (rather pessimistic) statement that "the PGM-index has the same worst-case query and space bounds as B-trees". Here, we show that actually, under some general assumptions on the input data, the PGM-index improves the space bound of B-trees from O(n/B) to O(n/B^2) with high probability, where B is the disk page size.<p>- "A 'learned' approach to quicken and compress rank/select dictionaries", presented at ALENEX 21 and co-authored with Antonio Boffa and Paolo Ferragina.<p>PDF and code: <a href="http://pages.di.unipi.it/vinciguerra/publication/learned-rank-select/" rel="nofollow">http://pages.di.unipi.it/vinciguerra/publication/learned-ran...</a><p>TL;DR: You can use piecewise linear approximations to compress not only the index but the data too! We present a compressed bitvector/container supporting efficient rank and select queries, which is competitive with several well-established implementations of succinct data structures.
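To give a flavour of that second idea, here is a toy sketch of compressing a sorted sequence with a line plus small per-element corrections (my own simplified layout, assuming the corrections fit in 8 bits, not the paper's actual structure):

```cpp
#include <cstdint>
#include <vector>

// Toy sketch: replace a sorted sequence by one linear model plus small
// per-element corrections. When the sequence is close to linear, the
// corrections need only a few bits each, so the representation is compressed.
struct CompressedRun {
    double slope = 0.0, intercept = 0.0;  // model: position i -> approximate value
    std::vector<int8_t> corrections;      // value[i] - round(model(i)), assumed to fit in 8 bits
};

// Assumes a non-empty, sorted input.
CompressedRun compress(const std::vector<uint64_t>& values) {
    CompressedRun run;
    const size_t n = values.size();
    // Naive model through the endpoints; the real construction picks optimal
    // segments with a bounded maximum error.
    run.intercept = static_cast<double>(values.front());
    if (n > 1)
        run.slope = (static_cast<double>(values.back()) - run.intercept) / (n - 1);
    run.corrections.resize(n);
    for (size_t i = 0; i < n; ++i) {
        const int64_t predicted = static_cast<int64_t>(run.intercept + run.slope * i + 0.5);
        run.corrections[i] = static_cast<int8_t>(static_cast<int64_t>(values[i]) - predicted);
    }
    return run;
}

// Recover the i-th value exactly from the model plus its stored correction.
uint64_t select_at(const CompressedRun& run, size_t i) {
    const int64_t predicted = static_cast<int64_t>(run.intercept + run.slope * i + 0.5);
    return static_cast<uint64_t>(predicted + run.corrections[i]);
}
```

If the "values" are the positions of the set bits of a bitvector, then select on the bitvector reduces to exactly this kind of decode, which is (very roughly) the direction the rank/select paper takes.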
The slides:
<a href="https://pgm.di.unipi.it/slides-pgm-index-vldb.pdf" rel="nofollow">https://pgm.di.unipi.it/slides-pgm-index-vldb.pdf</a><p>It seems they are only talking about compressing the index (keys) not the values.<p>Also, the slides seem to imply the keys need to be set in sorted order? That way their memory locations will be in increasing order too. That’s quite an important limitation, that means the index is read-only in practice once populated. Though it may still be useful in some cases.<p>Did I misunderstand?
I only watched the video and was disappointed by <a href="https://youtu.be/gCKJ29RaggU?t=408" rel="nofollow">https://youtu.be/gCKJ29RaggU?t=408</a>, where they compare against tiny B*-tree page sizes that nothing uses any more - 4K, 16K and 64K are way more common.
Why use learning when you can fit?<p><a href="http://databasearchitects.blogspot.com/2019/05/why-use-learning-when-you-can-fit.html" rel="nofollow">http://databasearchitects.blogspot.com/2019/05/why-use-learn...</a>
How would one (very roughly) characterize what this index does in terms of big-O notation for time and space? Is it the same as a B-tree in time but with linearly less space?
For a detailed study of learned indexes, see this work: <a href="https://vldb.org/pvldb/vol14/p1-marcus.pdf" rel="nofollow">https://vldb.org/pvldb/vol14/p1-marcus.pdf</a><p>All the code is available as open source: <a href="https://github.com/learnedsystems/SOSD" rel="nofollow">https://github.com/learnedsystems/SOSD</a>
I've already thought about the idea of using statistics to optimize access time, so I guess this is a viable way of implementing it correctly.<p>That's pretty amazing... I can somehow imagine this tech landing on every modern computer, allowing users to search for anything that is on their machine.
Many devs are probably familiar with perfect hashes, as the gperf tool seems omnipresent on Linux machines. Is this a related concept? The learning part makes me suspect so, but the slopes and interpolation part makes me doubt it.
This is interesting. Could this be adapted to store 2D data, like how a quadtree is a 2D range tree? (If you link me to a paper / pseudocode for that, I could implement it.) I imagine it would be useful in GIS, gaming, etc.
Also, a learned index from Microsoft: <a href="https://github.com/microsoft/ALEX" rel="nofollow">https://github.com/microsoft/ALEX</a>