Aguri tries, which are bounded-size radix trees with an LRU aggregation rule that joins sibling nodes.

The motivating use case is tracking metrics for IP addresses. You insert individual IP addresses into the tree, and when the tree fills, it starts rolling nodes for /32s into /31s, /30s, &c. Eventually, you get a picture of the ranges of IP addresses in the data. That's neat because, of course, IP packets themselves don't tell you anything about what subnets their IP addresses belong to.

The goofy thing I've done with them is apply them to memory addresses, so that I can collect individual pointers and bubble them up into allocation ranges, without instrumenting allocators.
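For a flavor of how the aggregation plays out, here's a toy sketch in Python. This is not the real Aguri implementation (which uses a path-compressed Patricia trie and a proper LRU list); it's a one-bit-per-level binary trie with a node budget, and the class and method names are mine. When an insert pushes the trie over budget, the least-recently-touched leaf's count is folded into its parent, so exact /32 counts gradually coarsen into /31, /30, ... prefixes:

```python
import ipaddress

class Node:
    __slots__ = ("children", "count", "last_used")
    def __init__(self):
        self.children = [None, None]  # 0-bit / 1-bit subtries
        self.count = 0                # hits aggregated at this prefix
        self.last_used = 0

class AguriTrie:
    def __init__(self, max_nodes=64):
        self.root = Node()
        self.max_nodes = max_nodes
        self.size = 1
        self.clock = 0                # logical time for LRU decisions

    def insert(self, addr, n=1):
        bits = int(ipaddress.IPv4Address(addr))
        self.clock += 1
        node = self.root
        for i in range(31, -1, -1):   # walk/extend the 32-bit path
            b = (bits >> i) & 1
            if node.children[b] is None:
                node.children[b] = Node()
                self.size += 1
            node = node.children[b]
            node.last_used = self.clock
        node.count += n
        while self.size > self.max_nodes:
            self._fold_stalest_leaf()

    def _fold_stalest_leaf(self):
        # Linear scan for the least-recently-touched leaf; the real
        # thing keeps an LRU list instead of rescanning.
        best = None                   # (last_used, parent, branch)
        stack = [(self.root, None, None)]
        while stack:
            node, parent, branch = stack.pop()
            kids = [b for b in (0, 1) if node.children[b]]
            if not kids and parent is not None:
                if best is None or node.last_used < best[0]:
                    best = (node.last_used, parent, branch)
            for b in kids:
                stack.append((node.children[b], node, b))
        _, parent, branch = best
        parent.count += parent.children[branch].count  # counts survive,
        parent.children[branch] = None                 # precision drops
        self.size -= 1

    def dump(self, node=None, prefix=0, depth=0):
        node = node or self.root
        if node.count:
            net = ipaddress.IPv4Network((prefix << (32 - depth), depth))
            print(f"{net}\t{node.count}")
        for b in (0, 1):
            if node.children[b]:
                self.dump(node.children[b], (prefix << 1) | b, depth + 1)
```

The memory-address trick falls out of the same structure: widen the key to 64 bits and the folded prefixes become pointer ranges instead of subnets.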
I like when a simple data structure works in harmony with a simple algorithm to get really good performance. The algorithm would be completely mediocre if you replaced the data structure with something naive, but the data structure "unlocks" it and makes it fast:

- union-find data structure in Kruskal's minimum-spanning-tree algorithm, or for labeling connected components in binary images (a sketch of the Kruskal pairing follows this comment)

- min-heap in Dijkstra's single-source-shortest-paths algorithm

- trie in the Apriori frequent-itemset data mining algorithm

Spatial partition data structures like octrees and kd-trees are really cool: useful for anything that simulates physical space, like games, graphics, and CAD systems. I've never implemented one myself, though.

My least favorite is the red-black tree. It has nice theoretical properties, but it's so ugly and complicated. Most times I've thought about using one, I've ended up choosing a hash table, trie, or sorted array instead. If I ever really need all of its properties, I'll go all the way and use a B-tree for its cache friendliness.
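To make the first pairing on that list concrete, here's a minimal sketch of Kruskal with a union-find (path halving plus union by size); the names are my own. Sorting the edges is the mediocre part; the union-find is what turns "would this edge close a cycle?" into an effectively constant-time test instead of a graph walk per edge:

```python
def kruskal(n, edges):
    """Kruskal's MST. edges: list of (weight, u, v), vertices 0..n-1."""
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False          # already connected: edge would form a cycle
        if size[ra] < size[rb]:   # union by size: attach smaller root
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]
        return True

    mst = []
    for w, u, v in sorted(edges):
        if union(u, v):           # the near-O(1) cycle check is the "unlock"
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (7, 1, 3), (3, 2, 3)]
print(kruskal(4, edges))  # [(0, 1, 1), (1, 2, 2), (2, 3, 3)]
```

Swap the union-find for a naive "walk the partial forest" cycle check and the loop degrades to roughly O(V·E); the sort stops being the bottleneck and the algorithm loses its appeal.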
A simple data structure (the tree), but the process of laying it out for visualization (the Reingold-Tilford algorithm) is more complex than you'd think.

But otherwise, I am the data-structure equivalent of a Blub programmer :(.
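If you're curious where the complexity lives: the real Reingold-Tilford algorithm runs in linear time using contour "threads" and per-node offsets, and that bookkeeping is the hard part. Below is a deliberately naive quadratic sketch of just the core idea for binary trees (lay out each subtree independently, slide the right subtree sideways until the facing contours clear, center the parent over its children); all names are mine:

```python
class Node:
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right
        self.x, self.y = 0.0, 0   # assigned by layout()

def shift(node, dx):
    """Rigidly slide a whole subtree sideways."""
    node.x += dx
    if node.left:  shift(node.left, dx)
    if node.right: shift(node.right, dx)

def layout(node, depth=0):
    """Assign coordinates; return the subtree's left and right contours
    (min and max x at each depth, starting at the node's own level)."""
    node.y = depth
    if not node.left and not node.right:
        node.x = 0.0
        return [0.0], [0.0]
    if node.left and node.right:
        lL, lR = layout(node.left, depth + 1)
        rL, rR = layout(node.right, depth + 1)
        # slide the right subtree until no shared level overlaps
        dist = max(a - b for a, b in zip(lR, rL)) + 2.0
        shift(node.right, dist)
        rL = [x + dist for x in rL]
        rR = [x + dist for x in rR]
        node.x = (node.left.x + node.right.x) / 2  # center the parent
        n = max(len(lL), len(rL))
        left  = [lL[i] if i < len(lL) else rL[i] for i in range(n)]
        right = [rR[i] if i < len(rR) else lR[i] for i in range(n)]
        return [node.x] + left, [node.x] + right
    child = node.left or node.right      # single child: stack directly
    cL, cR = layout(child, depth + 1)
    node.x = child.x
    return [node.x] + cL, [node.x] + cR

t = Node("a",
         Node("b", Node("d"), Node("e")),
         Node("c", None, Node("f")))
layout(t)  # a=(2.5,0) b=(1,1) c=(4,1) d=(0,2) e=(2,2) f=(4,2)
```

The naive part is that `shift` and the contour lists make each merge linear in subtree size; the clever part of the published algorithm is doing the same merges with deferred offsets so the whole thing stays O(n).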