My understanding of the word "quick" in the title is that this code is quick to write. But that raises the question of why. Why do we prioritize code that is easy to write, which presumably means it needs to be written over and over again? Is this a reflection of the sad state of C library packaging?<p>A quick perusal of the hashmap landscape[0] in C++ shows a vibrant ecosystem where newer and faster implementations appear every year or so.<p>[0]: <a href="https://martin.ankerl.com/2022/08/27/hashmap-bench-01/" rel="nofollow">https://martin.ankerl.com/2022/08/27/hashmap-bench-01/</a>
The hash table is still doing a string compare per bucket.<p>The best technique I have seen is an open-addressing hash table whose keys are 64-bit hashes and whose values are pointers.<p>Then you layer a string dictionary on top of that. The value stored in the hash table is a linked-list chain of key-value pairs where the key is a string.<p>In the vast majority of cases you never walk the chain. Only when two distinct keys have identical 64-bit hashes (not merely identical bucket indices) do you need to disambiguate with string equality.<p>This design is much faster and can easily be reconfigured for key types other than strings, with no changes to the underlying hash table.