There was a fantastic benchmark of C and C++ hash tables doing the rounds a few weeks ago; it's pretty fun reading: https://jacksonallan.github.io/c_cpp_hash_tables_benchmark/

Unless I really wanted to avoid a dependency or keep code size down, I think I'd use an off-the-shelf hash table implementation these days. It's still a fun exercise to build your own, though.
I really like the API design of libjudy for general hash-table-like interfaces. It saves you from hashing a key twice just to express "if the key is not present, set the value; otherwise leave the original value in the table." The pattern also meshes better with the way iterators naturally work.

Also, in terms of this table: if you add the computed hash value to the stored hash entry, you can check that the hashes match before doing the full strcmp. If you have a weak hash, you might also get a benefit from checking that the first characters match before calling the full strcmp.

It would also make rehashing easier, since you already have the full key available and don't have to go through your internal set function to move entries into the new table. In the posted implementation, a rehash hits the worst-case big-O behavior.

Anyways... "man 3 hsearch" if you want something cheap and easy.
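For illustration, here is a minimal sketch of that pattern in C (the opaque ht type and ht_insert are hypothetical stand-ins, not libjudy's actual names): the insert returns a pointer to the key's value slot, creating an empty slot if needed, so a single hash covers both the membership check and the update.

    #include <stdlib.h>

    typedef struct ht ht;   /* opaque table type (hypothetical) */

    /* Judy-style upsert: return a pointer to the value slot for `key`,
       inserting an empty slot (*slot == NULL) if the key was absent.
       The key is hashed exactly once; the caller decides what to do. */
    void **ht_insert(ht *table, const char *key);

    void count_word(ht *table, const char *word) {
        void **slot = ht_insert(table, word);
        if (*slot == NULL) {              /* first sighting of this key */
            int *n = malloc(sizeof *n);
            if (n == NULL) return;
            *n = 0;
            *slot = n;
        }
        ++*(int *)*slot;                  /* update in place, no second lookup */
    }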
I was noodling in this area recently, trying to speed up some code similar to the tr utility:

    $ echo abcdef | tr abc ghi
    ghidef
For an eight-bit character set, I found that building an array that maps every character beat linear search, even for short replacement strings and relatively short input strings.

There isn't as easy a win for Unicode, so I played with some simple hash tables. Although the conventional wisdom is to use the high-order bits of a hash function's output, FNV-1a (I used the 32-bit variant) is not so good for short inputs of one or two bytes. It was better to just use the low-order bits.
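A rough sketch of both ideas (my own reconstruction, not the code I was actually working on):

    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    /* 32-bit FNV-1a. Per the observation above, for one- or two-byte
       keys it worked better to index with the LOW bits of the result
       (hash & (capacity - 1), capacity a power of two) than with the
       high bits that conventional wisdom suggests. */
    uint32_t fnv1a32(const unsigned char *p, size_t len) {
        uint32_t h = 2166136261u;
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 16777619u;
        }
        return h;
    }

    int main(void) {
        const char *from = "abc", *to = "ghi";
        unsigned char map[256];

        /* Identity map by default, then override the translated bytes;
           each input byte becomes a single array lookup. */
        for (int i = 0; i < 256; i++)
            map[i] = (unsigned char)i;
        for (size_t i = 0; from[i] && to[i]; i++)
            map[(unsigned char)from[i]] = (unsigned char)to[i];

        for (const char *s = "abcdef"; *s; s++)
            putchar(map[(unsigned char)*s]);   /* prints "ghidef" */
        putchar('\n');
        return 0;
    }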
Enjoyable article and a thorough run-through. I didn't read all of it, but it really took me back to learning C in comp sci in the early 90s. Great times.
> but it is non-ideal that I’m only allowing half the range of size_t.<p>I am fairly certain that in C it's actually impossible to have an object whose size is larger than half of the range of size_t unless ptrdiff_t is wider than size_t, which normally isn't. Unless, of course, C standard decided to make subtracting two valid pointers into the same array (or one past the end) a potential UB, just because.
Discussed at the time:

How to implement a hash table in C - https://news.ycombinator.com/item?id=26590234 - March 2021 (156 comments)
In case anyone here is not already familiar with it, gperf is a perfect hash function generator.

https://www.gnu.org/software/gperf/manual/gperf.html
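For a taste of how it's used (the default lookup function is named in_word_set; exact signatures vary by gperf version, so treat this as a sketch): you feed gperf a keyword list and it emits a C lookup function.

    $ cat keywords.gperf
    %%
    break
    case
    const
    continue
    %%
    $ gperf keywords.gperf > keywords.c

    /* keywords.c now defines a lookup function, by default named
       in_word_set(), that returns non-NULL iff its argument is one of
       the listed keywords: collision-free, since the hash is perfect
       for this fixed set. Used roughly like (needs string.h): */
    if (in_word_set(word, strlen(word)) != NULL)
        puts("reserved word");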
Open-addressing hash tables are typically the way to go in C. They drop the added complexity of managing singly linked chains by turning gets and sets into what is essentially an array operation.
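A minimal linear-probing lookup, to illustrate (the entry layout is hypothetical, not from the article; it assumes the table is never completely full):

    #include <stddef.h>
    #include <string.h>

    typedef struct {
        const char *key;   /* NULL marks an empty slot */
        void *value;
    } entry;

    /* Start at hash % capacity and walk forward until we find the key
       or hit an empty slot. Every step is a plain array access. */
    void *probe_get(entry *slots, size_t capacity, const char *key,
                    size_t hash) {
        size_t i = hash % capacity;
        while (slots[i].key != NULL) {
            if (strcmp(slots[i].key, key) == 0)
                return slots[i].value;
            i = (i + 1) % capacity;   /* wrap around at the end */
        }
        return NULL;                  /* empty slot: key is absent */
    }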
It might be neat if there were acceleration wins to be had from instructions for computing hash ints from machine ints and strings.

Or, even better, a full-fledged linear-probing hash-lookup instruction.
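A partial version of this already exists: x86's SSE4.2 CRC32C instruction, which some tables press into service as a cheap byte-string hash. A sketch (compile with -msse4.2; CRC32C is fast but not a particularly strong or DoS-resistant hash):

    #include <nmmintrin.h>   /* SSE4.2 CRC32 intrinsics */
    #include <stdint.h>
    #include <string.h>

    /* Hash 8 bytes per instruction using the hardware CRC32C unit. */
    uint64_t crc_hash(const void *data, size_t len) {
        const unsigned char *p = data;
        uint64_t h = 0;
        while (len >= 8) {
            uint64_t chunk;
            memcpy(&chunk, p, 8);         /* safe unaligned read */
            h = _mm_crc32_u64(h, chunk);
            p += 8;
            len -= 8;
        }
        while (len-- > 0)                 /* trailing bytes one at a time */
            h = _mm_crc32_u8((uint32_t)h, *p++);
        return h;
    }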
I played around with C++ when I was at university, then never touched it again. So, with a grin, I stumble over things like

"void* ht_get(...)"

Wait. What? A void pointer? Interesting... I have no clue.

I like articles like these. For someone not familiar with C it's a perfect level, in terms of both the explanation and the code itself.
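For anyone else wondering: void* is C's escape hatch for generic containers. The table stores untyped pointers, and the caller, who knows the real type, converts them back. Something like this (prototypes paraphrased from the article's API, the rest is illustrative):

    #include <stdio.h>

    typedef struct ht ht;                 /* opaque, as in the article */

    /* Paraphrased shape of the article's API: values are just void*. */
    void *ht_get(ht *table, const char *key);
    const char *ht_set(ht *table, const char *key, void *value);

    void demo(ht *table) {
        static int x = 42;
        ht_set(table, "answer", &x);      /* store a pointer to an int */

        int *p = ht_get(table, "answer"); /* void* converts implicitly in C */
        if (p != NULL)
            printf("answer = %d\n", *p);  /* caller knows the real type */
    }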