
A Tale of Two Optimisations

22 points by gjf over 3 years ago

10 comments

powersnail over 3 years ago
> What bothered me about the original implementation was the lookup table. Even though I knew they'd be cached, I still thought the memory accesses might have a detrimental effect on performance.

The encoding lookup table is an array of four chars. I'd be surprised if accessing such an array has a detrimental effect on *any* program.

I also wonder why the graphs say "Encode / Decode", as if you are combining the performance of the encoding function and the decoding. Have you considered separating them?

It would also help reproducibility if you include your compiler, versions, and flags. You mentioned that you've turned off all optimizations, but I wonder why.

"O0" would certainly produce a lot of jumps for the switch, but O2 is certainly going to eliminate those jumps. In fact, with O2, gcc 11 seems to produce identical code for switch and lookup table:

https://godbolt.org/z/jdx645MsY
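For context, here is a minimal sketch of the two encoder variants being compared (my reconstruction, not the article's exact code):

    #include <stdint.h>

    /* Hypothetical reconstruction of the two variants under discussion. */
    static const char encode_lookup_tbl[4] = { '\t', '\n', '\r', ' ' };

    char encode_table(uint8_t dibit) {
        return encode_lookup_tbl[dibit & 3];  /* one indexed load */
    }

    char encode_switch(uint8_t dibit) {
        switch (dibit & 3) {  /* per the godbolt link above, gcc 11 at -O2
                                 lowers this to the same indexed load */
            case 0: return '\t';
            case 1: return '\n';
            case 2: return '\r';
            default: return ' ';
        }
    }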
nathell over 3 years ago
Interesting approach with the polynomials! I wonder whether the performance of that one can be improved – while the coefficients aren’t integer, they become integer when multiplied by 3. So I wonder whether evaluating a whole-coefficient polynomial (using Horner’s scheme) and dividing by 3 at the end (which the compiler might optimize to a multiplication) might provide a speed boost:

    unsigned char poly_encode_2(const unsigned char x) {
        return (27 + x * (14 + x * (-18 + x * 7))) / 3;
    }

Haven’t measured.
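A quick check (my addition, not part of the comment) confirms that this polynomial hits the four expected whitespace bytes:

    #include <assert.h>

    /* Sanity check: poly_encode_2 (defined above) should map the dibits
       0..3 to tab (9), newline (10), carriage return (13), space (32). */
    int main(void) {
        const unsigned char expected[4] = { '\t', '\n', '\r', ' ' };
        for (unsigned char x = 0; x < 4; ++x)
            assert(poly_encode_2(x) == expected[x]);
        return 0;
    }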
germanjoey over 3 years ago
"whitespacer" reminds me of Damian Conway's classic "Acme::Bleach" perl module...

https://metacpan.org/pod/Acme::Bleach
nanis over 3 years ago
When I first saw this, I thought it ought to be possible to express the encoding/decoding in simple computations, but I only just got a chance to try things out[1].

There are many things I'd fiddle with in the original source. I noticed that since I first saw this post, someone even contributed an AVX2 implementation[2]. I stayed strictly in standard C land.

This function converts a given character to the corresponding half-nibble (or dibit, I don't really know what to call a pair of bits):

    uint8_t ws_to_half_nibble(uint8_t ws) {
        return ((ws >> 1) & 3) | (ws >> 4) | (ws >> 5);
    }

Encoding a half-nibble to one of the four whitespace characters can be done using:

    uint8_t half_nibble_to_ws(uint8_t b) {
        const uint8_t b0 = b & 1;
        const uint8_t b1 = b & 2;
        return '\t' + b0 + 2 * b1 + 18 * (b0 & (b1 >> 1));
    }

I haven't actually had a chance to investigate any performance gains, but this answers the question of whether we can replace the lookup tables with simple branchless functions.

[1]: https://www.nu42.com/2021/10/another-optimization-tale.html

[2]: https://news.ycombinator.com/item?id=28859061
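A round-trip check (my addition, not from the comment) over the four dibit values:

    #include <assert.h>
    #include <stdint.h>

    /* Verify that the two functions above invert each other. */
    int main(void) {
        for (uint8_t b = 0; b < 4; ++b)
            assert(ws_to_half_nibble(half_nibble_to_ws(b)) == b);
        return 0;
    }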
kardos over 3 years ago
Horner's method[1] is known to reduce the number of operations. Maybe the coefficients will turn out to be more favourable with this method?

[1] https://en.wikipedia.org/wiki/Horner%27s_method
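For reference, a sketch (my addition) of the cubic from the sibling comment evaluated naively and in Horner form; the latter needs three multiplications instead of five:

    /* Both return 3 * poly(x); the Horner form saves two multiplications. */
    int eval_naive(int x) {
        int x2 = x * x, x3 = x2 * x;            /* 2 multiplications */
        return 27 + 14 * x - 18 * x2 + 7 * x3;  /* 3 more */
    }

    int eval_horner(int x) {
        return 27 + x * (14 + x * (-18 + x * 7));  /* 3 multiplications */
    }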
ridiculous_fish over 3 years ago
I would expect the switch implementation to not emit any branches. Example:

https://godbolt.org/z/Gox7nYfxP

So it is surprising that OP saw branch mispredictions. What compiler and flags were used?
spaetzleesser over 3 years ago
This brings back fond memories of the time when I did video processing code. It's so much more fun doing this kind of work on self-contained code versus server-side stuff dealing with dozens of misbehaving external APIs.
Const-me over 3 years ago
I wonder how it compares to a BMI2 + AVX2 version:

https://godbolt.org/z/xcT3exenr

The code doesn't look great because there's no inlining.

In reality, if you have many bytes in the buffer and are calling these functions in a loop, compilers should inline them, loading all these magic vectors outside the loop. That way there won't be any memory loads, except to fetch the source data.
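The godbolt link is the authoritative version; as a rough illustration of the BMI2 half of the idea, here is a sketch (my own, not Const-me's code) that spreads the four dibits of one input byte into four output bytes with _pdep_u64, then maps each to its whitespace character:

    #include <immintrin.h>  /* requires -mbmi2 */
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical sketch: encode one byte into four whitespace bytes. */
    static uint32_t encode_byte_bmi2(uint8_t in) {
        static const uint8_t ws[4] = { '\t', '\n', '\r', ' ' };
        /* Deposit the 8 input bits into the low 2 bits of each of 4 bytes. */
        uint32_t spread = (uint32_t)_pdep_u64(in, 0x03030303u);
        uint8_t out[4];
        for (int i = 0; i < 4; i++)
            out[i] = ws[(spread >> (8 * i)) & 3];
        uint32_t packed;
        memcpy(&packed, out, sizeof packed);
        return packed;
    }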
lifthrasiir over 3 years ago
While I applaud the efforts to document the optimization processes, the approaches discussed are fundamentally slow. The easiest portable optimization would be converting 8 full bits at a time:

    for (int i = 0; i < 256; ++i) {
        const char c[4] = {
            encode_lookup_tbl[i >> 0 & 3],
            encode_lookup_tbl[i >> 2 & 3],
            encode_lookup_tbl[i >> 4 & 3],
            encode_lookup_tbl[i >> 6 & 3],
        };
        encode_lookup_tbl2[i] = *(uint32_t*)c;
    }
    for (x = 0; x < bytes; x++) {
        ((uint32_t*)ws_out)[x] = encode_lookup_tbl2[bytes_in[x]];
    }

Note that the resulting encode_lookup_tbl2 depends on the endianness, but as long as we make the table on demand it can remain portable. Also, ws_out might have to be aligned for portability (unaligned memory access can be slow at best and trap at worst), but that is easy to fulfill when those functions are only used internally. In my measurement this alone made it ~3.5x faster.

The next obvious approach is to use SIMD. Pretty much every modern CPU gives you a 16-byte shuffle, which can be used to convert 4 bits at a time. (EDIT: This turned out to be not quite right, see below.) I tried to write a quick example, but it takes quite a bit of time to write and debug [1] ;-) (I may update this post if I get a working version.)

Lastly, it might be possible to use mmap to eliminate the intermediate buffer altogether. I think the actual performance gain with and without mmap can vary a lot and you need to profile it, but it's worth considering.

[1] I realized that there is no bytewise shift operator in x86 SIMD (_mm_srli_epi8). Oops!

---

Added later: the following is my crude AVX2 implementation. It is 20% faster than the bytewise version and 10% faster than the original SSSE3 version. It is not that satisfactory, though, because it had to do a lot of unpacking to get bytes into the correct positions (I'm not a great SIMD coder after all).

    const __m256i MASK = _mm256_set1_epi8(0xf);
    _Alignas(32) char c[32];
    for (int i = 0; i < 32; ++i) c[i] = encode_lookup_tbl[i & 3];
    const __m256i MAPLO = _mm256_load_si256((void*)c);
    for (int i = 0; i < 32; ++i) c[i] = encode_lookup_tbl[i >> 2 & 3];
    const __m256i MAPHI = _mm256_load_si256((void*)c);
    for (x = 0; x < bytes; x += 32) {
        // abcdefghijklmnop ABCDEFGHIJKLMNOP
        __m256i block = _mm256_loadu_si256((void*)(bytes_in + x));
        // ah bh ch dh ... mh nh oh ph
        __m256i hi = _mm256_and_si256(_mm256_srli_epi16(block, 4), MASK);
        // al bl cl dl ... ml nl ol pl
        __m256i lo = _mm256_and_si256(block, MASK);
        // MAPLO[al] ... MAPLO[pl]
        // MAPHI[al] ... MAPHI[pl]
        // MAPLO[ah] ... MAPLO[ph]
        // MAPHI[ah] ... MAPHI[ph]
        __m256i mapped1 = _mm256_shuffle_epi8(MAPLO, lo);
        __m256i mapped2 = _mm256_shuffle_epi8(MAPHI, lo);
        __m256i mapped3 = _mm256_shuffle_epi8(MAPLO, hi);
        __m256i mapped4 = _mm256_shuffle_epi8(MAPHI, hi);
        // MAPLO[al] MAPLO[ah] ... MAPLO[hl] MAPLO[hh]
        // MAPLO[il] MAPLO[ih] ... MAPLO[pl] MAPLO[ph]
        // MAPHI[al] MAPHI[ah] ... MAPHI[hl] MAPHI[hh]
        // MAPHI[il] MAPHI[ih] ... MAPHI[pl] MAPHI[ph]
        __m256i shuf1 = _mm256_unpacklo_epi8(mapped1, mapped3);
        __m256i shuf2 = _mm256_unpackhi_epi8(mapped1, mapped3);
        __m256i shuf3 = _mm256_unpacklo_epi8(mapped2, mapped4);
        __m256i shuf4 = _mm256_unpackhi_epi8(mapped2, mapped4);
        // MAPLO[al] MAPHI[al] MAPLO[ah] MAPHI[ah] ... MAPLO[dl] MAPHI[dl] MAPLO[dh] MAPHI[dh] etc.
        // bytewise:
        // aaaabbbbccccdddd AAAABBBBCCCCDDDD
        // eeeeffffgggghhhh EEEEFFFFGGGGHHHH
        // iiiijjjjkkkkllll IIIIJJJJKKKKLLLL
        // mmmmnnnnoooopppp MMMMNNNNOOOOPPPP
        __m256i out1 = _mm256_unpacklo_epi8(shuf1, shuf3);
        __m256i out2 = _mm256_unpackhi_epi8(shuf1, shuf3);
        __m256i out3 = _mm256_unpacklo_epi8(shuf2, shuf4);
        __m256i out4 = _mm256_unpackhi_epi8(shuf2, shuf4);
        _mm_store_si128((void*)(ws_out + x * 4), _mm256_extracti128_si256(out1, 0));
        _mm_store_si128((void*)(ws_out + x * 4 + 16), _mm256_extracti128_si256(out2, 0));
        _mm_store_si128((void*)(ws_out + x * 4 + 32), _mm256_extracti128_si256(out3, 0));
        _mm_store_si128((void*)(ws_out + x * 4 + 48), _mm256_extracti128_si256(out4, 0));
        _mm_store_si128((void*)(ws_out + x * 4 + 64), _mm256_extracti128_si256(out1, 1));
        _mm_store_si128((void*)(ws_out + x * 4 + 80), _mm256_extracti128_si256(out2, 1));
        _mm_store_si128((void*)(ws_out + x * 4 + 96), _mm256_extracti128_si256(out3, 1));
        _mm_store_si128((void*)(ws_out + x * 4 + 112), _mm256_extracti128_si256(out4, 1));
    }
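On footnote [1]: a common workaround (my addition, describing the usual idiom rather than anything in the comment) is to shift 16-bit lanes and mask off the bits that crossed byte boundaries, which is exactly what the srli_epi16 + MASK pair above does for the 4-bit case:

    #include <immintrin.h>

    /* Emulate the missing _mm_srli_epi8: shift 16-bit lanes, then clear
       the bits that leaked across byte boundaries. */
    static inline __m128i srli_epi8(__m128i v, int n) {
        __m128i shifted = _mm_srli_epi16(v, n);
        return _mm_and_si128(shifted, _mm_set1_epi8((char)(0xFF >> n)));
    }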
d23 over 3 years ago
Cool post! Thanks for submitting. I would never in a million years have thought to try out a polynomial fit for something like this. Unsurprisingly, it was slower, but still cool to even see you thinking that way at all.