Bit Test and Reset vs. Compilers

36 points by gbrown_ over 3 years ago

4 comments

tzs over 3 years ago
Most architectures seem to have an instruction for finding the leftmost set bit in a word [1], and most major C/C++ compilers/libraries have a function for it [2].

I wonder how using that would compare?

You'd find the leftmost bit, invoke func_true for that, and clear the leftmost bit.

Then loop: find the leftmost bit, invoke func_false for the bits between the current leftmost bit and the previous leftmost bit, invoke func_true for the leftmost bit, and then clear that bit.

I wouldn't expect it to be dramatically different, but it is trading test-and-clear on the off bits plus a conditional branch on the result for a loop calling func_false a predetermined number of times. One of those options might be a little faster than the other.

[1] https://en.wikipedia.org/wiki/Find_first_set#Hardware_support

[2] https://en.wikipedia.org/wiki/Find_first_set#Tool_and_library_support
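A minimal sketch of that clz-based approach (not code from the article), assuming GCC/Clang's __builtin_clzll and the func_true/func_false per-bit callbacks the post uses; whether it actually beats the shift-and-test loop would still need measuring:

    #include <stdint.h>

    /* Assumed callbacks from the article, taking the bit index (63 = MSB). */
    void func_true(int i);
    void func_false(int i);

    void loop_clz(uint64_t mask)
    {
        if (mask == 0)
            return;

        /* First set bit gets func_true only, as described above. */
        int idx = 63 - __builtin_clzll(mask);   /* clzll is undefined for 0 */
        func_true(idx);
        mask &= ~(1ull << idx);

        while (mask) {
            int prev = idx;
            idx = 63 - __builtin_clzll(mask);
            for (int i = prev - 1; i > idx; --i)
                func_false(i);                  /* zero bits between set bits */
            func_true(idx);
            mask &= ~(1ull << idx);
        }
    }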
RantyDave over 3 years ago
Ummm, no. I ran the routine through clang 13 with '-O' and didn't get an explicit compare.
tyfighter over 3 years ago
Why not use x86's BSF or BSR instructions? They've been around since the 386.
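For reference, a sketch (not from the thread) of how BSR is typically reached from C on GCC/Clang: either inline asm or __builtin_clzll, which compilers lower to BSR or LZCNT anyway. Both are undefined when the input is zero, just like the raw instruction.

    #include <stdint.h>

    /* Illustrative only: index of the highest set bit via x86-64 BSR. */
    static inline int bsr64(uint64_t x)
    {
        uint64_t pos;
        __asm__ ("bsrq %1, %0" : "=r"(pos) : "rm"(x));
        return (int)pos;
    }

    /* The portable spelling most compilers turn into BSR or LZCNT: */
    static inline int highest_bit(uint64_t x)
    {
        return 63 - __builtin_clzll(x);
    }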
ncmncm over 3 years ago
This is a very artificial sort of task. You have to visit all bits down to the least significant 1 bit, and ignore the rest? In designing the system, since the assignment of bits to meaning is probably arbitrary, it is smarter to assign them in the way that is more convenient and less error-prone to code against, arranging so the 1 bits may be visited least-significant first.

But, OK.

(It is probably important, here, to call more attention to the fact that shifting a *signed* value left until its sign changes would be Undefined Behavior, making your optimizer likely to do something crazy. That is why the code sample must cast a shifted uint64_t to an int64_t before comparing to zero; that conversion is well-defined. Reader be advised!)

In the final, indexing, example, the code the compiler generates is to shift the word right 63 places, instead of checking the sign, to make the array index. You might as well do that in the source code, instead of coding a check of the sign bit that the compiler will convert to what you really meant anyway.

The final trick of indexing an array to get the right function to call is a good one, and could be a lot faster if the bit patterns are close to random. Branch predictors are very good at teasing out patterns, but the penalty for guessing wrong is very high. A possibly better way, particularly on clang, would look more like

    (mask >> 63 ? func_true : func_false)(i);

Modern chips have a "cmov" instruction that is likely to be better than indexing into an array, with its three-cycle trip out to L1 cache: on a modern chip, that is enough time to execute 18 other instructions!

In the loop, there is no point in comparing i to zero, because you are going to break out when the mask hits zero anyway. So, that is just wasted code; deleting it would eliminate the useless "dec %ebp; js 108" instructions. That branch is reliably predicted, so mostly just costs cache footprint, not cycles. But the micro-operations cache is very small, and if the extra instructions make you blow your micro-op cache, that could be pretty expensive.

Finally, and *most important*: the number of instructions is an extremely poor way to estimate performance. Often, many instructions take less time to run than a few, different instructions. So, to make a smart choice about performance, or to present *actually meaningful* results in your blog post, you need to set up a micro-benchmark and actually see what really happens, and not just guess by counting lines. There are extremely good micro-benchmark tools available free online. Google's is as good as any. Or you could paste the code into Godbolt.org and benchmark it there.

So, in the end, this *might* be best:

    void loop_v4(uint64_t mask)
    {
        for (int i = 63; mask; mask <<= 1, --i)
            (mask >> 63 ? func_true : func_false)(i);
    }

or possibly

    (mask & ~(-1ull >> 1) ? func_true : func_false)(i);

or

    ((int64_t)mask < 0 ? func_true : func_false)(i);

But again, you need to measure (which I didn't do either). Results might be very, very different between x86 and ARM. Of course the results will depend on what happens in func_true and in func_false, too. And, you might find this is altogether the wrong place to spend your optimizing attention.
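To make the last point concrete, here is a minimal sketch (not from the thread) of the kind of micro-benchmark being suggested, using Google Benchmark; func_true and func_false are stubbed here with hypothetical near-no-op bodies, so real numbers will depend entirely on what those functions actually do:

    #include <benchmark/benchmark.h>
    #include <cstdint>

    // Hypothetical stand-ins for the article's callbacks.
    static void func_true(int i)  { benchmark::DoNotOptimize(i); }
    static void func_false(int i) { benchmark::DoNotOptimize(i); }

    static void loop_v4(uint64_t mask)
    {
        for (int i = 63; mask; mask <<= 1, --i)
            (mask >> 63 ? func_true : func_false)(i);
    }

    static void BM_loop_v4(benchmark::State& state)
    {
        uint64_t mask = 0x0123456789abcdefull;   // arbitrary test pattern
        for (auto _ : state) {
            loop_v4(mask);
            benchmark::ClobberMemory();
        }
    }
    BENCHMARK(BM_loop_v4);
    BENCHMARK_MAIN();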