TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

GitHubTwitter

Home

HomeNewestBestAskShowJobs

Resources

HackerNews APIOriginal HackerNewsNext.js

© 2025 TechEcho. All rights reserved.

Why doesn't GCC optimize a*a*a*a*a*a to (a*a*a)*(a*a*a)? (2013)

227 points by Doublon about 11 years ago

14 comments

Khaine about 11 years ago
Completely relevant podcast discussion: http://edgecasesshow.com/083-floating-point-numbers-are-a-leaky-abstraction.html
tikhonj about 11 years ago
Because, of course, floating point addition and multiplication are not associative. This turns out to be *surprisingly* easy to demonstrate:

    0.1 + (0.2 + 0.3) = 0.6
    0.1 + 0.2 + 0.3   = 0.6000000000000001

and the same for multiplication:

    0.1 * 0.2 * 0.3   = 6.000000000000001e-3
    0.1 * (0.2 * 0.3) = 6.0e-3

It actually isn't "surprising" if you understand how the format works. It essentially uses scientific notation but in binary, with a set number of bits for both the mantissa and the exponent, as well as a few changes thrown in for better behavior at its limits (like denormalization). This means that it can't directly express numbers which are very easy to write in decimal form, like 0.1, just like we can't express 1/3 as a finite decimal. It's designed to manage this as well as possible with the small number of bits at its disposal, but we still inevitably run into these issues.

Of course, most programmers only have a vague idea of how floating point numbers work. (I'm certainly among them!) It's very easy to run into problems. And even with a better understanding of the format, it's still very difficult to predict exactly what will happen in more complex expressions.

A really cool aside is that there are some relatively new toys we can use to model floating point numbers in interesting ways. In particular, several SMT solvers, including Z3 [1], now support a "theory of floating point" which lets us exhaustively verify and analyze programs that use floating point numbers. I haven't seen any applications taking advantage of this directly, but I personally find it very exciting and will probably try using it for debugging the next time I have to work with numeric code.

A little while back, there was an article about how you can test floating point functions by enumerating every single 32-bit float. This is a great way of thinking! However, people were right to point out that this does not really scale when you have more than one float input or if you want to talk about doubles. This is why SMT solver support for floating point numbers is so exciting: it makes this sort of approach practical even for programs that use lots of doubles. So you *can* test every single double, or every single pair of doubles, or more, just by being clever about how you do it.

I haven't tried using the floating point theories, so I have no idea how they scale. However, I suspect they are not significantly worse than normal bitvectors (i.e. signed/unsigned integers). And those scale really well to larger sizes or multiple variables. Assuming the FP support scales even a fraction as well, this should be enough to practically verify pretty non-trivial functions!

[1]: http://z3.codeplex.com/
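The examples above can be reproduced directly in, say, Python, whose floats are IEEE-754 doubles (a quick sketch of the non-associativity, not tied to any particular compiler):

```python
# Each arithmetic operation rounds its result once; different groupings
# round different intermediate values, which breaks associativity.
a = 0.1 + (0.2 + 0.3)   # 0.2 + 0.3 happens to round to exactly 0.5
b = (0.1 + 0.2) + 0.3   # a different intermediate rounding
print(a)                 # 0.6
print(b)                 # 0.6000000000000001
print(a == b)            # False

m1 = (0.1 * 0.2) * 0.3
m2 = 0.1 * (0.2 * 0.3)
print(m1 == m2)          # False: the two groupings round differently
```

The same holds in any language using IEEE doubles; only the printed representations vary.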
octo_t about 11 years ago
Basically: floating point is *hard*. Programmers get it wrong, compilers get it right (normally).
sheetjs about 11 years ago
Always a great read: "What Every Computer Scientist Should Know About Floating-Point Arithmetic"

PDF rendering: http://www.cse.msu.edu/~cse320/Documents/FloatingPoint.pdf

HTML rendering: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
TeMPOraL about 11 years ago
One thing I always wondered about is *why* we are using floating point arithmetic at all, instead of fixed point math with explicitly specified ranges (say, "here I need 20 bits for the integer part and 44 for the fractional part"). What is the practical value of floating point that would justify dealing with all the complexity and conceptual problems it introduces?
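The fixed-point scheme described above can be sketched in a few lines (a hypothetical Q20.44 split stored in plain Python integers; note Python ints never overflow, whereas a real 64-bit Q20.44 value would wrap):

```python
# Minimal fixed-point sketch: hypothetical Q20.44 layout
# (20 integer bits, 44 fractional bits), value = raw_int / 2**44.
FRAC_BITS = 44
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return round(x * SCALE)           # one rounding at conversion time

def from_fixed(f):
    return f / SCALE

def fx_mul(a, b):
    return (a * b) >> FRAC_BITS       # one truncation per multiply

# Fixed-point addition is plain integer addition: exact, hence associative.
a, b, c = to_fixed(0.1), to_fixed(0.2), to_fixed(0.3)
assert (a + b) + c == a + (b + c)
print(from_fixed(a + b + c))          # ~0.6, within 2**-44 per operand
```

The tradeoff this illustrates: fixed point buys exact, associative addition at the cost of a hard-coded range, while floating point trades that away for dynamic range.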
pkulak about 11 years ago
> Um... you know that a*a*a*a*a*a and (a*a*a)*(a*a*a) are not the same with floating point numbers, don't you?

Why would so many people upvote such a condescending comment?
d23 about 11 years ago
Random sidenote, but I've never seen an SO answer be upvoted in realtime until now. Didn't even know they programmed such functionality.
willvarfar about 11 years ago
Dup: previously on HN: https://news.ycombinator.com/item?id=2684254
username42 about 11 years ago
The non-associativity of addition is obvious. For multiplication, I understand why it does not always give the same answer, but I do not see why a change in the order could change the accuracy.
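One way to see it (a sketch in Python, whose floats are IEEE doubles): every multiply rounds its result once, so a different grouping rounds different intermediate values, and the per-step rounding errors need not cancel the same way. The `fractions` module lets us measure each grouping's error exactly:

```python
from fractions import Fraction

x, y, z = 0.1, 0.2, 0.3
left  = (x * y) * z        # rounds x*y, then rounds the final product
right = x * (y * z)        # rounds y*z, then rounds the final product

# Exact rational product of the three doubles, as a reference:
exact = Fraction(x) * Fraction(y) * Fraction(z)
print(left == right)                   # False
print(abs(Fraction(left)  - exact))    # error of one grouping
print(abs(Fraction(right) - exact))    # error of the other; not equal
```

So the order affects accuracy because it determines *which* intermediate values get rounded, not just what the final answer looks like.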
ww2 about 11 years ago
How would a functional language compiler, like GHC, deal with this case?
pygy_ about 11 years ago
I already knew about the lack of associativity of IEEE floats, but TIL that `pow(x,n)` is more precise than chained multiplications.

Good to know.
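This is checkable empirically (a sketch only: how much `pow` actually gains depends on the platform's libm, so the measurement method is what's shown here, not a guaranteed outcome):

```python
from fractions import Fraction

def err(approx, exact):
    """Exact absolute error of a float against a rational reference."""
    return abs(Fraction(approx) - exact)

a = 1.1
exact = Fraction(a) ** 6          # exact 6th power of the double nearest 1.1
chained = a * a * a * a * a * a   # five roundings accumulate
powed = a ** 6                    # typically a single rounding inside libm pow

print(err(chained, exact))
print(err(powed, exact))
```

Both errors are tiny in absolute terms; the point is that the chained form can accumulate up to one rounding per multiply, while a good `pow` rounds once.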
jevinskie about 11 years ago
There is a discussion from today and yesterday on llvm-dev that deals with floating point guarantees and optimizations: http://thread.gmane.org/gmane.comp.compilers.clang.devel/35858/
woodchuck64 about 11 years ago
Floating point in short: a computer representation of scientific notation (1), with a sign bit, exponent (2), and coefficient (3) crammed into the same word.

1. http://en.wikipedia.org/wiki/Scientific_notation
2. base 2, biased by 127, 8 bits (IEEE float)
3. base 2, implicit leading 1, 23 bits (IEEE float)

http://en.wikipedia.org/wiki/Single-precision_floating-point_format
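The layout described above can be inspected directly (a small sketch using Python's `struct` to reinterpret a single-precision float's bits):

```python
import struct

def decompose(x):
    """Split an IEEE-754 single into (sign, biased exponent, 23-bit fraction)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF    # biased by 127
    fraction = bits & 0x7FFFFF        # implicit leading 1 not stored
    return sign, exponent, fraction

print(decompose(1.0))   # (0, 127, 0): 1.0 = +1.0 * 2**(127-127)
print(decompose(-2.5))  # (1, 128, 2097152): -1.25 * 2**1, fraction 0x200000
```

Note that packing to `">f"` rounds the double to single precision first, which is exactly why 0.1 already arrives inexact.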
picomancer about 11 years ago
It actually does, if a is an integer.
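A sketch of why integers are different: machine-integer multiplication is multiplication modulo 2^64, and modular multiplication is associative, so regrouping is always exact (modelled here with explicit mod-2**64 arithmetic; GCC's actual transformation isn't shown):

```python
# Unsigned 64-bit multiplication wraps modulo 2**64; modular
# multiplication is associative, so a*a*a*a*a*a and (a*a*a)*(a*a*a)
# are guaranteed equal, and the compiler may regroup freely.
M = 1 << 64

def mul64(x, y):
    return (x * y) % M   # model of one 64-bit wraparound multiply

a = 0xDEADBEEF12345678
lhs = mul64(mul64(mul64(mul64(mul64(a, a), a), a), a), a)   # a*a*a*a*a*a
cube = mul64(mul64(a, a), a)                                # a*a*a
rhs = mul64(cube, cube)                                     # (a*a*a)*(a*a*a)
assert lhs == rhs
```

(For signed integers the justification differs — signed overflow is undefined in C, which also licenses the regrouping — but the associativity argument is the same.)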