
Why is Math.sin(Math.PI/2) returning an exact value but not Math.sin(Math.PI*2)?

3 points by shivajikobardan 9 months ago
    System.out.println(Math.PI);                // 3.141592653589793
    System.out.println(Math.sin(Math.PI));      // 1.2246467991473532E-16

Yes, sin(180°) is 0, and the value above is near 0.

    System.out.println(Math.sin(2 * Math.PI));  // -2.4492935982947064E-16

Yes, sin(360°) is 0, and the value above is near 0.

However, Math.sin(Math.PI/2) returns 1.0. I don't get why this returns exactly 1.0 rather than a number near 1 but not exactly 1.

Math.sin(Math.PI/6) returns 0.5. Same question here: Math.PI is not exact, so why is its division so exact?

This is in Java.
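For reference, here is a minimal sketch (standard-library Java only; the class name is made up) that prints the exact value stored in Math.PI, so you can see how far it is from the true π = 3.14159265358979323846...:

    import java.math.BigDecimal;

    public class PiExactValue {
        public static void main(String[] args) {
            // new BigDecimal(double) prints the exact decimal expansion of the
            // binary double stored in Math.PI, with no extra rounding.
            System.out.println(new BigDecimal(Math.PI));
            // The printed value falls short of the true pi by roughly 1.2e-16,
            // which is essentially the Math.sin(Math.PI) value shown above.
        }
    }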

2 comments

Someone 9 months ago
> Math.sin(Math.PI/2) returns 1.0. I don't get why this returns exactly 1.0 rather than a number near 1 but not exactly 1.

'Accident' of the implementation. When you feed Math.sin the IEEE double that is close to π/2, it returns an approximation of its sine: in this case the rounded result 1.0, which you appear to take for an exact result.

> Math.PI is not exact, so why is its division so exact?

It's not that computing Math.PI/6 is exact; it's that computing the sine of that value produces ½. That isn't "so exact" either: technically, the bit pattern of Math.PI/6 doesn't stand for π/6, and could have been produced by some other computation entirely.

By the way, reading https://github.com/openjdk/jdk/blob/bd4160cea8b6b0fcf0507199ed76a12f5d0aaba9/src/java.base/share/classes/java/lang/Math.java#L39, these results may vary between Java implementations, possibly even between runs: when a method gets called a lot, it may get optimized further and start using a different implementation.
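To make the "may vary between implementations" point concrete, here is a small sketch (not from the comment; the class name is hypothetical) comparing Math.sin with StrictMath.sin, which pins down one specific fdlibm-derived algorithm:

    public class SinImplementations {
        public static void main(String[] args) {
            double[] inputs = { Math.PI / 2, Math.PI, 2 * Math.PI, Math.PI / 6 };
            for (double x : inputs) {
                // Math.sin only has to be within 1 ulp of the exact result and may
                // be replaced by a faster intrinsic; StrictMath.sin must reproduce
                // the fdlibm result bit for bit on every platform.
                System.out.println(Math.sin(x) + "  vs  " + StrictMath.sin(x));
            }
        }
    }

On a typical JVM the two columns will usually agree for these inputs, but the Math.sin contract does not require it.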
WCSTombs 9 months ago
I'm not a Java expert, but I'm fairly certain this is just a consequence of roundoff error, how it propagates through computations, and how floating-point values are represented.

    sin(pi/2 + x) = cos(x) = 1 - x^2/2 + {higher-order terms}

When you compute Math.sin(Math.PI/2), Math.PI/2 is computed as pi/2 (exact) plus a small error term x, which is on the order of 1e-16 or so. Math.sin(Math.PI/2) is then computed as

    sin(pi/2 + x),

which is about 1 - x^2/2. But since x is on the order of 1e-16, x^2/2 is on the order of 1e-32, and the closest double-precision value to 1 - x^2/2 is easily seen to be exactly 1.0.

On the other hand,

    sin(2*pi + x) = sin(x) = x + {higher-order terms}

When you compute Math.sin(2*Math.PI), 2*Math.PI is computed as 2*pi (exact) plus again a small error term x on the order of 1e-16, and Math.sin(2*Math.PI) is very close to that error term x. The closest double-precision value to x is much closer to x than zero is, because x is already very small and the *relative* approximation error of its best double-precision approximation is on the order of 1e-16. Thus you get a small but nonzero result that is very close to the original approximation error of 2*Math.PI versus the mathematical 2*pi.

To follow the argument above fully, you need to know a few things about numerical computation:

- *Most* (but not all) computational environments implement some version of the IEEE-754 standard, and I'm assuming this is also true of Java.

- IEEE-754 requires that the basic arithmetic operations behave as if the exact result were computed and then rounded to the nearest double-precision value; correctly rounded trigonometric functions are only recommended, but Java documents Math.sin as staying within 1 ulp of the exact result, which is accurate enough for this argument.

- In IEEE-754, floating-point numbers are essentially represented in scientific notation, or more precisely as

    +/-(1.XXXX...) * 2^E

where 1.XXXX... is a binary number with a fixed number of binary digits after the binary point, and E is an integer represented using a fixed number of bits. (For double precision, the XXXX... has 52 bits and E has 11 bits.)
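As a quick numeric check of the argument above (again a sketch rather than anything from the thread; class name hypothetical), Math.ulp shows how coarse doubles are near 1.0 compared with near 0:

    public class UlpCheck {
        public static void main(String[] args) {
            // Doubles around 1.0 are spaced about 2.2e-16 apart, far coarser than
            // the ~1e-32 gap between the true sin(Math.PI/2) and 1, so the result
            // rounds to exactly 1.0.
            System.out.println(Math.ulp(1.0));          // 2.220446049250313E-16
            System.out.println(Math.sin(Math.PI / 2));  // 1.0

            // Near zero, doubles are dense enough to carry the ~1e-16 argument
            // error itself, so it shows up in the output instead of rounding away.
            System.out.println(Math.sin(2 * Math.PI));  // -2.4492935982947064E-16
        }
    }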