Implementing Cosine in C from Scratch (2020)

220 points · by alraj · almost 2 years ago

36 comments

jpfr · almost 2 years ago

The Taylor expansion locally fits a polynomial based on the first n derivatives. If you want to find the "best" nth-degree polynomial to approximate the sine function, functional analysis gives the tools for solving that optimization in closed form.

By selecting an appropriate norm (in function space) you can minimize either the maximum error or the error integral over some range (e.g. the 0 to π range).

Here's a video on the subject. You might want to watch earlier ones also for more context.

https://www.youtube.com/watch?v=tMlKZZf2Kac&list=PLdkTDauaUnQpzuOCZyUUZc0lxf4-PXNR5&index=28

Full disclosure, this is my university lecture on optimization that was recorded during Covid.
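For readers who want to try this concretely, here is a minimal C sketch of one near-minimax alternative: interpolating cos at Chebyshev nodes on [0, π/2]. It is not the true minimax (Remez) solution described above, just an illustration; the degree, interval, and all names are choices made for this sketch, it assumes M_PI is available from math.h, and it uses the libm cos only to build coefficients you would normally precompute.

    #include <math.h>
    #include <stdio.h>

    #define DEGREE 8   /* degree of the Chebyshev interpolant (arbitrary choice) */

    int main(void) {
        const double a = 0.0, b = M_PI / 2.0;   /* approximation interval */
        const int n = DEGREE + 1;               /* number of Chebyshev nodes */
        double c[DEGREE + 1];

        /* Chebyshev coefficients: c_k = (2/n) * sum_j cos(x_j) * T_k(t_j),
           with nodes t_j = cos(pi*(j+0.5)/n) mapped from [-1,1] onto [a,b]. */
        for (int k = 0; k <= DEGREE; k++) {
            double s = 0.0;
            for (int j = 0; j < n; j++) {
                double th = M_PI * (j + 0.5) / n;
                double x = 0.5 * (b - a) * cos(th) + 0.5 * (b + a);
                s += cos(x) * cos(k * th);
            }
            c[k] = 2.0 * s / n;
        }

        /* Evaluate the interpolant with the Clenshaw recurrence and report the error. */
        for (double x = a; x <= b; x += 0.1) {
            double t = (2.0 * x - (a + b)) / (b - a);
            double b1 = 0.0, b2 = 0.0;
            for (int k = DEGREE; k >= 1; k--) {
                double tmp = 2.0 * t * b1 - b2 + c[k];
                b2 = b1;
                b1 = tmp;
            }
            double approx = t * b1 - b2 + 0.5 * c[0];
            printf("x = %.2f   error = %.3e\n", x, approx - cos(x));
        }
        return 0;
    }

Unlike a Taylor series centred at one point, the Chebyshev interpolant spreads its error fairly evenly over the whole interval, which is the practical payoff of the norm-minimization view.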
azhenley · almost 2 years ago

No need for archive.org, my website moved a few years ago: https://austinhenley.com/blog/cosine.html
amiga386 · almost 2 years ago

One of the things I loved most about reading kernel and libc (or rather, libm) sources was the floating-point and FP emulation code. I thought it was immensely cool to see that what others just expected an FPU instruction to do, someone had not only written out in C, but also wrote comments explaining the mathematics (rendered in ASCII), often with numerical analysis about error propagation, accuracy, etc.

Some examples:

https://sourceware.org/git/?p=newlib-cygwin.git;a=blob;f=newlib/libm/math/e_pow.c

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/math-emu/poly_sin.c

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/math-emu/README
lntue · almost 2 years ago

So in the implementation of cos_table_*_LERP, you technically did a 2-step range reduction:

1. Reduce x = x mod 2*pi

2. Reduce index = floor(x / 10^-n), and i - index = 10^n * (x mod 10^-n)

With limited input range and required precision as in the tests, you can combine these 2 range reduction steps:

1. Choose the reduced range as a power of 2 instead of a power of 10 for a cheaper modulus operation, let's say 2^-N = 2^-7.

2. Avoid the division in modd(x, CONST_2PI) by multiplying by 2^N / pi.

3. Avoid the round trip double -> int -> double by using the floor function / instruction.

Here is the updated version of cos_table_*_LERP, which should have higher throughput and lower latency:

    double cos_table_128_LERP(double x) {
        x = fabs(x);
        double prod = x * TWO_TO_SEVEN_OVER_PI;
        double id = floor(prod);
        x = prod - id;               /* after this step, 0 <= x < 1 */
        int i = ((int)id) & 0xFF;    /* i = id mod 2^8 */
        return lerp(x, COS_TABLE[i], COS_TABLE[i + 1]);
    }

You can also optimize lerp a bit more with the formula:

    lerp(w, v1, v2) = (1 - w) * v1 + w * v2
                    = w * (v2 - v1) + v1

We do employ this range reduction strategy in a more accurate way for trig functions in LLVM libc:

- with FMA instructions: https://github.com/llvm/llvm-project/blob/main/libc/src/math/generic/range_reduction_fma.h
- without FMA instructions: https://github.com/llvm/llvm-project/blob/main/libc/src/math/generic/range_reduction.h
sampo · almost 2 years ago

> cosine repeats every 2pi

> We can do better since the cosine values are equivalent every multiple of pi, except that the sign flips.

There is one more step to take: each π/2-long segment has an identical shape; they are just pointing up or down (as you already noticed), and left or right. So you can reduce your basic domain not just to 0...π, but to 0...π/2.
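A minimal sketch of that quarter-period reduction, assuming some kernel approximation valid on [0, π/2] and M_PI from math.h (both assumptions, not part of the comment above):

    #include <math.h>

    /* Reduce any x to an argument in [0, pi/2) plus a sign, so a kernel
       approximation only needs to cover one quarter of the period. */
    double cos_quadrant(double x, double (*kernel)(double)) {
        x = fmod(fabs(x), 2.0 * M_PI);           /* cos is even and 2*pi-periodic */
        int q = (int)(x / (M_PI / 2.0));         /* quadrant 0..3 */
        if (q > 3) q = 3;                        /* guard against rounding at the 2*pi boundary */
        double r = x - q * (M_PI / 2.0);         /* r in [0, pi/2) */
        switch (q) {
        case 0:  return  kernel(r);              /* cos(r)                     */
        case 1:  return -kernel(M_PI / 2.0 - r); /* cos(pi/2 + r)  = -sin(r)   */
        case 2:  return -kernel(r);              /* cos(pi + r)    = -cos(r)   */
        default: return  kernel(M_PI / 2.0 - r); /* cos(3pi/2 + r) =  sin(r)   */
        }
    }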
femto · almost 2 years ago

If doing signal processing, an optimisation is to use an N-bit integer to represent the range 0 to 2*pi. Some example points in the mapping: 0 -> 0, 2^(N-2) -> pi/2, 2^(N-1) -> pi, 2^N (wraps to 0) -> 2*pi (wraps to 0).

If your lookup table has an M-bit index, the index into the lookup table is calculated with: index = (unsigned)theta >> (N-M), where theta is the N-bit integer representing the angle.

The fractional part, which can be used for interpolation, is: theta & ((1 << (N-M)) - 1).

If you choose M = N = word size (16 bits is often nice), the angle can be used directly as an index. With 16 bits, accuracy is typically good enough without interpolation, and the entire trig operation reduces to an array indexing operation.

This minimises the overhead of converting an angle to a table index.
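A small C sketch of that scheme, with interpolation included. The table name, its contents, and the bit widths are illustrative assumptions, not taken from the comment:

    #include <stdint.h>

    #define ANGLE_BITS 16   /* 2^16 angle units = one full turn */
    #define INDEX_BITS 8    /* 256-entry table */

    /* Hypothetical table: COS_TABLE[i] = cos(2*pi * i / 2^INDEX_BITS). */
    extern const double COS_TABLE[1 << INDEX_BITS];

    double cos_fixed(uint16_t theta) {
        unsigned index = theta >> (ANGLE_BITS - INDEX_BITS);                /* top bits pick the entry */
        unsigned frac  = theta & ((1u << (ANGLE_BITS - INDEX_BITS)) - 1u);  /* low bits: fractional part */
        double   w     = (double)frac / (double)(1u << (ANGLE_BITS - INDEX_BITS));
        double   v0    = COS_TABLE[index];
        double   v1    = COS_TABLE[(index + 1u) & ((1u << INDEX_BITS) - 1u)]; /* wraps past 2*pi */
        return v0 + w * (v1 - v0);                                          /* linear interpolation */
    }

Because the angle word wraps exactly at 2*pi, the range reduction is free: overflow of the integer is the modulo operation.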
xorvoid · almost 2 years ago

How good do you need it? Lol.

This is the approximation that I used for the animated sine-wave example for SectorC:

y ~= 100 + (x*(157 - x)) >> 7

https://github.com/xorvoid/sectorc/blob/main/examples/sinwave.c
JKCalhoun · almost 2 years ago

The author's Taylor series looked salvageable to me. The range between -π/2 and π/2 looked fine. That usable range can be re-used to serve all other portions of the circle.

Add a conditional that applies (1-result) for some angles and you're golden.
amadio · almost 2 years ago

Taylor expansion is usually not so good for this; you'll fare better with either Legendre or Chebyshev polynomials. Robin Green has some excellent material on the subject, which I am linking below.

Faster Math Functions:
https://basesandframes.files.wordpress.com/2016/05/fast-math-functions_p1.pdf
https://basesandframes.files.wordpress.com/2016/05/fast-math-functions_p2.pdf

Even Faster Math Functions, GDC 2020:
https://gdcvault.com/play/1026734/Math-in-Game-Development-Summit
eschneider · almost 2 years ago

Trig functions are where accuracy and performance go to die. Accumulated error is a thing, so when optimizing, always consider exactly how you're going to be using those functions so you make the 'right' tradeoffs for your application. One size definitely doesn't fit all here, so test and experiment.
midjji · almost 2 years ago

A good idea is not to compute the values of cos over 0 to 2π, but to further reduce the range using cos(a) = cos(-a), cos(2a) = 2cos^2(a) - 1, cos(a + pi/4) = ...

So we really only ever need to be able to compute cos in the range 0 to π/4.

Then for further accuracy we can do the Taylor expansion around π/8 (or other approximations).

Finally, the number of terms needed for a fixed accuracy varies with the distance from π/8.
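A hedged sketch of the double-angle route: the kernel is a placeholder for whatever approximation you use on [0, π/4], and the input is assumed to be already reduced to [0, π/2].

    #include <math.h>

    /* Evaluate cos on [0, pi/2] from a kernel that is only accurate on [0, pi/4],
       using the double-angle identity cos(x) = 2*cos(x/2)^2 - 1. */
    double cos_from_quarter_kernel(double x, double (*kernel)(double)) {
        if (x <= M_PI / 4.0)
            return kernel(x);
        double c = kernel(0.5 * x);   /* 0.5*x lies in (pi/8, pi/4], inside the kernel's range */
        return 2.0 * c * c - 1.0;
    }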
pacaro · almost 2 years ago

I went through the same exercise implementing trig functions for Scheme in WebAssembly...

It was a rabbit hole for sure.

https://github.com/PollRobots/scheme/blob/main/scheme.wasm/src/library/trig.wat#L369

[Edit: the above is just the range from 0 to π/4; the branch cuts for cos are at https://github.com/PollRobots/scheme/blob/main/scheme.wasm/src/library/trig.wat#L266 ]
mistercow · almost 2 years ago

This is a fine introduction to how you even approach the problem of computing transcendental functions.

It would have been nice to have some discussion of accuracy requirements rather than just eyeballing it and saying "more accuracy than I need". This is a spot where inexperienced devs can easily get tangled up. An attitude like "ten digits? That's so many! I'm only making a game, after all" is exactly the sort of thing that gets you into trouble if you start accumulating errors over time, and this is particularly easy to do with trig functions.
dm319 · almost 2 years ago

The answer to cos(1.57079632) (in radians) will give you a clue as to how your calculator does it.

See here [0]

[0] https://www.reddit.com/r/calculators/comments/126st95/cos157079632_is_a_bit_of_a_torture_test/
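To see why this value is a torture test: 1.57079632 sits about 6.79e-9 below π/2, so the true cosine is itself only about 6.79e-9, and any sloppiness in range reduction or cancellation wipes out most of the digits. A quick check against libm:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* pi/2 = 1.5707963267948966..., so the argument falls ~6.79e-9 short of it
           and cos(1.57079632) should come out near 6.7948966e-9. */
        printf("cos(1.57079632) = %.15e\n", cos(1.57079632));
        return 0;
    }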
fargle · almost 2 years ago

It's unusual, outside x86, for these functions to be hardware accelerated. So just about every libm has to do it in software.

Nearly all libm implementations around are based off Sun's fdlibm from the 80s-90s, including a bunch of the variants cited below, the *BSDs, etc. They are basically updated slightly, but you can see their heritage in the structure of the code. The original is found on netlib these days: https://www.netlib.org/fdlibm/k_cos.c

14th-order polynomial, but only uses ~7 terms. It's supposed to have error < 1 ULP. For fdlibm, it's pretty readable compared to some of the other fun ones. I seem to remember sqrt being a bit gnarly.
sojuz151 · almost 2 years ago

I would say the benchmarks with lookup tables are bad. In the benchmark the CPU will keep the table in cache, but in a real program this cache would have to be shared with the rest of the code/data. This would either kill the performance of cosine or of the rest of the app.
mvcalder · almost 2 years ago

If you like this sort of thing there's a great book: Methods and Programs for Mathematical Functions by Stephen Moshier. It covers the computation of all sorts of special functions. The code is available in the cephes library, but the book may be out of print.
dang · almost 2 years ago

Related:

Implementing Cosine in C from Scratch (2020) - https://news.ycombinator.com/item?id=30844872 - March 2022 (134 comments)

Implementing cosine in C from scratch - https://news.ycombinator.com/item?id=23893325 - July 2020 (20 comments)
xipix · almost 2 years ago

Nice. I'd love to see how this changes when you have SIMD and multiple cosines to compute in parallel. Also when you have to compute sine and cosine simultaneously, which is often the case, and then you may be more interested in polar error than cartesian error.
londons_explore · almost 2 years ago

Note that you can combine the lookup table with the Taylor series expansion...

And you can even use the same lookup table for each. That means with 2 table lookups in a 32-entry table, a single multiply and add (and a few bit shifts), you can get ~9 bits of precision in your result, which is fine for most uses. It also probably makes a sin operation take ~1 clock cycle on most superscalar architectures, as long as your workload has sufficient instruction parallelism.

Note that a smaller table typically works out faster because 32 entries fit in the cache, whereas repeated random entry into a 1024-entry table would probably kick a bunch of other stuff out of the cache that you wanted.
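A minimal sketch of the table-plus-first-Taylor-term idea: cos(x0 + d) is approximated by cos(x0) - sin(x0)*d, with sin(x0) read from the same cosine table a quarter period away. The table name, its size, and the double-based index arithmetic are illustrative assumptions; the comment above has a fixed-point version in mind.

    #include <math.h>

    #define TABLE_BITS 5
    #define TABLE_SIZE (1 << TABLE_BITS)   /* 32 entries over one full period */

    /* Hypothetical table: COS_TABLE[i] = cos(2*pi * i / TABLE_SIZE). */
    extern const double COS_TABLE[TABLE_SIZE];

    double cos_table_plus_taylor(double x) {
        double t  = fabs(x) * (TABLE_SIZE / (2.0 * M_PI)); /* angle in table units */
        double tr = floor(t + 0.5);                        /* nearest table entry */
        double d  = (t - tr) * (2.0 * M_PI / TABLE_SIZE);  /* residual, back in radians */
        int    i  = (int)tr & (TABLE_SIZE - 1);
        double c  = COS_TABLE[i];                                          /* cos(x0) */
        double s  = COS_TABLE[(i + 3 * TABLE_SIZE / 4) & (TABLE_SIZE - 1)]; /* cos(x0 - pi/2) = sin(x0) */
        return c - s * d;
    }

Rounding to the nearest entry keeps the residual within half a table step, which is what keeps the first-order correction accurate.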
fallingfrog · almost 2 years ago

Why not do a table, but also store a table of the 1st derivative (which, for cosine, could be the same table again but shifted)?

Then you could do a 2nd-order fit like a spline instead of a straight line between your table values.

Betcha it would be crazy fast.
Aardwolf · almost 2 years ago

"In all my time of coding, there has only been one situation in which I used cosine: games, games, games.

    function move(obj) {
        obj.x += obj.speed * Math.sin(obj.rotation);
        obj.y += obj.speed * Math.cos(obj.rotation);
    }
"

Why not store the velocity as a 2D vector instead? Then you still have to use cos/sin to compute this vector, but at least you don't need it every frame. Plus, often you don't need cos/sin to compute this vector either, since forces that act on the velocity themselves have an x and y component you can directly add to it.
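A small C sketch of that alternative (the struct and function names are made up for illustration): the per-frame update becomes two additions, and sin/cos only run when the heading actually changes.

    #include <math.h>

    typedef struct {
        double x, y;     /* position */
        double vx, vy;   /* velocity stored as a 2D vector */
    } Object;

    /* Trig happens only here, when speed or rotation changes. */
    void set_velocity(Object *o, double speed, double rotation) {
        o->vx = speed * sin(rotation);
        o->vy = speed * cos(rotation);
    }

    /* Per-frame movement is just vector addition, no trig at all. */
    void move(Object *o) {
        o->x += o->vx;
        o->y += o->vy;
    }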
Technotroll · almost 2 years ago

Looks to me like you could compute the top half, and then just repeat the rest as a kind of mirror function that repeats with some set translation. Am I wrong here?
rjmunro · almost 2 years ago

Can you get better accuracy by noting that cos(x) === sin(π/2 - x) and using e.g. the Taylor expansion for sin when π/4 < x < 3π/4?
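The idea, sketched with hypothetical cos_series and sin_series kernels standing in for whichever truncated expansions are in use: the truncation error of a series centred at 0 grows with the size of the argument, and this switch keeps the argument below π/4 in magnitude.

    #include <math.h>

    double cos_series(double x);   /* hypothetical truncated cosine series about 0 */
    double sin_series(double x);   /* hypothetical truncated sine series about 0 */

    /* For x in [0, pi], hand the hard middle of the range to the sine series. */
    double cos_switching(double x) {
        if (x > M_PI / 4.0 && x < 3.0 * M_PI / 4.0)
            return sin_series(M_PI / 2.0 - x);   /* argument lies in (-pi/4, pi/4) */
        return cos_series(x);
    }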
Const-me · almost 2 years ago

I once did that as well: https://github.com/Const-me/AvxMath/blob/master/AvxMath/AvxMathTrig.cpp

The method is different, and the OP hasn't mentioned it: high-degree minimax polynomial approximation.
throwawaaarrgh · almost 2 years ago

Strange. This article was easy to read, with simple formatting, not driven by trends or weird internet culture. There weren't any bombastic claims, aggrandizing statements or dramatic opinions. Just interesting information presented clearly without being verbose. Almost like it was written by an adult. How did this end up on HN?
adeon · almost 2 years ago

I like how there are lots of replies showing different ways to do this, improve it, additional nuances and viewpoints, etc. Lots of smart people.

If I ever want better solutions for some programming problem, I'll write a post about it and try to get it on the front page of HN :)
machina_ex_deus · almost 2 years ago

I would've done it using complex exponentiation. And the exponential function is very easy to estimate quickly: instead of the Taylor series, use:

e^x = (1 + x/n)^n

Pick n = 2^N, and that's just a bit shift and N repeated multiplications. Probably much faster and accurate.
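A sketch of that idea using C99 complex arithmetic; the function name is made up for illustration. Note that the limit formula (1 + z/n)^n has an error on the order of |z|^2 / (2n), so with n = 2^N the accuracy improves only like 1/2^N: good for a handful of digits, but not a route to full double precision.

    #include <complex.h>
    #include <math.h>

    /* cos(x) = Re(e^{ix}), with e^z approximated by (1 + z/2^N)^(2^N),
       i.e. one division followed by N complex squarings. */
    double cos_via_complex_exp(double x, int N) {
        double complex z = 1.0 + I * (x / (double)(1 << N));
        for (int k = 0; k < N; k++)
            z *= z;          /* repeated squaring raises z to the 2^N-th power */
        return creal(z);
    }

For example, with N = 20 and |x| around 1, the error is roughly x^2 / 2^21, i.e. about six correct decimal digits for the cost of 20 complex multiplications.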
aidenn0 · almost 2 years ago

Marz's Taylor series implementation as-is is significantly faster (~60% of the runtime) than the glibc implementation at 6 terms on My Machine(tm), rather than about the same as in TFA. I haven't compared it to the LERP lookup table yet, though.
londons_explore · almost 2 years ago

OP's lookup table benchmarking is bad. The "modd(x, CONST_2PI);" at the top will dominate, by far, the runtime.

Anyone who wants performance measures angles using n-bit fixed-point math, mapping 0 to 2*pi onto 0 to (2^n)-1.
Helenarttr · almost 2 years ago

That was so amazing information. https://www.ballsportspro.com/how-does-a-pickleball-ladder-work/
charlieyu1 · almost 2 years ago

Related:

https://news.ycombinator.com/item?id=35381968 - Cosine Implementation in C

Much better approximation with only 7 terms.
082349872349872 · almost 2 years ago

Niklaus Wirth also has some cosine code (probably meant more for software floating point, or fp FPGA blocks); I don't know how it compares with these approximations, but his seems to be within 1e-6 of Python's math.cos ...

https://people.inf.ethz.ch/wirth/ProjectOberon/Sources/Math.Mod.txt
omgmajk · almost 2 years ago

> Maybe don't stare at that for too long...

I sure did stare at that for way too long.
dang · almost 2 years ago

Url changed from https://web.archive.org/web/20210513043002/http://web.eecs.utk.edu/~azh/blog/cosine.html, which points to this.