I recently learned how Doom was ported to the SNES. It's quite impressive. The SNES hardware was nowhere near fast enough to do all the trig calculations needed for the game, but cartridge-based games had a trick up their sleeve: they could include actual hardware <i>inside the cart</i> that the game code could make use of. It was more expensive, but if you expected to sell a boatload of copies, it could be worth it. However, even using extra hardware wasn't enough in this case. So they pre-calculated lookup tables for sine, cosine, tangent, etc. for every angle at the necessary precision. They were helped by the fact that the game resolution in this case was fairly low.<p>If you're interested, you can peruse the C code that was used to generate the tables. Here's the file for sine/cosine:<p><a href="https://github.com/RandalLinden/DOOM-FX/blob/master/source/mksin.c">https://github.com/RandalLinden/DOOM-FX/blob/master/source/m...</a>
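The general idea behind a table generator like that is easy to sketch. This isn't the actual mksin.c, just a minimal illustration; the 1024-entry table and the 8.8 fixed-point format are made-up choices:
<pre><code>#include <stdio.h>
#include <math.h>

#define ANGLES     1024    /* table entries per full revolution (arbitrary) */
#define FRAC_BITS  8       /* fixed-point fractional bits (arbitrary)       */

int main(void)
{
    const double pi = 3.14159265358979323846;

    for (int i = 0; i < ANGLES; i++) {
        double a = 2.0 * pi * i / ANGLES;
        /* scale to fixed point and round to the nearest integer */
        long v = lround(sin(a) * (1 << FRAC_BITS));
        printf("%6ld,%s", v, (i % 8 == 7) ? "\n" : " ");
    }
    return 0;
}
</code></pre>
You run something like this once on the development machine and paste (or assemble) the output into the ROM image; the console itself never evaluates sin at all.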
This made me realize that trigonometric functions are not deterministic across different CPU architectures, operating systems, and programming languages (floating point precision aside).<p>E.g. I would assume that Math.sin(x) returns the same thing in NodeJS on Windows and Mac/M1, but it turns out that is not necessarily so.
<a href="https://stackoverflow.com/questions/74074312/standard-math-functions-reproducibility-on-different-cpus" rel="nofollow">https://stackoverflow.com/questions/74074312/standard-math-f...</a>
It's much clearer if you read the source code of one of the libm implementations.<p>Plan 9: <a href="https://9p.io/sources/plan9/sys/src/libc/port/sin.c" rel="nofollow">https://9p.io/sources/plan9/sys/src/libc/port/sin.c</a><p>FreeBSD: <a href="https://cgit.freebsd.org/src/tree/lib/msun/src/k_sin.c" rel="nofollow">https://cgit.freebsd.org/src/tree/lib/msun/src/k_sin.c</a>
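For a rough idea of the shape of those files: the caller reduces the argument to a small interval around zero plus a quadrant number, and a kernel evaluates a short odd polynomial on that interval. Here's a toy sketch of that structure, using plain Taylor coefficients for readability; the real k_sin.c uses carefully fitted minimax constants, a two-part reduced argument, and a matching cosine kernel:
<pre><code>#include <stdio.h>
#include <math.h>

/* Kernel: sin(r) for |r| <= pi/4, as a short odd polynomial in r.
 * Truncated Taylor terms are good to roughly 2e-9 on this interval;
 * production code targets < 1 ulp with tuned coefficients. */
static double kernel_sin(double r)
{
    double z = r * r;
    return r * (1.0 + z * (-1.0/6 + z * (1.0/120 + z * (-1.0/5040 + z / 362880.0))));
}

/* Toy wrapper: pick the nearest multiple of pi/2, reduce, and apply the
 * quadrant identities.  Real libm does the reduction in extended
 * precision (see e_rem_pio2.c) so huge arguments survive it. */
static double toy_sin(double x)
{
    const double halfpi = 1.57079632679489661923;
    long q = lround(x / halfpi);
    double r = x - q * halfpi;            /* |r| <= pi/4, roughly */
    switch (((q % 4) + 4) % 4) {
    case 0:  return  kernel_sin(r);
    case 1:  return  cos(r);              /* sin(q*pi/2 + r) = cos(r) */
    case 2:  return -kernel_sin(r);
    default: return -cos(r);
    }
}

int main(void)
{
    for (double x = -3.0; x <= 3.0; x += 0.7)
        printf("x=% .2f  toy=% .9f  libm=% .9f\n", x, toy_sin(x), sin(x));
    return 0;
}
</code></pre>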
P.J. Plauger's _The Standard C Library_ provides an implementation for all functions in the (then) C standard: <a href="https://www.amazon.com/Standard-Library-P-J-Plauger/dp/0138380120?ref_=ast_author_dp&dib=eyJ2IjoiMSJ9.J2FZMVfhMnUjy3nORwaNfJ39GKhZvMa1t-YBXfQeaEgAuQf63AYkxWWCauQjjBeo9Z3_OrNF4PZDHa-l_tdvzR3ooYzJBGyfigUwXLxNHiszIfSYtPsgIzjEKpUmRBVOSMlrnXBoG26XMFM6WGfX6gSXoSJCch-ZygQNZo-OuqAfqvHlSY4NZBnBxp3ORQySq32fkwuWqv516zBqZMCmp6fPUdbJG1rJgG9Z5yJxIc8.Kz9qxgL-6ZhzUqEeAARs4sTCGFg8uFGlXxRP21iGrRc&dib_tag=AUTHOR" rel="nofollow">https://www.amazon.com/Standard-Library-P-J-Plauger/dp/01383...</a>
After reducing the argument to a small interval, you don't want to use the Taylor series, as you're building an approximation that's really good at 0 but not so good moving away from 0.
It's better to use an interpolating polynomial (Chebyshev comes to mind) over the whole target interval.
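A sketch of that approach: fit a Chebyshev series to sin on a reduced interval and evaluate it with Clenshaw's recurrence. The [-pi/4, pi/4] interval and the 8-term fit below are arbitrary illustrative choices:
<pre><code>#include <stdio.h>
#include <math.h>

#define N 8
static const double PI = 3.14159265358979323846;

static double coef[N];
static double A, B;

/* Sample f at the Chebyshev nodes of [a,b] and compute the series
 * coefficients (standard Numerical-Recipes-style construction). */
static void cheb_fit(double a, double b, double (*f)(double))
{
    double fv[N];
    A = a; B = b;
    for (int k = 0; k < N; k++) {
        double t = cos(PI * (k + 0.5) / N);            /* node in [-1,1] */
        fv[k] = f(0.5 * (b - a) * t + 0.5 * (b + a));  /* mapped to [a,b] */
    }
    for (int j = 0; j < N; j++) {
        double s = 0.0;
        for (int k = 0; k < N; k++)
            s += fv[k] * cos(PI * j * (k + 0.5) / N);
        coef[j] = 2.0 * s / N;
    }
}

/* Evaluate the fitted series with Clenshaw's recurrence. */
static double cheb_eval(double x)
{
    double t = (2.0 * x - A - B) / (B - A);            /* back to [-1,1] */
    double d = 0.0, dd = 0.0;
    for (int j = N - 1; j >= 1; j--) {
        double sv = d;
        d = 2.0 * t * d - dd + coef[j];
        dd = sv;
    }
    return t * d - dd + 0.5 * coef[0];
}

int main(void)
{
    cheb_fit(-PI / 4, PI / 4, sin);
    double maxerr = 0.0;
    for (double x = -PI / 4; x <= PI / 4; x += 0.001) {
        double e = fabs(cheb_eval(x) - sin(x));
        if (e > maxerr) maxerr = e;
    }
    printf("max error with %d terms: %g\n", N, maxerr);
    return 0;
}
</code></pre>
Unlike a Taylor expansion around 0, the error of a fit like this is spread fairly evenly across the whole interval rather than blowing up at the endpoints.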
I've seen the third-order Taylor series used, but with the coefficients calculated at various offsets for a quarter wave. So you look up where you are in the quarter wave, then look up the 3 or 4 coefficients. This keeps the error somewhat bounded, as the size of x is small, so the series does not diverge too much.
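A sketch of that kind of scheme (not necessarily the exact one described above): split the quarter wave into segments, store the Taylor coefficients of sin expanded around each segment centre, and evaluate a cubic in the offset. The 16-segment count is an arbitrary choice:
<pre><code>#include <stdio.h>
#include <math.h>

#define SEGS 16
static const double PI_2 = 1.57079632679489661923;

static double c0[SEGS], c1[SEGS], c2[SEGS], c3[SEGS], centre[SEGS];

static void build_tables(void)
{
    for (int i = 0; i < SEGS; i++) {
        double c = (i + 0.5) * PI_2 / SEGS;
        centre[i] = c;
        c0[i] =  sin(c);        /* constant term          */
        c1[i] =  cos(c);        /* first derivative       */
        c2[i] = -sin(c) / 2.0;  /* second derivative / 2! */
        c3[i] = -cos(c) / 6.0;  /* third derivative / 3!  */
    }
}

static double quarter_sin(double x)    /* valid for 0 <= x <= pi/2 */
{
    int i = (int)(x * SEGS / PI_2);
    if (i >= SEGS) i = SEGS - 1;
    double d = x - centre[i];          /* |d| <= pi/(4*SEGS) */
    return c0[i] + d * (c1[i] + d * (c2[i] + d * c3[i]));
}

int main(void)
{
    build_tables();
    double maxerr = 0.0;
    for (double x = 0.0; x <= PI_2; x += 1e-4) {
        double e = fabs(quarter_sin(x) - sin(x));
        if (e > maxerr) maxerr = e;
    }
    printf("max error over the quarter wave: %g\n", maxerr);
    return 0;
}
</code></pre>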
You might also find this video interesting:<p>"Finding the BEST sine function for Nintendo 64"<p><a href="https://www.youtube.com/watch?v=xFKFoGiGlXQ" rel="nofollow">https://www.youtube.com/watch?v=xFKFoGiGlXQ</a>
Anyone experienced with the Remez algorithm mentioned at the end of the article?<p>The degree-9 polynomial, said to be a thousand times better than the original Taylor approximation in maximum error, also appears to be very close to the Taylor series in the first place.<p>Rounding the Taylor coefficients to a few digits:<p>1/3! = 0.166667<p>1/5! = 0.008333<p>1/7! = 0.000198<p>1/9! = 0.0000027(56)<p>The first 2 are exact, the third matches to only 5 digits (so 0.000190), and the fourth differs more noticeably (0.0000026019 vs 0.0000027557).<p>The delta in the 9th-order coefficient is expected if you truncate the Taylor series starting from the 11th order to infinity (- x^11/11! + x^13/13! ...).
I blogged my adventure of implementing cosine from scratch and how others have done it:<p><a href="https://austinhenley.com/blog/cosine.html" rel="nofollow">https://austinhenley.com/blog/cosine.html</a>
This was my first use of open source, around 1978. I wondered how calculators and computers did trig functions, and was also using Unix V7. We had a large disk and kept the source on line. So I was able to find this: <a href="https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/libm/sin.c" rel="nofollow">https://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/libm/s...</a> and from there this book: <a href="https://www.biblio.com/book/computer-approximations-john-f-hart-e/d/1504828236" rel="nofollow">https://www.biblio.com/book/computer-approximations-john-f-h...</a>
<a href="http://steve.hollasch.net/cgindex/math/inccos.html" rel="nofollow">http://steve.hollasch.net/cgindex/math/inccos.html</a> is a great technique if you need a fast integer approximation for some some arbitrary sampling interval (i.e. motor control)
Just using the 32-entry LUT and the small-angle approximations (sin(x) = x, cos(x) = 1 - x^2/2) lets you calculate sin(x) to within +/- 0.00015, which is rather impressive for something that can be done quickly by hand.<p>If you use cos(x) = 1, then you are still accurate to within 1%.<p>[edit]<p>I think I also found an error in TFA? It seems like picking the "best" N would allow r to be in the range -pi/32 < r < pi/32, which makes the 3rd-order Taylor series have an error of 4e-12, significantly better than the error range for 0 < r < pi/16
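Roughly what that looks like in code, using nearest-entry rounding so |r| <= pi/32 (table size as in the comment; a sketch, not a tuned implementation):
<pre><code>#include <stdio.h>
#include <math.h>

#define TABSIZE 32
static const double TWO_PI = 6.28318530717958647692;

static double sintab[TABSIZE], costab[TABSIZE];

static void build(void)
{
    for (int i = 0; i < TABSIZE; i++) {
        sintab[i] = sin(TWO_PI * i / TABSIZE);
        costab[i] = cos(TWO_PI * i / TABSIZE);
    }
}

/* sin(a + r) = sin(a)cos(r) + cos(a)sin(r), with the table giving
 * sin(a), cos(a) and the small-angle formulas handling r. */
static double lut_sin(double x)
{
    long n = lround(x * TABSIZE / TWO_PI);     /* nearest table entry */
    double r = x - n * TWO_PI / TABSIZE;       /* |r| <= pi/32        */
    int i = (int)(((n % TABSIZE) + TABSIZE) % TABSIZE);
    double sr = r;                             /* sin(r) ~ r          */
    double cr = 1.0 - 0.5 * r * r;             /* cos(r) ~ 1 - r^2/2  */
    return sintab[i] * cr + costab[i] * sr;
}

int main(void)
{
    build();
    double maxerr = 0.0;
    for (double x = -10.0; x <= 10.0; x += 1e-4) {
        double e = fabs(lut_sin(x) - sin(x));
        if (e > maxerr) maxerr = e;
    }
    printf("max error: %g\n", maxerr);   /* roughly (pi/32)^3 / 6 ~ 1.6e-4 */
    return 0;
}
</code></pre>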
CORDIC is how it's usually done in hardware (and FPGAs).<p><a href="https://en.wikipedia.org/wiki/CORDIC" rel="nofollow">https://en.wikipedia.org/wiki/CORDIC</a>
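A floating-point sketch of CORDIC in rotation mode, for flavour. Real hardware works in fixed point, where the multiply by 2^-i becomes a shift and atan(2^-i) comes from a small table; the 24-iteration count below is arbitrary:
<pre><code>#include <stdio.h>
#include <math.h>

#define ITERS 24

/* Rotate the vector (K, 0) towards angle `a` by a fixed sequence of
 * micro-rotations atan(2^-i).  K ~ 0.607252935 is the accumulated
 * gain of the micro-rotations, folded in up front.  Converges for
 * |a| <= ~1.74 rad, so real implementations reduce the argument first. */
static void cordic_sincos(double a, double *s, double *c)
{
    double x = 0.6072529350088812561694, y = 0.0, pow2 = 1.0;
    for (int i = 0; i < ITERS; i++) {
        double sigma = (a >= 0.0) ? 1.0 : -1.0;
        double nx = x - sigma * y * pow2;
        double ny = y + sigma * x * pow2;
        a -= sigma * atan(pow2);    /* hardware reads this from a tiny table */
        x = nx; y = ny;
        pow2 *= 0.5;
    }
    *c = x; *s = y;
}

int main(void)
{
    double s, c;
    cordic_sincos(1.0, &s, &c);
    printf("cordic: sin=%.9f cos=%.9f\n", s, c);
    printf("libm:   sin=%.9f cos=%.9f\n", sin(1.0), cos(1.0));
    return 0;
}
</code></pre>
The appeal in hardware is that there are no multipliers at all in the fixed-point version: just shifts, adds, and a small constant table.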
I was truly amazed when my high school computer science teacher expanded the sine function into a Taylor series for us to implement in class.<p>Couldn't wrap my head around the concept until the topic was revisited in college, but the idea was there and helped me understand the tradeoffs made by the "fast math" libraries I was using during high school.
What’s the best way to calculate it by hand?<p>I’m brushing up my math basics (I graduated CS while dodging the math requirements) and it frustrates me that in trig I need to remember values at all. The values such as sqrt(2)/2 make sense but how hard is it to calculate sin(5 degrees) by hand?
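For sin(5 degrees) specifically, a couple of Taylor terms already get you most of the way, and the only things to remember are pi/180 and x - x^3/6. A rough worked example (values rounded):
<pre><code>5 deg = 5 * pi/180 ~ 0.0872665 rad

sin(x) ~ x - x^3/6
       ~ 0.0872665 - 0.0001108
       ~ 0.0871557
</code></pre>
The true value is 0.08715574..., so two terms are already good to about seven decimal places near zero; the tedious part by hand is the cubing and the division, not remembering anything.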
I remember seeing the source code for a version of SpaceWar! running on an Adage Graphics Terminal (a one's complement machine) around 1970 that used a precomputed sine table.<p>I wonder what the first program was that ever used a precomputed trig table.
The same way you can eat an elephant: one byte at a time.<p>Any calculating job can be undertaken by a proper Turing machine. You just have to keep in mind the old triangle of usage: Cost, capability and speed.<p>If a human can calculate a sine, so can any full-blown computer.
I thought modern CPUs since the late 1980's used a lookup table for trig/transcendental functions. Is the LUT just an expansion of the polynomial? I never really understood how FPUs worked...
Ignorant question:<p>Given the ridiculous number of transistors and so on we can use in CPUs and GPUs nowadays, how feasible is a relatively huge trig lookup table burned into ROM?