One important point the article doesn't touch on is determinism. rsqrtps is implemented differently on different CPUs, so if reproducible floating-point results are a requirement (and there are plenty of use cases for that), you simply cannot use it. Your only remaining option is a fully IEEE-754-compliant algorithm that is guaranteed to behave the same on every CPU implementing IEEE-754 floats, and for that there's still no better approach than the Q_rsqrt idea, of course with some modifications for the "modern age".
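A sketch of what those "modern age" modifications might look like (my interpretation, not something the article prescribes): replace the pointer-cast punning with memcpy, which is well-defined C and compiles down to a plain register move, and use a fixed-width int32_t instead of long. Since it only uses IEEE-754 multiplies and subtracts, the result is bit-identical across conforming CPUs:

```c
#include <stdint.h>
#include <string.h>

/* Q_rsqrt updated for modern C: memcpy-based bit-punning is
 * well-defined (unlike the original pointer cast) and int32_t is
 * 32 bits everywhere (unlike long, which is 64-bit on LP64). */
static float rsqrt_portable(float x) {
    int32_t i;
    float y = x;
    memcpy(&i, &y, sizeof i);            /* reinterpret float bits as integer */
    i = 0x5f3759df - (i >> 1);           /* the famous magic-constant guess */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - 0.5f * x * y * y);   /* one Newton-Raphson refinement */
    return y;
}
```

One refinement step leaves roughly 0.2% relative error; add a second identical step if you need more accuracy, still deterministically.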
I wonder to what extent the Newton-Raphson strategy plays nicer with big fancy reordering/pipelining/superscalar chips. It has more little instructions to shuffle around, so my gut says it should be beneficial, but the gut can be misleading for this kind of stuff.<p>Also,<p>-funsafe-math-optimizations<p>Fun, safe math optimizations should be turned on by default! ;)
The benchmark should not average the values but take the lowest.<p>I couldn't write a better explanation than Daniel Lemire does on his blog:<p><a href="https://lemire.me/blog/2023/04/06/are-your-memory-bound-benchmarking-timings-normally-distributed/" rel="nofollow">https://lemire.me/blog/2023/04/06/are-your-memory-bound-benc...</a>
A colleague and I were once discussing the fast inverse square root and joked that we need to make a (neural net) activation function that uses an inverse square root as an excuse to use the fast inverse square root. At any rate, I did come up with an activation function that is very similar to Swish/GELU but uses an inverse square root:<p><a href="https://twitter.com/danieldekok/status/1484898130441166853?s=61&t=D_7PZsTJCaa6-HzOReAhwg" rel="nofollow">https://twitter.com/danieldekok/status/1484898130441166853?s...</a><p>It's quite a bit cheaper, because it doesn't need expensive elementary functions like exp or erf.<p>(We did add it to Thinc: <a href="https://thinc.ai/docs/api-layers#dish" rel="nofollow">https://thinc.ai/docs/api-layers#dish</a>)
It's a shame that -fno-math-errno isn't the default. It pessimizes many common operations, as can be seen in the article. Also e.g. a simple sqrt() call like<p>#include <math.h><p>double mysqrt(double d) {
return sqrt(d);
}<p>with and without -fno-math-errno: <a href="https://godbolt.org/z/bvrz9r8ce" rel="nofollow">https://godbolt.org/z/bvrz9r8ce</a><p>One can see that with -fno-math-errno the function can be entirely inlined. But if errno is enabled, it has to first check whether the input is negative, and in that case call the libc sqrt() function which sets errno.<p>As for why it's not the default, I guess it's historical: the errno approach dates back to the days before IEEE 754's exception model provided another way of reporting errors.<p>E.g. for glibc: <a href="https://man7.org/linux/man-pages/man7/math_error.7.html" rel="nofollow">https://man7.org/linux/man-pages/man7/math_error.7.html</a><p>Musl libc, being newer, does away with that and never sets errno in libm functions: <a href="https://wiki.musl-libc.org/mathematical-library.html" rel="nofollow">https://wiki.musl-libc.org/mathematical-library.html</a>
I've always wondered why they did the casting rather than a union like:<p><pre><code>    float my_rsqrt( float number )
    {
        float x2;
        union {
            float y;
            int32_t i;  /* fixed-width; plain long is 8 bytes on LP64, which breaks the punning */
        } u;
        const float threehalfs = 1.5F;

        x2 = number * 0.5F;
        u.y = number;
        u.i = 0x5f3759df - ( u.i &gt;&gt; 1 );                 // what the fuck?
        u.y = u.y * ( threehalfs - ( x2 * u.y * u.y ) );  // 1st iteration
        return u.y;
    }
</code></pre>
Were unions not supported by the compilers back then?
Awesome write up, a lot of effort must have gone into this.<p>I believe the benchmark program outputs the wrong units? It should be picoseconds (ps) instead of femtoseconds (fs)?
This would never pass code review today. "Why are you optimizing this?" "Why use a magic constant?" "Optimization is evil!"