
Implementing the Exponential Function

215 points by hackernewsn00b, almost 5 years ago

20 comments

kens, almost 5 years ago
A couple other ways to compute the exponential function: The Intel 8087 coprocessor used CORDIC. The chip contained a ROM with the values of log2(1+2^-n) for various n. These constants allowed the exponential to be rapidly computed with shifts and adds.

The Sinclair Scientific calculator was notable for cramming transcendental functions into a chip designed as a 4-function calculator, so they took some severe shortcuts. Exponentials were based on computing 0.99^n through repeated multiplication. Multiplying by 0.99 is super-cheap in BCD, since you shift by two digits and subtract. To compute 10^x, it computes 0.99^(-229.15*x). Slow and inaccurate, but easy to implement.
benrbray, almost 5 years ago
Great post! The author touches on Chebychev polynomials, which are the basis for quite a few numerical analysis tricks, including conjugate gradient [1], accelerated gradient descent [2], and Chebychev semi-iterative methods (which find the best combination of past iterates to use during an optimization procedure; sadly I can't find a good reference).

There are a number of facts/folklore about Chebychev polynomials that seem to go stated without proof in a lot of papers, so a few years ago I wrote up some notes [3,4] with the most common claims. Maybe someone will find them useful!

[1] (p35) http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf

[2] http://blog.mrtz.org/2013/09/07/the-zen-of-gradient-descent.html

[3] https://benrbray.com/static/notes/chebychev-polynomials_dec16.pdf

[4] https://benrbray.com/static/notes/minimax-approx_nov16.pdf
dvt, almost 5 years ago
This is an awesome article! One of my favorite SO answers I remember researching dealt with the Padé approximation of tanh [1] (which I found was significantly better than the Taylor expansion), but the caveat was that it only worked within a very narrow neighborhood.

I will say that the article didn't really touch on techniques that minimize IEEE 754 subnormal [2] performance impact, which is a very interesting problem in itself. A lot of real-life implementations of something like e^x will have various clever ways to avoid subnormals.

[1] https://stackoverflow.com/a/6118100/243613

[2] https://mathworld.wolfram.com/SubnormalNumber.html
dietrichepp, almost 5 years ago
I recently implemented the exponential function for a sound synthesizer toolkit I’m working on. The method I used was not mentioned in the article, so I’ll explain it here.

I used the Remez algorithm, modified to minimize equivalent input error. This algorithm lets you find a polynomial with the smallest maximum error. You start with a set of X coordinates, and create a polynomial which oscillates up and down around the target function, so p(x0) = f(x0) + E, p(x1) = f(x1) - E, p(x2) = f(x2) + E, etc.

You then iterate by replacing the set of x values with the x values which have maximum error. This will converge to a polynomial with smallest maximum error.

From my experiments, you can use a fairly low order approximation for musical applications. Used to calculate frequencies of musical notes, 2nd order is within 3.0 cents, and 3rd order is within 0.13 cents.
nimish, almost 5 years ago
Need to be careful about confusing smooth (infinitely differentiable) and analytic (equal to its Taylor series) functions.

> Any function with this property can be uniquely represented as a Taylor series expansion “centered” at a

It's fairly easy to construct tame-looking functions where this isn't true.
m4r35n357, almost 5 years ago
[Aside on Taylor Series] The section on recurrence relations is the basis of a little-known ODE solving technique sometimes known as the Taylor Series Method (occasionally Differential Transform). It allows an "extension" of Euler's Method to arbitrary order (using arbitrary precision arithmetic), by _calculating_ the higher derivatives rather than approximating them as in RK4 and friends.

Here is an open access link to just one of many papers on the subject: https://projecteuclid.org/euclid.em/1120145574

It includes a link to the actual software they used.

[EDIT] Obviously applying the technique above to dx/dt = x reproduces the Taylor Series in the article!
philzook, almost 5 years ago
Really interesting article!

Some other resources you may find interesting:

https://fpbench.org/index.html - I think I saw a talk at FPTalks that felt related to your topic, "Creating correctly rounded math libraries for real number approximations"

https://fpbench.org/community.html

https://hal-ens-lyon.archives-ouvertes.fr/ensl-01529804 - Correctly Rounded libm

https://www.mpfr.org/ - GNU multiple-precision floating point

http://sollya.gforge.inria.fr/ - A tool for floating point code development
nullc, almost 5 years ago
A collaborator and I recently had fun code golfing integer implementations of floor(106*log2(x)) and floor(log2(x!)):

https://github.com/sipa/minisketch/blob/master/src/false_positives.h#L20
nayuki, almost 5 years ago
Excellent article; it's very detailed in the mathematical explanations and code.

I was interested in the exponential function too, but approached it from the standpoint of arbitrary-precision arithmetic. My explanation is an order of magnitude shorter, but of course my code is a lot slower than a typical floating-point algorithm. I offer a way to compute correctly rounded exponentials without resorting to a big math package like Mathematica/Maple/etc.: https://www.nayuki.io/page/approximating-eulers-number-correctly
defaultcompany, almost 5 years ago
> We can see the values are relatively densely clustered near 0 and increasingly sparse the further you move away from the origin. As another example, half of all 32-bit floating point numbers reside in the real interval [-1,1].

I knew this in general but this specific statistic was eye-opening to me. Systems like OpenGL normalize coordinates so they only use values in this [-1,1] range. That means they effectively lose half the expressive power of the 32 bits. Are there other machine representations of real numbers which would be better for that use case, i.e. restricted to that range and evenly distributed?
ollie, almost 5 years ago
Ugh, I had to implement all the transcendental functions a while ago. It was miserable, especially dealing with intermediate rounding, etc.

I don't recommend it.

That said, I liked that there's a dedicated expm1 function for e^x - 1 (otherwise you drop most of the precision near zero).
xelxebar, almost 5 years ago
Enjoying the article. Looks like there's a math typo at the tail of the "Precision Bound" section:

    $$ \log\left(\frac{xe}{n}\right) \leq \frac{1}{n}\log(\eta) $$

is not equivalent to

    $$ \frac{xe}{n} \leq \eta^{-n} $$

The latter should be

    $$ \frac{xe}{n} \leq \eta^{1/n} $$

and the "$x^{-n}$" in the following paragraph should be changed accordingly.

Certainly an understandable flub, though. Working with logarithms, it's easy to mix up minus-sign vs. 1/n conversions.
chillee, almost 5 years ago
There are 2 cool things related to floating point error I learned recently.

First, `x + x + x + x + x == 5*x` holds for every float. The same is true for multipliers up to 5, but it is not true for 6. Proving this is a fun exercise, but is a little bit painful and has a lot of cases.

Second, SMT solvers can actually prove stuff like this relatively easily! In Z3py, one can prove this in 4 lines of code:

    from z3 import *

    x = FP('x', FPSort(8, 24))
    set_default_rounding_mode(RNE())
    solve(5 * x != x + x + x + x + x)  # "no solution": the identity always holds
nwallin, almost 5 years ago
IMHO it's a bit cheeky to use exp2 from the standard library. It's likely got all the same math exp does. Fortunately, if you know that your exponent is an integer, there's a constant time version. It's 5 instructions: add, and, shift, mov, ret.

https://godbolt.org/z/KNyoVd
widdma, almost 5 years ago
Nice article, thanks. For anyone interested in the topic, I highly recommend Trefethen's Approximation Theory and Approximation Practice. It's approachable, intuitive, and fun while still covering a lot of technical detail.
riverlong, almost 5 years ago
Great read and stunning web design (apart from the small italics in the abstract, which are a little hard to read). A beautiful first post, certainly.
emmanueloga_, almost 5 years ago
A few miscellaneous meta nitpicks:

* The article looks beautiful, but am I the only one who hates left-centered content? Wide monitors are the norm these days, and reading left-centered content literally hurts my neck (I need to resize the window manually to approximately center the content).

* Why would someone make the effort to write such a detailed analysis and conceal their identity? I just don't get the objective of not providing any sort of provenance... just a gripe of mine.
sakex, almost 5 years ago
I was looking for exactly that today, thanks!
BernardTatin, almost 5 years ago
Very good article! Thanks!
nxpnsv, almost 5 years ago
This is a great read!