
Chebyshev approximation and how it can help (2012)

140 points | by y04nn | 11 months ago

10 comments

programjames, 11 months ago

Some other useful things about Chebyshev approximations:

1. You can use a Fourier transform to get the coefficients in O(n log n) time.

2. So, multiplying two approximations only takes O(n log n) time.

3. Also, adding, integrating, or taking the derivative only takes O(n) time.

This is why chebfun/chebpy can run so fast while magically finding roots/derivatives/etc. A couple of other interesting facts:

1. Remember the double-angle formula? There's a more general recursion for the Chebyshev polynomials:

\[ T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x). \]

So, e.g.,

\[ T_2(cos(theta)) = cos(2*theta) = 2*cos(theta)^2 - 1 = 2*cos(theta)*T_1(cos(theta)) - T_0(cos(theta)) \]

2. Computers actually use this recursion to calculate sines and cosines! So it's a little inefficient to code your Chebyshev polynomials using `math.sin`.

3. Using generating functions, you can get a closed form for T_n(x) that only takes O(log n) time to calculate. (Note: this counts multiplications as constant-time; since you actually need O(log n) bits to represent x accurately, it is more precisely O((log n)^2 log log n).)
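The three-term recurrence above takes only a few lines of Python; this is an illustrative sketch (the helper name `cheb_T` is mine, not from the article), checked against the defining identity T_n(cos θ) = cos(nθ):

```python
import math

def cheb_T(n, x):
    """Evaluate the Chebyshev polynomial T_n(x) via the recurrence
    T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x), with T_0 = 1 and T_1 = x."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

# Sanity check: T_n(cos θ) = cos(nθ) for the first few n
theta = 0.7
for n in range(8):
    assert abs(cheb_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-12
```

Clenshaw's algorithm builds on the same recurrence to evaluate a whole Chebyshev series in O(n) multiplications.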
guyomes, 11 months ago

If you are ready to spend some precomputation time on a good approximation, you can use the Remez algorithm [1]. It is implemented for machine precision in the Sollya library [2,3]. It has notably been used to implement the Core-Math library [4], which provides correctly rounded versions of the math functions in libc.

[1]: https://en.wikipedia.org/wiki/Remez_algorithm

[2]: https://www.sollya.org/

[3]: https://www.sollya.org/sollya-weekly/sollya.php

[4]: https://core-math.gitlabpages.inria.fr/
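To get a feel for what Remez does, here is a hedged numpy sketch of a single exchange step (not Sollya's implementation): it solves the equioscillation system p(x_i) + (-1)^i·E = f(x_i) at deg+2 reference points; the full algorithm then moves the reference points to the extrema of the error and repeats until E stops growing.

```python
import numpy as np

def remez_step(f, deg, pts):
    """One Remez exchange step: solve for polynomial coefficients
    c_0..c_deg and a levelled error E such that
        p(x_i) + (-1)^i * E = f(x_i)
    at the len(pts) == deg + 2 reference points."""
    n = deg + 2
    A = np.vander(pts, deg + 1, increasing=True)           # columns x^0..x^deg
    A = np.hstack([A, ((-1.0) ** np.arange(n))[:, None]])  # alternating-sign column
    sol = np.linalg.solve(A, f(pts))
    return sol[:-1], sol[-1]  # coefficients (increasing powers), levelled error E

# Degree-2 fit to exp on [-1, 1], starting from the Chebyshev extrema of T_3
pts = np.cos(np.pi * np.arange(4) / 3.0)[::-1]
coeffs, E = remez_step(np.exp, 2, pts)

# By construction the error equioscillates at the reference points
p = np.polynomial.polynomial.polyval(pts, coeffs)
assert np.allclose(p + (-1.0) ** np.arange(4) * E, np.exp(pts))
```

Starting from Chebyshev extrema, the very first step is already close to the true minimax polynomial, which is why Remez typically converges in a handful of iterations.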
alleycat5000, 11 months ago

A great book on this subject is Approximation Theory and Approximation Practice:

https://people.maths.ox.ac.uk/trefethen/ATAP/

Also chebfun!

https://www.chebfun.org/
herodotus, 11 months ago

This is so strange: a few days ago I commented on an HN post (https://news.ycombinator.com/item?id=40582712) about when "programmer" became an acknowledged job title, and, in my comment, mentioned how I used a Chebyshev approximation followed by two iterations of Newton's method to compute sqrt; then today this article shows up with exactly that use case!

I wrote that code (to compute sqrt) in 1974 or 1975 in IBM 360 Assembler. I used a conditional macro constant that increased the number of Newton iterations from 2 to 3, in case the client wanted double precision.
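That scheme (cheap polynomial seed, then a couple of Newton steps) is easy to sketch in Python. This is illustrative only: the seed line below is a hypothetical degree-1 fit on [0.25, 1), not the polynomial from the original 360 Assembler routine.

```python
import math

def approx_sqrt(x, iters=2):
    """sqrt via range reduction, a linear seed, and Newton's method."""
    assert x > 0.0
    # Range-reduce into [0.25, 1) so a single seed polynomial suffices;
    # dividing x by 4 halves sqrt(x), so track the power of two.
    e = 0
    while x >= 1.0:
        x *= 0.25
        e += 1
    while x < 0.25:
        x *= 4.0
        e -= 1
    y = 0.354 + 0.667 * x         # rough minimax line for sqrt on [0.25, 1)
    for _ in range(iters):
        y = 0.5 * (y + x / y)     # Newton step for y**2 = x
    return math.ldexp(y, e)       # scale the result back by 2**e
```

Each Newton step roughly squares the relative error, so a seed good to ~2% reaches single precision in two iterations and double precision in three, matching the comment's 2-vs-3 macro switch.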
kragen, 11 months ago

chebyshev approximations are fucking awesome, but this article gives too short shrift to table lookup; it does go a bit beyond nearest-neighbor interpolation to linear interpolation, and correctly points out that this gives you error that is quadratic in the distance from the x-coordinate of the nearest table entry (and therefore worst-case error quadratic in your point spacing), and that this gives you half the error of the fifth-order chebyshev approximation. it says that this is a 'rare case', but in fact you will always get a lower error from table lookup if you use enough points; it's just that with only linear interpolation, the number of points rapidly becomes impractical

as i understand it, other commonly-used strategies include spline interpolation (using second-, third-, or even fourth-order interpolation, requiring respectively three, four, and five multiplications, which can be done concurrently) and, in suitable cases like this example, newton iteration from an initial table-lookup guess

unlike the piecewise-taylor approach outlined early in the article, spline interpolation only requires storing a tiny amount more data than simple nearest-neighbor table lookup (potentially three more points for fourth-order interpolation, so a 256-entry table becomes 259 entries)

on a different topic, i think it's easy to find embedded dsp applications where the easiest solution uses fourier transforms, which usually do require high-precision floating point: machine vision, radio communication, musical applications, etc.

incidentally, if you find yourself in a situation where you actually need the taylor expansion of √(1+*x*) or √(½+*x*) or something, and you don't want to do a bunch of pencil-and-paper algebra (or don't trust yourself), pari/gp has your back:

    ? sqrt(1+x) + O(x^5)
    %5 = 1 + 1/2*x - 1/8*x^2 + 1/16*x^3 - 5/128*x^4 + O(x^5)
    ? sqrt(1+x) + O(x^7)
    %7 = 1 + 1/2*x - 1/8*x^2 + 1/16*x^3 - 5/128*x^4 + 7/256*x^5 - 21/1024*x^6 + O(x^7)
    ? sqrt(1/2+x) + O(x^5)
    %6 = 0.70710678118654752440084436210484903928 + 0.70710678118654752440084436210484903928*x - 0.35355339059327376220042218105242451964*x^2 + 0.35355339059327376220042218105242451964*x^3 - 0.44194173824159220275052772631553064955*x^4 + O(x^5)
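The quadratic-in-spacing error claim is easy to check numerically. A small Python sketch (illustrative, not from the article) for sqrt on [1, 4]: quadrupling the table size should cut the worst-case error by roughly 16x.

```python
import bisect
import math

def make_table(n, lo=1.0, hi=4.0):
    """n+1 equally spaced samples of sqrt on [lo, hi]."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return xs, [math.sqrt(v) for v in xs]

def lerp_lookup(x, xs, ys):
    """Table lookup with linear interpolation between neighboring entries."""
    i = min(bisect.bisect_right(xs, x), len(xs) - 1) - 1
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return (1.0 - t) * ys[i] + t * ys[i + 1]

def worst_error(n, samples=4096):
    """Worst-case interpolation error over a dense probe grid on [1, 4]."""
    xs, ys = make_table(n)
    return max(abs(lerp_lookup(1.0 + 3.0 * k / samples, xs, ys)
                   - math.sqrt(1.0 + 3.0 * k / samples))
               for k in range(samples + 1))

# 4x the table entries => roughly 16x less worst-case error (O(h^2) behavior)
e16, e64 = worst_error(16), worst_error(64)
```

The standard bound is max error ≤ h²·max|f''|/8 per interval, which is exactly the O(h²) scaling the comment describes.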
richrichie, 11 months ago

As Boyd says in his book on Chebyshev methods: when in doubt, use Chebyshev polynomials.

I use Chebyshev polynomials extensively in finance, and I have tried problems like MNIST with Chebyshev approaches; they get close to CNNs in accuracy.

The ApproxFun Julia package is pretty cool for Chebyshev work:

https://juliaapproximation.github.io/ApproxFun.jl/latest/
dang, 11 months ago

Related:

*Chebyshev Approximation* - https://news.ycombinator.com/item?id=10115336 - Aug 2015 (60 comments)
archermarks, 11 months ago
Nice article, thanks for sharing!
inamberclad, 11 months ago

Once more, this is _exactly_ why Ada has fixed-point arithmetic built into the language. One merely needs to declare

    type Result is delta 0.0001 range -100.0 .. 100.0;

and the compiler will figure out how to give you fast math with only the accuracy and resolution that you need!
bryan0, 11 months ago
2012