The Easy Way To Solve Equations In Python

48 points by barakstout over 12 years ago

4 comments

dmlorenzetti over 12 years ago
I hate to bash on this article, but the number of commenters calling it "nice" makes me want to warn that the implementations are not really very good.

Except for the bisection method, all of these implementations take an argument specifying the number of iterations to run. In most cases, the only way to terminate in fewer iterations is by hitting an "exact" root, i.e., calculating the residual as exactly zero. This is poor practice for a number of reasons. First, in practice it's pretty rare for a method to find an exact zero. Second, once a method has converged to the numerical precision of the machine, making more iterations just wastes flops. So a much better approach is to specify a solution tolerance (as shown with the bisection method). Even better is to provide absolute and relative tolerances, and to choose those values based on either the domain requirements or the machine characteristics. Dennis & Schnabel's excellent "Numerical Methods for Unconstrained Optimization and Nonlinear Equations" has a good discussion on choosing convergence tolerances.

This dependence on iteration counts to terminate, by the way, is probably why the author equates low iteration counts with greater accuracy. But in fact these methods don't vary in their intrinsic accuracy; rather, they vary in their order of convergence.

Another example of poor practice is in the bisection method implementation. One generally should not bisect an interval using c = (a+b)/2, because the nature of finite-precision arithmetic means there is no guarantee that c will lie between a and b, even if the machine can represent numbers between a and b. A better approach is to ensure a < b, then to set c = a + (b-a)/2. This expression is much less subject to roundoff errors.
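A minimal sketch of the kind of routine described above, with absolute/relative tolerances and the roundoff-safe midpoint; the function name and default tolerances are illustrative, not taken from the article:

    def bisect(f, a, b, atol=1e-12, rtol=1e-12, max_iter=200):
        """Find a root of f in [a, b] by bisection; f(a) and f(b) must differ in sign."""
        if a > b:                          # ensure a < b so the midpoint formula is safe
            a, b = b, a
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        c = a + (b - a) / 2
        for _ in range(max_iter):
            c = a + (b - a) / 2            # roundoff-safe midpoint, not (a + b) / 2
            fc = f(c)
            # Terminate on interval width (absolute + relative tolerance),
            # rather than waiting to hit an "exact" zero.
            if (b - a) / 2 <= atol + rtol * abs(c) or fc == 0.0:
                return c
            if fa * fc < 0:
                b = c
            else:
                a, fa = c, fc
        return c

    print(bisect(lambda x: x * x - 2, 0.0, 2.0))   # ~1.4142135623730951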
nnnnnnnninnnnnn over 12 years ago
See http://sympy.org/en/index.html. I've been using this for a while.
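For anyone who hasn't tried it, a minimal example of SymPy's symbolic solver; the polynomial here is just for illustration:

    from sympy import symbols, solve

    x = symbols('x')
    print(solve(x**2 - 2, x))   # exact symbolic roots: [-sqrt(2), sqrt(2)]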
inetsee over 12 years ago
I was fairly impressed by this article until I noticed that the author was using Python 2.6. Isn't Python 2.7 the version usually used by those who aren't ready to make the leap to Python 3?

Then I noticed that there was no mention at all of the various ways of using the R language with Python (RPy, etc.).

The easy way to solve equations in Python (or any other language, for that matter) would be to take full advantage of the work that other bright people have done.
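In that spirit, a sketch of leaning on an existing, well-tested root finder instead of hand-rolling the iteration. SciPy's brentq is used here purely as one example of that idea (it is not mentioned in the comment); the test function and bracket are illustrative:

    from scipy.optimize import brentq

    # Brent's method mixes bisection with faster interpolation steps and
    # handles the convergence tolerances for you.
    root = brentq(lambda x: x**3 - x - 2, 1.0, 2.0)
    print(root)   # roughly 1.5214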
NamTaf over 12 years ago
I wrote this a while ago during uni to help cement in my head the various numerical analysis processes covered in my classes. It's not formal or well written, and it doesn't cover every detail perfectly, because it was largely for my own use. However, some of you may find it interesting, I guess:

http://www.overclockers.com.au/wiki/Numerical_Analysis

I offer no Good Science or Good Writing warranty on it ;)