A lot of undergraduate math programs in the US start with unnecessarily hard "weed-out" calculus classes, which is unfortunate, since it discourages students who might otherwise have pursued mathematics. I can say from personal experience that Calculus II was my worst math grade; I fared much better in rigorous and challenging classes like real analysis or differential topology. To do well in elementary calculus one seemingly has to practice integration techniques in various permutations for hours, and honestly for what future purpose I cannot say.

EDIT: I finally understood calculus after taking introduction to real analysis, and it was amazing because for the first time all the hand-waving disappeared and was replaced with rock-solid arguments and increasing levels of abstraction (starting from the very definition of what the real numbers are). This also matters because functions can get very pathological [0][1][2].

[0] https://en.wikipedia.org/wiki/Weierstrass_function (continuous everywhere but differentiable nowhere)

[1] https://en.wikipedia.org/wiki/Cantor_function (derivative is zero almost everywhere, yet f(x) rises from 0 to 1)

[2] https://en.wikipedia.org/wiki/Thomae's_function (continuous at the irrationals but discontinuous at the rationals)
This reminds me of how much I struggled with integral calculus in college. My textbook (Stewart) had a table of integrals containing 120 forms that you'd need to solve the problems in the book, and looking through them, the calculations seemed insurmountable.

Like, https://www.wolframalpha.com/input/?i=integrate+u%5En+sqrt%28a+%2B+bu%29+du

I looked at that, realized I'd have no future as a physicist, and switched to CS.
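For what it's worth, most of those table entries come from a handful of reduction formulas rather than 120 separate pieces of cleverness. A sketch of where the linked form comes from (one integration by parts, then absorbing the leftover integral; this is the standard reduction, not necessarily how Stewart states it):

    $\int u^n \sqrt{a+bu}\,du = \frac{2}{b(2n+3)}\left(u^n (a+bu)^{3/2} - n a \int u^{n-1}\sqrt{a+bu}\,du\right)$

Applying it repeatedly walks $n$ down to 0, where $\int \sqrt{a+bu}\,du = \frac{2}{3b}(a+bu)^{3/2}$.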
Funny story: I was a huge fanboi of Feynman's and read everything I could about him. I used this technique on the "extra credit" problem on my freshman final, which I turned in with about 20 minutes to spare, and the professor accused me of cheating by knowing the answer ahead of time. When I showed him the steps and explained their origin, he allowed that perhaps I hadn't cheated, but he was disappointed I had used a technique that wasn't taught in class, so it was somewhat unfair to the other students.

Not my best professor.
One thing I've often wondered: is there a reason to learn all of these methods in the modern era, when Wolfram Alpha or Mathematica can apply hundreds of methods automatically?

It's good to understand things conceptually. But once I got the concept of integrals as the area under a curve, it felt like a lot of grunt work to learn so many tactics for solving them. But most of my focus has been on computers rather than pure mathematics. For pure math, it probably makes more sense to learn as many different methods as possible.
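As a concrete illustration of the "let the machine grind through the tactics" point, here's a minimal SymPy sketch (the integrands are just typical Calc II exercises I picked, not anything from the article):

    import sympy as sp

    x = sp.symbols('x')

    # A trig-identity / reduction-formula style exercise
    print(sp.integrate(sp.sin(x)**4 * sp.cos(x)**2, x))

    # A standard integration-by-parts exercise
    print(sp.integrate(x**2 * sp.exp(-x), x))

Understanding why the answers are right still seems valuable; producing them by hand arguably less so.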
This is fantastic. I've tried several times to understand this idea over the years, with no success. This clearly expressed the idea in only a few minutes.

One question: it mentions that Wolfram Alpha will fail on integrals that this trick can handle. Is that just because it times out (we need more compute), or is the trick difficult to automate?
If you don't have the background to get this, here's a quick tutorial on integrals: https://cognicull.com/en/1dc797za . It would be cool if cognicull included Feynman's method in their ontology.
Interesting example they chose there. It requires no fewer than three deus ex machinas: apply this trick, then apply this very particular substitution, then find another substitution. If you encountered this integral in the wild without knowing whether or how it can be solved, yes, you might have tried the trick, but you wouldn't realize that it had gotten you anywhere. Would you continue on and find the just-so substitutions, or would you backtrack and try something else?

It's great that we have the tricks we have, but at the same time most nontrivial integrals are just impenetrable regardless. Any demonstration of integration techniques you find will be on an integrand that is amenable to those techniques, and will only show the straight path to the solution, not the process of finding that path. I hate integrals!
The first integral can be solved by replacing the integrand with a series of sorts. Notice that the expression inside the logarithm, viewed as a polynomial in $\alpha$, has zeros at

    $\alpha = e^{\pm ix}$

So we can rewrite the function we're integrating as

    $\log((\alpha - e^{ix})(\alpha - e^{-ix}))$

which is just

    $2\log(\alpha) + \log(1 - \frac{e^{ix}}{\alpha}) + \log(1 - \frac{e^{-ix}}{\alpha})$

Using

    $\log(1 - x) = -\sum_{n=1}^{\infty} \frac{x^n}{n}$

(valid here when $\alpha > 1$, so that $|e^{\pm ix}/\alpha| < 1$), we get

    $2\log(\alpha) - \sum \frac{e^{inx}}{n\alpha^{n}} - \sum \frac{e^{-inx}}{n\alpha^{n}}$

which is just

    $2\log(\alpha) - 2\sum \frac{\cos(nx)}{n\alpha^n}$

Integrating the sum term by term over $[0, \pi]$ produces $\frac{\sin(nx)}{n}$ factors, which vanish at both $0$ and $\pi$ for every $n$, leaving just the integral of $2\log(\alpha)$, which is

    $2\pi \log(\alpha)$
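A quick numerical sanity check of this result (a sketch using scipy.integrate.quad; the test values of alpha are my own choice, and the alpha < 1 case anticipates the next comment):

    import numpy as np
    from scipy.integrate import quad

    def I(alpha):
        # integral of log(alpha^2 - 2*alpha*cos(x) + 1) over [0, pi]
        f = lambda x: np.log(alpha**2 - 2*alpha*np.cos(x) + 1)
        value, _ = quad(f, 0, np.pi)
        return value

    for alpha in (0.3, 0.9, 1.5, 3.0):
        expected = 2*np.pi*np.log(alpha) if alpha > 1 else 0.0
        print(alpha, I(alpha), expected)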
The funny thing is that the calculation in this article misses the point, especially in the Feynman context.

First, beyond all trickery, the log(alpha) answer might suggest that something bad happens at alpha = 0. What makes this integral interesting is that it is identically zero for alpha < 1.

The reason, of course, is that this integral is not randomly chosen: it represents the two-dimensional Coulomb potential (log(r)) of the sphere (circle) of radius 1 at distance alpha from the center. When the point alpha is inside the circle, the potential is constant (here zero, so no force). When alpha is outside, the potential is log(r), as if all the mass of the circle were concentrated at its center. The expression under the log in the integral is just the (square of the) distance between the point alpha and a point on the unit circle.

Beyond tricks, the physical reason for the singular behavior of this integral is Gauss's theorem for the Coulomb potential.

So no magic.
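Making that identification explicit (a sketch; writing points of the unit circle as $e^{ix}$):

    $\int_0^\pi \log(\alpha^2 - 2\alpha\cos x + 1)\,dx = \int_0^\pi 2\log|\alpha - e^{ix}|\,dx = \int_0^{2\pi} \log|\alpha - e^{ix}|\,dx$

which is the logarithmic (2D Coulomb) potential at the point $\alpha$ of a uniform unit charge density on the unit circle. By the Gauss / mean-value argument above,

    $\int_0^{2\pi} \log|\alpha - e^{ix}|\,dx = \begin{cases} 0 & |\alpha| < 1 \\ 2\pi\log|\alpha| & |\alpha| > 1 \end{cases}$

which matches the series computation for $\alpha > 1$ and the "identically zero inside" claim for $\alpha < 1$.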
A thing I've been wondering: why is integration harder than differentiation? The latter can be done almost mechanically, as long as your primitive functions are "nice", but the former often requires cleverness like what's shown in the article.

I mean, sure, we have simpler rules for differentiation, but _why_?

I sometimes wonder if differentiation is P and integration is NP (for the restricted case of functions where the primitives are "nice").
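The asymmetry is easy to see in a CAS. A minimal SymPy sketch (the example functions are my own choice):

    import sympy as sp

    x = sp.symbols('x')

    # Differentiation is compositional: chain/product rules apply term by
    # term with no search involved.
    print(sp.diff(sp.exp(-x**2) * sp.sin(x**3), x))

    # Integration can leave the elementary functions entirely:
    print(sp.integrate(sp.exp(-x**2), x))   # sqrt(pi)*erf(x)/2

Differentiation never has to leave the class of elementary functions, while antiderivatives of elementary functions often aren't elementary at all, which is at least part of why no purely mechanical rule set covers integration the way one covers differentiation.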
It worked for this one because we knew the answer beforehand and the best approach. It's not like we can generalize this. Change some of the terms and, poof, it's unsolvable.
Actually, this method was already used by Leibniz, although it was not that common in Feynman's time.
https://en.wikipedia.org/wiki/Leibniz_integral_rule
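For reference, in its simplest form (constant limits, suitably well-behaved $f$) the rule says

    $\frac{d}{d\alpha}\int_a^b f(x,\alpha)\,dx = \int_a^b \frac{\partial}{\partial\alpha} f(x,\alpha)\,dx$

and the general form with limits $a(\alpha)$, $b(\alpha)$ adds the boundary terms $f(b(\alpha),\alpha)\,b'(\alpha) - f(a(\alpha),\alpha)\,a'(\alpha)$.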
I don't really understand how the author jumps from -pi*(1+a^2)/(1-a^2) to df/da = 2pi/a. Does anyone know how the author did it?
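Assuming the article's integral is the standard $f(a) = \int_0^\pi \log(1 - 2a\cos x + a^2)\,dx$ and that $a > 1$, here is one way that step can go (this may not be the article's exact route):

    $f'(a) = \int_0^\pi \frac{2a - 2\cos x}{1 - 2a\cos x + a^2}\,dx = \frac{1}{a}\int_0^\pi \left(1 + \frac{a^2 - 1}{1 - 2a\cos x + a^2}\right)dx$

Then use the standard value (obtained e.g. via the $t = \tan(x/2)$ substitution)

    $\int_0^\pi \frac{dx}{1 - 2a\cos x + a^2} = \frac{\pi}{a^2 - 1} \quad (a > 1)$

so that

    $f'(a) = \frac{1}{a}\left(\pi + (a^2 - 1)\cdot\frac{\pi}{a^2 - 1}\right) = \frac{2\pi}{a}.$

For $|a| < 1$ the same calculation, with the standard value $\frac{\pi}{1 - a^2}$ instead, gives $f'(a) = 0$.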