One of the fun ones, and this may or may not be covered by the link, is generating the "cumulants" of probability density functions. The basic idea in (basic, continuous) probability theory is that you have these things called "random variables" X, Y, Z which now need to be thought of as taking on values with various probabilities; and the way we do that is to use calculus (where the d- in "dx" is a special notation coming from the word "difference"; "dx" means "a tiny little change in x"). The basic idea is that there is some *joint density* j(x, y, z) such that the probability that simultaneously x < X < x + dx, AND y < Y < y + dy, AND z < Z < z + dz, is simply given by j(x, y, z) dx dy dz.

With a bit of calculation you conclude that if we want to analyze Z = X + Y, then the probability density for Z alone must be h(z) = ∫ dx j(x, z - x). There are a couple of ways to get this, but my favorite is to use Z = X + Y as a schematic to write a Dirac δ-function, j(x, y, z) = j(x, y) δ(z - x - y), so that the standard "what is the probability density for Z alone? integrate over both x and y!" rule comes into play. But you can also reason it out by hand, if you want.

Anyway, then you come up with this cool definition of independence: X and Y are independent if j(x, y) factors nicely into f(x) g(y); independent probabilities tend to multiply. So that's cool.

Now here's where the Fourier transform comes in. Consider a convention where we denote the Fourier transform with the same function name but square brackets rather than parentheses for its application,

f[q] = ℱ{p → q} f(p) = ∫ dp exp(-i p q) f(p).

When we Fourier-transform h(z) for Z = X + Y we get

h[a] = ∫ dz exp(-i a z) h(z) = ∫ dx dy exp(-i a (x + y)) j(x, y),

and if the two are independent random variables we find that h[a] = f[a] g[a]: the density of a sum of independent random variables is a convolution, and the Fourier transform turns that convolution into a product. (The first sketch below checks this numerically.)

Taking a logarithm, we have χ(a) = log h[a] = log f[a] + log g[a], so any property of the logarithm of the Fourier transform of the density function must be additive amongst independent random variables. And this gives the *cumulants* of the random variables: pseudo-linear expressions (additive over independent random variables, κ(X + Y) = κ(X) + κ(Y), and scaling as κ(k X) = q(k) κ(X) for some fixed q) which characterize the different "moments" of the random variables in question. In fact, if we look carefully at the definition h[a] = ∫ dz exp(-i a z) h(z), we see that the variable P = k Z, having density b(p) = h(p/k) / k, generates b[a] = h[k a].

Calculating the zeroth cumulant: χ(0) = log h[0] = log 1 = 0. No biggie. Continuing the Maclaurin expansion gives χ'(a) = h'[a]/h[a], and at a = 0 that's -i E(Z). We knew that the expectation value was linear, so no surprise there. The next term is χ''(a) = [h''[a] h[a] - h'[a] h'[a]]/[h[a] h[a]], which at a = 0 equals (-i)² [E(Z²) - E(Z)²]; stripping off the (-i)² leaves E(Z²) - E(Z)², the familiar expression for the variance. So variance is pseudolinear (in this case q(k) = k²). The point is, you can just go on: there are in fact infinitely many pseudolinear cumulants, scaling like q(k) = kⁿ, coming from this Maclaurin series, with a pattern determined entirely by the derivatives of the logarithm at a = 0. (The second sketch below checks the n = 3 case.)
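If you'd rather see the factorization h[a] = f[a] g[a] without doing any integrals, here's a minimal Monte Carlo sketch in Python with numpy; the exponential and uniform distributions, sample size, and test frequencies are arbitrary choices of mine, not anything special:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two independent random variables with arbitrary (different) distributions.
x = rng.exponential(scale=1.5, size=n)
y = rng.uniform(-1.0, 2.0, size=n)
z = x + y  # Z = X + Y

def ecf(samples, a):
    """Empirical characteristic function E[exp(-i a S)] at frequency a."""
    return np.mean(np.exp(-1j * a * samples))

for a in (0.3, 0.7, 1.1):
    h_a = ecf(z, a)               # h[a]
    fg_a = ecf(x, a) * ecf(y, a)  # f[a] * g[a]
    print(f"a={a}: h[a]={h_a:.4f}  f[a]g[a]={fg_a:.4f}")
# The two columns agree up to Monte Carlo noise: the Fourier transform
# turns the convolution (the sum of the variables) into a product.
```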
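And a similar sketch for pseudolinearity of a higher cumulant. The third cumulant κ₃ works out to be the third central moment E[(Z - EZ)³], so it should be additive over independent variables and scale with q(k) = k³; again the specific distributions here are arbitrary picks:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

x = rng.exponential(scale=2.0, size=n)        # a skewed distribution
y = rng.gamma(shape=3.0, scale=0.5, size=n)   # another, independent of x

def kappa3(s):
    """Third cumulant = third central moment, E[(S - E S)^3]."""
    return np.mean((s - s.mean()) ** 3)

# Additivity over independent random variables: κ3(X + Y) = κ3(X) + κ3(Y).
print(f"κ3(X+Y)={kappa3(x + y):.3f}  κ3(X)+κ3(Y)={kappa3(x) + kappa3(y):.3f}")

# Pseudolinear scaling with q(k) = k^3: κ3(k X) = k^3 κ3(X).
k = 1.7
print(f"κ3(kX)={kappa3(k * x):.3f}  k^3 κ3(X)={k**3 * kappa3(x):.3f}")
```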
As a freebie you get a proof of the central limit theorem. Take N independent, identically distributed random variables X_i, each with log-characteristic function χ(a), and form their average S = (1/N) Σ_i X_i. From the properties we've already seen in this comment, its log-characteristic function must be ψ(a) = N χ(a / N). Do the Maclaurin series to 2nd order and ignore the higher terms for large N: you get convergence to a parabola, ψ(a) ≈ -i μ a - σ² a² / (2N), which means that the Fourier transform of the probability density is... a Gaussian! And the inverse Fourier transform of that Gaussian is just another Gaussian, so for large N the result must be a normal random variable with mean μ and variance σ²/N, which of course is what the mean and variance must be, because means are linear and variances are pseudolinear.
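Here's that argument checked numerically, with exponential summands as an arbitrary choice (any finite-variance distribution works): the empirical log-characteristic function of the sample mean should hug the parabola -i μ a - σ² a²/(2N):

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 50, 100_000
mu, sigma2 = 1.0, 1.0   # mean and variance of Exponential(1)

# Each row is one experiment: S is the average of N iid exponentials.
s = rng.exponential(scale=1.0, size=(trials, N)).mean(axis=1)

for a in (0.5, 1.0, 2.0):
    psi_emp = np.log(np.mean(np.exp(-1j * a * s)))    # empirical ψ(a)
    psi_clt = -1j * mu * a - sigma2 * a**2 / (2 * N)  # 2nd-order parabola
    print(f"a={a}: empirical={psi_emp:.4f}  parabola={psi_clt:.4f}")
# Agreement up to Monte Carlo noise: ψ(a) ≈ -iμa - σ²a²/(2N) is the log of a
# Gaussian, so S is approximately normal with mean μ and variance σ²/N.
```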