And the extent to which the expectation of the function of a random variable exceeds the function of the expectation depends on the variable's variability (its variance), as can be seen, e.g., by a Taylor expansion around the expectation.

That's the reason why *linear (or affine)* financial derivatives (such as forwards) can be priced without using volatility as an input, while products *with convexity* (such as options) require volatility as an input.

(Side note: I think Delta One desks should rename to Gamma Zero…)
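To make the Taylor point concrete, here is a sketch of the standard second-order argument (my addition, assuming f is twice differentiable and X has finite variance), with μ := E[X]:

\[
f(X) \approx f(\mu) + f'(\mu)(X - \mu) + \tfrac{1}{2} f''(\mu)(X - \mu)^2 .
\]

Taking expectations kills the linear term (since E[X - μ] = 0) and leaves

\[
E[f(X)] - f(E[X]) \approx \tfrac{1}{2} f''(E[X]) \,\operatorname{Var}(X) \;\ge\; 0 \quad \text{when } f'' \ge 0,
\]

so the size of the gap is driven by the variance (and by the curvature, i.e. the "gamma").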
The proof of Young's inequality is pretty neat but has the "magically think of taking a log of an arbitrary expression which happens to work" step. But it clarifies why the reciprocals of the exponents have to sum to 1: they are interpreted as probabilities when calculating the expected value.

Here's how I like to conceptualise it: bounding a product of mixed variables by a sum of single-variable terms is useful. Logarithms turn multiplication into addition. Jensen's inequality lifts addition from the argument of a convex function to the outside. Compose.
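Spelled out (a sketch of the standard argument, not part of the original comment): for a, b > 0 and conjugate exponents p, q > 1 with 1/p + 1/q = 1,

\[
ab \;=\; \exp\!\big(\ln a + \ln b\big) \;=\; \exp\!\Big(\tfrac{1}{p}\ln a^p + \tfrac{1}{q}\ln b^q\Big) \;\le\; \tfrac{1}{p}\,a^p + \tfrac{1}{q}\,b^q ,
\]

where the inequality is Jensen's for the convex function exp, with the weights 1/p and 1/q playing the role of probabilities.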
A very natural explanation of "wikipedia proof 2" for differentiable functions seems to be missing:

By linearity of expectation, both sides are linear in f, and for affine f we have equality. Let's subtract the affine function whose graph is the tangent hyperplane to f at E(X). By the above, this does not change the validity of the inequality. But now the left-hand side is 0, and the right-hand side is non-negative by convexity, so we are done.

It's also now clear what the difference of the two sides is: it's the expectation of the gap between f(X) and the value of the tangent plane at X.

Now, in general, replace the tangent hyperplane with a supporting hyperplane given by a subderivative (subgradient), to recover what wiki says.
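In symbols (a sketch of the same argument, assuming f is differentiable; the notation is mine): let μ := E[X] and let l(x) := f(μ) + ∇f(μ)·(x − μ) be the tangent hyperplane. Then

\[
E[f(X)] - f(E[X]) \;=\; E\big[f(X) - l(X)\big] \;\ge\; 0,
\]

since E[l(X)] = l(μ) = f(μ) by linearity of expectation, and f(x) ≥ l(x) pointwise by convexity.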
A simpler definition of a convex function f is f(x) = sup { l(x) | l <= f, where l is affine (linear plus a constant) }.

If l <= f is affine then E[f(X)] >= E[l(X)] = l(E[X]). Taking the sup shows E[f(X)] >= f(E[X]).
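Written as one chain (a sketch, assuming f is indeed the supremum of its affine minorants, e.g. f convex and lower semicontinuous):

\[
E[f(X)] \;\ge\; \sup_{\substack{l \le f \\ l \text{ affine}}} E[l(X)] \;=\; \sup_{\substack{l \le f \\ l \text{ affine}}} l\big(E[X]\big) \;=\; f\big(E[X]\big),
\]

where the first step is monotonicity of expectation applied to each minorant l, and the middle equality is linearity of expectation.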