Revisiting the Classics: Jensen's Inequality (2023)

89 points by cpp_frog 9 months ago

4 comments

FabHK 9 months ago
And the extent to which the expectation of the function of the random variable exceeds the function of the expectation of the random variable depends on the variable’s variability (or variance), as can be seen e.g. by a Taylor expansion around the expectation.

That’s the reason why *linear (or affine)* financial derivatives (such as forwards) can be priced without using volatility as an input, while products *with convexity* (such as options) require volatility as an input.

(Side note: I think Delta One desks should rename to Gamma Zero…)
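A sketch of that Taylor-expansion argument, assuming f is twice differentiable and X has finite variance (writing μ for E[X], notation added here):

    % second-order Taylor expansion of f around \mu = E[X]
    f(X) \approx f(\mu) + f'(\mu)(X - \mu) + \tfrac{1}{2} f''(\mu)(X - \mu)^2
    % taking expectations, the linear term has mean zero, leaving the Jensen gap:
    \mathbb{E}[f(X)] - f(\mathbb{E}[X]) \approx \tfrac{1}{2} f''(\mu)\,\operatorname{Var}(X) \ge 0
    \quad \text{for convex } f \ (f'' \ge 0)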
thehappyfellow 9 months ago
The proof of Young’s inequality is pretty neat but has the "magically think of taking a log of an arbitrary expression which happens to work" step. But it clarifies why the reciprocals of the exponents have to sum to 1: they are interpreted as probabilities when calculating the expected value.

Here’s how I like to conceptualise it: bounding a product of different variables by a sum of single-variable terms is useful. Logarithms change multiplication to addition. Jensen’s inequality lifts addition from the argument of a convex function to the outside. Compose.
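A compact version of that derivation, as a sketch: take a, b > 0 and exponents p, q > 1 with 1/p + 1/q = 1 (the usual Young’s-inequality setup).

    % write the product as the exponential of a convex combination of logs
    ab = \exp\!\left(\tfrac{1}{p}\log a^p + \tfrac{1}{q}\log b^q\right)
    % \exp is convex and (1/p, 1/q) is a probability vector, so Jensen gives
    ab \le \tfrac{1}{p}\exp(\log a^p) + \tfrac{1}{q}\exp(\log b^q) = \tfrac{a^p}{p} + \tfrac{b^q}{q}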
maxmininflect 9 months ago
A very natural explanation of "Wikipedia proof 2" for differentiable functions seems to be missing:

By linearity of expectation, both sides are linear in f, and for linear f we have equality. Let’s subtract the linear function whose graph is the tangent hyperplane to f at E(X). By the above, this does not change the validity of the inequality. But now the left-hand side is 0, and the right-hand side is non-negative by convexity, so we are done.

It’s also now clear what the difference of the two sides is: it’s the expectation of the gap between f(X) and the value of the tangent plane at X.

Now, in general, replace the tangent hyperplane with the graph of a subderivative, to recover what the wiki says.
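Written out in the one-dimensional differentiable case, as a sketch (writing μ = E[X] and L for the tangent line at μ, notation added here):

    % tangent line to f at \mu = E[X]
    L(x) = f(\mu) + f'(\mu)(x - \mu)
    % by linearity of expectation, \mathbb{E}[L(X)] = L(\mathbb{E}[X]) = f(\mu), so
    \mathbb{E}[f(X)] - f(\mathbb{E}[X]) = \mathbb{E}[f(X) - L(X)] \ge 0
    % since convexity puts the graph of f above its tangent line: f(x) \ge L(x) for all x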
keithalewis 9 months ago
A simpler definition of a convex function f is f(x) = sup { l(x) | l <= f where l is linear }.

If l <= f is linear then E[f(X)] >= E[l(X)] = l(E[X]). Taking the sup shows E[f(X)] >= f(E[X]).
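Spelled out, as a sketch (with l ranging over affine functions, the usual form of this envelope characterization):

    % for any affine l \le f, monotonicity and linearity of expectation give
    \mathbb{E}[f(X)] \ge \mathbb{E}[l(X)] = l(\mathbb{E}[X])
    % taking the supremum over all affine l \le f recovers f at \mathbb{E}[X]:
    \mathbb{E}[f(X)] \ge \sup\{\, l(\mathbb{E}[X]) : l \le f,\ l \text{ affine} \,\} = f(\mathbb{E}[X])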