TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

Square roots and maxima

121 points by surprisetalk, 6 months ago

5 comments

dahart, 6 months ago
Either I haven’t seen this before, or forgot it, but it’s surprising because I use the sum of independent uniform variables every once in a while — the sum of two vars is a tent function, the sum of three is a smooth piecewise quadratic lump, and the sum of many tends toward a normal distribution. And the distribution is easily calculated as the convolution of the input box functions (uniform variables). Looking it up just now I learned the sum of uniform variables is called an Irwin-Hall distribution (aka uniform sum distribution).

The min of two random vars has the opposite effect to the max in this video. And now I’m curious - if we use the function definition of min/max — the nth root of the sum of the nth powers of the arguments — there is a continuum from min to sum to max, right? Are there useful applications of this generalized distribution? Does it already have a name?
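Not from the thread: a quick NumPy sketch of the continuum the comment asks about. The p-th root of the sum of p-th powers gives the plain sum at p = 1 and approaches max as p grows large (min as p goes to minus infinity, for positive arguments); the helper name `power_combine` is made up here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_combine(x, y, p):
    """(x^p + y^p)^(1/p): p=1 is the sum, p -> +inf approaches max,
    p -> -inf approaches min (for positive x, y)."""
    return (x**p + y**p) ** (1.0 / p)

x, y = rng.random(10_000), rng.random(10_000)

# p = 1 recovers the plain sum (Irwin-Hall with n=2: tent-shaped density)
assert np.allclose(power_combine(x, y, 1), x + y)

# Large |p| squeezes toward max / min; the worst case (x == y) is off
# by a factor of 2^(1/p), so the tolerance below is loose but safe.
assert np.allclose(power_combine(x, y, 20), np.maximum(x, y), atol=0.05)
assert np.allclose(power_combine(x, y, -20), np.minimum(x, y), atol=0.05)
```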
prof-dr-ir, 6 months ago
If X1...Xn are independently uniformly distributed between 0 and 1 then:

P(max(X1 ... Xn) < x)
= P(X1 < x and X2 < x ... and Xn < x)
= P(X1 < x) P(X2 < x) ... P(Xn < x)
= x^n

Also,

P(X^{1/n} < x) = P(X < x^n) = x^n

I guess I am just an old man yelling at clouds, but it seems *so* strange to me that one would bother checking this with a numerical simulation. Is this a common way to think about, or teach, mathematics to computer scientists?
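For what it's worth, the numerical check the comment finds unnecessary takes only a few lines; this sketch (not from the thread) compares empirical quantiles of both constructions against the quantile function q^(1/n) of the CDF x^n:

```python
import numpy as np

rng = np.random.default_rng(42)
n, samples = 3, 200_000

# Empirical: max of n independent uniforms vs. n-th root of one uniform
maxes = rng.random((samples, n)).max(axis=1)
roots = rng.random(samples) ** (1.0 / n)

# Both should follow the CDF F(x) = x^n on [0, 1], whose quantile
# function is q^(1/n); compare empirical quantiles against it.
qs = np.linspace(0.05, 0.95, 19)
assert np.allclose(np.quantile(maxes, qs), qs ** (1.0 / n), atol=0.01)
assert np.allclose(np.quantile(roots, qs), qs ** (1.0 / n), atol=0.01)
```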
keithalewis, 6 months ago
Front page material? P(max{X_1, X_2} <= x) = P(X_1 <= x, X_2 <= x) = P(X_1 <= x) P(X_2 <= x) = x^2. P(sqrt(X_3) <= x) = P(X_3 <= x^2) = x^2. It is late in the day when midgets cast long shadows.
gxs, 6 months ago
Just a side comment on what a great little video.

Short, to the point, and the illustrations/animations actually helped convey the message.

Would be super cool if someone could recommend some social media account/channel with collections of similar quality videos (for any field).
ndsipa_pomu, 6 months ago
Matt Parker's video on Square Roots and Maxima: https://www.youtube.com/watch?v=ga9Qk38FaHM