Kullback–Leibler divergence

177 points by dedalus over 1 year ago

12 comments

jwarden over 1 year ago
Here's how I describe KL Divergence, building up from simple to complex concepts.

surprisal: how surprised I am when I learn the value of X

    Surprisal(x) = -log p(X=x)

entropy: how surprised I expect to be

    H(p) = 𝔼_X -log p(X) = ∑_x p(X=x) * -log p(X=x)

cross-entropy: how surprised I expect Bob to be (if Bob's beliefs are q instead of p)

    H(p,q) = 𝔼_X -log q(X) = ∑_x p(X=x) * -log q(X=x)

KL divergence: how much *more* surprised I expect Bob to be than me

    Dkl(p || q) = H(p,q) - H(p,p) = ∑_x p(X=x) * log p(X=x)/q(X=x)

information gain: how much less surprised I expect Bob to be if he knew that Y=y

    IG(q|Y=y) = Dkl(q(X|Y=y) || q(X))

mutual information: how much information I expect to gain about X from learning the value of Y

    I(X;Y) = 𝔼_Y IG(q|Y=y) = 𝔼_Y Dkl(q(X|Y=y) || q(X))
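
For concreteness, here is a minimal Python sketch of the quantities defined above; the two three-outcome distributions are made-up illustrations, not from the comment.

    import math

    p = {"a": 0.5, "b": 0.25, "c": 0.25}   # my beliefs about X
    q = {"a": 0.25, "b": 0.25, "c": 0.5}   # Bob's beliefs about X

    def entropy(p):
        # H(p): how surprised I expect to be, in bits
        return sum(px * -math.log2(px) for px in p.values())

    def cross_entropy(p, q):
        # H(p, q): how surprised I expect Bob to be when X is drawn from p
        return sum(p[x] * -math.log2(q[x]) for x in p)

    def kl_divergence(p, q):
        # Dkl(p || q) = H(p, q) - H(p, p)
        return cross_entropy(p, q) - entropy(p)

    print(entropy(p))            # 1.5 bits
    print(cross_entropy(p, q))   # 1.75 bits
    print(kl_divergence(p, q))   # 0.25 bits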
golwengaud over 1 year ago
I found https://www.lesswrong.com/posts/no5jDTut5Byjqb4j5/six-and-a-half-intuitions-for-kl-divergence very helpful for getting intuition for what the K-L divergence is and why it's useful. The six intuitions:

    1. Expected surprise
    2. Hypothesis testing
    3. MLEs
    4. Suboptimal coding
    5a. Gambling games -- beating the house
    5b. Gambling games -- gaming the lottery
    6. Bregman divergence
tysam_and over 1 year ago
Here is the simplest way of explaining the KL divergence:

The KL divergence yields a concrete value that tells you how many actual bits of space on disk you will waste if you try to use an encoding table from one ZIP file of data to encode another ZIP file of data. It's not just theoretical, this is exactly the type of task that it's used for.

The closer the folders are to each other in content, the fewer wasted bits. So, we can use this to measure how similar two sets of information are, in a manner of speaking.

These 'wasted bits' are also known as relative entropy, since entropy basically is a measure of how disordered something can be. The more disordered, the more possibilities we have to choose from, thus the more information possible.

Entropy does not guarantee that the information is usable. It only guarantees how much of this quantity we can get, much like pipes serving water. Yes, they will likely serve water, but you can accidentally have sludge come through instead. Still, their capacity is the same.

One thing to note is that with our ZIP files, if you use the encoding tables from one to encode the other, then you will end up with different relative entropy (i.e. our 'wasted bits') numbers than if you did the vice versa. This is because the KL is not what's called symmetric. That is, it can have different meaning based upon which direction it goes.

Can you pull out a piece of paper, make yourself an example problem, and tease out an intuition as to why?
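
To make the wasted-bits reading concrete, here is a small Python sketch under the usual idealization that a code built for a distribution q spends -log2 q(x) bits per symbol; the symbol frequencies are invented for illustration.

    import math

    p = {"a": 0.7, "b": 0.2, "c": 0.1}   # contents of the file we actually compress
    q = {"a": 0.1, "b": 0.2, "c": 0.7}   # distribution the borrowed encoding table was built for

    # Expected bits per symbol with the file's own ideal table vs. the borrowed one.
    bits_own_table = sum(px * -math.log2(px) for px in p.values())
    bits_borrowed_table = sum(px * -math.log2(q[x]) for x, px in p.items())

    wasted_bits_per_symbol = bits_borrowed_table - bits_own_table
    print(wasted_bits_per_symbol)   # this difference is exactly Dkl(p || q)

    # Swapping the roles of p and q gives a different number, which is the
    # asymmetry the comment asks about.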
techwizrd over 1 year ago
We use KL-divergence to calculate how surprising a time-series anomaly is and rank them for aviation safety, e.g., give me a ranked list of the most surprising increases in a safety metric. It's quite handy!
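
One way such a ranking could be set up is sketched below; the windowing, binning, and smoothing are assumptions for illustration, not the commenter's actual pipeline.

    import numpy as np

    def histogram_dist(values, edges):
        counts, _ = np.histogram(values, bins=edges)
        probs = counts + 1e-9                 # smooth so log(0) never appears
        return probs / probs.sum()

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    def surprise_score(baseline, recent, bins=20):
        # Shared bin edges so the two histograms are directly comparable.
        edges = np.histogram_bin_edges(np.concatenate([baseline, recent]), bins=bins)
        return kl(histogram_dist(recent, edges), histogram_dist(baseline, edges))

    # Rank metrics by how surprising their recent behaviour is vs. their baseline.
    rng = np.random.default_rng(0)
    metrics = {
        "metric_a": (rng.normal(0, 1, 1000), rng.normal(0, 1, 100)),   # unchanged
        "metric_b": (rng.normal(0, 1, 1000), rng.normal(2, 1, 100)),   # shifted upward
    }
    ranked = sorted(metrics, key=lambda m: surprise_score(*metrics[m]), reverse=True)
    print(ranked)   # most surprising first, e.g. ['metric_b', 'metric_a']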
zerojames over 1 year ago
I have used KL-divergence in authorship verification: https://github.com/capjamesg/pysurprisal/blob/main/pysurprisal/core.py#L5

My theory was: calculate entropy ("surprisal") of used words in a language (in my case, from an NYT corpus), then calculate KL-divergence between a given prose and a collection of surprisals for different authors. The author to whom the prose had the highest KL-divergence was assumed to be the author. I think it has been used in stylometry a bit.
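
A rough sketch of that word-distribution comparison might look like the following; this is not the pysurprisal implementation, and the tokenization, smoothing, and sample texts are invented.

    import math
    from collections import Counter

    def word_dist(text, vocab, alpha=1.0):
        # Laplace-smoothed word distribution over a shared vocabulary.
        counts = Counter(text.lower().split())
        total = sum(counts.values()) + alpha * len(vocab)
        return {w: (counts[w] + alpha) / total for w in vocab}

    def kl(p, q):
        return sum(p[w] * math.log(p[w] / q[w]) for w in p)

    author_samples = {
        "author_a": "the ship sailed at dawn and the crew sang",
        "author_b": "quarterly earnings rose as markets rallied sharply",
    }
    unknown = "the crew sang as the ship sailed home"

    vocab = set()
    for text in list(author_samples.values()) + [unknown]:
        vocab.update(text.lower().split())

    # Per-author divergences from the unknown prose; attribute from these as described above.
    scores = {a: kl(word_dist(unknown, vocab), word_dist(t, vocab))
              for a, t in author_samples.items()}
    print(scores)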
max_ over 1 year ago
K-L Divergence is something that keeps coming up in my research but I still don't understand what it is.

Could someone give me a simple explanation as to what it is?

And also, what practical use cases does it have?
riemannzeta over 1 year ago
KL divergence has also been used to generalize the second law of thermodynamics for systems far from equilibrium:

https://arxiv.org/abs/1508.02421

And to explain the relationship between the rate of evolution and evolutionary fitness:

https://math.ucr.edu/home/baez/bio_asu/bio_asu_web.pdf

The connection between all of these manifestations of KL divergence is that a system far from equilibrium contains more information (in the Shannon sense) than a system in equilibrium. That "excess information" is what drives fitness within some environment.
jszymborski over 1 year ago
VAEs have made me both love and hate KLD. Goddamn mode collapse.
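
For context, the KL term in a standard VAE loss is usually the closed-form divergence between the encoder's diagonal Gaussian and a standard normal prior; here is a framework-agnostic sketch with made-up latent statistics, not tied to any particular VAE code.

    import numpy as np

    def gaussian_kl_to_standard_normal(mu, log_var):
        # KL( N(mu, sigma^2) || N(0, 1) ) summed over latent dimensions,
        # using the closed form 0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1).
        return 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0, axis=-1)

    mu = np.array([[0.0, 0.5], [1.0, -1.0]])        # per-sample latent means
    log_var = np.array([[0.0, -0.2], [0.3, 0.1]])   # per-sample log variances
    print(gaussian_kl_to_standard_normal(mu, log_var))   # one KL value per sample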
nravic over 1 year ago
IIRC (and in my experience) KL divergence doesn't account for double counting. Wrote a paper where I ended up having to use a custom metric instead: https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=4438&context=smallsat
mrv_asura over 1 year ago
I learnt about KL Divergence recently and it was pretty cool to know that cross-entropy loss originated from KL Divergence. But could someone give me the cases where it is preferred to use Mean-squared Error loss vs Cross-entropy loss? Are there any merits or demerits to using either?
janalsncm over 1 year ago
Btw, KL divergence isn't symmetric, so D(P,Q) != D(Q,P). If you need a symmetric version of it, you can use the Jensen–Shannon divergence, which is the mean of D(P,M) and D(Q,M), where M is the mixture (P+Q)/2. Or if you only care about relative distances you can just use the sum D(P,Q) + D(Q,P).
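
A small numeric sketch of the asymmetry and of the symmetrized alternatives mentioned above, using two made-up three-outcome distributions:

    import numpy as np

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    p = np.array([0.6, 0.3, 0.1])
    q = np.array([0.2, 0.3, 0.5])

    print(kl(p, q), kl(q, p))              # different values: KL is not symmetric

    m = 0.5 * (p + q)                      # mixture distribution
    js = 0.5 * kl(p, m) + 0.5 * kl(q, m)   # Jensen–Shannon divergence
    jeffreys = kl(p, q) + kl(q, p)         # simple symmetric sum
    print(js, jeffreys)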
ljlolel over 1 year ago
Equivalent to cross-entropy as a loss for neural networks, since the two differ only by the entropy of the fixed target distribution, which is constant with respect to the model.
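
A quick numeric check of that equivalence: H(p, q) = H(p) + Dkl(p || q), and H(p) does not depend on the model's prediction q, so minimizing one minimizes the other. The distributions below are made up.

    import numpy as np

    p = np.array([0.9, 0.05, 0.05])   # fixed target distribution
    q = np.array([0.7, 0.2, 0.1])     # model prediction

    entropy = -np.sum(p * np.log(p))
    cross_entropy = -np.sum(p * np.log(q))
    kl = np.sum(p * np.log(p / q))

    print(np.isclose(cross_entropy, entropy + kl))   # True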