科技回声 (TechEcho)

The Confusion of Variational Autoencoders

52 points | by jaan | almost 9 years ago

4 comments

murbard2 · almost 9 years ago
What puzzles me about the variational autoencoder is that there is no reason to expect the covariance of p(z|x) to be diagonal. This sounds like such a crude approximation that there ought to be little benefit to even treating it as a distribution rather than a point mass. And yet it seems to do rather well (though not as well as GANs, which do represent arbitrary distributions).
Comment #12296489 not loaded
Comment #12296023 not loaded
conjectures · almost 9 years ago
There were some nice things about this article. However, I wouldn't recommend it as a cure for confusion. E.g.:

> in mean-field variational inference, we have parameters for each datapoint ... In the variational autoencoder setting, we do amortized inference where there is a set of global parameters ...

Mean-field implies the variational posterior is modelled as factorising over the different latent variables involved. Some latent variables can be local (unique to a data point) and some can be global (shared across data points).
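The per-datapoint vs. amortized distinction the comment draws can be sketched as follows. All names here are illustrative (the one-layer `encode` function is a hypothetical stand-in for a real encoder network); the point is only where the variational parameters live.

```python
import numpy as np

n_data, latent_dim, x_dim = 100, 2, 5

# Classical (non-amortized) variational inference: one set of local
# variational parameters per datapoint, optimized directly, so the
# parameter count grows linearly with the dataset size.
local_mu = np.zeros((n_data, latent_dim))       # mu_i for each x_i
local_log_var = np.zeros((n_data, latent_dim))  # log sigma_i^2 for each x_i

# Amortized inference (the VAE setting): a single set of global
# parameters defines a function mapping any x to its variational
# parameters, so the count no longer grows with n.
W_mu = np.zeros((x_dim, latent_dim))
W_log_var = np.zeros((x_dim, latent_dim))

def encode(x):
    # q(z|x) parameters produced by the shared (global) encoder.
    return x @ W_mu, x @ W_log_var

x = np.ones(x_dim)
mu, log_var = encode(x)
print(mu.shape, log_var.shape)
```

Note that "mean-field" is orthogonal to this distinction, as the comment points out: it concerns how the posterior factorises over latent variables, and those variables may themselves be local (per datapoint) or global (shared).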
jayajay · almost 9 years ago
Recently, someone shared a link on Hacker News to this website: https://pomax.github.io/nrGrammar/. If you look carefully at section 1.1.4, which aims to visually compare the differences between the Hiragana and Katakana scripts, you can see that there is a "logic" in transitioning from a character in Hiragana to the same character in Katakana. In the same way, it seems that an autoencoder is capable of capturing this logic.
Comment #12295308 not loaded
tiiualto · over 8 years ago
It sure is complicated; I huff and I puff, but I still don't understand a thing! (Translated from Estonian.)