Neural Networks, Manifolds, and Topology (2014)

129 points · by flancian · over 6 years ago

7 comments

flancian · over 6 years ago

Previously:

https://news.ycombinator.com/item?id=7557964
https://news.ycombinator.com/item?id=9814114

But not a lot of discussion over there.

The visualizations are great, and this basically blew my mind. I didn't know of the manifold hypothesis until now.

    The manifold hypothesis is that natural data forms lower-dimensional
    manifolds in its embedding space. There are both theoretical and
    experimental reasons to believe this to be true. If you believe this,
    then the task of a classification algorithm is fundamentally to
    separate a bunch of tangled manifolds.

My interpretation/rephrasing: if you want to build a neural network that distinguishes cat pictures from dog pictures, the worst case would seem to require a huge network with many nodes/layers (say, a number that grows with the size of the image), rather than the small number that works reasonably well in practice (six, or some other rather low constant observed in reality). The space over which the images are potentially spread has a huge number of dimensions, but in the real world the dog and cat images apparently lie on "shapes" that the network can disentangle relatively easily, and those shapes can probably be realized in far fewer dimensions (six, in the example).

This could explain (for some definition of "explain") the observed predictive power of relatively small neural networks.
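To make that picture concrete, here is a minimal sketch (assuming scikit-learn is available; the dataset and the hidden-layer width of 6 are illustrative choices, not taken from the article or the comment). Two concentric circles are two tangled one-dimensional manifolds embedded in the plane, and an MLP with a single narrow hidden layer is usually enough to pull them apart.

    # Toy illustration of "classification = separating tangled low-dimensional manifolds".
    # Assumes scikit-learn; the hidden-layer width of 6 is an illustrative choice.
    from sklearn.datasets import make_circles
    from sklearn.neural_network import MLPClassifier

    # Two concentric circles: two 1-D manifolds tangled together in 2-D space.
    X, y = make_circles(n_samples=500, noise=0.05, factor=0.5, random_state=0)

    # A small MLP with one narrow hidden layer typically separates them.
    clf = MLPClassifier(hidden_layer_sizes=(6,), activation="tanh",
                        max_iter=5000, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))  # usually close to 1.0

The ambient dimension of real images is of course vastly larger, but the intuition is the same: the required capacity tracks the intrinsic dimension of the data, not the dimension of the space it happens to be embedded in.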
datascientist · over 6 years ago
Gunnar Carlsson will be teaching a related tutorial ("Using topological data analysis to understand, build, and improve neural networks") on April 16th in New York City: https://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/73123
yantrams · over 6 years ago
This was the article that helped me get neural networks when I began studying them a few years back. Interpreting them as a series of curvilinear coordinate transformations really helped me understand them better.

PS: There is a great introductory article on entropy on the same blog that is worth checking out.
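As a rough illustration of that coordinate-transformation reading (the weights below are arbitrary example values, not anything from the post or the article): one hidden layer is an affine map followed by a pointwise nonlinearity, which bends a regular grid in the input plane into a curved grid.

    # One layer viewed as a curvilinear coordinate transformation:
    # an affine map (W, b) followed by a pointwise tanh warps a straight 2-D grid.
    # W and b are arbitrary example values.
    import numpy as np

    W = np.array([[1.5, -0.8],
                  [0.7,  1.2]])   # linear part: rotate / shear / scale the plane
    b = np.array([0.3, -0.1])     # translation

    # A regular grid of points in the input coordinates.
    xs, ys = np.meshgrid(np.linspace(-2, 2, 9), np.linspace(-2, 2, 9))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

    # Affine transform, then tanh applied coordinate-wise: straight grid lines become curves.
    warped = np.tanh(grid @ W.T + b)
    print(grid[0], "->", warped[0])

Stacking several such layers composes the warps, which is the "series of coordinate transformations" view of the whole network.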
gdubs · over 6 years ago
This is beautiful, and surprisingly approachable. Also, feels relevant to this recent conversation: https://news.ycombinator.com/item?id=18987211
GlenTheMachine · over 6 years ago
If anyone could point me to literature on k-nn neural networks (or the relationship, if any, between k-nn algorithms and basis function decomposition and/or blind source separation) I'd be much obliged.
AlkurahCepheus · over 6 years ago
https://www.youtube.com/watch?v=Yr1mOzC93xs
quenstionsasked · over 6 years ago
Haven't given it a lot of thought, but isn't his vector field idea somewhat similar in approach to neural ordinary differential equations?
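A minimal sketch of what the comparison points at (the vector field below is a fixed toy function standing in for a learned network; nothing here is from the article or from any neural-ODE library): a neural ODE transports points along a vector field, x(t+dt) = x(t) + f(x(t))·dt, so each Euler step looks like a very thin residual layer.

    # Sketch of the vector-field / neural-ODE analogy: points are transported along
    # a field f by Euler integration; each step resembles a thin residual layer.
    # f is a fixed toy field here, a placeholder for a small learned network.
    import numpy as np

    ROT = np.array([[0.0, -1.0],
                    [1.0,  0.0]])  # generator of a gentle rotation-like flow

    def f(x):
        """Toy vector field on the plane (placeholder for a learned network)."""
        return np.tanh(x @ ROT)

    def flow(x, steps=100, dt=0.05):
        """Euler integration of dx/dt = f(x)."""
        for _ in range(steps):
            x = x + dt * f(x)
        return x

    x0 = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
    print(flow(x0))  # the two points are carried smoothly along the field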