
Neural Networks, Manifolds, and Topology (2014)

129 points by flancian over 6 years ago

7 comments

flancian over 6 years ago
Previously:

https://news.ycombinator.com/item?id=7557964
https://news.ycombinator.com/item?id=9814114

But not a lot of discussion over there.

The visualizations are great, and this basically blew my mind. I didn't know of the manifold hypothesis until now.

    The manifold hypothesis is that natural data forms lower-dimensional
    manifolds in its embedding space. There are both theoretical and
    experimental reasons to believe this to be true. If you believe this,
    then the task of a classification algorithm is fundamentally to
    separate a bunch of tangled manifolds.

My interpretation/rephrasing: if you want to build a neural network that distinguishes cat and dog pictures, in the worst case that would seem to require a huge network with many nodes/layers (say, a number that grows with the size of the image) rather than the number that seems to work reasonably well in practice (six, or some other rather low constant observed in reality). So the number of dimensions over which the "images" are potentially spread is huge, but it seems that in the real world one can rearrange the dog and cat images into a "shape" that then allows relatively easy disentanglement by the neural network; and these shapes can probably be realized in much lower dimensions (in the example, six).

This could explain (for some definition of explain) the observed predictive power of relatively small neural networks.
Comment #19147507 not loaded
Comment #19148699 not loaded
Comment #19147309 not loaded
Comment #19147161 not loaded
Comment #19151758 not loaded
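To make the disentanglement picture above concrete: two noisy concentric circles are a pair of tangled one-dimensional manifolds sitting in the plane, and a network with only a handful of hidden units can pull them apart. A minimal sketch, assuming scikit-learn is available; the dataset and the six hidden units are illustrative choices, not taken from the article.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Two entangled 1-D manifolds: noisy concentric circles in the plane.
X, y = make_circles(n_samples=2000, noise=0.05, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single hidden layer with only 6 tanh units is enough to "unwrap"
# the inner circle from the outer one and separate the classes.
clf = MLPClassifier(hidden_layer_sizes=(6,), activation="tanh",
                    max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # typically close to 1.0
```

The point is not the particular dataset but that the capacity needed tracks the dimension of the manifolds, not the dimension of the space they are embedded in.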
datascientist over 6 years ago
Gunnar Carlsson will be teaching a related tutorial ("Using topological data analysis to understand, build, and improve neural networks") on April 16th in New York City: https://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/73123
yantrams over 6 years ago
This was the article that helped me get neural networks when I began studying them a few years back. Interpreting them as a series of curvilinear coordinate transformations really helped me understand them better.

PS: There is a great introductory article on entropy on the blog that is worth checking out.
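Reading a layer as an affine map followed by a pointwise nonlinearity is what makes the "series of curvilinear coordinate transformations" view concrete: each layer smoothly bends the coordinate system the data lives in. A minimal numpy sketch with arbitrary, untrained weights chosen only for illustration:

```python
import numpy as np

# One layer = affine map followed by a pointwise nonlinearity, i.e. a
# smooth curvilinear warp of the input coordinates.
W = np.array([[1.5, -0.8],
              [0.6,  1.2]])
b = np.array([0.2, -0.1])

def layer(points):
    """Apply tanh(W x + b) to each row of `points` (shape [n, 2])."""
    return np.tanh(points @ W.T + b)

# Push a regular grid through the layer to see how it bends the plane.
xs, ys = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
warped = layer(grid)
print(grid[:3])    # original coordinates
print(warped[:3])  # the same points after one transformation
```

Stacking several such maps composes the warps, which is the "series" part of the interpretation.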
gdubs over 6 years ago
This is beautiful, and surprisingly approachable. Also, feels relevant to this recent conversation: https://news.ycombinator.com/item?id=18987211
GlenTheMachine over 6 years ago
If anyone could point me to literature on k-NN neural networks (or the relationship, if any, between k-NN algorithms and basis function decomposition and/or blind source separation) I'd be much obliged.
Comment #19153960 not loaded
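One common reading of a "k-NN neural network" is a k-nearest-neighbors classifier applied in a representation learned by a network rather than on the raw inputs. The sketch below illustrates that reading only; it does not answer the literature question, and the scikit-learn model, toy circles data, and layer size are assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_circles(n_samples=2000, noise=0.05, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Learn a representation with a small network, then classify with k-NN
# in that representation instead of with the network's own output layer.
net = MLPClassifier(hidden_layer_sizes=(6,), activation="tanh",
                    max_iter=2000, random_state=0).fit(X_tr, y_tr)

def hidden(points):
    """First hidden-layer activations, recomputed from the fitted weights."""
    return np.tanh(points @ net.coefs_[0] + net.intercepts_[0])

knn_raw = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
knn_rep = KNeighborsClassifier(n_neighbors=5).fit(hidden(X_tr), y_tr)
print("k-NN on raw inputs:  ", knn_raw.score(X_te, y_te))
print("k-NN on hidden layer:", knn_rep.score(X_te, y_te))
```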
AlkurahCepheus over 6 years ago
https://www.youtube.com/watch?v=Yr1mOzC93xs
quenstionsasked over 6 years ago
Haven't given it a lot of thought, but isn't his vector field idea somewhat similar in approach to neural ordinary differential equations?
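For readers who have not met the comparison: in a neural ODE the hidden state follows a learned vector field, dx/dt = f(x), instead of passing through a fixed stack of discrete layers, which is the sense in which a vector-field picture of a network resembles it. A minimal sketch with arbitrary, untrained weights and plain Euler integration, purely to illustrate the idea:

```python
import numpy as np

# The neural-ODE picture: the state flows along a learned vector field
# dx/dt = f(x) rather than through discrete layers. Weights below are
# arbitrary placeholders, not trained parameters.
W = np.array([[0.0, -1.0],
              [1.0,  0.0]])
b = np.zeros(2)

def vector_field(x):
    """A small parameterized vector field f(x) = tanh(W x + b)."""
    return np.tanh(x @ W.T + b)

def integrate(x0, steps=100, dt=0.05):
    """Euler integration; each step is the continuous analogue of a residual layer."""
    x = x0
    for _ in range(steps):
        x = x + dt * vector_field(x)
    return x

x0 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
print(integrate(x0))  # where the flow carries each input point
```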