
Neural Algorithms and Computing Beyond Moore's Law

41 points by aidanrocke, about 6 years ago

2 comments

doctorpangloss, about 6 years ago
It's an interesting point of view, for sure.

> There are cases of non-neural computer vision algorithms based on Bayesian inference principles, though it has been challenging to develop such models that can be trained as easily as deep learning networks.

Maybe. The paper he cites, "Human-level concept learning through probabilistic program induction," says "On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches" right in its abstract. One-shot sounds like exactly the meaning of easy to train.

Maybe he meant *accelerated* training, but that kind of gets to the core of what's flawed about this analysis. There's no economic incentive to build specialized, accelerated hardware for one of Tenenbaum's research projects, until there *is*. There's enormous economic incentive to build video cards for games, which is what all those deep learning advances were predicated on, and it remains to be seen whether there's any economic incentive for TPUs or whatever specialized hardware he's imagining, of whatever architecture.

Looking at computing architectures, like CPUs versus GPUs, the way he does is post-hoc analysis, and nobody except those deep in the research community and paying attention to NVIDIA papers could have anticipated how GPUs would affect research.

There isn't, and there is, an architectural difference between CPUs and GPUs that matters. He's picking and choosing which architectural differences matter in a way that favors R&D that he concedes has problems with "falsifiability."

If anything, we still need better, cheaper CPUs! They're still very useful for R&D. I'd rather get a slow supercomputer built today than nothing built tomorrow, if Tenenbaum is telling me he's chasing something now and needs it now.

Conversely, why should we listen to R&D people about problems that are fundamentally economically motivated? It would be a bad bet. We're getting low-power CPUs now not because the world is "data oriented" or whatever he's saying, but because the iPhone is so fucking popular there's immense demand for them. Ask Paul Otellini, an expert on the CPU business, what he thinks about that! So maybe we should actually be doubling down on the needs of consumer electronics manufacturers?
aidanrocke, about 6 years ago
So if I had to write a tl;dr, I would say that there are two economic incentives for neuromorphic computing:

1. Energy efficiency: more brain-like computers will eventually achieve an energy efficiency comparable to the human brain's. Consider that AlphaGo Zero used ~200 kilowatts for learning Go, versus a human who uses about ~20 watts for learning everything that humans can learn.

2. AI & neuroscience research: what better way to test a neuroscience theory than to physically build a brain? As the saying goes, if you can't build it, you don't understand it.
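The power figures in point 1 are the commenter's rough estimates, but the gap they imply is easy to make concrete. A minimal sketch of the arithmetic, taking those two numbers at face value:

```python
# Rough power-budget comparison using the figures from the comment above.
# Both numbers are the commenter's estimates, not measured values.
alphago_zero_watts = 200_000  # ~200 kW claimed for AlphaGo Zero learning Go
human_brain_watts = 20        # ~20 W for the human brain, learning everything

ratio = alphago_zero_watts / human_brain_watts
print(f"AlphaGo Zero drew roughly {ratio:,.0f}x the power of a human brain")
# prints: AlphaGo Zero drew roughly 10,000x the power of a human brain
```

So even granting the estimates only order-of-magnitude accuracy, the claimed efficiency gap is around four orders of magnitude, which is the scale of the incentive the comment is pointing at.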