
The difficulty of computing stable and accurate neural networks

59 points by programd about 3 years ago

3 comments

cs702 about 3 years ago
Huh. I can read and understand the abstract and the introduction, but I can't judge the work after a first pass. This is the kind of paper that *cannot* be easily skimmed, because it consists almost entirely of densely packed pages chock-full of highly abstract mathematical reasoning.

Not surprisingly, the authors are mathematicians. They claim to *prove* that

* there are well-conditioned problems for which suitable DNNs exist, but no training algorithm can find arbitrarily good approximations of those suitable DNNs;

* it's possible to find approximations of those suitable DNNs only if we sacrifice digits of accuracy -- i.e., the approximations cannot be arbitrarily good; and

* there is a class of DNNs they propose, which they call "fast iterative restarted networks" or FIRENETs, that solve underdetermined systems of linear equations over the complex numbers, with a good blend of stability (robustness to adversarial samples) and accuracy (within the claimed theoretical limits).

Finally, the authors provide open-source code (A+ for doing that, but... Matlab!!??): https://www.github.com/Comp-Foundations-and-Barriers-of-AI/firenet

Does anyone else here understand the work better than me? I would love an informal explanation that appeals to intuition.
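For a concrete picture of what "solving an underdetermined linear system with an iterative scheme" means here, a minimal sketch follows. It is not the authors' FIRENET (the restart scheme and its stability/accuracy guarantees are the paper's contribution); it is plain iterative soft-thresholding (ISTA) on a real-valued sparse-recovery instance, with all sizes and parameters chosen purely for illustration:

    # Minimal sketch (not FIRENET): ISTA for a sparse, underdetermined system Ax = y,
    # minimizing 0.5*||Ax - y||^2 + lam*||x||_1. All sizes/parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k = 30, 100, 5                      # 30 equations, 100 unknowns, 5 nonzeros
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true

    def ista(A, y, lam=0.01, iters=2000):
        L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x - A.T @ (A @ x - y) / L     # gradient step on the quadratic term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        return x

    x_hat = ista(A, y)
    print("recovery error:", np.linalg.norm(x_hat - x_true))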
sjg007 about 3 years ago
Interesting work. A DNN is, after all, something that compresses the input data and hopefully generalizes well over the domain. This 'machine' itself will adhere to principles of Kolmogorov complexity, etc.

Traditionally, the empirical solutions in the NN literature for addressing instability are regularization and dropout.

Also, adding layers seems to improve things. The famous example is XOR, which cannot be learned by a single-layer NN.

How do the theoretical limitations in the paper relate to these, if at all?
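As a concrete reminder of the XOR point: a single linear layer cannot separate XOR, but one hidden layer suffices. Below is a minimal numpy sketch; the architecture, learning rate, and iteration count are arbitrary choices for illustration, not anything from the paper:

    # Minimal sketch: one hidden layer learns XOR; a single linear layer cannot.
    # Architecture and hyperparameters are arbitrary, for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)   # hidden layer, 4 units
    W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)   # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)
        dp = (p - y) * p * (1 - p)            # squared-loss gradient through sigmoid
        dh = (dp @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ dp; b2 -= lr * dp.sum(axis=0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

    print(np.round(p.ravel(), 2))             # should be close to [0, 1, 1, 0]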
kettleballroll about 3 years ago
PNAS is not exactly the venue of first choice for publishing AI research, so heuristically speaking, this article is likely not worth the reading time. Furthermore, neither the abstract nor the first few words of the introduction give me any reason to read on. I'm going to assume this is irrelevant, until someone here can convince me it's not.