Huh. I can read and understand the abstract and the introduction, but I can't judge the work after a first pass. This is the kind of paper that <i>cannot</i> be easily skimmed, because it consists almost entirely of densely packed pages chock-full of highly abstract mathematical reasoning.<p>Not surprisingly, the authors are mathematicians. They claim to <i>prove</i> that<p>* there are well-conditioned problems for which suitable DNNs exist, but no training algorithm can compute arbitrarily good approximations of them;<p>* approximations of those suitable DNNs can be computed only if we sacrifice digits of accuracy -- i.e., the approximations cannot be arbitrarily good; and<p>* there is a class of DNNs they propose, called "fast iterative restarted networks" or FIRENETs, that solve underdetermined systems of linear equations over the complex numbers with a good blend of stability (robustness to adversarial perturbations) and accuracy (within the claimed theoretical limits).<p>Finally, the authors provide open-source code (A+ for doing that, but... Matlab!!??): <a href="https://www.github.com/Comp-Foundations-and-Barriers-of-AI/firenet" rel="nofollow">https://www.github.com/Comp-Foundations-and-Barriers-of-AI/f...</a><p>Does anyone else here understand the work better than I do? I would love an informal explanation that appeals to intuition.
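<p>For anyone unsure what "underdetermined systems of linear equations over the complex numbers" means in the third bullet: it's a system y = Ax with fewer equations than unknowns, which has infinitely many solutions, so a solver must pick one (typically the sparsest, as in compressed sensing). This is my own toy illustration in Python/NumPy, not the authors' code or their FIRENET algorithm:

```python
import numpy as np

# Hypothetical illustration (not the paper's method): an underdetermined
# complex linear system y = A x, with more unknowns than equations.
rng = np.random.default_rng(0)
m, n = 3, 6  # 3 equations, 6 unknowns
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))

x_true = np.zeros(n, dtype=complex)
x_true[[1, 4]] = [2 - 1j, 0.5]  # a sparse "ground truth" signal
y = A @ x_true                  # the measurements

# Infinitely many x satisfy A x = y; least-squares returns the
# minimum-norm solution, which generally is NOT the sparse x_true.
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.allclose(A @ x_ls, y))          # fits the data exactly
print(np.linalg.norm(x_ls) <= np.linalg.norm(x_true))  # smallest norm
```

Recovering the sparse x_true rather than the minimum-norm solution is where iterative schemes come in, and (as I read it) the paper's point is about when such schemes can or cannot be computed stably and accurately.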