It seems like this can leave the reader with the wrong impression. Calculus really is "the mathematics of Newtonian physics". This is just "some mathematics that might help a bit with your intuitions about deep learning".<p>I.e., deep learning is fundamentally about getting these mathematically simple but complex, multi-layered "neural networks" to do stuff: training them, testing them and deploying them. There are many intuitions about these things but no complete theory - some intuitions involve mathematical analogies and simplifications while others involve "folk knowledge" or large-scale experiments. And that's not to say folks doing math about deep learning aren't proving real things. It's just that they aren't characterizing the whole, or even a substantial part, of such systems.<p>It's not surprising that something as complex as a many-layered ReLU network can't be fully characterized or solved mathematically. You'd expect that of any arbitrarily complex algorithmic construct. Differential equations of many variables and arbitrary functions also can't have their solutions fully characterized.
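A minimal sketch of the "simple layers, complex composition" point (NumPy; all shapes and values here are illustrative, not from the paper):

    # Each ReLU layer is mathematically trivial: an affine map followed by
    # an element-wise max(0, .). Stacking them yields a piecewise-linear
    # function whose number of linear regions grows rapidly with depth,
    # which is part of why a full analytic characterization is out of reach.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu_layer(x, W, b):
        # One layer: affine transform, then element-wise max(0, .)
        return np.maximum(0.0, W @ x + b)

    # Three stacked layers, each simple on its own.
    layers = [(rng.normal(size=(8, 8)), rng.normal(size=8)) for _ in range(3)]

    def network(x):
        for W, b in layers:
            x = relu_layer(x, W, b)
        return x

    print(network(rng.normal(size=8)))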
Tangent, but has anyone taken Fast.ai or similar courses and transitioned into the Deep Learning/ML field without a MS/PhD? To be honest, I don't even know what 'doing ML/DL' looks like in practice, but I'm just curious if a lot of folks get into the field without graduate degrees.
After skimming through the paper it's clear that the title should be read as "The Modern (Mathematics of Deep Learning)" and not my original parse which was "The (Modern Mathematics) of Deep Learning." Very different things.
Wake me up when 'deep learning' has independently created a language to communicate within a group of peers while under environmental pressure.<p>(and that language is co-expressive with human languages)
I believe that the curse of dimensionality doesn't apply here, as we are optimizing a "universal approximator" over the "surface" of the possible real-world function.
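For what it's worth, a tiny illustration of the "universal approximator" idea (random ReLU features fit by least squares; everything here - the target function, the width, the ranges - is made up for the example, and it says nothing about high-dimensional behavior):

    # One hidden ReLU layer with random features, output weights fit by
    # least squares, approximating a smooth 1-D target function.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 200)
    target = np.sin(x)  # stand-in for the "real world function"

    # Hidden layer: 100 random ReLU features phi_j(x) = max(0, w_j*x + b_j).
    w = rng.normal(size=100)
    b = rng.uniform(-3, 3, size=100)
    features = np.maximum(0.0, np.outer(x, w) + b)  # shape (200, 100)

    # Fit the output-layer weights by least squares.
    coef, *_ = np.linalg.lstsq(features, target, rcond=None)
    approx = features @ coef

    print("max abs error:", np.abs(approx - target).max())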