
DeepMath Conference 2020 – Conference on the Mathematical Theory of DNN's

139 points by wavelander over 4 years ago

4 comments

a-nikolaev over 4 years ago
"Deep" is such a good prefix for all sorts of Deep Learning, Deep Math, Deep Thinking, Deep Engineering etc. Wonder if the networks were originally called Thick neural networks, would the ML/AI revolution as we know it still happened?
la_fayette over 4 years ago
Is there any good reason why a fully-connected network needs more than one hidden layer? Theoretically, any non-linear function can be approximated by an FCN with only one hidden layer. Does "deep" have anything to do with FCNs, or only with CNNs?
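
On the single-hidden-layer question, a minimal sketch may help. It is not from the thread: it assumes NumPy, and the target function sin(x), the hidden width, the learning rate, and the step count are all arbitrary illustrative choices. It trains a one-hidden-layer tanh network by plain gradient descent, illustrating that a single hidden layer can fit a simple non-linear target:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
    y = np.sin(x)                        # arbitrary non-linear target

    hidden = 32                          # width of the single hidden layer
    W1 = rng.normal(0.0, 1.0, (1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1))
    b2 = np.zeros(1)
    lr = 0.1                             # arbitrary learning rate

    for step in range(10000):
        # forward pass through the one hidden layer
        h = np.tanh(x @ W1 + b1)         # shape (256, hidden)
        pred = h @ W2 + b2               # shape (256, 1)
        err = pred - y
        # backward pass for the mean-squared-error loss
        g_pred = 2.0 * err / len(x)
        g_W2 = h.T @ g_pred
        g_b2 = g_pred.sum(axis=0)
        g_h = g_pred @ W2.T
        g_pre = g_h * (1.0 - h ** 2)     # derivative of tanh
        g_W1 = x.T @ g_pre
        g_b1 = g_pre.sum(axis=0)
        # plain gradient-descent update
        for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
            p -= lr * g

    # loss after training; the exact value depends on the arbitrary choices above
    print("final MSE:", float(np.mean(err ** 2)))

The usual answer to the question is that depth buys efficiency rather than expressiveness: one hidden layer suffices in principle, but some functions require far wider single-layer networks than modestly deep ones.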
cosmic_ape over 4 years ago
This is like a workshop at a usual conference, no proceedings, right?
aborsy over 4 years ago
As soon as I saw the word Deep, I stopped reading.

Nnets have always been multi-layer, ever since they were invented. That's the whole idea of progressive feature extraction, and the analogy with the biological brain. Theoreticians referred to them properly as nnets or multilayer nnets. Later, experimentalists simulated them, thanks to the availability of computing resources, and experimentally verified that a multi-layer nnet can be more efficient than a single-layer one. They then added superficial terms like "deep," "AI," "singularity," etc., which the media and tech industry amplified for obvious reasons.