DeepMath Conference 2020 – Conference on the Mathematical Theory of DNN's

139 points | by wavelander | over 4 years ago

4 comments

a-nikolaev, over 4 years ago
"Deep" is such a good prefix for all sorts of Deep Learning, Deep Math, Deep Thinking, Deep Engineering etc. Wonder if the networks were originally called Thick neural networks, would the ML/AI revolution as we know it still happened?
la_fayette, over 4 years ago
Is there any good reason why a fully-connected network needs more than one hidden layer? Theoretically, any non-linear function can be approximated by an FCN with only one hidden layer (the universal approximation theorem). Does "deep" have anything to do with FCNs, or only with CNNs?
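A minimal sketch of that one-hidden-layer claim (illustrative only: the hidden width, learning rate, and sin(x) target are arbitrary assumptions, not anything from the thread), fitting a single-hidden-layer tanh network with plain gradient descent:

```python
# Sketch: one hidden layer of tanh units approximating a nonlinear target.
# All hyperparameters below are illustrative choices, not from the thread.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # inputs
y = np.sin(X)                                        # nonlinear target

H = 20                                  # hidden width (the only hidden layer)
W1 = rng.normal(0, 1.0, (1, H))         # input -> hidden weights
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))         # hidden -> output weights
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)            # (200, H)
    pred = h @ W2 + b2                  # (200, 1)
    err = pred - y

    # backward pass for mean squared error
    n = len(X)
    dpred = 2 * err / n
    dW2 = h.T @ dpred
    db2 = dpred.sum(axis=0)
    dh = dpred @ W2.T
    dz = dh * (1 - h ** 2)              # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dz
    db1 = dz.sum(axis=0)

    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g

print("final MSE:", float(np.mean(err ** 2)))  # small for this easy target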
cosmic_ape, over 4 years ago
This is like a workshop at a regular conference, with no proceedings, right?
aborsy, over 4 years ago
As soon as I saw the word "Deep" I stopped reading.

Nnets have always been multi-layer since they were invented; that's the whole idea of progressive feature extraction, and the analogy with the biological brain. Theoreticians referred to them properly as nnets or multilayer nnets. Later, experimentalists simulated them, thanks to the availability of computing resources, and verified experimentally that a multi-layer nnet can be more efficient than a single-layer one. They added superficial terms like "deep," "AI," "singularity," etc., which the media and tech industry amplified for obvious reasons.
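That efficiency point has a classical concrete illustration. The sketch below (an illustration in the spirit of the standard tent-map depth-separation argument, e.g. Telgarsky's, not anything the commenter wrote) composes a two-unit ReLU "tent" layer k times: the deep network spends O(k) parameters to produce 2^k linear pieces, whereas a single-hidden-layer ReLU network needs on the order of 2^k units to match that:

```python
# Sketch: depth buys exponentially many linear pieces per parameter.
# The tent map t(x) = 2*min(x, 1-x) needs only two ReLU units, and
# composing it k times yields a sawtooth with 2^k linear pieces.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent_layer(x):
    # t(x) = 2*relu(x) - 4*relu(x - 0.5): two ReLU units per layer
    return 2 * relu(x) - 4 * relu(x - 0.5)

x = np.linspace(0, 1, 1025)   # grid aligned with the dyadic breakpoints
y = x
k = 4
for _ in range(k):            # depth k, two units per layer: O(k) parameters
    y = tent_layer(y)

# Count linear pieces by counting slope changes of the composed function.
slopes = np.round(np.diff(y) / np.diff(x), 6)
pieces = 1 + np.count_nonzero(np.diff(slopes))
print(f"depth {k} -> {pieces} linear pieces")  # 2^k = 16 pieces
```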