
PathNet: Evolution Channels Gradient Descent in Super Neural Networks

60 points by jweissman over 8 years ago

2 comments

cs702 over 8 years ago
In short, this architecture freezes the parameters and pathways used for previously learned tasks, and can learn new parameters and use new pathways for new tasks, with each new task learned faster than previous ones by leveraging all previously learned parameters and pathways (more efficient transfer learning).

It's a *general* neural net architecture.

Very cool.
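The freezing scheme the comment describes can be sketched in a few lines. This is a toy illustration, not the paper's code: the `Module`, `PathNet`, and method names are all made up here, and a "pathway" is simplified to one module index per layer. The point is just that modules on a finished task's winning pathway stop receiving updates but remain usable in later tasks' forward passes.

```python
# Hypothetical sketch of PathNet-style parameter freezing.
# All names and shapes here are illustrative assumptions.
import random

class Module:
    def __init__(self):
        self.weight = random.random()
        self.frozen = False  # frozen modules keep their weights fixed

class PathNet:
    def __init__(self, layers=3, modules_per_layer=4):
        self.grid = [[Module() for _ in range(modules_per_layer)]
                     for _ in range(layers)]

    def trainable(self, pathway):
        # Only unfrozen modules along the pathway receive gradient
        # updates; frozen ones are still used in the forward pass,
        # which is where the transfer comes from.
        return [self.grid[layer][m] for layer, m in enumerate(pathway)
                if not self.grid[layer][m].frozen]

    def finish_task(self, best_pathway):
        # Freeze every module on the winning pathway for this task.
        for layer, m in enumerate(best_pathway):
            self.grid[layer][m].frozen = True

net = PathNet()
net.finish_task([0, 1, 2])          # task 1's best pathway
reused = net.trainable([0, 1, 3])   # task 2 reuses frozen modules in layers 0-1;
                                    # only the layer-2 module (index 3) still trains
```

Each new task sees a shrinking pool of trainable modules along any pathway that overlaps earlier winners, which is the "faster than previous ones" effect.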
divbit over 8 years ago
"During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation."

Trying to think of another 'tournament' like process that would allow for a massive distributed network where each node already has a decent GPU, where something like this could be successfully run. Maybe someone could help me out here...
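The quoted step can be sketched as a binary tournament over pathway genotypes. This is a minimal illustration under stated assumptions: the genotype encoding (a few module indices per layer), the mutation rate, and the toy fitness function are all invented here, not taken from the paper.

```python
# Minimal sketch of a binary-tournament step over pathway genotypes.
# Encoding, mutation rate, and fitness are illustrative assumptions.
import random

def random_pathway(layers=3, modules_per_layer=4, width=2):
    # Genotype: for each layer, which modules the pathway routes through.
    return [random.sample(range(modules_per_layer), width)
            for _ in range(layers)]

def mutate(pathway, modules_per_layer=4, rate=0.1):
    # Independently re-draw each module index with small probability.
    return [[random.randrange(modules_per_layer) if random.random() < rate else m
             for m in layer] for layer in pathway]

def tournament_step(population, fitness):
    # Pick two genotypes at random; the winner's pathway overwrites the
    # loser's with mutation -- the "replication and mutation" in the quote.
    i, j = random.sample(range(len(population)), 2)
    winner, loser = (i, j) if fitness(population[i]) >= fitness(population[j]) else (j, i)
    population[loser] = mutate(population[winner])
    return population

pop = [random_pathway() for _ in range(8)]
# Toy fitness: prefer pathways routed through low module indices.
pop = tournament_step(pop, lambda p: -sum(sum(layer) for layer in p))
```

Because each tournament only compares two candidates, the step is embarrassingly parallel, which is presumably why the comment reaches for a distributed network of GPU nodes: each node could train and evaluate one pathway's fitness locally and exchange only the small genotypes.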