TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.

© 2025 TechEcho. All rights reserved.

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

60 points by jweissman over 8 years ago

2 comments

cs702 over 8 years ago
In short, this architecture freezes the parameters and pathways used for previously learned tasks, and can learn new parameters and use new pathways for new tasks, with each new task learned faster than previous ones by leveraging all previously learned parameters and pathways (more efficient transfer learning).

It's a *general* neural net architecture.

Very cool.
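The freeze-and-reuse idea described above can be sketched in a few lines. This is a toy illustration, not the paper's actual networks: the `Module` class and its scalar `weight` are hypothetical stand-ins for a PathNet module's parameters, showing only how frozen modules can still be routed through by a new task while receiving no updates.

```python
# Toy sketch (hypothetical names) of PathNet-style parameter freezing:
# modules on a converged task's pathway are frozen; later tasks may still
# route through them, but gradient updates only touch unfrozen modules.

class Module:
    def __init__(self, name):
        self.name = name
        self.frozen = False
        self.weight = 0.0  # stand-in for a module's parameters

    def update(self, grad, lr=0.1):
        # Frozen modules keep their parameters, enabling transfer.
        if not self.frozen:
            self.weight -= lr * grad

modules = [Module(f"m{i}") for i in range(4)]

# Task 1 converged on a pathway through modules 0 and 2: freeze them.
for m in (modules[0], modules[2]):
    m.frozen = True

# Task 2 routes through frozen module 0 (reuse) plus trainable module 1.
for m in (modules[0], modules[1]):
    m.update(grad=1.0)
```

After the task-2 step, module 0's parameters are unchanged while module 1's have moved, which is the transfer mechanism the comment describes.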
divbit over 8 years ago
"During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation."

Trying to think of another 'tournament'-like process that would allow for a massive distributed network where each node already has a decent GPU, where something like this could be successfully run. Maybe someone could help me out here...
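For readers unfamiliar with the quoted mechanism, here is a minimal sketch of binary tournament selection over pathways. A pathway is modeled as a per-layer choice of active modules; the loser of each tournament is overwritten by a mutated copy of the winner. The fitness function here is a toy stand-in (real PathNet fitness would be task performance after training the pathway's modules), and all constants are illustrative.

```python
import random

# Toy binary tournament selection over pathways.
# A pathway picks ACTIVE modules per layer out of MODULES_PER_LAYER.
LAYERS, MODULES_PER_LAYER, ACTIVE = 3, 10, 3

def random_pathway():
    return [random.sample(range(MODULES_PER_LAYER), ACTIVE)
            for _ in range(LAYERS)]

def fitness(pathway):
    # Hypothetical fitness: prefer low module indices
    # (stands in for measured task performance).
    return -sum(sum(layer) for layer in pathway)

def mutate(pathway, rate=0.1):
    # Each gene (module choice) is resampled with probability `rate`.
    return [[random.randrange(MODULES_PER_LAYER) if random.random() < rate else m
             for m in layer]
            for layer in pathway]

def evolve(generations=200, pop_size=16):
    population = [random_pathway() for _ in range(pop_size)]
    for _ in range(generations):
        # Binary tournament: compare two random genotypes; the loser
        # is replaced by a mutated copy of the winner.
        a, b = random.sample(range(pop_size), 2)
        if fitness(population[a]) < fitness(population[b]):
            a, b = b, a
        population[b] = mutate(population[a])
    return max(population, key=fitness)
```

Because each tournament only compares two genotypes at a time, the scheme is naturally asynchronous, which is presumably why it appeals for the distributed setting the comment imagines.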
Comment #13676828 not loaded.