
Overcoming Catastrophic Forgetting in Neural Networks

123 points by RSchaeffer about 8 years ago

5 comments

itchyjunk about 8 years ago
Great, I understood a lot more than I had from the DeepMind paper alone. Though the mathematics was slightly beyond me, I still got the gist of it. Since this was talking about reinforcement learning in Atari, I was wondering if it works for other domains as well (supervised, unsupervised, etc.). If it does, say you have sparse data for task B but rich data for task A. Is this saying that training first on A and then transferring to B makes it perform better on B? (As I type it, it sounds like semi-supervised learning, but that's not what I'm trying to ask. :P) P.S.: The pictures helped.
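For context on the question above: the technique the article explains, elastic weight consolidation (EWC), is not specific to reinforcement learning; the DeepMind paper also demonstrates it on supervised permuted-MNIST. After training on task A, each weight is pulled back toward its task-A value by a quadratic penalty scaled by that weight's diagonal Fisher information, so training on task B mostly moves weights that mattered little for A. Below is a minimal PyTorch sketch of that penalty; the helper names (diagonal_fisher, ewc_penalty) and the lambda value are illustrative, not taken from the article.

    import torch

    def diagonal_fisher(model, data_loader, loss_fn):
        # Empirical diagonal Fisher on task A: the average squared gradient
        # of the task-A loss with respect to each parameter.
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for x, y in data_loader:
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
        return {n: f / len(data_loader) for n, f in fisher.items()}

    def ewc_penalty(model, fisher, theta_a, lam=400.0):
        # (lam / 2) * sum_i F_i * (theta_i - theta_A_i)^2, anchoring
        # parameters important for task A near their task-A values.
        penalty = torch.zeros(())
        for n, p in model.named_parameters():
            penalty = penalty + (fisher[n] * (p - theta_a[n]) ** 2).sum()
        return 0.5 * lam * penalty

    # Training on task B then looks like:
    #   theta_a = {n: p.detach().clone() for n, p in model.named_parameters()}
    #   fisher_a = diagonal_fisher(model, task_a_loader, loss_fn)
    #   loss = loss_fn(model(x_b), y_b) + ewc_penalty(model, fisher_a, theta_a)

Using the average of squared loss gradients as the Fisher diagonal is the usual cheap approximation; it costs one extra pass over the task-A data and one stored snapshot of the weights.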
blueyes about 8 years ago
Overcoming catastrophic forgetting is a genuine step toward strong AI.
mijoharas about 8 years ago
MathJax appears to be broken if you use HTTPS Everywhere or just visit over https [0]. Just a note to RSchaeffer. Nice article.

[0] https://rylanschaeffer.github.io/content/research/overcoming_catastrophic_forgetting/main.html
srtjstjsj about 8 years ago
How does this compare, intuitively, to "short-term -> long-term memory transfer", where learned skills are stored in a subset of the neural network and non-core details are forgotten?
tungstenoyd about 8 years ago
Someone should consider hiring this young man.