In short, this architecture freezes the parameters along the pathways used for previously learned tasks, then learns new parameters and new pathways for new tasks. Each new task can be learned faster than the previous ones because it reuses all previously learned parameters and pathways (more efficient transfer learning).<p>It's a <i>general</i> neural net architecture.<p>Very cool.
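A rough sketch of what that freezing step might look like, assuming a PyTorch-style grid of L layers × M modules (all names and sizes here are mine, not from the paper):

```python
import torch.nn as nn

# Hypothetical module grid: L layers, M interchangeable modules per layer.
L, M, HIDDEN = 3, 10, 20
modules = [[nn.Linear(HIDDEN, HIDDEN) for _ in range(M)] for _ in range(L)]

def freeze_pathway(pathway):
    # Once a task converges, the parameters on its winning pathway are
    # frozen so later tasks can reuse them but never overwrite them.
    for layer, active in zip(modules, pathway):
        for idx in active:
            for p in layer[idx].parameters():
                p.requires_grad = False

best_pathway = [[0, 3, 7], [1, 2, 9], [4, 5, 6]]  # example winner per layer
freeze_pathway(best_pathway)
```

New tasks then train only the modules that are still trainable, while forward passes can still flow through the frozen ones.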
"During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation."<p>Trying to think of another 'tournament' like process that would allow for a massive distributed network where each node already has a decent GPU, where something like this could be successfully run. Maybe someone could help me out here...