Neural networks are, at their core, general-purpose methods for transforming data. Each layer multiplies the input by learned weights, adds a bias, applies a nonlinearity, and passes the result on to the next node(s).

RNNs additionally keep a hidden state, so prior values carry forward as a "memory" across time steps.

At the end of the day, what this is doing is replacing components of the graph with mini-RNNs and pruning them based on a second network overseeing the first.

Having done quite a bit of work in this space, I have a couple of thoughts.

What was the major advance in games? It was networks playing themselves. Here we may want to do the same thing: meta-learners need lots of "experience" (data), just like anything living.

This paper doesn't dive too deep into the idea that the meta-learner can be made general, but it can be. There are only so many problem types (classification, regression, generation, etc.) and only so many data formats. Further, network architectures are fairly well defined, much more so than natural language.

This is the field of AutoML, if anyone's interested.
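To make the first two paragraphs concrete, here's a minimal NumPy sketch (all names, shapes, and weights are illustrative, not from the paper): a feed-forward layer is just a weighted transform of its input, and an RNN cell is the same transform plus a hidden state that carries forward between steps.

```python
import numpy as np

def dense(x, W, b):
    # One feed-forward "layer": multiply by weights, add bias, squash.
    return np.tanh(x @ W + b)

def rnn_step(x, h, W_x, W_h, b):
    # Same transform, but the previous hidden state h is mixed
    # back in -- this is the "memory" of prior inputs.
    return np.tanh(x @ W_x + h @ W_h + b)

rng = np.random.default_rng(0)
x_seq = rng.normal(size=(5, 3))   # 5 time steps, 3 features each

# Feed-forward: each input is transformed independently of the others.
W, b = rng.normal(size=(3, 4)), np.zeros(4)
ff_out = [dense(x, W, b) for x in x_seq]

# Recurrent: the hidden state threads through the sequence,
# so step t sees a summary of steps 0..t-1.
W_x, W_h = rng.normal(size=(3, 4)), rng.normal(size=(4, 4))
h = np.zeros(4)
for x in x_seq:
    h = rnn_step(x, h, W_x, W_h, b)
print(h)  # final state reflects the whole sequence
```

The only difference between the two loops is that extra `h @ W_h` term; everything else is the same multiply/add/nonlinearity machinery.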