I was a little surprised at the headline, since I expected 'outperforms' to mean that it had better end results, which is of course not the case. GP is just much faster due to its relative simplicity, and the results are close enough to those achieved with NNs and deep learning.<p>> Finally, while generally matching the skill level of
controllers from neuro-evolution/deep learning, the genetic programming solutions evolved here are several orders of magnitude simpler, resulting in real-time operation at a fraction of the cost.<p>> Moreover, TPG solutions are particularly elegant,
thus supporting real-time operation without specialized hardware<p>This is the key takeaway and yet another reminder to not make deep learning the hammer for all your fuzzy problems.
One of the huge benefits of GPs over NNs is the ease of reverse engineering a GP tree compared to NN models. It's not effortless, however; it's just not mathematically complex like NNs, i.e. a programmer who isn't a mathematician can analyze GPs with a lot of patience.<p>EDIT: I have found GPs to be relatively slow to very slow, but very likely that is because of the lack of interest and development compared to NNs.
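To illustrate the interpretability point: a minimal, hypothetical sketch of what an evolved GP controller might look like once rendered as code. The task (a paddle game), the inputs, and the policy itself are invented for illustration, not taken from the paper.

```python
# Hypothetical evolved GP policy for a paddle game, rendered as Python.
# The entire "model" is one small expression tree a programmer can read
# directly, unlike a neural network's opaque weight matrices.
def evolved_policy(ball_x, ball_y, paddle_x):
    # Corresponds to the s-expression:
    #   (if (> ball_x paddle_x) 'right 'left)
    if ball_x > paddle_x:
        return "right"
    return "left"

print(evolved_policy(0.7, 0.2, 0.4))  # -> right
print(evolved_policy(0.1, 0.2, 0.4))  # -> left
```

Reverse engineering here amounts to reading the tree; the patience cost grows with tree depth, but the primitives stay at this level of simplicity.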
Those are really old results. They should compare to this one: <a href="https://arxiv.org/pdf/1511.06581.pdf" rel="nofollow">https://arxiv.org/pdf/1511.06581.pdf</a>
This is super cool, but it doesn't outperform deep learning based RL methods.<p>In fact, I'm not sure how much more compute-efficient it would be than something like A3C, which can produce 4x the score of DQN in a comparable number of hours (and on a CPU).
Slightly relevant, here's a state of the art drone AI built using genetic fuzzy systems: <a href="https://www.forbes.com/sites/jvchamary/2016/06/28/ai-drone/#50908d8b7081" rel="nofollow">https://www.forbes.com/sites/jvchamary/2016/06/28/ai-drone/#...</a>