Apart from NEAT/HyperNEAT, there are also other approaches to neuroevolution (I think in this context it is referred to as "Evolutionary Neural Architecture Search" [0]). Evolution can be applied in different ways, e.g. optimizing the architecture, or replacing gradient-descent training entirely; a toy sketch of the architecture-evolution flavour is below.<p>A while ago I co-authored a paper in this space [1] and released some code for interested folks [2].<p>[0]: <a href="https://arxiv.org/pdf/2008.10937.pdf" rel="nofollow">https://arxiv.org/pdf/2008.10937.pdf</a><p>[1]: <a href="https://arxiv.org/abs/1801.00119" rel="nofollow">https://arxiv.org/abs/1801.00119</a><p>[2]: <a href="https://gitlab.com/pkoperek/pytorch-dnn-evolution/-/tree/master" rel="nofollow">https://gitlab.com/pkoperek/pytorch-dnn-evolution/-/tree/mas...</a>
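To make the "optimizing the architecture" flavour concrete, here is a minimal, hypothetical PyTorch sketch (not the method from [1], and every hyperparameter here is an illustrative assumption): a small evolutionary loop over MLP hidden-layer widths, where a short gradient-descent run serves as the fitness signal.
<pre><code>
# Sketch only: evolve MLP hidden-layer widths; fitness = short training run.
import random
import torch
import torch.nn as nn

def build_mlp(widths, in_dim=16, out_dim=2):
    layers, prev = [], in_dim
    for w in widths:
        layers += [nn.Linear(prev, w), nn.ReLU()]
        prev = w
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)

def fitness(widths, X, y, steps=50):
    model = build_mlp(widths)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return -loss.item()  # higher fitness == lower final loss

def mutate(widths):
    w = list(widths)
    op = random.choice(["grow", "shrink", "resize"])
    if op == "grow":
        w.insert(random.randrange(len(w) + 1), random.choice([8, 16, 32]))
    elif op == "shrink" and len(w) > 1:
        w.pop(random.randrange(len(w)))
    else:
        i = random.randrange(len(w))
        w[i] = max(4, w[i] + random.choice([-8, 8]))
    return w

# Toy data; in practice this would be a real training/validation split.
X, y = torch.randn(256, 16), torch.randint(0, 2, (256,))
population = [[16], [32, 16], [64]]
for generation in range(5):
    scored = sorted(population, key=lambda w: fitness(w, X, y), reverse=True)
    parents = scored[:2]  # simple truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
    print(f"gen {generation}: best architecture {scored[0]}")
</code></pre>
The other flavour mentioned above, replacing gradient descent, would instead mutate and select over the weights themselves with the architecture held fixed.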
They seem to be similar to Gene Sher's TWEANNs (Topology and Weight Evolving Artificial Neural Networks), which I learned about in his 2012 book, "Handbook of Neuroevolution Through Erlang" (which, sure, didn't catch on, because Erlang ;))<p>Gene's specific implementation is DXNN (Discover and eXplore Neural Network), published in 2010 [1]<p>[1] <a href="https://arxiv.org/abs/1008.2412" rel="nofollow">https://arxiv.org/abs/1008.2412</a>
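For illustration only, a toy Python sketch of the TWEANN idea (DXNN itself is written in Erlang and is far richer; the genome encoding and mutation rates below are my own assumptions): mutations act on both the weights and the topology of a single genome.
<pre><code>
# Hypothetical TWEANN-style genome: mutations change weights AND topology.
import math
import random

class Genome:
    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        self.next_node = n_in + n_out
        # connections: (src, dst) -> weight; start fully connected in->out
        self.conns = {(i, n_in + o): random.gauss(0, 1)
                      for i in range(n_in) for o in range(n_out)}

    def mutate(self):
        choice = random.random()
        if choice < 0.6:                    # perturb an existing weight
            key = random.choice(list(self.conns))
            self.conns[key] += random.gauss(0, 0.3)
        elif choice < 0.85:                 # add a new connection
            src = random.randrange(self.next_node)
            dst = random.randrange(self.n_in, self.next_node)
            self.conns.setdefault((src, dst), random.gauss(0, 1))
        else:                               # split a connection with a node
            (src, dst), w = random.choice(list(self.conns.items()))
            mid = self.next_node
            self.next_node += 1
            del self.conns[(src, dst)]
            self.conns[(src, mid)] = 1.0
            self.conns[(mid, dst)] = w

    def forward(self, inputs):
        # naive relaxation: iterate a few passes so activations propagate
        # through hidden nodes without an explicit topological sort
        values = {i: x for i, x in enumerate(inputs)}
        for _ in range(3):
            for node in range(self.n_in, self.next_node):
                total = sum(w * values.get(src, 0.0)
                            for (src, dst), w in self.conns.items()
                            if dst == node)
                values[node] = math.tanh(total)
        return [values.get(self.n_in + o, 0.0) for o in range(self.n_out)]

g = Genome(n_in=2, n_out=1)
for _ in range(20):
    g.mutate()
print(len(g.conns), "connections,", g.next_node, "nodes:", g.forward([0.5, -1.0]))
</code></pre>
A full TWEANN system would wrap this in a population with selection, speciation and crossover; the point is just that the search space includes the wiring, not only the weights.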
For the logical endpoint of this approach, check out FreeWire:<p><a href="https://github.com/noahtren/Freewire" rel="nofollow">https://github.com/noahtren/Freewire</a><p>You can experiment with freely wired neural networks without traditional layers.
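As a rough illustration of what "no traditional layers" means (this is not FreeWire's actual API, just an assumed minimal DAG representation): each neuron is a node with its own incoming connections, evaluated in topological order, and the whole irregular graph is still trainable with autograd.
<pre><code>
# Sketch: a freely wired DAG of neurons instead of a stack of dense layers.
import torch
import torch.nn as nn

class FreelyWired(nn.Module):
    def __init__(self, n_inputs, wiring):
        # wiring: {node_id: [predecessor node ids]}, ids >= n_inputs,
        # listed in topological order; the last node is taken as the output.
        super().__init__()
        self.n_inputs = n_inputs
        self.wiring = wiring
        self.weights = nn.ParameterDict({
            str(node): nn.Parameter(torch.randn(len(preds)) * 0.1)
            for node, preds in wiring.items()
        })

    def forward(self, x):                   # x: (batch, n_inputs)
        values = {i: x[:, i] for i in range(self.n_inputs)}
        for node, preds in self.wiring.items():
            incoming = torch.stack([values[p] for p in preds], dim=1)
            values[node] = torch.tanh(incoming @ self.weights[str(node)])
        return values[node]                 # last node in the wiring

# An irregular wiring that no stack of dense layers expresses directly.
net = FreelyWired(n_inputs=3, wiring={3: [0, 1], 4: [1, 2, 3], 5: [0, 3, 4]})
x = torch.randn(8, 3)
y = net(x)
y.sum().backward()                          # gradients flow through the DAG
print(y.shape)
</code></pre>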