I post a link to NEAT here about once a week.

The big problem is that NEAT can't leverage a GPU effectively at scale (arbitrary topologies vs. bipartite graphs).

Other than that, it feels like a "road not taken" of machine learning. It handles building complex agent behaviors really well, and the pressure toward minimal topology results in understandable, reverse-interpretable networks.

It's easier to learn and implement than backpropagation; speciation is the most complex but also the most awesome feature. It develops multiple solutions, clustering them so they can be iterated on separately. An online-learning NEAT agent is, in practice, a collection of behaviors, adapting and swapping dominance as their fitness changes.
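For concreteness, the clustering works by measuring a compatibility distance between genomes and grouping anything within a threshold of a species representative. A minimal Python sketch of the distance from the original NEAT paper (the dict-based genome representation is my own illustration, not any particular library's):

    # Sketch of NEAT's compatibility distance (Stanley & Miikkulainen, 2002).
    # Genomes are assumed to be dicts mapping innovation number -> weight.
    def compatibility(g1, g2, c1=1.0, c2=1.0, c3=0.4):
        innovs1, innovs2 = set(g1), set(g2)
        matching = innovs1 & innovs2
        cutoff = min(max(innovs1), max(innovs2))
        non_matching = innovs1 ^ innovs2
        # Disjoint genes fall inside the other genome's innovation range;
        # excess genes fall beyond it.
        disjoint = sum(1 for i in non_matching if i <= cutoff)
        excess = len(non_matching) - disjoint
        w_bar = sum(abs(g1[i] - g2[i]) for i in matching) / max(len(matching), 1)
        n = max(len(g1), len(g2))  # the paper uses N=1 for small genomes
        return (c1 * excess + c2 * disjoint) / n + c3 * w_bar

Genomes within some threshold of a species representative join that species; anything else founds a new one.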
HyperNEAT is also really cool: https://en.wikipedia.org/wiki/HyperNEAT

It uses NEAT to evolve a CPPN (compositional pattern-producing network), which is then sampled at some specified resolution to determine the actual network topology. The really cool thing is that the sampling resolution can be varied to scale the size of the network while maintaining the overall "shape" of the topology. I took Ken's neuroevolution course back in the day and worked with HyperNEAT before deep learning got big. Ever since, deep learning architectures have reminded me of the dense layered topologies that result from higher sampling resolutions with HyperNEAT. It would be interesting to see whether HyperNEAT, along with Ken's novelty search technique, could be used to evolve useful deep learning architectures.
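A rough sketch of the sampling idea, assuming a 1D substrate so the CPPN takes just two coordinates (real HyperNEAT queries it with the full coordinates of both endpoints); the cppn callable stands in for whatever network NEAT evolved:

    import numpy as np

    def sample_substrate(cppn, resolution, threshold=0.2):
        """Build a weight matrix by querying the CPPN at every pair of
        substrate coordinates. Raising 'resolution' grows the network
        while preserving the overall connectivity pattern."""
        coords = np.linspace(-1.0, 1.0, resolution)
        weights = np.zeros((resolution, resolution))
        for i, x1 in enumerate(coords):
            for j, x2 in enumerate(coords):
                w = cppn(x1, x2)
                # Outputs below a threshold are pruned, so connectivity
                # patterns (not just weights) come out of the CPPN.
                weights[i, j] = w if abs(w) > threshold else 0.0
        return weights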
For anyone interested in NEAT, you'll likely enjoy Ken's later work on novelty search, open-endedness, etc.

His book "Why Greatness Cannot Be Planned: The Myth of the Objective" is one of the most perspective-altering works I have ever read.

Very excited to see what he does next. He has mentioned on Twitter a couple of times his interest in representation learning and how objective-based search affects it. Very interesting stuff.
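For the unfamiliar: novelty search drops the objective entirely and scores an individual by how different its behavior is from everything seen before, typically the mean distance to its k nearest neighbors in an archive. A minimal sketch, assuming behaviors are NumPy vectors and k is tuned per experiment:

    import numpy as np

    def novelty(behavior, archive, k=15):
        """Score a behavior by its mean distance to the k nearest behaviors
        seen so far; higher means more novel. 'behavior' is any vector
        summarizing what the agent did (e.g. final position in a maze)."""
        if not archive:
            return float("inf")  # the first individual is maximally novel
        dists = sorted(np.linalg.norm(behavior - b) for b in archive)
        return float(np.mean(dists[:k]))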
Shameless plug: I have a NEAT implementation in Rust (CPU only) with added GRU gates:

https://crates.io/crates/neat-gru
Legendary SethBling video from 2015 where he implemented NEAT for SMW: https://www.youtube.com/watch?v=qv6UVOQ0F44&t=1
There's a lot of really interesting work in neuroevolution that has the potential to enable some really interesting unsupervised training regimes. I think there are exciting possibilities for unique encoding schemes like ACE encoding to speed up training and produce much smarter behavior out the other end, especially if "genes" can form reusable elements of neural topology that make scaling networks faster. Reusing components all over the body is how we fit such complexity into the relatively little unique DNA we have.

The other interesting thing about using genetic algorithms for a portion of training/network mapping is that it allows heterogeneous networks, so simulations or representations of astrocyte/glial behavior can easily be integrated with neural networks. With traditional training methods it's a massive fucking pain to train a non-feedforward network.

I do think that languages like Elixir and other tools with strong CPU concurrency could really be leveraged to make some dynamite libraries.
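On the concurrency point: fitness evaluation is embarrassingly parallel across a population, which is exactly what CPU-concurrent runtimes exploit. A minimal Python stand-in for the idea (the genome format and fitness function are placeholders):

    from multiprocessing import Pool

    def evaluate(genome):
        """Placeholder fitness function; in practice this would decode the
        genome into a network and run it in a simulation."""
        return sum(genome)  # stand-in score

    def score_population(population, workers=8):
        # Each genome's evaluation is independent, so they can be farmed
        # out across CPU cores with no coordination beyond the map itself.
        with Pool(workers) as pool:
            return pool.map(evaluate, population)

    if __name__ == "__main__":
        pop = [[0.1, 0.5, -0.2], [0.3, 0.3, 0.3]]
        print(score_population(pop, workers=2))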
I wrote a NEAT implementation in Rust a few years back. It was able to train XOR and pole balancing.

https://sgolem.com/blog/neat-simple-neuroevolution-framework-in-rust
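For reference, XOR is the classic NEAT smoke test because it isn't linearly separable, so a correct solution has to grow at least one hidden node. The usual fitness function looks something like this (the net.activate interface is a placeholder, not any specific library's API):

    XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def xor_fitness(net):
        """Standard NEAT-style fitness: 4 minus the summed squared error,
        so a perfect network scores 4.0."""
        error = 0.0
        for inputs, expected in XOR_CASES:
            output = net.activate(inputs)[0]
            error += (output - expected) ** 2
        return 4.0 - error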
NEAT:
https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_topologies

The NEAT article's "See also" section links to EANT.

EANT:
https://en.wikipedia.org/wiki/Evolutionary_acquisition_of_neural_topologies :

> Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and weights of artificial neural networks. It is closely related to the works of Angeline et al. [1] and Stanley and Miikkulainen [2] [NEAT (2002)]. Like the work of Angeline et al., the method uses a type of parametric mutation that comes from evolution strategies and evolutionary programming (now using the most advanced form of the evolution strategies, CMA-ES, in EANT2), in which adaptive step sizes are used for optimizing the weights of the neural networks. Similar to the work of Stanley (NEAT), the method starts with minimal structures which gain complexity along the evolution path.

(in Hilbert space)

> [EANT] introduces a genetic encoding called common genetic encoding (CGE) that handles both direct and indirect encoding of neural networks within the same theoretical framework. The encoding has important properties that make it suitable for evolving neural networks:

> It is complete in that it is able to represent all types of valid phenotype networks.

> It is closed, i.e. every valid genotype represents a valid phenotype. (Similarly, the encoding is closed under genetic operators such as structural mutation and crossover.)

> These properties have been formally proven. [3]

Neither MOSES (Meta-Optimizing Semantic Evolutionary Search) nor PLN (Probabilistic Logic Networks) is formally proven, FWIU.

/? Z3 ... https://news.ycombinator.com/item?id=41944043 :

> Does [formal verification [with Z3]] distinguish between x*y+z and z+y*x, in terms of floating point output?

> math.fma() would be a different function call with the same or similar output.

> (deal-solver is a tool for verifying formal implementations in Python with Z3.)
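Closure is the property that matters most in practice: every genetic operator has to yield a genome that still decodes to a valid network. Here's a NEAT-style illustration of the idea (not CGE itself; the tuple-based genome is hypothetical), where an add-connection mutation is valid by construction:

    import random

    # Direct encoding in the NEAT style: a genome is a list of connection
    # genes (in_node, out_node, weight, innovation). The mutation only
    # samples from pairs that would keep the genome decodable, so the
    # encoding stays closed under this operator. (Recurrent connections
    # are allowed here; a feedforward variant would also reject cycles.)
    def mutate_add_connection(genome, node_ids, innovation_counter):
        existing = {(g[0], g[1]) for g in genome}
        candidates = [(a, b) for a in node_ids for b in node_ids
                      if a != b and (a, b) not in existing]
        if not candidates:
            return genome, innovation_counter  # already fully connected
        a, b = random.choice(candidates)
        genome = genome + [(a, b, random.uniform(-1, 1), innovation_counter)]
        return genome, innovation_counter + 1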
Some more interesting approaches in the same space:

- https://github.com/openai/evolution-strategies-starter

- https://cloud.google.com/blog/topics/developers-practitioners/evojax-bringing-power-neuroevolution-solve-your-problems

And perhaps closest:

- https://weightagnostic.github.io/

Which also showed that you can make NNs weight-agnostic and just let the architecture evolve using a GA.

Even though these approaches are cool, and NEAT is even somewhat easier to implement than getting started with RL (at least that's my impression, given how many AI YouTubers start with NEAT first), they never seemed to fully take off. Still, knowing about metaheuristics is a good tool to have, IMO.
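The evolution-strategies starter in the first link is strikingly simple at its core: perturb the parameters with Gaussian noise, evaluate each perturbed copy, and step along the reward-weighted average of the noise. A stripped-down sketch, with illustrative hyperparameters:

    import numpy as np

    def es_step(theta, f, pop_size=50, sigma=0.1, alpha=0.01):
        """One update of a simple OpenAI-style evolution strategy.
        'f' is whatever returns an episode's total reward."""
        eps = np.random.randn(pop_size, theta.size)    # one noise vector each
        rewards = np.array([f(theta + sigma * e) for e in eps])
        # Normalizing rewards keeps the step size stable across tasks.
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        grad = (eps.T @ adv) / (pop_size * sigma)      # gradient estimate
        return theta + alpha * grad

    # Usage: theta = es_step(theta, reward_fn) in a loop.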
If you like NEAT you might also want to have a look at PDGP by R. Poli.

This is a more traditional/general GP in the sense that it expresses computer programs rather than NNs specifically, but it is graph-based rather than tree-based, in a way that allows one to add a "link set" on top of the "function set" and "terminal set", letting it fully express neural networks as well as more general graph structures. And as the name implies, it lends itself well to parallel/distributed operation.

The big difference from NEAT is that your initial population is randomly generated programs rather than "base" networks ... but you could easily treat these programs in the same way Koza used GP to evolve circuits from "embryonic circuits", by having terminals act as circuit-modifying operations.
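To give a flavor of "graph-based rather than tree-based": in a graph program a node's output can feed many consumers, so evaluation caches results instead of recomputing subtrees. A tiny sketch under my own made-up representation (not Poli's exact encoding):

    import operator

    # Function set: name -> (arity, callable). Terminals have arity 0.
    FUNCS = {"add": (2, operator.add), "mul": (2, operator.mul),
             "x": (0, None), "1": (0, None)}

    def eval_node(node_id, nodes, edges, env, cache):
        """nodes: id -> function name; edges: id -> ordered input ids
        (the 'link set'). Shared nodes are computed once and cached."""
        if node_id in cache:
            return cache[node_id]
        name = nodes[node_id]
        arity, fn = FUNCS[name]
        if arity == 0:
            val = env.get(name, float(name) if name.isdigit() else 0.0)
        else:
            args = [eval_node(i, nodes, edges, env, cache) for i in edges[node_id]]
            val = fn(*args)
        cache[node_id] = val
        return val

    # Example: (x + 1) * x, with node 0 feeding two consumers.
    nodes = {0: "x", 1: "1", 2: "add", 3: "mul"}
    edges = {2: [0, 1], 3: [2, 0]}
    print(eval_node(3, nodes, edges, {"x": 3.0}, {}))  # 12.0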
SharpNeat is an independent implementation that targets C# and .NET 8.

https://github.com/colgreen/sharpneat

https://sharpneat.sourceforge.io/
I did my high school senior project on NEAT! It was a C++ project set up as an "ants eating food" experiment, where their minuscule brains learned over several iterations to go to the closest food and eat it.

I later did a version (in Common Lisp) where, instead of an ant knowing where the nearest food was, it had a visual field extending forward that fed inputs to the NN. Then I added predators, poisonous food items, etc. It was fascinating seeing it all unfold.

Genetic algorithms seem like a somewhat forgotten but very handy tool for ML. I've always wondered how the cost of running experiments on species/populations over and over for however many hundreds of iterations compares to something like backpropagation on a large network.
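That forward visual field maps naturally onto a fixed-size input vector: cast a few rays in an arc ahead of the ant and report the normalized distance to the nearest object along each one. A sketch of that encoding (the world representation is made up for illustration):

    import math

    def visual_field(ant_x, ant_y, heading, objects, n_rays=5,
                     fov=math.pi / 2, max_dist=10.0):
        """One normalized input per ray: 0.0 = object right here,
        1.0 = nothing in range. 'objects' is a list of (x, y) points.
        Angle wrap-around is ignored for brevity."""
        inputs = []
        for i in range(n_rays):
            ray_angle = heading - fov / 2 + fov * i / (n_rays - 1)
            nearest = max_dist
            for ox, oy in objects:
                dx, dy = ox - ant_x, oy - ant_y
                dist = math.hypot(dx, dy)
                # Count the object only if it lies near this ray's direction.
                if dist < nearest and abs(math.atan2(dy, dx) - ray_angle) < fov / (2 * (n_rays - 1)):
                    nearest = dist
            inputs.append(nearest / max_dist)
        return inputs  # these feed straight into the evolved network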