NeuroEvolution of Augmenting Topologies (NEAT)

NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm for evolving artificial neural networks, developed by Kenneth Stanley and Risto Miikkulainen in 2002. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity.
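
In outline, a NEAT genome is a list of connection genes tagged with historical "innovation numbers", and evolution applies both weight mutations and structural mutations, with a compatibility distance that groups genomes into species to preserve diversity. The sketch below is a minimal Python illustration; the class names, constants, and the simplified distance (which lumps excess and disjoint genes together) are assumptions, not the reference implementation.

```python
import random
from dataclasses import dataclass, field

_innovation = 0  # global counter: every new structural gene gets a unique ID

def next_innovation():
    global _innovation
    _innovation += 1
    return _innovation

@dataclass
class ConnectionGene:
    src: int
    dst: int
    weight: float
    enabled: bool = True
    innovation: int = field(default_factory=next_innovation)

@dataclass
class Genome:
    num_nodes: int
    connections: list

def mutate_weights(g, rate=0.8, power=0.5):
    # Parametric mutation: perturb existing weights (the common case).
    for c in g.connections:
        if random.random() < rate:
            c.weight += random.gauss(0.0, power)

def mutate_add_connection(g):
    # Structural mutation: wire up two previously unconnected nodes.
    a, b = random.randrange(g.num_nodes), random.randrange(g.num_nodes)
    if any(c.src == a and c.dst == b for c in g.connections):
        return
    g.connections.append(ConnectionGene(a, b, random.gauss(0.0, 1.0)))

def mutate_add_node(g):
    # Structural mutation: split a connection with a new node. The old gene
    # is disabled rather than deleted, preserving history for crossover.
    enabled = [c for c in g.connections if c.enabled]
    if not enabled:
        return
    old = random.choice(enabled)
    old.enabled = False
    new = g.num_nodes
    g.num_nodes += 1
    g.connections.append(ConnectionGene(old.src, new, 1.0))
    g.connections.append(ConnectionGene(new, old.dst, old.weight))

def compatibility(g1, g2, c_struct=1.0, c_weight=0.4):
    # Speciation distance (simplified). Genomes far apart are placed in
    # different species, shielding new structure from direct competition
    # and preserving population diversity.
    i1 = {c.innovation: c for c in g1.connections}
    i2 = {c.innovation: c for c in g2.connections}
    shared = i1.keys() & i2.keys()
    mismatched = len(i1.keys() ^ i2.keys())
    n = max(len(i1), len(i2), 1)
    w = sum(abs(i1[k].weight - i2[k].weight) for k in shared) / len(shared) if shared else 0.0
    return c_struct * mismatched / n + c_weight * w
```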

As of 2006, on simple control tasks the NEAT algorithm often arrives at effective networks more quickly than other contemporary neuro-evolutionary techniques and reinforcement learning methods.[1][2]

Traditionally, a neural network topology is chosen by a human experimenter, and effective connection weight values are learned through a training procedure.
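
For contrast with NEAT, here is a minimal sketch of that conventional setup: the structure is fixed by hand and only the weights are learned, here by backpropagation. The task, layer sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

# Fixed topology chosen by a human: 2 inputs, 3 hidden units, 1 output.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)             # forward pass through the fixed structure
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop: the structure never changes,
    d_h = (d_out @ W2.T) * h * (1 - h)   # only these weight matrices do
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```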

In 2003, Stanley devised an extension to NEAT, real-time NEAT (rtNEAT), that allows evolution to occur in real time rather than through the iteration of generations as used by most genetic algorithms.
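
The flavour of that real-time loop can be sketched as follows: the population is evaluated continuously, and every few ticks the worst sufficiently-aged individual is replaced by the offspring of two of the best. The scalar "genome", the toy fitness signal, and all constants below are illustrative stand-ins; real rtNEAT operates on full NEAT genomes and accounts for speciation.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    genome: float
    fitness: float = 0.0
    age: int = 0  # ticks alive: a "lifetime" timer protects new offspring

MIN_AGE = 50  # individuals must be evaluated this long before judgement

def step(agent):
    # Continuous evaluation: the agent is scored while acting in the world.
    agent.fitness += -abs(agent.genome - 1.0) + random.gauss(0.0, 0.1)
    agent.age += 1

def real_time_reproduction(pop):
    eligible = [a for a in pop if a.age >= MIN_AGE]
    if len(eligible) < 3:
        return
    ranked = sorted(eligible, key=lambda a: a.fitness / a.age)
    worst, parents = ranked[0], ranked[-2:]
    child = (parents[0].genome + parents[1].genome) / 2 + random.gauss(0.0, 0.05)
    pop.remove(worst)         # one death and one birth at a time,
    pop.append(Agent(child))  # instead of a whole-generation turnover

population = [Agent(random.uniform(0.0, 2.0)) for _ in range(20)]
for tick in range(1, 2001):
    for agent in population:
        step(agent)
    if tick % 20 == 0:
        real_time_reproduction(population)
```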

rtNEAT was first implemented in the video game NERO (Neuro-Evolving Robotic Operatives). In the first phase of the game, individual players deploy robots in a 'sandbox' and train them to some desired tactical doctrine.

An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addresses the concern that unbounded complexification could otherwise accumulate unnecessary structure.
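
One pruning pass might look like the following sketch (the data layout and threshold are illustrative assumptions, not Green's actual implementation): disabled and near-zero connections are dropped, then hidden dead ends are removed iteratively, since a node that receives no signal, or whose output reaches nothing, cannot contribute.

```python
from dataclasses import dataclass

@dataclass
class Conn:
    src: int
    dst: int
    weight: float
    enabled: bool = True

def prune(connections, input_nodes, output_nodes, weight_eps=0.05):
    # Drop disabled connections and those with negligible weight.
    conns = [c for c in connections if c.enabled and abs(c.weight) >= weight_eps]
    while True:
        fed = {c.dst for c in conns} | set(input_nodes)       # nodes receiving signal
        feeding = {c.src for c in conns} | set(output_nodes)  # nodes whose signal is used
        pruned = [c for c in conns if c.src in fed and c.dst in feeding]
        if len(pruned) == len(conns):
            return pruned  # fixed point reached: nothing left to prune
        conns = pruned
```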

Content-Generating NEAT (cgNEAT) evolves custom video game content based on user preferences.
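
The core idea can be sketched as follows, with illustrative stand-ins (parameter vectors rather than the evolved networks cgNEAT actually uses): fitness is implicit, measured by how much players use each content item, so popular content reproduces and neglected content is retired.

```python
import random

def usage_fitness(item, usage_log):
    return usage_log.get(item["id"], 0.0)  # e.g. seconds of player use

def evolve_content(catalog, usage_log, mutation_power=0.1):
    ranked = sorted(catalog, key=lambda item: usage_fitness(item, usage_log))
    parent = random.choice(ranked[-max(1, len(ranked) // 4):])  # favour popular items
    child = {
        "id": max(item["id"] for item in catalog) + 1,
        "params": [p + random.gauss(0.0, mutation_power) for p in parent["params"]],
    }
    catalog.remove(ranked[0])  # retire the least-used item
    catalog.append(child)      # ship the new variant to players
```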

odNEAT is an online, decentralized version of NEAT designed for multirobot systems: it runs onboard the robots themselves during task execution, continuously optimizing the parameters and topology of their neural network controllers. In this way, robots executing odNEAT have the potential to adapt to changing conditions and learn new behaviors as they carry out their tasks.
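
A toy sketch of that onboard loop for a single robot, under stated assumptions (genomes are stand-in scalars, performance is tracked as a virtual "energy" level, and a failing controller is replaced from an internal population that also absorbs genomes received from neighbours):

```python
import random

class Robot:
    def __init__(self):
        self.internal_population = [random.uniform(0.0, 1.0) for _ in range(5)]
        self.active = random.choice(self.internal_population)
        self.energy = 1.0  # proxy for how well the active controller is doing

    def control_step(self, reward):
        # Online evaluation: the active controller runs the real task and
        # its energy rises or falls with task performance.
        self.energy += reward
        if self.energy <= 0.0:
            # Controller failed: evolve a replacement locally, with no
            # central authority and no generational synchronization.
            parent = max(self.internal_population)          # greedy stand-in selection
            self.active = parent + random.gauss(0.0, 0.05)  # mutated offspring
            self.internal_population.append(self.active)
            self.energy = 1.0

    def receive(self, genome):
        # Decentralization: robots exchange genomes when they meet,
        # so good controllers spread through the group.
        self.internal_population.append(genome)
```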