It is closely related to the works of Angeline et al.[1] and Stanley and Miikkulainen.[2] Like the work of Angeline et al., the method uses a form of parametric mutation drawn from evolution strategies and evolutionary programming (EANT2 now uses CMA-ES, the most advanced of the evolution strategies), in which adaptive step sizes are used to optimize the weights of the neural networks.
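The idea of adaptive step sizes can be illustrated with a minimal sketch. The code below is not EANT2's actual CMA-ES implementation; it is a simplified (1+1) evolution strategy with a 1/5th-success-rule-style step-size adaptation, optimizing the weights of a tiny one-neuron network against an assumed toy target function:

```python
import math
import random

random.seed(0)

def forward(weights, x):
    """Tiny network: one tanh hidden neuron, then an affine output."""
    h = math.tanh(weights[0] * x + weights[1])
    return weights[2] * h + weights[3]

def loss(weights, data):
    """Mean squared error over the training set."""
    return sum((forward(weights, x) - y) ** 2 for x, y in data) / len(data)

# Toy target (an assumption for illustration): y = tanh(2x), which this
# network can represent exactly, so the loss can approach zero.
data = [(x / 10.0, math.tanh(2.0 * (x / 10.0))) for x in range(-10, 11)]

weights = [random.gauss(0.0, 1.0) for _ in range(4)]
sigma = 0.5                      # adaptive mutation step size
best = loss(weights, data)

for generation in range(2000):
    # Parametric mutation: perturb every weight with Gaussian noise
    # scaled by the current step size.
    child = [w + sigma * random.gauss(0.0, 1.0) for w in weights]
    f = loss(child, data)
    if f <= best:
        weights, best = child, f
        sigma *= 1.1             # success: widen the search
    else:
        sigma *= 0.97            # failure: narrow the search

print(f"final loss: {best:.6f}")
```

CMA-ES generalizes this idea by adapting a full covariance matrix rather than a single scalar step size, so mutations can learn correlated directions in weight space.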
Similar to the work of Stanley and Miikkulainen (NEAT), the method starts with minimal structures that gain complexity along the evolution path.
Despite sharing these two properties, the method has the following important features that distinguish it from previous work in neuroevolution.
Moreover, a newer version of EANT, EANT2, was tested on a visual servoing task and found to outperform both NEAT and the traditional iterative Gauss–Newton method.