Evolution strategy (ES) is a subclass of evolutionary algorithms from computer science that serves as an optimization technique.[1] It uses the major genetic operators mutation, recombination and selection of parents.[2] The optimization technique 'evolution strategy' was created in the early 1960s and developed further in the 1970s and later by Ingo Rechenberg, Hans-Paul Schwefel and their co-workers.
In common with evolutionary algorithms, the operators are applied in a loop.
The sequence of generations is continued until a termination criterion is met.
The special feature of the ES is the self-adaptation of mutation step sizes and the coevolution associated with it.
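As an illustration of this loop, here is a minimal Python skeleton; the helper names (make_offspring, select) and the fixed generation budget used as termination criterion are assumptions of the sketch, not part of any standard formulation:

```python
def evolution_strategy(parents, fitness, make_offspring, select,
                       n_offspring, max_generations=1000):
    """Generic ES loop: create offspring, evaluate, select, repeat.

    parents        -- initial parent population (list of individuals)
    fitness        -- maps an individual to a scalar to be minimized
    make_offspring -- builds one offspring from the parent population
                      (recombination and mutation)
    select         -- chooses the next parent population
    """
    for _ in range(max_generations):  # fixed budget as termination criterion
        offspring = [make_offspring(parents) for _ in range(n_offspring)]
        parents = select(parents, offspring, fitness)
    return min(parents, key=fitness)
```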
The ES is briefly presented here in its standard form,[16][17][18] noting that there are many variants.
First, new mutation step sizes are generated per mating by intermediate recombination of the parental step sizes, followed by a log-normal mutation of the recombined values.
Next, discrete recombination of the decision variables is followed by a mutation using the new mutation step sizes as standard deviations of the normal distribution.
In this way, the ES is able to search for its target in ever finer steps. However, this also brings the risk that larger invalid areas of the search space can only be skipped with difficulty.
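The variation step described above might be sketched as follows for n real-valued decision variables, assuming two parents per mating, the log-normal step-size mutation of the standard form and commonly recommended learning rates; the parent representation (x, sigma) and the function name are illustrative:

```python
import numpy as np

rng = np.random.default_rng()

def make_offspring(parent1, parent2, n):
    """Create one offspring; a parent is a pair (x, sigma) of numpy arrays."""
    x1, s1 = parent1
    x2, s2 = parent2

    # 1) Intermediate recombination of the parental mutation step sizes ...
    sigma = 0.5 * (s1 + s2)
    # ... followed by a log-normal mutation (self-adaptation of step sizes).
    tau_global = 1.0 / np.sqrt(2.0 * n)          # commonly recommended
    tau_local = 1.0 / np.sqrt(2.0 * np.sqrt(n))  # learning rates
    sigma = sigma * np.exp(tau_global * rng.standard_normal()
                           + tau_local * rng.standard_normal(n))

    # 2) Discrete recombination of the decision variables:
    #    each component is taken from one of the two parents at random.
    take_first = rng.integers(0, 2, size=n).astype(bool)
    x = np.where(take_first, x1, x2)

    # 3) Mutation of the decision variables, using the new step sizes
    #    as standard deviations of the normal distribution.
    x = x + sigma * rng.standard_normal(n)
    return x, sigma
```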
The ES knows two variants of best selection for the generation of the next parent population (μ - number of parents, λ - number of offspring):[2] in the (μ, λ)-ES the μ best individuals are selected from the λ offspring only, whereas in the (μ + λ)-ES they are selected from the union of parents and offspring. Bäck and Schwefel recommend that the value of λ should be about seven times μ, whereby μ must not be chosen too small because of the strong selection pressure.
The selection of the next generation in evolution strategies is deterministic and only based on the fitness rankings, not on the actual fitness values.
The resulting algorithm is therefore invariant with respect to monotonic transformations of the objective function.
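Both selection schemes amount to a rank-based truncation, as in the following sketch (minimization and the function names are assumptions of the example). Because only the ordering induced by fitness is used, replacing fitness by any strictly monotonically increasing transformation of it leaves the selected individuals unchanged, which is the invariance noted above:

```python
def select_comma(parents, offspring, fitness, mu):
    """(mu, lambda)-selection: keep the mu best offspring, discard all parents."""
    return sorted(offspring, key=fitness)[:mu]

def select_plus(parents, offspring, fitness, mu):
    """(mu + lambda)-selection: keep the mu best of parents and offspring."""
    return sorted(parents + offspring, key=fitness)[:mu]
```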
The simplest evolution strategy, the (1 + 1)-ES, operates on a population of size two: the current point (parent) and the result of its mutation. Only if the mutant's fitness is at least as good as that of the parent does it become the parent of the next generation; otherwise the mutant is disregarded. More generally, λ mutants can be generated and compete with the parent, which is called a (1 + λ)-ES. In a (1, λ)-ES the best mutant becomes the parent of the next generation while the current parent is always disregarded.
For some of these variants, proofs of linear convergence (in a stochastic sense) have been derived on unimodal objective functions.[23][24]
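A minimal (1 + 1)-ES might look as follows; the fixed mutation strength (practical versions usually adapt it, for example with the 1/5 success rule), the sphere objective and all names are illustrative choices for the sketch:

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.1, max_iter=10_000, seed=0):
    """(1 + 1)-ES: the mutant replaces the parent only if it is at least as good."""
    rng = np.random.default_rng(seed)
    parent = np.asarray(x0, dtype=float)
    f_parent = f(parent)
    for _ in range(max_iter):
        mutant = parent + sigma * rng.standard_normal(parent.shape)
        f_mutant = f(mutant)
        if f_mutant <= f_parent:          # accept ties as well
            parent, f_parent = mutant, f_mutant
    return parent, f_parent

# Example: minimizing the sphere function in 5 dimensions.
best_x, best_f = one_plus_one_es(lambda x: float(np.sum(x ** 2)), x0=np.ones(5))
```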
Individual step sizes for each coordinate, or correlations between coordinates, which are essentially defined by an underlying covariance matrix, are controlled in practice either by self-adaptation or by covariance matrix adaptation (CMA-ES).[21] When the mutation step is drawn from a multivariate normal distribution using an evolving covariance matrix, it has been hypothesized that this adapted matrix approximates the inverse Hessian of the search landscape.
This hypothesis has been proven for a static model relying on a quadratic approximation.
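The sampling half of this idea can be sketched as follows: the mutation step is drawn from a multivariate normal distribution N(0, C) by means of a Cholesky factor of the covariance matrix C. How C is adapted from generation to generation, i.e. the actual CMA-ES update, is not shown here, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng()

def mutate_correlated(x, C, step_size=1.0):
    """Mutate x with a step drawn from N(0, step_size**2 * C).

    C must be a symmetric positive-definite covariance matrix; its
    Cholesky factor turns independent standard normal samples into
    correlated ones.
    """
    L = np.linalg.cholesky(C)
    z = rng.standard_normal(len(x))
    return x + step_size * (L @ z)

# Example: a covariance matrix that favours steps along the direction x1 = x2.
C = np.array([[1.0, 0.8],
              [0.8, 1.0]])
y = mutate_correlated(np.zeros(2), C)
```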