Gaussian adaptation

In every cycle, a sufficiently large number of Gaussian-distributed points are sampled and tested for membership in the region of acceptability.
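
A minimal Python sketch of one such cycle, assuming a hypothetical indicator function for the region of acceptability (the ellipsoidal region and all numbers here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def region_of_acceptability(x):
    """Hypothetical indicator s(x): 1 if x is acceptable, 0 otherwise.
    An ellipsoidal region is used here purely for illustration."""
    return x @ x <= 4.0

# Current mean m and moment matrix M of the Gaussian
m = np.zeros(2)
M = np.eye(2)

# One cycle: sample many Gaussian points and test each for membership
samples = rng.multivariate_normal(m, M, size=1000)
accepted = np.array([region_of_acceptability(x) for x in samples])

# The fraction of accepted samples estimates the hitting probability P
P = accepted.mean()
print(f"estimated P = {P:.3f}")
```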

In this case, the rules of genetic variation, such as crossover, inversion and transposition, may be seen as random number generators for the phenotypes.

The process may get stuck for millions of years at B or C, as long as the hollows to the right of these points remain and the mutation rate is too small.

If the mutation rate is sufficiently high, the disorder or variance may increase and the parameter(s) may become distributed like the green bell curve.

Because the hollows to the right of B and C have now disappeared, the process may continue up to the peaks at D. But of course the landscape puts a limit on the disorder or variability.
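
The effect can be illustrated with a toy simulation. The sketch below uses a simple greedy hill climber with Gaussian mutations as a stand-in for the adaptive process, on an invented 1-D landscape with peaks of increasing height (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    """Toy 1-D landscape: peaks of increasing height at x = 1, 3 and 5,
    separated by hollows (illustrative only)."""
    peaks = [(1.0, 1.0), (3.0, 1.5), (5.0, 2.0)]
    return sum(h * np.exp(-((x - c) ** 2) / 0.1) for c, h in peaks)

def climb(sigma, steps=20000, x0=1.0):
    """Greedy hill climber with Gaussian mutations of width sigma."""
    x = x0
    for _ in range(steps):
        y = x + sigma * rng.standard_normal()
        if fitness(y) >= fitness(x):   # accept only non-worsening moves
            x = y
    return x

for sigma in (0.05, 1.0):
    x = climb(sigma)
    print(f"sigma={sigma}: final x ~ {x:.2f}, fitness {fitness(x):.2f}")
# A small mutation width stays trapped near the first peak (B), while a
# sufficiently large one crosses the hollows and reaches the highest peak (D).
```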

In reality, however, the number of individuals is always limited, which gives rise to uncertainty in the estimation of m (the mean) and M (the moment matrix of the Gaussian).
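
A small numerical illustration of this uncertainty (the distribution parameters are invented for the example): estimates of m and M from a limited population scatter around the true values, and the scatter shrinks as the population grows.

```python
import numpy as np

rng = np.random.default_rng(2)

true_m = np.array([1.0, -1.0])
true_M = np.array([[2.0, 0.5], [0.5, 1.0]])

# Estimate m and M from populations of increasing size and report
# how far the estimates fall from the true values.
for n in (10, 100, 10000):
    x = rng.multivariate_normal(true_m, true_M, size=n)
    m_hat = x.mean(axis=0)
    M_hat = np.cov(x, rowvar=False)
    print(f"n={n:6d}  |m_hat - m| = {np.linalg.norm(m_hat - true_m):.3f}  "
          f"|M_hat - M| = {np.linalg.norm(M_hat - true_M):.3f}")
```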

M may in principle be updated after every step y leading to a feasible point x = m + y, according to M(i + 1) = (1 − b) M(i) + b yyT, where yT is the transpose of y and b << 1 is another suitable constant.
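
A direct transcription of this update rule into Python (the value of b is illustrative):

```python
import numpy as np

def update_M(M, y, b=0.01):
    """One-step update of the moment matrix after a feasible step y:
    M <- (1 - b) * M + b * y y^T, with b << 1."""
    return (1.0 - b) * M + b * np.outer(y, y)

M = np.eye(2)
y = np.array([0.3, -0.2])   # a step that led to a feasible point
M = update_M(M, y)
print(M)
```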

In practice it will suffice to update W only, where W is defined by WWT = M, y = Wg, and g is normally distributed with a moment matrix proportional to the unit matrix U: W(i + 1) = (1 − b) W(i) + b ygT. This is the formula used in a simple 2-dimensional model of a brain satisfying the Hebbian rule of associative learning; see the next section (Kjellström, 1996 and 1999).
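
A sketch of this W-only update, assuming for simplicity a unit moment matrix for g (the value of b is again illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def update_W(W, y, g, b=0.01):
    """Hebbian-style update of W (where M = W W^T):
    W <- (1 - b) * W + b * y g^T, adapting M indirectly
    (terms of order b^2 are neglected)."""
    return (1.0 - b) * W + b * np.outer(y, g)

W = np.eye(2)
g = rng.standard_normal(2)   # g is N(0, U), with U the unit matrix
y = W @ g                    # the step actually taken
# Suppose y led to a feasible point; then W is updated:
W = update_W(W, y, g)
print(W @ W.T)               # the moment matrix M implied by the new W
```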

The metaphor with the mental landscape is based on the assumption that certain signal patterns give rise to better well-being or performance.

For instance, the control of a group of muscles leads to a better pronunciation of a word or performance of a piece of music.

In this simple model it is assumed that the brain consists of interconnected components that may add, multiply and delay signal values.

This is the basis of the theory of digital filters and neural networks, and also of many brain models (Levine, 1991).
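
A toy illustration: a first-order digital filter built from exactly these three operations (the coefficients are arbitrary):

```python
# A first-order network: y[n] = a * x[n] + c * x[n-1] uses exactly
# these three operations (multiply, add, delay).
def filter_step(x_now, x_prev, a=0.7, c=0.3):
    return a * x_now + c * x_prev

signal = [1.0, 0.5, -0.2, 0.8]
x_prev = 0.0                     # the delay element's initial state
out = []
for x in signal:
    out.append(filter_step(x, x_prev))
    x_prev = x                   # the delay: remember the last sample
print(out)
```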

The brain stem also constitutes a disordered structure surrounded by more ordered shells (Bergström, 1969), and, according to the central limit theorem, the sum of signals from many neurons may be Gaussian distributed.
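
The central-limit effect is easy to demonstrate numerically. In the sketch below each "neuron" contributes a uniformly distributed signal (an arbitrary choice), and the standardized sum matches a standard normal at a few reference quantiles:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sum of signals from many "neurons", each with an arbitrary (here
# uniform) distribution; by the central limit theorem the sum is
# approximately Gaussian.
n_neurons, n_trials = 200, 10000
sums = rng.uniform(-1.0, 1.0, size=(n_trials, n_neurons)).sum(axis=1)

# Standardize and compare a few quantiles with the standard normal's
z = (sums - sums.mean()) / sums.std()
for q, ref in [(0.5, 0.0), (0.8413, 1.0), (0.9772, 2.0)]:
    print(f"quantile {q}: sample {np.quantile(z, q):+.2f}, normal {ref:+.2f}")
```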

When a signal is accepted, the contact areas in the synapses are updated according to the formulas above, in agreement with the Hebbian theory.

Gaussian adaptation as an evolutionary model of the brain, obeying the Hebbian theory of associative learning, offers an alternative view of free will: the process is able to maximize the mean fitness of signal patterns in the brain by climbing a mental landscape, in analogy with phenotypic evolution.

This measure of efficiency is valid for a large class of random search processes, provided that certain conditions are met.

This condition may be approximately fulfilled when the moment matrix of the Gaussian has been adapted for maximum average information to some region of acceptability, because linear transformations of the whole process do not affect efficiency.

Then, the following theorem may be proved: all measures of efficiency that satisfy the conditions above are asymptotically proportional to −P log(P/q) when the number of dimensions increases, and are maximized by P = q exp(−1) (Kjellström, 1996 and 1999).
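
Differentiating gives dE/dP = −log(P/q) − 1, which vanishes at P = q exp(−1). The sketch below checks this numerically (q is set to an arbitrary illustrative value):

```python
import numpy as np

q = 0.5                          # illustrative value of the constant q

def efficiency(P):
    """E(P) proportional to -P log(P/q), as in the theorem above."""
    return -P * np.log(P / q)

# Numerical check that E is maximized at P = q * exp(-1):
P = np.linspace(1e-6, q, 100000)
P_best = P[np.argmax(efficiency(P))]
print(f"numerical argmax: {P_best:.5f}, q*exp(-1) = {q * np.exp(-1):.5f}")
# Analytically: dE/dP = -log(P/q) - 1 = 0  =>  P = q * exp(-1).
```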

In both cases the maximum likelihood method is used for estimation of mean values by adaptation, one sample at a time.
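
A sketch of such one-sample-at-a-time adaptation of the mean, using an exponentially weighted recursive update (the step size a and the target distribution are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def update_mean(m, x, a=0.01):
    """One-sample-at-a-time adaptation of the mean:
    m <- (1 - a) * m + a * x, with a << 1."""
    return (1.0 - a) * m + a * x

true_m = np.array([2.0, -1.0])
m = np.zeros(2)
for _ in range(5000):
    x = rng.multivariate_normal(true_m, np.eye(2))
    m = update_mean(m, x)
print(m)                         # drifts toward the true mean
```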