Nodes in the attractor network converge toward a pattern that may be fixed-point (a single state), cyclic (with regularly recurring states), chaotic (locally but not globally unstable), or random (stochastic).[1]
Attractor networks have largely been used in computational neuroscience to model neuronal processes such as associative memory[2] and motor behavior, as well as in biologically inspired methods of machine learning.
An attractor network contains a set of n nodes, which can be represented as vectors in a d-dimensional space where n>d.
Cyclic attractors evolve the network toward a set of states in a limit cycle, which is repeatedly traversed.
Attractor networks are initialized based on the input pattern.
The basin of attraction is the set of states that result in movement toward a given attractor.
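To make this concrete, the minimal sketch below (the three-node weight matrix and sign-update rule are illustrative assumptions, not taken from any particular model) treats the network state as a vector, iterates a simple update from every possible starting state, and groups the states by the fixed point they settle into, which spells out each attractor's basin of attraction.

```python
# Minimal sketch: enumerate the basins of attraction of a tiny binary network.
import itertools
import numpy as np

W = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])     # assumed symmetric weights

def step(state):
    """Synchronous sign update; a zero net input keeps the old value."""
    h = W @ state
    return np.where(h > 0, 1.0, np.where(h < 0, -1.0, state))

def settle(state, max_iters=20):
    """Iterate until the state stops changing (a fixed-point attractor)."""
    for _ in range(max_iters):
        nxt = step(state)
        if np.array_equal(nxt, state):
            break
        state = nxt
    return tuple(int(v) for v in state)

basins = {}
for bits in itertools.product((-1.0, 1.0), repeat=3):
    attractor = settle(np.array(bits))
    basins.setdefault(attractor, []).append(tuple(int(b) for b in bits))

for attractor, states in basins.items():
    print(f"attractor {attractor}  <-  basin of attraction {states}")
```

In this toy example there are two fixed points, (1, 1, -1) and (-1, -1, 1), and each is reached from four of the eight possible initial states.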
The fixed point attractor naturally follows from the Hopfield network.
Conventionally, fixed points in this model represent encoded memories.
These models have been used to explain associative memory, classification, and pattern completion.
Hopfield nets contain an underlying energy function[4] that allows the network to asymptotically approach a stationary state.
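As a hedged illustration of these properties, the sketch below implements a small Hopfield-style network (the stored patterns, network size, and update order are assumptions made for the example): Hebbian outer-product weights encode the patterns as fixed points, the energy E(s) = -1/2 s^T W s does not increase under asynchronous updates, and a probe with one corrupted bit settles back onto the stored pattern.

```python
# Hedged sketch of a small Hopfield network with an explicit energy function.
import numpy as np

rng = np.random.default_rng(0)
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]], dtype=float)  # assumed memories

n = patterns.shape[1]
W = (patterns.T @ patterns) / n       # Hebbian outer-product learning
np.fill_diagonal(W, 0.0)              # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

def recall(probe, sweeps=10):
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):  # asynchronous, random-order updates
            h = W[i] @ s
            if h != 0:                # a zero net input keeps the old value
                s[i] = 1.0 if h > 0 else -1.0
    return s

noisy = patterns[0].copy()
noisy[0] *= -1                        # flip one bit to simulate a noisy cue
print("energy before:", energy(noisy))
restored = recall(noisy)
print("energy after: ", energy(restored))
print("recovered first stored pattern:", np.array_equal(restored, patterns[0]))
```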
Another class of attractor network features predefined weights that are probed by different types of input.
If this stable state is different during and after the input, it serves as a model of associative memory.
These line attractors, or neural integrators, describe eye position in response to stimuli.
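The sketch below gives one hedged reading of such a neural integrator (the leaky-integrator form, the recurrent gain of exactly 1, and the time constants are illustrative assumptions): because the recurrent feedback cancels the leak, every activity level is a fixed point along a line, so a transient velocity pulse is integrated into a persistent signal that can stand for eye position.

```python
# Hedged sketch of a line attractor acting as a neural integrator.
import numpy as np

tau = 0.1        # population time constant in seconds (assumed)
w_rec = 1.0      # recurrent gain tuned to 1.0 -> line attractor (perfect hold)
dt = 0.001

def simulate(velocity_input, r0=0.0):
    r = r0
    trace = []
    for u in velocity_input:
        # dr/dt = (-r + w_rec * r + u) / tau ; with w_rec = 1 this reduces
        # to dr/dt = u / tau, i.e. the activity integrates the input.
        r += dt * (-r + w_rec * r + u) / tau
        trace.append(r)
    return np.array(trace)

# A brief velocity pulse followed by silence: the activity steps up and then
# holds, mimicking persistent eye-position coding.
u = np.zeros(2000)
u[200:300] = 1.0
trace = simulate(u)
print("activity before the pulse:       ", round(trace[150], 3))
print("activity held long after the pulse:", round(trace[-1], 3))
```

If w_rec is set slightly below 1, the held value slowly decays, which illustrates the fine tuning such line attractors require.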
Cyclic attractors are instrumental in modelling central pattern generators, the neural circuits that govern rhythmic behaviors in animals such as chewing, walking, and breathing.
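A minimal sketch of a cyclic attractor in this spirit is a two-unit "half-centre" oscillator in the style of Matsuoka's CPG model; the parameter values below are illustrative assumptions, and the point is only that mutual inhibition plus adaptation pulls the network onto a limit cycle of alternating bursts.

```python
# Hedged sketch of a cyclic attractor: a Matsuoka-style half-centre oscillator.
import numpy as np

tau_r, tau_a = 0.25, 0.5   # rise and adaptation time constants (assumed)
beta, w, drive = 2.5, 2.5, 1.0
dt = 0.005

def simulate(steps=4000, x=(0.1, 0.0), a=(0.0, 0.0)):
    x, a = list(x), list(a)
    outputs = []
    for _ in range(steps):
        y = [max(0.0, xi) for xi in x]               # rectified firing rates
        for i, j in ((0, 1), (1, 0)):                # mutual inhibition
            dx = (-x[i] - beta * a[i] - w * y[j] + drive) / tau_r
            da = (-a[i] + y[i]) / tau_a              # self-adaptation
            x[i] += dt * dx
            a[i] += dt * da
        outputs.append(y)
    return np.array(outputs)

out = simulate()
# Alternating activity: when one unit bursts the other is suppressed, and the
# pattern repeats with a roughly constant period (a limit cycle).
active = (out[2000:] > 0.05)
print("fraction of time unit 0 active:", active[:, 0].mean().round(2))
print("fraction of time unit 1 active:", active[:, 1].mean().round(2))
print("both active simultaneously:    ", (active[:, 0] & active[:, 1]).mean().round(2))
```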
While chaotic attractors could offer the benefit of more quickly converging upon limit cycles, there is as yet no experimental evidence to support this theory.
The observed activity of grid cells is successfully explained by assuming the presence of ring attractors in the medial entorhinal cortex.[6]
Recently, it has been proposed that similar ring attractors are present in the lateral portion of the entorhinal cortex and that their role extends to registering new episodic memories.
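The following sketch shows the generic bump dynamics behind such proposals (the connectivity profile, saturating rate function, and all parameter values are illustrative assumptions rather than a model of entorhinal circuitry): local excitation and broad inhibition on a ring let a localized bump of activity persist after a brief cue, and the bump's angular position is the continuous variable the attractor stores, such as heading direction or grid phase.

```python
# Hedged sketch of a ring attractor "bump" network.
import numpy as np

N = 120
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
diff = theta[:, None] - theta[None, :]

JE, JI, kappa = 5.0, 1.85, 8.0                      # assumed coupling strengths
# Local excitation minus uniform inhibition, scaled as an integral on the ring.
W = (JE * np.exp(kappa * (np.cos(diff) - 1.0)) - JI) * (2 * np.pi / N)

tau, dt = 0.1, 0.005
r = np.zeros(N)
cue_angle = np.pi / 3
cue = 0.5 * np.exp(kappa * (np.cos(theta - cue_angle) - 1.0))

for step in range(3000):
    inp = cue if step < 400 else 0.0                # the cue is removed early
    drive = np.clip(W @ r + inp, 0.0, 1.0)          # saturating rate function
    r += dt / tau * (-r + drive)

peak_angle = theta[np.argmax(r)]
print("cue angle:             ", round(cue_angle, 2))
print("bump angle at the end: ", round(float(peak_angle), 2))
print("neurons above half-max:", int(np.sum(r > 0.5 * r.max())))
```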
However, they have been largely impractical for computational purposes because of difficulties in designing the attractor landscape and network wiring, resulting in spurious attractors and poorly conditioned basins of attraction.
Furthermore, training attractor networks is generally computationally expensive compared to other methods such as k-nearest neighbor classifiers.[8]
However, their role in the general understanding of different biological functions, such as locomotion, memory, and decision-making, makes them more attractive as biologically realistic models.
These recurrent networks are initialized by the input and tend toward a fixed-point attractor.
Such a network models stimulus priming by allowing quicker convergence toward a recently visited attractor.
This algorithm uses the EM method above, with the following modifications: (1) early termination of the algorithm when the attractor's activity is most distributed, or when high entropy suggests a need for additional memories, and (2) the ability to update the attractors themselves.
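Since the published update equations are not reproduced here, the following is only a rough sketch of how an EM-style settling procedure with these two modifications might look; the responsibility formula, entropy threshold, annealing schedule, and attractor learning rate are all illustrative assumptions, not the published algorithm.

```python
# Rough, heavily hedged sketch of an EM-style localist attractor update.
import numpy as np

def entropy(q):
    q = np.clip(q, 1e-12, 1.0)
    return -np.sum(q * np.log(q))

def em_settle(y, attractors, sigma=1.0, lr=0.05, entropy_limit=None,
              iters=50, update_attractors=True):
    """Pull state y toward explicit attractor points via soft E/M steps."""
    attractors = attractors.copy()
    if entropy_limit is None:
        entropy_limit = 0.9 * np.log(len(attractors))   # "too distributed"
    for _ in range(iters):
        # E-step: soft responsibility of each attractor for the current state.
        d2 = np.sum((attractors - y) ** 2, axis=1)
        q = np.exp(-d2 / (2 * sigma ** 2))
        q /= q.sum()
        # Modification (1): stop early when activity stays highly distributed,
        # taken here as a signal that a new memory may be needed.
        if entropy(q) > entropy_limit:
            return y, attractors, "new-memory-suggested"
        # M-step: move the state toward the responsibility-weighted attractors.
        y = q @ attractors
        # Modification (2): let the attractors themselves drift toward states
        # they account for, so the stored memories are also updated.
        if update_attractors:
            attractors += lr * q[:, None] * (y - attractors)
        sigma *= 0.95          # anneal toward a hard, localist assignment
    return y, attractors, "converged"

attractors = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 4.0]])  # assumed memories
y, attractors, status = em_settle(np.array([2.4, 2.7]), attractors)
print(status, y.round(2))
```

In this toy run the probe state is pulled onto the nearest stored attractor; a probe that stays roughly equidistant from all attractors keeps the responsibilities spread out and triggers the early-termination branch instead.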