Normalization model

David Heeger developed the model in the early 1990s,[2] and later refined it together with Matteo Carandini and J. Anthony Movshon.

The model computes a neuron's response by dividing its driven input by a normalization signal: in the denominator, a constant plus a measure of local stimulus contrast, typically the pooled activity of a population of nearby neurons.
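This divisive computation can be sketched as follows. The code below is a minimal illustration of the canonical normalization equation, assuming the common form in which drives are raised to a power n and divided by a semi-saturation constant sigma plus the summed pool activity; the parameter names and values are illustrative, not taken from the source.

```python
import numpy as np

def normalize(drives, sigma=1.0, n=2.0, gamma=1.0):
    """Divisively normalize a vector of neural drives.

    Each neuron's output is its driven input raised to a power n,
    divided by a constant (sigma**n) plus the summed, similarly
    exponentiated drives of the normalization pool.
    """
    drives = np.asarray(drives, dtype=float)
    # Denominator: constant plus pooled local activity (a contrast measure)
    pooled = sigma**n + np.sum(drives**n)
    return gamma * drives**n / pooled

# Doubling the stimulus drive yields a less-than-quadrupled response
# (with n = 2, an unnormalized squaring nonlinearity would quadruple it):
weak = normalize([1.0, 0.5, 0.2])
strong = normalize([2.0, 1.0, 0.4])
print(weak, strong)
```

Note how the shared denominator compresses responses as overall contrast grows, which is the response-saturation behavior the model was built to capture.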

Although the normalization model was initially developed to explain responses in the primary visual cortex, normalization is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions, including the representation of odors in the olfactory bulb,[4] the modulatory effects of visual attention, the encoding of value, and the integration of multisensory information.

Divisive normalization reduces the redundancy in natural stimulus statistics[6] and is sometimes viewed as an implementation of the efficient coding principle.

Formally, divisive normalization is an information-maximizing code for stimuli following a multivariate Pareto distribution.