Stephen Grossberg

Stephen Grossberg (born December 31, 1939) is a cognitive scientist, theoretical and computational psychologist, neuroscientist, mathematician, biomedical engineer, and neuromorphic technologist.

Grossberg received a PhD in mathematics from Rockefeller University in 1967 for a thesis that proved the first global content-addressable memory theorems about the neural learning models that he had discovered at Dartmouth College.

Grossberg was hired in 1967 as an assistant professor of applied mathematics at MIT following strong recommendations from Mark Kac and Gian-Carlo Rota.

In 1969, Grossberg was promoted to associate professor after publishing a stream of conceptual and mathematical results about many aspects of neural networks, including a series of foundational articles in the Proceedings of the National Academy of Sciences between 1967 and 1971.

His work focuses upon the design principles and mechanisms that enable the behavior of individuals, or machines, to adapt autonomously in real time to unexpected environmental challenges.

This research has included neural models of vision and image processing; object, scene, and event learning, pattern recognition, and search; audition, speech and language; cognitive information processing and planning; reinforcement learning and cognitive-emotional interactions; autonomous navigation; adaptive sensory-motor control and robotics; self-organizing neurodynamics; and mental disorders.

As an undergraduate at Dartmouth, Grossberg introduced the paradigm of using nonlinear systems of differential equations to show how brain mechanisms can give rise to behavioral functions.[4]

This paradigm is helping to solve the classical mind/body problem, and it is the basic mathematical formalism used in biological neural network research today.

In particular, in 1957–1958, Grossberg discovered widely used equations for (1) short-term memory (STM), or neuronal activation (often called the Additive and Shunting models, or the Hopfield model after John Hopfield's 1984 application of the Additive model equation); (2) medium-term memory (MTM), or activity-dependent habituation (often called habituative transmitter gates, or depressing synapses after Larry Abbott's 1997 introduction of this term); and (3) long-term memory (LTM), or neuronal learning (often called gated steepest descent learning).
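For concreteness, representative forms of these three laws can be sketched as follows. The notation here (decay rates $A$, bounds $B$ and $D$, inputs $I_i$ and $J_i$, signal functions $f_k$ and $g_i$, and adaptive weights $w_{ki}$) follows common usage in Grossberg's papers rather than any single source:

\[ \text{STM (Additive): } \frac{dx_i}{dt} = -A_i x_i + \sum_k f_k(x_k)\,w_{ki} + I_i \]
\[ \text{STM (Shunting): } \frac{dx_i}{dt} = -A x_i + (B - x_i)\,I_i - (x_i + D)\,J_i \]
\[ \text{MTM (habituative gate): } \frac{dz_k}{dt} = G\,(H - z_k) - K\,f_k(x_k)\,z_k \]
\[ \text{LTM (gated steepest descent): } \frac{dw_{ki}}{dt} = f_k(x_k)\,\big[g_i(x_i) - w_{ki}\big] \]

In the shunting law, activities remain bounded in $[-D, B]$; in the MTM law, the transmitter $z_k$ accumulates toward $H$ and is inactivated in proportion to the signal $f_k(x_k)$; in the LTM law, the presynaptic signal $f_k(x_k)$ gates learning on and off, so the weight tracks $g_i(x_i)$ only when the gate is open.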

As part of his analysis of competitive dynamical systems, he introduced a Liapunov functional method to help classify their limiting and oscillatory dynamics by keeping track of which population is winning through time.
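This method was later generalized, with Michael Cohen, into the Cohen–Grossberg model and Liapunov function (1983). In a standard statement of that result, systems with symmetric interactions $c_{ij} = c_{ji}$ of the form

\[ \frac{dx_i}{dt} = a_i(x_i)\Big[b_i(x_i) - \sum_{j=1}^{n} c_{ij}\,d_j(x_j)\Big] \]

admit the Liapunov function

\[ V(x) = -\sum_{i=1}^{n} \int_0^{x_i} b_i(s)\,d_i'(s)\,ds + \frac{1}{2}\sum_{j,k=1}^{n} c_{jk}\,d_j(x_j)\,d_k(x_k), \]

whose time derivative

\[ \frac{dV}{dt} = -\sum_{i=1}^{n} a_i(x_i)\,d_i'(x_i)\Big[b_i(x_i) - \sum_{j} c_{ij}\,d_j(x_j)\Big]^2 \]

is nonpositive whenever $a_i \ge 0$ and each $d_i$ is nondecreasing, so trajectories converge toward equilibrium points that serve as content-addressable memories.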

Grossberg has introduced, and developed with his colleagues, fundamental concepts, mechanisms, models, and architectures across a wide spectrum of topics about brain and behavior.[6]

These models have provided unified and principled explanations of psychological and neurobiological data about processes including auditory and visual perception, attention, consciousness, cognition, cognitive-emotional interactions, and action in both typical, or normal, individuals and clinical patients.

This work models how particular brain breakdowns or lesions cause behavioral symptoms of mental disorders such as Alzheimer's disease, autism, amnesia, PTSD, ADHD, and visual and auditory agnosia and neglect, as well as disruptions of slow-wave sleep.

Given that there was little or no infrastructure to support the fields that he and other modeling pioneers were advancing, Grossberg founded several institutions aimed at providing interdisciplinary training, research, and publication outlets in the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology.

Among the models that Grossberg introduced and helped to develop is Adaptive Resonance Theory (ART).

ART is a cognitive and neural theory of how the brain can quickly learn, and stably remember and recognize, objects and events in a changing world.

ART also predicts how large enough mismatches between bottom-up feature patterns and top-down expectations can drive a memory search, or hypothesis testing, for recognition categories with which to better learn to classify the world.
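This match/reset/search cycle can be illustrated with a minimal ART-1-style sketch for binary inputs. This is a simplification for illustration, not Grossberg's full circuit; the names art1_learn, vigilance, beta, and max_categories are hypothetical choices, not his notation:

import numpy as np

def art1_learn(patterns, vigilance=0.75, beta=1e-6, max_categories=20):
    """Minimal ART-1-style categorization of binary patterns (illustrative sketch)."""
    prototypes = []   # learned top-down expectations, one per category
    labels = []       # category chosen for each input pattern
    for I in patterns:
        I = np.asarray(I, dtype=bool)      # assumes at least one nonzero bit
        # Bottom-up choice: rank categories by |I AND w| / (beta + |w|).
        scores = [np.logical_and(I, w).sum() / (beta + w.sum()) for w in prototypes]
        chosen = None
        for j in np.argsort(scores)[::-1]:
            w = prototypes[j]
            match = np.logical_and(I, w).sum() / I.sum()  # top-down match
            if match >= vigilance:
                # Resonance: fast learning prunes the prototype to I AND w.
                prototypes[j] = np.logical_and(I, w)
                chosen = int(j)
                break
            # Otherwise: mismatch reset; the memory search continues.
        if chosen is None and len(prototypes) < max_categories:
            prototypes.append(I.copy())    # recruit an uncommitted category
            chosen = len(prototypes) - 1
        labels.append(chosen)
    return prototypes, labels

For example, art1_learn([[1,1,0,0],[1,1,1,0],[0,0,1,1]], vigilance=0.6) groups the first two patterns into one category whose prototype is pruned to their shared features, while the third pattern triggers a mismatch reset and recruits a new category. Raising vigilance forces finer categories and lowering it yields broader ones, paralleling ART's account of how vigilance controls category generality.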

Grossberg's work on laminar computing asks how specializations of the shared laminar design of the cerebral neocortex embody different types of biological intelligence, including vision, speech and language, and cognition.

Embedding such designs in VLSI chips promises to enable the development of increasingly general-purpose adaptive autonomous algorithms for multiple applications.