Connectionism

The first wave appeared in 1943 with Warren Sturgis McCulloch and Walter Pitts, who focused on understanding neural circuitry through a formal, mathematical approach,[2] and with Frank Rosenblatt, who, while working at the Cornell Aeronautical Laboratory, published the 1958 paper "The Perceptron: A Probabilistic Model For Information Storage and Organization in the Brain" in Psychological Review.[3]

The first wave ended with the 1969 book by Marvin Minsky and Seymour Papert on the limitations of the original perceptron idea, which contributed to discouraging major funding agencies in the US from investing in connectionist research.
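To make the perceptron idea concrete, here is a minimal sketch of Rosenblatt-style perceptron learning; the function name, learning rate, and the AND example are illustrative choices, not drawn from the cited sources.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Rosenblatt-style perceptron: a threshold unit with error-driven updates."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # all-or-nothing activation
            w += lr * (target - pred) * xi       # weights change only on mistakes
            b += lr * (target - pred)
    return w, b

# AND is linearly separable, so the procedure converges on it.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
w, b = train_perceptron(X, np.array([0, 0, 0, 1]))
print([int(xi @ w + b > 0) for xi in X])         # [0, 0, 0, 1]
```

Because a single thresholded layer can realize only linearly separable functions, the same procedure never converges on XOR, the kind of limitation Minsky and Papert made precise.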

The term connectionist model was reintroduced in a 1982 paper in the journal Cognitive Science by Jerome Feldman and Dana Ballard.

The second wave blossomed in the late 1980s, following the 1986 two-volume book on Parallel Distributed Processing by James L. McClelland, David E. Rumelhart et al., which introduced several improvements to the simple perceptron idea, such as intermediate processors (now known as "hidden layers") alongside input and output units, and the use of a sigmoid activation function instead of the old "all-or-nothing" function.
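Both improvements can be illustrated with a short forward-pass sketch; the layer sizes and random weights below are arbitrary placeholders.

```python
import numpy as np

def sigmoid(z):
    # Smooth, differentiable replacement for the "all-or-nothing" step function
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input -> hidden ("intermediate processors")
W2 = rng.normal(size=(3, 1))   # hidden -> output

def forward(x):
    hidden = sigmoid(x @ W1)   # graded activations rather than 0/1 spikes
    return sigmoid(hidden @ W2)

print(forward(np.array([1.0, 0.0])))
```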

Their work built upon that of John Hopfield, who was a key figure investigating the mathematical characteristics of sigmoid activation functions.

The connectionist approach has been seen as an alternative to GOFAI and the classical theories of mind based on symbolic computation, but the extent to which the two approaches are compatible has been the subject of much debate since their inception.

Neural networks follow two basic principles: (1) any mental state can be described as an (N)-dimensional vector of numeric activation values over the neural units in a network, and (2) memory and learning are created by modifying the weights of the connections between those units, according to some learning rule or algorithm, such as Hebbian learning. Most of the variety among the models comes from the interpretation of the units, the definition of activation, and the choice of learning algorithm. Connectionist work in general does not need to be biologically realistic.[10][11][12][13][14][15][16]
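A toy sketch of these two principles, assuming a four-unit network and a simple Hebbian rule; the sizes and learning rate are illustrative only.

```python
import numpy as np

# Principle 1: a mental state as an N-dimensional activation vector.
state = np.array([0.9, 0.1, 0.4, 0.7])          # N = 4 units

# Principle 2: memory and learning as changes to a weight matrix.
weights = np.zeros((4, 4))

def hebbian_update(weights, pre, post, lr=0.1):
    # Hebb's rule: strengthen connections between co-active units
    return weights + lr * np.outer(post, pre)

weights = hebbian_update(weights, state, state)
print(weights.round(3))
```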

One area where connectionist models are thought to be biologically implausible is with respect to the error-propagation networks that are needed to support learning.[17][18] However, error propagation can explain some of the biologically generated electrical activity seen at the scalp in event-related potentials such as the N400 and P600,[19] and this provides some biological support for one of the key assumptions of connectionist learning procedures.
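For concreteness, the following is a minimal error-propagation (backpropagation) sketch for a two-layer sigmoid network trained on XOR; the hidden-layer width, learning rate, and iteration count are arbitrary choices, and plain gradient descent like this can occasionally stall in a poor local minimum.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: the task a single-layer perceptron cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    delta_out = (out - y) * out * (1 - out)      # error at the output...
    delta_h = (delta_out @ W2.T) * h * (1 - h)   # ...propagated backwards
    W2 -= 0.5 * h.T @ delta_out                  # gradient-descent updates
    b2 -= 0.5 * delta_out.sum(axis=0)
    W1 -= 0.5 * X.T @ delta_h
    b1 -= 0.5 * delta_h.sum(axis=0)

print(out.round(2))   # typically approaches [[0], [1], [1], [0]]
```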


During the first wave, the research group led by Bernard Widrow empirically searched for methods to train two-layered ADALINE networks (MADALINE), with limited success.
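ADALINE's Widrow-Hoff (least-mean-squares) rule can be sketched as follows; the task and hyperparameters are illustrative. The difficulty Widrow's group faced was that this rule uses the error on the linear sum, which does not extend in any obvious way through a second thresholded layer.

```python
import numpy as np

def train_adaline(X, d, epochs=200, lr=0.05):
    """Widrow-Hoff LMS rule: adjust weights against the *linear* output,
    before the threshold is applied."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, d):
            err = target - (xi @ w + b)   # error on the linear sum
            w += lr * err * xi
            b += lr * err
    return w, b

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
w, b = train_adaline(X, np.array([-1., -1., -1., 1.]))  # AND with +/-1 targets
print(np.sign(X @ w + b))   # thresholded output: [-1. -1. -1.  1.]
```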

Some key publications of the second wave included Hopfield's 1982 paper, which popularized Hopfield networks,[37] the 1986 paper that popularized backpropagation,[38] and the 1986 two-volume book on Parallel Distributed Processing (PDP) by James L. McClelland, David E. Rumelhart et al. discussed above.[3]
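A minimal sketch of the Hopfield network idea, assuming ±1 units, Hebbian storage, and asynchronous threshold updates; the pattern and sizes are invented for illustration.

```python
import numpy as np

def store(patterns):
    """Hebbian storage of +/-1 patterns in a symmetric weight matrix."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)               # no self-connections
    return W

def recall(W, x, steps=10):
    """Asynchronous updates descend the network's energy function."""
    x = x.copy()
    for _ in range(steps):
        for i in range(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

pattern = np.array([1, -1, 1, -1, 1, -1])
W = store(pattern[None, :])
noisy = pattern.copy(); noisy[0] = -1    # corrupt one bit
print(recall(W, noisy))                  # recovers the stored pattern
```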

Another important series of publications proved that neural networks are universal function approximators, which also lent the approach some mathematical respectability.
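One standard form of the universal-approximation result is the statement below, following Cybenko's 1989 version for sigmoidal units; that these are among the publications alluded to here is an assumption.

```latex
% Universal approximation (Cybenko-style statement):
% finite sums of sigmoidal units are dense in C([0,1]^m).
Let $\sigma$ be a continuous sigmoidal function. For every continuous
$f : [0,1]^m \to \mathbb{R}$ and every $\varepsilon > 0$ there exist
$N \in \mathbb{N}$, vectors $w_i \in \mathbb{R}^m$, and scalars
$\alpha_i, b_i \in \mathbb{R}$ such that
\[
  \left| f(x) - \sum_{i=1}^{N} \alpha_i \,
  \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
  \qquad \text{for all } x \in [0,1]^m .
\]
```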

Critics such as Jerry Fodor and Zenon Pylyshyn argued that connectionism, as it was then developing, threatened to obliterate what they saw as the progress being made in the fields of cognitive science and psychology by the classical approach of computationalism.

The debate was largely centred on logical arguments about whether connectionist networks could produce the syntactic structure observed in systematic, rule-governed reasoning.

Some researchers suggest that the gap between the two analyses is a consequence of connectionist mechanisms giving rise to emergent phenomena that may ultimately be describable in computational terms.

The subsymbolic paradigm, or connectionism in general, would thus have to explain the existence of systematicity and compositionality without relying on the mere implementation of a classical cognitive architecture.

This challenge implies a dilemma: if the subsymbolic paradigm could contribute nothing to the systematicity and compositionality of mental representations, it would be insufficient as a basis for an alternative theory of cognition, while if its contribution to systematicity required mental processes grounded in symbolic computation, the resulting theory of cognition would be, at best, an implementation architecture for the classical model of symbolic computation and thus not a genuine alternative theory of cognition.[58]

This challenge has been met in modern connectionism, for example, not only by Smolensky's "Integrated Connectionist/Symbolic (ICS) Cognitive Architecture",[59][60] but also by Werning and Maye's "Oscillatory Networks".
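As a toy illustration of how a connectionist representation can carry compositional structure, here is a sketch of the tensor-product binding idea underlying Smolensky's ICS architecture; the filler and role vectors and the example sentence are invented, and real ICS representations are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical filler vectors for the constituents of "John loves Mary".
fillers = {name: rng.normal(size=8) for name in ("John", "Mary")}

# Orthonormal role vectors make unbinding exact in this toy example.
roles = {"agent": np.array([1., 0., 0., 0.]),
         "patient": np.array([0., 1., 0., 0.])}

# A structure is the superposition of filler (outer product) role bindings.
structure = (np.outer(fillers["John"], roles["agent"]) +
             np.outer(fillers["Mary"], roles["patient"]))

# Unbinding: contracting with a role vector recovers the bound filler.
print(np.allclose(structure @ roles["agent"], fillers["John"]))  # True
```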

[Figure: A 'second wave' connectionist (ANN) model with a hidden layer]