Perceptrons: An Introduction to Computational Geometry is a book written by Marvin Minsky and Seymour Papert and published in 1969.
An expanded edition was published in 1988 (ISBN 9780262631112), after the revival of neural networks, containing a chapter dedicated to countering the criticisms made of the book in the 1980s.
The main subject of the book is the perceptron, a type of artificial neural network developed in the late 1950s and early 1960s.
[1] Rosenblatt and Minsky had known each other since adolescence, having studied one year apart at the Bronx High School of Science.
[2] They at one point became central figures in a debate within the AI research community, and are known to have engaged in loud discussions at conferences, yet they remained friendly.
The problem of connectedness is illustrated on the awkwardly colored cover of the book, intended to show how humans themselves have difficulty computing this predicate.
[5] One reviewer, Earl Hunt, noted that the XOR function is also difficult for humans to acquire in concept-learning experiments.
[7][8][9] An "expanded edition" was published in 1988, which adds a prologue and an epilogue to discuss the revival of neural networks in the 1980s, but no new scientific results.
[10] In 2017, the expanded edition was reprinted with a foreword by Léon Bottou that discusses the book from the perspective of someone working in deep learning.
The perceptron is a neural net developed by psychologist Frank Rosenblatt in 1958 and is one of the most famous machines of its period.
[11][12] In 1960, Rosenblatt and colleagues were able to show that the perceptron could, in finitely many training cycles, learn any task that its parameters could embody.[12]
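The learning procedure behind this result can be sketched in modern terms. The following is a minimal illustration of a Rosenblatt-style error-correction update; the function name, the pass limit, and the AND example are illustrative choices, not taken from Rosenblatt's papers or from the book.

```python
# Minimal sketch of a Rosenblatt-style perceptron learning rule (illustrative,
# not the book's formulation): weights are nudged toward each misclassified
# example until every training example is classified correctly.

def train_perceptron(samples, labels, passes=100):
    """samples: list of feature tuples; labels: +1 or -1 for each sample."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(passes):
        errors = 0
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:          # misclassified (or on the boundary)
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:                      # no mistakes in a full pass: done
            break
    return w, b

# Example: the linearly separable AND function (label +1 only for (1, 1)).
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [-1, -1, -1, +1])
print(w, b)
```

When the task is linearly separable, as in this AND example, the convergence theorem guarantees that the correction step fires only finitely many times.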
During this period, neural net research was a major approach to the brain-machine issue, one taken by a significant number of individuals.
[12] Reports by the New York Times and statements by Rosenblatt claimed that neural nets would soon be able to see images, beat humans at chess, and reproduce.
[13] Different groups found themselves competing for funding and people, and their demand for computing power far outpaced available supply.
[14] Perceptrons: An Introduction to Computational Geometry is a book of thirteen chapters grouped into three sections.
[15][16] Minsky and Papert took as their subject the abstract versions of a class of learning devices which they called perceptrons, "in recognition of the pioneer work of Frank Rosenblatt".
To the authors, this implied that "each association unit could receive connections only from a small part of the input area".
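The restriction can be stated compactly. The following is a paraphrase of the book's formal setup in modern notation rather than a quotation:

```latex
% Paraphrase of the book's setup (the notation here is illustrative, not verbatim):
% a perceptron decides a predicate \psi of a figure X on a retina R by
% thresholding a weighted sum of partial predicates \varphi drawn from a family \Phi.
\psi(X) = \left[\; \sum_{\varphi \in \Phi} \alpha_{\varphi}\,\varphi(X) > \theta \;\right],
\qquad X \subseteq R
```

Here each partial predicate depends only on a bounded set of retina points; the size of the largest such set is what the book calls the order of the predicate, and many of the book's results are bounds on this order for predicates such as parity and connectedness.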
Hardware for realizing linear threshold logic included magnetic core, resistor-transistor, parametron, resistor-tunnel diode, and multiple coil relay.
[26] There were also theoretical studies on the upper and lower bounds on the minimum number of perceptron units necessary to realize any Boolean function.
This was contrary to a hope held by some researchers[citation needed] of relying mostly on networks with a few layers of "local" neurons, each one connected only to a small number of inputs.
A feed-forward machine with "local" neurons is much easier to build and use than a larger, fully connected neural network, so researchers at the time concentrated on these instead of on more complicated models[citation needed].
Some other critics, notably Jordan Pollack, note that what was a small proof, showing that a global property (parity) cannot be detected by local detectors, was interpreted by the community as a rather successful attempt to bury the whole idea.
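The flavor of that result can be seen in its smallest case, parity on two inputs (XOR). The following standard argument is a sketch in modern notation, not a quotation from the book:

```latex
% A single linear threshold unit outputs 1 exactly when w_1 x_1 + w_2 x_2 > \theta.
% Forcing it to agree with XOR on all four inputs gives:
\begin{aligned}
f(0,0)=0 &\;\Rightarrow\; 0 \le \theta,\\
f(1,0)=1 &\;\Rightarrow\; w_1 > \theta,\\
f(0,1)=1 &\;\Rightarrow\; w_2 > \theta,\\
f(1,1)=0 &\;\Rightarrow\; w_1 + w_2 \le \theta.
\end{aligned}
% Adding the middle two constraints gives w_1 + w_2 > 2\theta \ge \theta,
% contradicting the last one, so no weights and threshold suffice.
```

The book's parity theorem generalizes this: computing parity over an n-point retina requires association units of order n.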
Minsky and Papert conjecture that Gamba machines would require "an enormous number" of Gamba-masks and that multilayer neural nets are a "sterile" extension.
The book further suggests that neural networks trained by gradient descent would fail to scale up, due to local minima, extremely large weights, and slow convergence.
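For contrast with the single-unit limitation sketched above, a small hand-wired illustration shows that two layers of threshold units already suffice for XOR; the weights and thresholds below are chosen for exposition and are not a construction from the book.

```python
# Illustration (hand-chosen weights, not learned and not from the book): two
# layers of linear threshold units compute XOR, which no single unit can.

def step(x, threshold):
    """Heaviside threshold unit: 1 if the weighted sum exceeds the threshold."""
    return 1 if x > threshold else 0

def xor_two_layer(x1, x2):
    h1 = step(x1 + x2, 0.5)   # hidden unit h1 fires on "x1 OR x2"
    h2 = step(x1 + x2, 1.5)   # hidden unit h2 fires on "x1 AND x2"
    return step(h1 - h2, 0.5) # output fires when OR is true but AND is not

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_two_layer(a, b))  # prints the 0, 1, 1, 0 pattern
```

The contested question was whether such multilayer weights could be found by a learning procedure rather than wired by hand, which is the issue the later backpropagation work discussed below addressed.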
In 1969, Stanford professor Michael A. Arbib stated, "[t]his book has been widely hailed as an exciting new chapter in the theory of pattern recognition."
He argued that they "study a severely limited class of machines from a viewpoint quite alien to Rosenblatt's", and thus the title of the book was "seriously misleading".
[15] Contemporary neural net researchers shared some of these objections: Bernard Widrow complained that the authors had defined perceptrons too narrowly, but also said that Minsky and Papert's proofs were "pretty much irrelevant", coming a full decade after Rosenblatt's perceptron.
[38][3] With the revival of connectionism in the late 1980s, PDP researcher David Rumelhart and his colleagues returned to Perceptrons.
In a 1986 report, they claimed to have overcome the problems presented by Minsky and Papert, and that "their pessimism about learning in multilayer machines was misplaced".
On his website, Harvey Cohen,[39] a researcher at the MIT AI Labs from 1974 onward,[40] quotes Minsky and Papert's 1971 Report of Project MAC, directed at funding agencies, on "Gamba networks":[30] "Virtually nothing is known about the computational capabilities of this latter kind of machine.