Symbolic artificial intelligence

Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the ultimate goal of their field.[4]

However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless."[15]

Early AI successes occurred in three main areas: artificial neural networks, knowledge representation, and heuristic search. These successes contributed to high expectations.

A robotic turtle, with sensors, motors for driving and steering, and seven vacuum tubes for control, based on a preprogrammed neural net, was built by W. Grey Walter as early as 1948.

An important early symbolic AI program was the Logic Theorist, written by Allen Newell, Herbert Simon and Cliff Shaw in 1955–56; it was able to prove 38 elementary theorems from Whitehead and Russell's Principia Mathematica.[20]

The approach advocated by Simon and Newell was to employ heuristics: fast algorithms that may fail on some inputs or may output suboptimal solutions.

A* is used as a subroutine within practically every AI algorithm today but is still no magic bullet; its guarantee of completeness is bought at the cost of worst-case exponential time.
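To make the heuristic-search idea concrete, here is a minimal A* sketch in Python; the `neighbors` and `heuristic` callables are hypothetical stand-ins for a real problem. A*'s optimality guarantee depends on the heuristic never overestimating the remaining cost (admissibility), and the exponential worst case appears when the heuristic is uninformative.

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, heuristic):
    """Minimal A*. `neighbors(n)` yields (next_node, step_cost) pairs;
    `heuristic(n)` estimates remaining cost and must not overestimate."""
    tie = count()  # tie-breaker so the heap never compares nodes directly
    frontier = [(heuristic(start), next(tie), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each node
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(
                    frontier,
                    (g2 + heuristic(nxt), next(tie), g2, nxt, path + [nxt]),
                )
    return None  # goal unreachable
```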

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate the exact mechanisms of human thought, but could instead try to find the essence of abstract reasoning and problem-solving with logic,[27] regardless of whether people used the same algorithms.[a]

His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[32][33]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[34][35][6] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that no simple and general principle (like logic) would capture all the aspects of intelligent behavior.[36][37]

Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.

A professor of applied mathematics, Sir James Lighthill, was commissioned by Parliament to evaluate the state of AI research in the United Kingdom.

This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first commercially successful form of AI software.[45]

MYCIN exemplifies the classic expert system architecture of a knowledge-base of rules coupled to a symbolic reasoning mechanism, including the use of certainty factors to handle uncertainty.
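As a loose illustration of that architecture (a toy sketch, not MYCIN's actual rules or code), a forward-chaining engine with MYCIN-style certainty factors might look like the following; the medical rule and its numbers are invented, and the formula cf_old + cf_new * (1 - cf_old) is the classic scheme for combining two positive factors that support the same conclusion.

```python
# Toy forward-chaining engine with MYCIN-style certainty factors (CFs).
# The facts and the single rule below are invented for illustration.
facts = {"gram_negative": 0.8, "rod_shaped": 0.9}  # proposition -> CF in [0, 1]

# Each rule: (premises, conclusion, rule_cf).
rules = [
    (("gram_negative", "rod_shaped"), "enterobacteriaceae", 0.7),
]

def combine(cf_old, cf_new):
    # Classic combination of two positive CFs supporting the same conclusion.
    return cf_old + cf_new * (1 - cf_old)

def forward_chain(facts, rules):
    fired = set()
    changed = True
    while changed:
        changed = False
        for i, (premises, conclusion, rule_cf) in enumerate(rules):
            if i not in fired and all(p in facts for p in premises):
                # CF of a conjunction is the minimum of its premises' CFs,
                # attenuated by the rule's own certainty factor.
                cf = min(facts[p] for p in premises) * rule_cf
                facts[conclusion] = combine(facts.get(conclusion, 0.0), cf)
                fired.add(i)
                changed = True
    return facts

print(forward_chain(facts, rules))
# {'gram_negative': 0.8, 'rod_shaped': 0.9, 'enterobacteriaceae': ~0.56}
```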

In 1996, this allowed IBM's Deep Blue, with the help of symbolic AI, to win a game of chess against the then world champion, Garry Kasparov.

So we wanted a program that would look at thousands of spectra and infer the knowledge of mass spectrometry that DENDRAL could use to solve individual hypothesis formation problems.

The CBR approach outlined in Roger Schank's book, Dynamic Memory,[67] focuses first on remembering key problem-solving cases for future use and generalizing them where appropriate.
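A minimal sketch of the retrieve-and-reuse core of case-based reasoning follows; the case base, the similarity measure, and the troubleshooting domain are all hypothetical, and the adaptation step a full CBR system would apply is omitted.

```python
# Minimal case-based reasoning sketch: retrieve the most similar stored case
# and reuse its solution (hypothetical troubleshooting domain).
case_base = [
    ({"symptom": "no_boot", "beeps": 3}, "reseat RAM"),
    ({"symptom": "no_boot", "beeps": 0}, "check power supply"),
]

def similarity(problem, case_features):
    # Count matching attribute values (a deliberately crude measure).
    return sum(1 for k in problem if case_features.get(k) == problem[k])

def solve(problem):
    features, solution = max(case_base, key=lambda c: similarity(problem, c[0]))
    return solution  # a full system would adapt the solution to the new case

print(solve({"symptom": "no_boot", "beeps": 3}))  # -> reseat RAM
```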

Garcez and Lamb describe research in this area as being ongoing for at least the past twenty years,[83] dating from their 2002 book on neurosymbolic learning systems.

In their 2015 paper, Neural-Symbolic Learning and Reasoning: Contributions and Challenges, Garcez et al. argue that: "The integration of the symbolic and connectionist paradigms of AI has been pursued by a relatively small research community over the last two decades and has yielded several significant results."

Further, neural-symbolic systems have been applied to a number of problems in the areas of bioinformatics, control engineering, software verification and adaptation, visual intelligence, ontology learning, and computer games.

Henry Kautz's taxonomy of neuro-symbolic architectures, along with some examples, follows. Many key research questions remain.

This section provides an overview of techniques and contributions in an overall context, leading to many other, more detailed articles in Wikipedia.

CLOS is a Lisp-based object-oriented system that allows multiple inheritance, in addition to incremental extensions to both classes and metaclasses, thus providing a run-time meta-object protocol.
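CLOS itself is Common Lisp, but a rough Python analogue can convey two of the ideas named above, multiple inheritance and a metaclass that participates in class creation; this sketches the concepts only and does not reflect CLOS syntax or the full meta-object protocol.

```python
# Rough Python analogue of two CLOS ideas (illustrative; CLOS itself is Lisp).
# 1. Multiple inheritance: a class combines behavior from several parents.
class Persistent:
    def save(self):
        return f"saving {self!r}"

class Comparable:
    def __lt__(self, other):
        return id(self) < id(other)

# 2. A metaclass: classes are themselves objects, instances of a metaclass,
#    which can observe or adjust class creation at run time.
class Tracked(type):
    registry = []
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        Tracked.registry.append(name)
        return cls

class Record(Persistent, Comparable, metaclass=Tracked):
    pass

print(Record.__mro__)    # the linearized multiple-inheritance order
print(Tracked.registry)  # ['Record'] -- the metaclass saw the class being made
```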

Search arises in many kinds of problem solving, including planning, constraint satisfaction, and playing games such as checkers, chess, and Go.
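For one concrete instance, here is a minimal backtracking search for a constraint-satisfaction problem; the map-coloring regions and constraints below are a hypothetical example.

```python
# Minimal backtracking search for a constraint-satisfaction problem:
# color a small (hypothetical) map so neighboring regions differ.
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
colors = ["red", "green", "blue"]

def backtrack(assignment):
    if len(assignment) == len(neighbors):
        return assignment
    var = next(v for v in neighbors if v not in assignment)
    for color in colors:
        # Only try values consistent with already-colored neighbors.
        if all(assignment.get(n) != color for n in neighbors[var]):
            result = backtrack({**assignment, var: color})
            if result:
                return result
    return None  # dead end: undo and try another branch higher up

print(backtrack({}))  # e.g. {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```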

Qualitative simulation, such as Benjamin Kuipers's QSIM,[90] approximates human reasoning about naive physics, such as what happens when we heat a liquid in a pot on the stove.
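In the same spirit (a toy illustration only, not Kuipers's QSIM algorithm), a qualitative simulator can track just the direction of change of each quantity and branch when more than one qualitative future is possible:

```python
# Toy qualitative reasoning in the spirit of QSIM (not Kuipers's algorithm):
# heating a pot of liquid, tracking only trends in {"inc", "std", "dec"}.
def step(state):
    temp, level = state  # (temperature trend, liquid-level trend)
    if temp == "inc" and level == "std":
        # Either the liquid keeps heating, or it reaches boiling and
        # begins to evaporate: two qualitatively possible futures.
        return [("inc", "std"), ("std", "dec")]
    if level == "dec":
        return [("std", "dec")]  # boiling away at constant temperature
    return [state]

state = ("inc", "std")  # heat on: temperature rising, level steady
for t in range(3):
    futures = step(state)
    print(t, state, "->", futures)
    state = futures[-1]  # follow one qualitative branch
```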

Parsing, tokenizing, spelling correction, part-of-speech tagging, and noun and verb phrase chunking are all aspects of natural language processing long handled by symbolic AI, but since improved by deep learning approaches.
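As a tiny example of the rule-based symbolic style (the suffix rules below are invented for illustration; real symbolic taggers used far richer rule sets):

```python
# A tiny rule-based part-of-speech tagger in the classic symbolic style.
import re

rules = [
    (r".*ing$", "VERB"),          # gerunds / present participles
    (r".*ly$", "ADV"),            # many adverbs
    (r".*ed$", "VERB"),           # past-tense forms
    (r"^(the|a|an)$", "DET"),     # articles
    (r".*s$", "NOUN"),            # crude plural-noun guess
]

def tag(word):
    for pattern, pos in rules:
        if re.match(pattern, word.lower()):
            return pos
    return "NOUN"  # default category when no rule fires

sentence = "the dogs quickly chased a ball".split()
print([(w, tag(w)) for w in sentence])
# [('the', 'DET'), ('dogs', 'NOUN'), ('quickly', 'ADV'),
#  ('chased', 'VERB'), ('a', 'DET'), ('ball', 'NOUN')]
```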

For example, experimental symbolic machine learning systems explored the ability to take high-level natural language advice and translate it into domain-specific actionable rules.

One of the researchers who has elaborated this position most explicitly is Andy Clark, a philosopher from the School of Cognitive and Computing Sciences of the University of Sussex (Brighton, England).[97]

Gary Marcus has claimed that the animus in the deep learning community against symbolic approaches now may be more sociological than philosophical: "To think that we can simply abandon symbol-manipulation is to suspend disbelief."

Where classical computers and software solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs, such as editing a line in a word processor or performing a calculation in a spreadsheet, neural networks typically try to solve tasks by statistical approximation and learning from examples.

According to Marcus, Geoffrey Hinton and his colleagues have been vehemently "anti-symbolic": "When deep learning reemerged in 2012, it was with a kind of take-no-prisoners attitude that has characterized most of the last decade."

In turn, connectionist AI has been criticized as poorly suited for deliberative step-by-step problem solving, incorporating knowledge, and handling planning.