Commonsense reasoning

In artificial intelligence (AI), commonsense reasoning is a human-like ability to make presumptions about the type and essence of ordinary situations humans encounter every day.

Humans also have a powerful mechanism of "folk psychology" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence".
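
For illustration, the sketch below pairs that sentence with Winograd's alternative wording, in which the referent of "they" flips from the demonstrators to the councilmen; the "resolver" shown is a hypothetical stand-in that ignores meaning entirely, which is precisely why such sentences are used to probe commonsense knowledge.

```python
# A Winograd-schema-style pair built from the sentence above. The answers
# follow Winograd's classic example; the resolver is a deliberately naive
# placeholder, not a real coreference system.
schema = [
    {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they advocated violence.",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the demonstrators",    # advocating violence fits protesters
    },
    {
        "sentence": "The city councilmen refused the demonstrators a permit "
                    "because they feared violence.",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the city councilmen",  # fearing violence fits officials
    },
]

def naive_resolver(sentence, candidates):
    """Picks the syntactically nearest candidate, ignoring meaning, so it
    gives the same answer for both variants."""
    return candidates[-1]

for item in schema:
    guess = naive_resolver(item["sentence"], item["candidates"])
    print(f"guess: {guess} | correct: {item['answer']}")
```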

Overlapping subtopics of commonsense reasoning include quantities and measurements, time and space, physics, minds, society, plans and goals, and actions and change.

In 1961, Bar Hillel first discussed the need and significance of practical knowledge for natural language processing in the context of machine translation.

For instance, when a machine is used to translate a text, problems of ambiguity arise that could easily be resolved with a genuine understanding of the context.

The machine has observed, in its body of training texts, that the German words for "laboring" and "electrician" are frequently used in combination and found close together.
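
As a rough illustration of what that purely statistical knowledge amounts to, the sketch below counts how often two words appear near each other in a toy corpus; the corpus, window size, and wording are invented for the example and are not taken from any real translation system.

```python
# A minimal co-occurrence counter: all the "knowledge" it yields is that two
# words tend to appear together, with no understanding of what either means.
from collections import Counter
from itertools import combinations

corpus = [
    "the electrician was laboring on the faulty wiring all afternoon",
    "a laboring electrician repaired the switchboard",
    "the electrician kept laboring until the circuit was restored",
]

def cooccurrence_counts(sentences, window=4):
    """Counts how often two words occur within `window` tokens of each other."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, j in combinations(range(len(tokens)), 2):
            if j - i <= window:
                counts[frozenset((tokens[i], tokens[j]))] += 1
    return counts

counts = cooccurrence_counts(corpus)
print(counts[frozenset(("laboring", "electrician"))])  # high count, zero understanding
```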

Existing computer programs carry out simple language tasks by manipulating short phrases or separate words, but they do not attempt any deeper understanding and focus only on short-term results.

For instance, when looking at a photograph of a bathroom, some items that are small and only partly visible, such as facecloths and bottles, are recognizable thanks to the surrounding objects (toilet, wash basin, bathtub), which suggest the purpose of the room.
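
The sketch below illustrates that idea in a deliberately simplified form: confident detections of large objects vote for a scene type, and the scene type re-weights the scores of small, ambiguous detections. The object lists, priors, and scores are invented for illustration; real systems learn such context jointly rather than reading it from hand-written tables.

```python
# Scene context as a prior over ambiguous detections (toy numbers throughout).
scene_priors = {
    "bathroom": {"facecloth": 0.6, "bottle": 0.5, "skateboard": 0.01},
    "garage":   {"facecloth": 0.05, "bottle": 0.3, "skateboard": 0.5},
}

def infer_scene(confident_detections):
    """Votes for a scene type using a hand-written mapping of anchor objects."""
    anchors = {"toilet": "bathroom", "wash basin": "bathroom",
               "bathtub": "bathroom", "workbench": "garage"}
    votes = [anchors[obj] for obj in confident_detections if obj in anchors]
    return max(set(votes), key=votes.count) if votes else None

def rescore(ambiguous_scores, scene):
    """Combines a weak appearance score with the scene prior."""
    prior = scene_priors.get(scene, {})
    return {label: score * prior.get(label, 0.1)
            for label, score in ambiguous_scores.items()}

scene = infer_scene(["toilet", "wash basin", "bathtub"])
print(rescore({"facecloth": 0.4, "skateboard": 0.35}, scene))
# The facecloth wins once the bathroom context is taken into account.
```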

Given the contemporary state of the art, it is impossible to build a program that can carry out such reasoning tasks as predicting characters' actions.

The need for and importance of commonsense reasoning in autonomous robots that operate in an uncontrolled real-world environment are evident.

Such tasks seem obvious to anyone with simple commonsense reasoning, but ensuring that a robot avoids such mistakes is challenging.

This theory was first formulated by Johan de Kleer, who analyzed an object moving on a roller coaster.

According to Ernest Davis and Gary Marcus, five major obstacles stand in the way of producing a satisfactory "commonsense reasoner".

Compared with humans, as of 2018 existing computer programs perform extremely poorly on modern "commonsense reasoning" benchmark tests such as the Winograd Schema Challenge.

In informal knowledge-based approaches, theories of reasoning are based on anecdotal data and on intuitions drawn from empirical behavioral psychology.

Like many other current efforts, COMET over-relies on surface language patterns and is judged to lack deep human-level understanding of many commonsense concepts.

A self-driving car system may use a neural network to determine which parts of the picture seem to match previous training images of pedestrians, and then model those areas as slow-moving but somewhat unpredictable rectangular prisms that must be avoided.
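
A minimal sketch of that two-stage idea follows, with a placeholder in place of the trained detector and invented numbers for speeds and margins; a real planner would reason over full trajectories rather than a single lateral clearance check.

```python
# Perception flags pedestrian-like regions; planning wraps each one in a box
# with a conservative speed bound and keeps the planned path clear of it.
from dataclasses import dataclass

@dataclass
class Prism:
    x: float           # position along the road (metres)
    y: float           # lateral offset from the planned path (metres)
    half_width: float  # half the footprint of the bounding box (metres)
    max_speed: float   # assumed worst-case speed (m/s)

def detect_pedestrians(image):
    """Placeholder for a neural-network detector; returns bounding prisms."""
    return [Prism(x=12.0, y=1.5, half_width=0.5, max_speed=2.0)]

def clearance_needed(prism, horizon_s=2.0, margin=1.0):
    """Worst-case lateral region the pedestrian could occupy within the horizon."""
    return prism.half_width + prism.max_speed * horizon_s + margin

def must_brake(ego_path_y, prisms):
    """Brake if any prism's worst-case reach overlaps the planned path."""
    return any(abs(p.y - ego_path_y) < clearance_needed(p) for p in prisms)

print(must_brake(ego_path_y=0.0, prisms=detect_pedestrians(image=None)))  # True
```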