Language of thought hypothesis

On this view, simple concepts combine in systematic ways (akin to the rules of grammar in language) to build thoughts.[1]

Tokens in this mental language denote elementary concepts, which are operated upon by logical rules to establish the causal connections that allow for complex thought.
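This combinatorial picture can be made concrete with a small sketch. The following Python snippet is purely illustrative (the names Concept, Thought, JOHN, MARY, and LOVES are hypothetical, not anything from the LOT literature): atomic tokens are combined by a single structural rule, so a system that can build one complex thought can also build its systematic variants.

```python
from dataclasses import dataclass

# Illustrative sketch of LOT-style compositionality: elementary tokens
# combine by one fixed structural rule to form complex "thoughts".

@dataclass(frozen=True)
class Concept:
    name: str                 # an elementary mentalese token, e.g. JOHN

@dataclass(frozen=True)
class Thought:
    predicate: Concept        # a relation token
    args: tuple               # the concept tokens it is applied to

    def __str__(self):
        return f"{self.predicate.name}({', '.join(a.name for a in self.args)})"

JOHN, MARY, LOVES = Concept("JOHN"), Concept("MARY"), Concept("LOVES")

# The same combination rule yields both thoughts; the capacity to represent
# one entails the capacity to represent the other (systematicity).
print(Thought(LOVES, (JOHN, MARY)))   # LOVES(JOHN, MARY)
print(Thought(LOVES, (MARY, JOHN)))   # LOVES(MARY, JOHN)
```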

Others contend that complex thought is present even in beings that lack a public language (e.g., preverbal infants, people with aphasia, and higher primates), and that some form of mentalese must therefore be innate.

An objection to this point comes from John Searle in the form of biological naturalism, a non-representational theory of mind that accepts the causal efficacy of mental states.

On this view, it is the lower-level, non-representational neurophysiological processes, rather than some higher-level mental representation, that have causal power over intention and behavior.

Tim Crane, in his book The Mechanical Mind,[6] states that, while he agrees with Fodor, his reason is very different.

A logical objection challenges LOTH’s explanation of how sentences in natural languages get their meaning.

Dennett points out that a chess program can have the attitude of “wanting to get its queen out early” without having any representation or rule that explicitly states this.[3]
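A deliberately toy sketch can show how such an attitude might emerge without an explicit rule. The following snippet is hypothetical, not Dennett's actual example; the mobility values and the position encoding are assumptions made up for illustration. The program's only rule is "prefer positions where your pieces reach more squares", yet because the queen is the most mobile piece, it reliably develops the queen early.

```python
# Rough counts of squares each piece type controls from an open square
# (assumed values, for illustration only).
MOBILITY = {"pawn": 1, "knight": 8, "bishop": 13, "rook": 14, "queen": 27}

def evaluate(position):
    """Score a position as total mobility; `position` maps
    piece kind -> number of pieces of that kind developed."""
    return sum(MOBILITY[kind] * count for kind, count in position.items())

def choose_move(position, candidate_moves):
    """Greedy rule: pick the developing move whose resulting
    position scores highest. No rule mentions the queen."""
    def result_of(move):
        new = dict(position)
        new[move] = new.get(move, 0) + 1   # develop one more piece of that kind
        return new
    return max(candidate_moves, key=lambda move: evaluate(result_of(move)))

opening = {"pawn": 2}                      # two pawns already advanced
options = ["knight", "bishop", "queen"]
print(choose_move(opening, options))       # -> "queen"
```

An observer could truthfully describe this program as "wanting to get its queen out early", even though nothing in its code represents that goal.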

Susan Schneider has recently developed a version of LOT that departs from Fodor's approach in numerous ways.

Connectionism, by contrast, stresses the possibility of thinking machines, most often realized as artificial neural networks: interconnected sets of nodes. It describes mental states as able to create memory by modifying the strengths of these connections over time.
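A minimal sketch of this idea, under illustrative assumptions (a Hopfield-style associative memory with a Hebbian update; the pattern and sizes are arbitrary), shows "memory" living entirely in connection strengths rather than in any symbolic representation:

```python
import numpy as np

def store(patterns):
    """Hebbian learning: strengthen connections between co-active nodes."""
    n = len(patterns[0])
    w = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p)
        w += np.outer(p, p)       # co-active nodes reinforce their connection
    np.fill_diagonal(w, 0)        # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Iteratively update node states; they settle toward a stored pattern."""
    s = np.asarray(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
w = store([pattern])
noisy = [1, -1, 1, 1, 1, -1, -1, -1]    # the cue has two nodes flipped
print(recall(w, noisy))                  # recovers the stored pattern
```

Nothing in the trained weight matrix is a discrete symbol or rule; the stored pattern is distributed across the connection strengths.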

Because connectionist models can change over time in this way, their supporters claim that connectionism can solve the problems that LOTH raises for classical AI.

These are the problems showing that machines with a LOT-style syntactic framework often outperform human minds at problem solving and data storage, yet are much worse at tasks the human mind handles with ease, such as recognizing facial expressions and objects in photographs and understanding nuanced gestures.[6]

Fodor defends LOTH by arguing that a connectionist model is merely a realization or implementation of the classical computational theory of mind, and therefore necessarily employs a symbol-manipulating LOT.

Cognitive architecture is the set of basic functions of an organism that take representational input and yield representational output.