The symbol grounding problem is a concept in the fields of artificial intelligence, cognitive science, philosophy of mind, and semantics.
It addresses the challenge of connecting symbols, such as words or abstract representations, to the real-world objects or concepts they refer to.
As defined by Harnad, a "symbol system" is "...a set of arbitrary 'physical tokens' (scratches on paper, holes on a tape, events in a digital computer, etc.) ..."
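A toy sketch (not from Harnad, and with hypothetical tokens and rules) can make this notion concrete: in a formal symbol system, tokens are rewritten purely on the basis of their shape, by explicit rules that never consult any meaning.

```python
# Hypothetical rewrite rules: each maps a sequence of token shapes to
# another sequence of shapes. The system matches and replaces tokens by
# shape alone; it has no access to what, if anything, the tokens mean.
RULES = {
    ("ZEBRA",): ("HORSE", "STRIPES"),
    ("HORSE",): ("ANIMAL",),
}

def rewrite(tokens):
    """Apply the first matching rule at each position, left to right."""
    out = []
    i = 0
    while i < len(tokens):
        for lhs, rhs in RULES.items():
            if tuple(tokens[i:i + len(lhs)]) == lhs:
                out.extend(rhs)       # replace shape-for-shape
                i += len(lhs)
                break
        else:
            out.append(tokens[i])     # no rule matched this token
            i += 1
    return out

print(rewrite(["ZEBRA"]))             # ['HORSE', 'STRIPES']
print(rewrite(["HORSE", "STRIPES"]))  # ['ANIMAL', 'STRIPES']
```

The manipulation is perfectly well-defined yet entirely ungrounded: nothing in the program connects the token `"ZEBRA"` to zebras, which is exactly the gap the symbol grounding problem names.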
Some scholars have interpreted Peirce as addressing the problem of grounding, feelings, and intentionality for the understanding of semiotic processes.[7] In recent years, Peirce's theory of signs has been rediscovered by a growing number of artificial intelligence researchers in the context of the symbol grounding problem.[11] On paper or in a computer, language, too, is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words.
The symbol system alone, without this capacity for direct grounding, is not a viable candidate for whatever is really going on in our brains when we think meaningful thoughts.
On the other hand, if the symbols (words and sentences) were taken to refer to the very bits of '0' and '1' directly connected to their electronic implementations, which any computer system can readily manipulate, then grounding would seem trivial; but such "grounding" connects symbols only to further physical tokens, not to the things in the world that the symbols are about.
The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science.
But the problem of explaining how consciousness could play an "independent" role in doing so is probably insoluble, except on pain of telekinetic dualism.