Symbol grounding
The Symbol Grounding Problem is related to the problem of how words get their meanings, and of what meanings are. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful.
A symbol is an arbitrary object, an element of a code or formal notational system. It is interpretable as referring to something, but its shape is arbitrary in relation to its meaning: It neither resembles nor is causally connected to its referent (see Saussure's L'arbitraire du signe). Its meaning is agreed upon and shared by convention.
Objects cannot be symbols autonomously; symbols are elements in symbol systems. The meanings of the symbols in a symbol system are systematically interrelated and systematically interpretable. Symbols are combined and manipulated on the basis of formal rules that operate on their (arbitrary) shapes, not their meanings; i.e., the rules are syntactic, not semantic. Yet the syntactically well-formed combinations of symbols are semantically interpretable. (Think of words, combined and recombined to form sentences that all have different meanings, but are systematically interrelated with one another.)
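As a rough illustration of manipulation that is purely syntactic, consider the following minimal sketch (the rewrite rules and token names are invented for this example and belong to no standard system): the rules act only on the shapes of the tokens, and any semantic interpretation of the resulting strings must be supplied by an outside interpreter, not by the machinery itself.

```python
# A toy formal symbol system: the rules operate only on token shapes.
# Whether "A", "B", or "C" mean anything is irrelevant to the machinery.
RULES = {
    ("A", "B"): ("C",),   # rewrite the pair A B as C
    ("C", "C"): ("A",),   # rewrite the pair C C as A
}

def rewrite_once(tokens):
    """Apply the first matching rule to the leftmost matching pair of tokens."""
    for i in range(len(tokens) - 1):
        pair = (tokens[i], tokens[i + 1])
        if pair in RULES:
            return tokens[:i] + list(RULES[pair]) + tokens[i + 2:]
    return tokens  # no rule applies; the string is already in normal form

print(rewrite_once(["A", "B", "C"]))  # ['C', 'C'], still just shapes
```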
There is no symbol grounding problem for symbols in external symbol systems, such as those in a mathematical formula or the words in a spoken or written sentence. The problem of symbol grounding arises only with internal symbols, symbols in the head: the symbols in what some have called "mentalese" or the language of thought. External symbols get their meaning from the thoughts going on in the minds of their users and interpreters. But the internal symbols inside those users and interpreters need to be meaningful autonomously. Their meaning cannot just be based on a definition, because a definition is just a string of symbols, and those symbols need to have meaning too. Definitions are meaningful if their component symbols are meaningful, but what can give their component symbols meaning?
If it were formal definitions all the way down, this would lead to a problem of infinite regress. If the meaning depended on an external interpreter, then it would not be autonomous (and it makes no sense to say that the meanings of the symbols in my head depend on the interpretation of someone outside my head).
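The regress can be made concrete with a toy example (the dictionary entries below are invented for illustration): looking up a word in a dictionary yields only further words, each of which must itself be looked up, so the chase through definitions never reaches anything outside the symbol system.

```python
# A hypothetical toy dictionary: every lookup yields only more symbols.
DICTIONARY = {
    "zebra":   ["horse", "stripes"],
    "horse":   ["animal", "mane"],
    "stripes": ["pattern", "lines"],
    # ... and so on: every definiens is itself made of further words
}

def chase(word, depth=0, seen=None):
    """Follow definitions; all that is ever reached is further words."""
    if seen is None:
        seen = set()
    if word in seen or word not in DICTIONARY:
        return
    seen.add(word)
    print("  " * depth + word)
    for component in DICTIONARY[word]:
        chase(component, depth + 1, seen)

chase("zebra")  # prints a tree of words defined only by other words
```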
It is tempting to suppose that the meaning of a symbol that occurs inside an autonomous sensorimotor system (or robot) is whatever internal structures and processes give that robot the ability to detect, identify, and interact with that symbol's external referent. But that would only be the symbol's grounding, not yet its meaning.
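A minimal sketch of what such grounding might look like (the symbol, feature names, and thresholds below are invented for illustration, not a proposal from the literature) is a detector that connects an internal symbol to sensorimotor input, so that the system can pick out the symbol's referent; nothing in the sketch implies that the symbol means anything to the system itself.

```python
# Grounding without (yet) meaning: the internal symbol "APPLE" is connected
# to sensor features by a detector, so the system can detect and identify
# its referent. Feature names and thresholds are made up for illustration.
def apple_detector(features):
    """Crude feature-threshold detector for the referent of 'APPLE'."""
    return features.get("red", 0.0) > 0.7 and features.get("round", 0.0) > 0.8

def categorize(features):
    # The symbol is grounded in the capacity to detect its referent, but
    # that grounding by itself does not make the symbol meaningful to the system.
    return "APPLE" if apple_detector(features) else "UNKNOWN"

print(categorize({"red": 0.9, "round": 0.95}))  # APPLE
print(categorize({"red": 0.1, "round": 0.9}))   # UNKNOWN
```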
Designing a human-scale grounded symbol system that can pass the Turing Test (a robot whose performance capacities are equivalent to and indistinguishable from our own) is (or should be) the methodological and empirical goal of cognitive science. But would a Turing-scale robot's internal symbols have meaning rather than just grounding?
The only difference between a grounded Turing-scale robot and us would be whether it was conscious (i.e., whether it had feelings). If it feels, then there is someone home in the robot for its internal symbols to be meaningful to. If the robot is not conscious, then its internal symbols have only grounding (in its sensorimotor capacity) but not meaning. Turing's test can be interpreted as showing that we cannot ask for more of cognitive science than that it should discover what internal structures and processes are sufficient to ground symbols autonomously; the other minds problem prevents us from being able to learn more.
References
- Harnad, Stevan (1990), "The Symbol Grounding Problem", Physica D 42: 335–346.
- Harnad, Stevan (2003), "Symbol-Grounding Problem", Encyclopedia of Cognitive Science, Macmillan, Nature Publishing Group.
- Taddeo, Mariarosaria & Floridi, Luciano (2005), "The Symbol Grounding Problem: A Critical Review of Fifteen Years of Research", Journal of Experimental and Theoretical Artificial Intelligence, 17(4): 419–445.