Embodied cognitive science
- For approaches to cognitive science that emphasize the embodied mind, see embodied mind thesis.
Embodied cognitive science is an interdisciplinary field of research whose aim is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: 1) the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity, 2) the formation of a common set of general principles of intelligent behavior, and 3) the experimental use of robotic agents in controlled environments.
Embodied cognitive science borrows heavily from embodied philosophy and the related research fields of cognitive science, psychology, neuroscience and artificial intelligence. From the perspective of neuroscience, research in this field was led by Gerald Edelman of the Neurosciences Institute at La Jolla and J. A. Scott Kelso of Florida Atlantic University. From the perspective of psychology, it was led by Michael Turvey and Eleanor Rosch. From the perspective of language acquisition, notable work was done by Eric Lenneberg and Philip Rubin at Haskins Laboratories. From the perspective of autonomous agent design, early work is sometimes attributed to Rodney Brooks or Valentino Braitenberg. From the perspective of artificial intelligence, see Understanding Intelligence by Rolf Pfeifer and Christian Scheier, or How the Body Shapes the Way We Think, also by Rolf Pfeifer and Josh C. Bongard.
Turing proposed that a machine may need a human-like body to think and speak:
It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. That process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again, I do not know what the right answer is, but I think both approaches should be tried (Turing, 1950). [1]
General principles of intelligent behavior
In forming general principles of intelligent behavior, Pfeifer intended them to contrast with the older principles of traditional artificial intelligence. The most dramatic difference is that the principles apply only to situated robotic agents in the real world, a domain where traditional artificial intelligence showed the least promise.
Principle of Cheap Design and Redundancy :: Pfeifer realized that implicit assumptions made by engineers often substantially influence a control architecture's complexity.[2] This insight is reflected in discussions of the scalability problem in robotics: the internal processing required by a poorly designed architecture can grow out of proportion to the new tasks demanded of an agent.
One of the primary reasons for scalability problems is that the amount of programming and knowledge engineering that the robot designers have to perform grows very rapidly with the complexity of the robot's tasks. There is mounting evidence that pre-programming cannot be the solution to the scalability problem ... The problem is that programmers introduce too many hidden assumptions in the robot's code. [3]
The proposed solutions are to have the agent exploit the inherent physics of its environment, to exploit the constraints of its ecological niche, and to base the agent's morphology on parsimony and the principle of redundancy. Redundancy reflects the desire for the error correction of signals afforded by duplicating like channels, as well as the desire to exploit the associations between sensory modalities (see redundant modalities). In terms of design, this implies that redundancy should be introduced with respect not only to one sensory modality but to several.[4] It has been suggested that the fusion and transfer of knowledge between modalities can be the basis of reducing the size of the sense data taken from the real world,[5] which again addresses the scalability problem. A sketch of such cross-modal fusion is given below.
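The following is a minimal sketch, not taken from Pfeifer's text, of how redundant readings from two hypothetical sensory channels (a visual and an auditory bearing estimate) might be fused into a single, more reliable value. Weighting each channel by its reliability illustrates both the error correction afforded by duplicated channels and the reduction in sense data passed on to later processing.

```python
# Illustrative sketch: inverse-variance fusion of two redundant, noisy
# estimates of the same environmental quantity (e.g. bearing to a stimulus).
# The sensor names and numbers are hypothetical.

def fuse_redundant_channels(estimates, variances):
    """Inverse-variance weighted fusion of redundant sensory estimates."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total  # fused estimate is more reliable than either channel alone
    return fused, fused_variance

if __name__ == "__main__":
    vision_bearing, vision_var = 12.0, 4.0   # degrees, noisy visual estimate
    audio_bearing, audio_var = 18.0, 9.0     # degrees, noisier auditory estimate
    bearing, var = fuse_redundant_channels(
        [vision_bearing, audio_bearing], [vision_var, audio_var]
    )
    print(f"fused bearing: {bearing:.1f} deg (variance {var:.2f})")
```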
Principle of Parallel, Loosely-Coupled Processes :: An alternative to hierarchical methods of knowledge representation and action selection, in which behavior arises from many simple processes running side by side and coordinated only loosely through the agent's interaction with its environment. This design principle differs most importantly from the sense-think-act cycle of traditional AI. Since it does not involve this famous cycle, it is not affected by the frame problem. A sketch of this style of control appears below.
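The following is a minimal sketch of this idea, not tied to any particular robot; the process names (avoid_obstacles, seek_light, wander) and sensor values are illustrative assumptions, and the processes are stepped sequentially here even though each is independent of the others.

```python
# Illustrative sketch of parallel, loosely coupled processes: each process maps
# raw sensor values directly to a motor suggestion, and a trivial fixed-priority
# arbiter picks one. There is no central world model and no sense-think-act pipeline.

def avoid_obstacles(sensors):
    """High-priority reflex: turn away when something is close."""
    if sensors["proximity"] < 0.2:
        return {"turn": 1.0, "forward": 0.0}
    return None  # no opinion; defer to lower-priority processes

def seek_light(sensors):
    """Steer toward the brighter side, if the difference is clear."""
    difference = sensors["light_right"] - sensors["light_left"]
    if abs(difference) > 0.1:
        return {"turn": difference, "forward": 0.5}
    return None

def wander(sensors):
    """Default behavior when no other process has an opinion."""
    return {"turn": 0.0, "forward": 0.3}

PROCESSES = [avoid_obstacles, seek_light, wander]  # fixed priority order

def control_step(sensors):
    # Take the first non-None motor suggestion offered by the processes.
    for process in PROCESSES:
        command = process(sensors)
        if command is not None:
            return command

if __name__ == "__main__":
    print(control_step({"proximity": 0.8, "light_left": 0.3, "light_right": 0.7}))
```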
Principle of Sensory-Motor Coordination :: Ideally, internal mechanisms in an agent should give rise to capacities such as memory and choice-making in an emergent fashion, rather than being prescriptively programmed from the beginning. Such capacities are allowed to emerge as the agent interacts with the environment. The motto is to build fewer assumptions into the agent's controller now, so that learning can be more robust and idiosyncratic in the future. A sketch of this idea follows.
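The following is an illustrative sketch only, with a hypothetical environment and action set: instead of programming categories or memories in advance, the agent acts, observes the sensory consequences, and accumulates sensorimotor associations; its "memory" is nothing more than statistics built up through interaction.

```python
# Illustrative sketch: memory emerges from sensorimotor interaction rather than
# being programmed in. Actions, environment, and noise model are hypothetical.
import random
from collections import defaultdict

class SensorimotorAgent:
    def __init__(self, actions):
        self.actions = actions
        self.outcomes = defaultdict(list)   # action -> observed sensory changes

    def act(self, environment):
        action = random.choice(self.actions)
        change = environment(action)        # sensory consequence of the action
        self.outcomes[action].append(change)
        return action, change

    def expectation(self, action):
        """Emergent 'memory': mean sensory change associated with an action."""
        history = self.outcomes[action]
        return sum(history) / len(history) if history else 0.0

if __name__ == "__main__":
    # Hypothetical environment: 'forward' raises a light reading, 'back' lowers it.
    def environment(action):
        return {"forward": 1.0, "back": -1.0}[action] + random.gauss(0, 0.1)

    agent = SensorimotorAgent(["forward", "back"])
    for _ in range(100):
        agent.act(environment)
    print("expected change after 'forward':", round(agent.expectation("forward"), 2))
```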
Principle of Ecological Balance :: This is more a theory than a principle, but its implications are widespread. Its claim is that the internal processing of an agent cannot be made more complex unless there is a corresponding increase in the complexity of the agent's motors, limbs, and sensors. In other words, extra complexity added to the brain of a simple robot will not create any discernible change in its behavior. The robot's morphology must already contain enough complexity to allow "breathing room" for more internal processing to develop.
The Value Principle :: This was the architecture developed in Gerald Edelman's Darwin III robot. It relies heavily on connectionism, with a value signal biasing which sensorimotor patterns are reinforced during learning; a schematic sketch of such value-modulated learning follows.
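The following is a schematic illustration only, not Edelman's actual Darwin III architecture: a single connectionist unit adjusts its weights with a Hebbian-style rule gated by a scalar value signal, so connections active during valuable outcomes are strengthened and those active during harmful outcomes are weakened. All patterns, values, and parameters are hypothetical.

```python
# Schematic sketch of value-modulated Hebbian learning (not Darwin III itself).

def value_modulated_update(weights, inputs, output, value, learning_rate=0.1):
    """Hebbian change (input * output) scaled by the value signal."""
    return [w + learning_rate * value * x * output for w, x in zip(weights, inputs)]

def unit_output(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

if __name__ == "__main__":
    weights = [0.0, 0.0]
    # Hypothetical trials: an input pattern and the value signal its outcome produced.
    trials = [([1.0, 0.0], 1.0),    # pattern A led to a valuable outcome
              ([0.0, 1.0], -0.5)]   # pattern B led to a harmful one
    for inputs, value in trials * 20:
        out = unit_output(weights, inputs) + 1.0   # constant drive so learning can start
        weights = value_modulated_update(weights, inputs, out, value)
    print("learned weights:", [round(w, 2) for w in weights])
```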
See also
- Action selection
- Behavior-based robotics
- Behaviorism
- Cognitive science
- Cognitive neuroscience
- Connectionism
- Embodied Embedded Cognition
- Embodied philosophy
- Linguistics
- Situated cognition
- Strong AI
References
- ^ Turing, Alan M. (1950). "Computing Machinery and Intelligence". Mind, 59(236): 433–460.
- ^ Pfeifer, R.; Scheier, C. (2001). Understanding Intelligence. MIT Press. ISBN 0-262-66125-X. p. 436.
- ^ Stoytchev, A. (2006). "Five Basic Principles of Developmental Robotics". NIPS 2006 Workshop on Grounding Perception, Knowledge and Cognition in Sensori-Motor Experience. Department of Computer Science, Iowa State University.
- ^ Pfeifer, R.; Scheier, C. (2001). Understanding Intelligence. MIT Press. ISBN 0-262-66125-X. p. 448.
- ^ Konijn, Paul (2007). Summer Workshop on Multi-Sensory Modalities in Cognitive Science, Detection and Identification of Rare Audiovisual Cues. DIRAC EU IP IST project, Switzerland.
Further reading
- Braitenberg, Valentino (1986). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: The MIT Press. ISBN 0262521121
- Brooks, Rodney A. (1999). Cambrian Intelligence: The Early History of the New AI. Cambridge, MA: The MIT Press. ISBN 0262522632
- Edelman, G. Wider than the Sky (Yale University Press, 2004) ISBN 0-300-10229-1
- Fowler, C., Rubin, P. E., Remez, R. E., & Turvey, M. T. (1980). Implications for speech production of a general theory of action. In B. Butterworth (Ed.), Language Production, Vol. I: Speech and Talk (pp. 373-420). New York: Academic Press. ISBN 0121475018
- Lenneberg, Eric H. (1967). Biological Foundations of Language. John Wiley & Sons. ISBN 0471526266
- Pfeifer, R. and Bongard, J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). ISBN 0-262-16239-3
External links
- AI lectures from Tokyo hosted by Rolf Pfeifer
- synthetic neural modelling in DARWIN IV
- Society for the Simulation of Adaptive Behavior