Embodied cognitive science

From Wikipedia, the free encyclopedia

Embodied Cognitive Science is an interdisciplinary field of research whose aim is to explain the mechanisms underlying intelligent behavior. It comprises three main methodological thrusts:

1. The building of robotic agents capable of engaging in tasks that require real-time adaptive behavior (as opposed to the engineered solutions seen in industrial machines).

2. The formation of general principles of intelligent behavior.

3. The modelling of biological systems in a holistic manner that includes both the brain and the body as a single entity.

Embodied Cognitive Science borrows heavily from the philosophy of Embodiment and from research fields related to this philosophy, namely Behavior-based robotics. Researchers in the field can, and occasionally do, address issues of free will and anthropomorphism.

The most rigorous account of Embodied Cognitive Science is given by Rolf Pfeifer in his book Understanding Intelligence, co-authored with Christian Scheier. Other important work in this field was led by Gerald Edelman, in particular his Darwin III project at the Neurosciences Institute (NSI) in La Jolla. Historically, very early work is sometimes attributed to Rodney Brooks or Valentino Braitenberg.


General Principles of Intelligent Behavior

In formulating general principles of intelligent behavior, Pfeifer intended them to stand in contrast to the older principles of Traditional Artificial Intelligence. The most dramatic difference is that the new principles apply only to situated robotic agents in the real world, a domain where Traditional Artificial Intelligence showed the least promise.

Principle of Parallel, Loosely-coupled Processes :: An alternative to hierarchical methods of knowledge representation and action selection. This design principle differs most importantly from the Sense-Think-Act cycle of traditional AI; because it does not involve that famous cycle, it is not affected by the Frame problem.
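The idea of parallel, loosely coupled processes can be illustrated with a minimal sketch in the style of behavior-based robotics. This is a hypothetical illustration, not code from Pfeifer's book: two independent processes each couple sensor readings directly to motor outputs, and their contributions are simply summed, with no central sense-think-act arbiter. The sensor names and gain values are invented for the example.

```python
# Hypothetical sketch: two parallel processes, each coupling sensors
# directly to motors. Their outputs are summed rather than arbitrated
# by a central planner (no Sense-Think-Act cycle).

def avoid_obstacles(ir_left, ir_right):
    # Slow the wheel opposite the closer obstacle, turning away from it.
    return (-0.5 * ir_right, -0.5 * ir_left)

def seek_light(light_left, light_right):
    # Speed up the wheel opposite the brighter side, turning toward it.
    return (0.5 * light_right, 0.5 * light_left)

def motor_command(sensors):
    base = (1.0, 1.0)  # constant forward drive
    contributions = [
        avoid_obstacles(sensors["ir_l"], sensors["ir_r"]),
        seek_light(sensors["light_l"], sensors["light_r"]),
    ]
    left = base[0] + sum(c[0] for c in contributions)
    right = base[1] + sum(c[1] for c in contributions)
    return left, right

# Obstacle on the right, light on the left: the summed command
# turns the agent left, away from the obstacle and toward the light.
sensors = {"ir_l": 0.2, "ir_r": 0.8, "light_l": 0.6, "light_r": 0.1}
left, right = motor_command(sensors)
print(left, right)
```

Each process is cheap and local; coherent behavior arises from their interaction through the body and environment rather than from an explicit world model.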

Principle of Sensory-Motor Coordination :: Ideally, internal mechanisms in an agent should give rise to capacities such as memory and decision-making in an emergent fashion, rather than being prescriptively programmed from the start. Such capacities are allowed to emerge as the agent interacts with its environment. The motto is: build fewer assumptions into the agent's controller now, so that learning can be more robust and idiosyncratic later.

Principle of Cheap Design and Redundancy :: Cheap design is a nod to the mass production of robots, and also a reference to the fact that an agent can still exhibit surprising behavior without a lot of internal processing.

Principle of Ecological Balance :: This is more a theory than a principle, but its implications are widespread. Its claim is that the internal processing of an agent cannot be made more complex unless there is a corresponding increase in the complexity of the agent's motors, limbs, and sensors. In other words, extra complexity added to the brain of a simple robot will not create any discernible change in its behavior. The robot's morphology must already contain enough complexity in itself to allow "breathing room" for more internal processing to develop.

The Value Principle :: This was the architecture developed in Gerald Edelman's Darwin III robot. It relies heavily on connectionism.
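The flavor of a value-based connectionist scheme can be conveyed with a small sketch. This is a hypothetical illustration in the spirit of the value principle, not the actual Darwin III architecture: a Hebbian weight update is gated by a scalar "value" signal, so correlations between units are reinforced only when the agent's behavior is judged adaptive.

```python
# Hypothetical sketch of value-modulated plasticity: a plain Hebbian
# term (pre * post) is scaled by a scalar value signal, so synaptic
# change occurs only when value is nonzero. Illustration only; this
# is not Edelman's actual Darwin III implementation.

def value_hebbian_update(w, pre, post, value, lr=0.1):
    # Strengthen the connection in proportion to correlated activity,
    # gated by the value signal.
    return w + lr * value * pre * post

w = 0.0
for step in range(20):
    pre, post = 1.0, 1.0   # correlated pre- and post-synaptic activity
    value = 1.0            # e.g. a target was tracked successfully
    w = value_hebbian_update(w, pre, post, value)
print(round(w, 2))
```

With the value signal held at zero, the same correlated activity would leave the weight unchanged; this gating is what distinguishes value-based schemes from plain Hebbian learning.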
