Cognitive architecture
A cognitive architecture is a blueprint for intelligent agents. It proposes (artificial) computational processes that act like certain cognitive systems, most often like a person, or that act intelligently under some definition. Cognitive architectures form a subset of general agent architectures. The term 'architecture' implies an approach that attempts to model not only behavior, but also structural properties of the modelled system.
Characterization
Common among researchers on cognitive architectures is the belief that understanding (human, animal or machine) cognitive processes means being able to implement them in a working system, though opinions differ as to what form such a system can take: some researchers assume that it will necessarily be a symbolic computational system, whereas others argue for alternative models such as connectionist systems or dynamical systems. Cognitive architectures can be characterized by certain properties or goals, as follows, though there is no general agreement on all aspects:
- Implementation not just of various aspects of cognitive behavior but of cognition as a whole (Holism, e.g. Unified theory of cognition). This is in contrast to cognitive models, which focus on a particular competence, such as a kind of problem solving or a kind of learning.
- The architecture often tries to reproduce the behavior of the modelled system (e.g. a human), such that the timing behavior (reaction times) of the architecture and of the modelled cognitive system can be compared in detail. Other cognitive limitations are often modeled as well, e.g. limited working memory, attention, or effects of cognitive load.
- Robust behavior in the face of error, the unexpected, and the unknown (see Graceful degradation).
- Learning (not for all cognitive architectures)
- Parameter-free: the system does not depend on parameter tuning (in contrast to Artificial neural networks); this does not hold for all cognitive architectures.
- Some early theories such as Soar and ACT-R originally focused only on the 'internal' information processing of an intelligent agent, including tasks like reasoning, planning, solving problems, and learning concepts. More recently many architectures (including Soar, ACT-R, PreAct, ICARUS, CLARION, FORR) have expanded to include perception, action, and affective states and processes, including motivation, attitudes, and emotions.
- In some theories the architecture may be composed of different kinds of sub-architectures (often described as 'layers' or 'levels'), where the layers may be distinguished by the types of function, the types of mechanism and representation used, the types of information manipulated, or possibly evolutionary origin. These are hybrid architectures (e.g., CLARION).
- Some theories allow different architectural components to be active concurrently, whereas others assume a switching mechanism that selects one component or module at a time, depending on the current task. Concurrency is normally required for an architecture controlling an animal or robot that has multiple sensors and effectors in a complex and dynamic environment, though not in all robotic paradigms.
- Most theories assume that an architecture is fixed and only the information stored in its various subsystems can change over time (e.g. Langley et al., below), whereas others allow architectures to grow, e.g. by acquiring new subsystems or new links between subsystems (e.g. Minsky and Sloman, below); a minimal sketch of the fixed-architecture view appears below.
It is important to note that cognitive architectures don't have to follow a top-down approach to cognition (cf. Top-down and bottom-up design).
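The fixed-architecture view mentioned above can be made concrete with a small sketch. The following Python fragment is purely illustrative and does not correspond to any particular architecture: the recognize-act cycle is hard-wired, while the production rules and working-memory contents are the only parts that change; all rule and fact names are hypothetical.

```python
# Illustrative sketch only: a fixed recognize-act cycle over a mutable
# set of production rules and a working memory of facts (strings).
# Rule and fact names are hypothetical, not taken from any real system.

class Rule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition   # predicate over the working memory
        self.action = action         # function that updates the working memory

def cognitive_cycle(rules, working_memory, max_steps=10):
    """The 'architecture': match rules, select one, apply it.
    This loop never changes; only `rules` and `working_memory` do."""
    for _ in range(max_steps):
        matched = [r for r in rules if r.condition(working_memory)]
        if not matched:
            break                              # impasse: no rule applies
        matched[0].action(working_memory)      # trivial conflict resolution
    return working_memory

# Example knowledge content: two toy rules about making tea.
rules = [
    Rule("boil-water",
         lambda wm: "kettle-cold" in wm,
         lambda wm: (wm.discard("kettle-cold"), wm.add("water-hot"))),
    Rule("brew-tea",
         lambda wm: "water-hot" in wm and "teabag" in wm and "tea-ready" not in wm,
         lambda wm: wm.add("tea-ready")),
]

print(cognitive_cycle(rules, {"kettle-cold", "teabag"}))
```

Learning, in this picture, would add or modify entries in `rules` or `working_memory` while leaving `cognitive_cycle` untouched.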
Distinctions
Cognitive architectures can be symbolic, connectionist, or hybrid. Some cognitive architectures or models are based on a set of generic rules, as in, e.g., the Information Processing Language (e.g., Soar, based on the unified theory of cognition, or similarly ACT-R). Many of these architectures are based on the mind-is-like-a-computer analogy. In contrast, subsymbolic processing specifies no such rules a priori and relies on emergent properties of processing units (e.g. nodes). Hybrid architectures combine both types of processing (such as CLARION). A further distinction is whether the architecture is centralized, with a neural correlate of a processor at its core, or decentralized (distributed). The decentralized flavor became popular in the mid-1980s under the names parallel distributed processing and connectionism, a prime example being neural networks. A further design issue is the choice between a holistic and an atomistic, or (more concretely) modular, structure. By analogy, this extends to issues of knowledge representation.
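As a rough, hand-made illustration of this symbolic/subsymbolic distinction (not taken from any specific architecture), the sketch below contrasts an explicit, human-readable rule with a subsymbolic unit whose behavior is determined by numeric weights rather than by rules stated a priori; the features, weights and bias are arbitrary example values.

```python
import math

# Symbolic: the knowledge is an explicit, human-readable rule.
def symbolic_classifier(fact):
    if fact["has_feathers"] and fact["lays_eggs"]:
        return "bird"
    return "not-bird"

# Subsymbolic: the "knowledge" is a vector of weights; the decision
# emerges from numeric interaction rather than from explicit rules.
def subsymbolic_unit(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))   # sigmoid output

print(symbolic_classifier({"has_feathers": True, "lays_eggs": True}))
print(subsymbolic_unit([1.0, 1.0], [2.5, 1.8], -3.0))   # weights would normally be learned
```

A hybrid architecture in the sense of CLARION would combine both kinds of component, letting explicit rules and learned weights interact.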
In traditional AI, intelligence is often programmed from above: the programmer is the creator who makes something and imbues it with intelligence, though many traditional AI systems were also designed to learn (e.g. improving their game-playing or problem-solving competence). Biologically inspired computing, on the other hand, sometimes takes a more bottom-up, decentralised approach; bio-inspired techniques often involve specifying a set of simple generic rules or a set of simple nodes, from the interaction of which the overall behavior emerges. The hope is that complexity builds up until the end result is something markedly complex (see complex systems). However, it is also arguable that systems designed top-down on the basis of observations of what humans and other animals can do, rather than on observations of brain mechanisms, are biologically inspired too, though in a different way.
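The bottom-up idea that complex global behavior can emerge from many elements obeying simple local rules can be illustrated with a toy example; the sketch below uses an elementary cellular automaton (Wolfram's rule 110), where the choice of rule, grid size and number of steps is arbitrary and purely illustrative.

```python
# Illustrative only: every cell follows the same trivial local rule,
# yet the row-by-row evolution shows complex global structure
# (elementary cellular automaton, rule 110).

RULE = 110

def step(cells):
    out = []
    n = len(cells)
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit number
        out.append((RULE >> index) & 1)               # look up the new state in the rule
    return out

cells = [0] * 31
cells[15] = 1                     # start from a single live cell
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

No single cell 'knows' the global pattern; the structure visible in the printed rows arises from the interactions alone.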
Some well-known cognitive architectures
- 4CAPS, developed at Carnegie Mellon University under Marcel A. Just
- ACT-R, developed at Carnegie Mellon University under John R. Anderson.
- ALifeE, developed under Toni Conde at the École Polytechnique Fédérale de Lausanne.
- Apex, developed under Michael Freed at NASA Ames Research Center.
- ASMO, developed under Rony Novianto at University of Technology, Sydney.
- CHREST, developed under Fernand Gobet at Brunel University and Peter C. Lane at the University of Hertfordshire.
- CLARION, a cognitive architecture developed under Ron Sun at Rensselaer Polytechnic Institute and the University of Missouri.
- Copycat, by Douglas Hofstadter and Melanie Mitchell at Indiana University.
- DUAL, developed at the New Bulgarian University under Boicho Kokinov.
- EPIC, developed under David E. Kieras and David E. Meyer at the University of Michigan.
- FORR, developed by Susan L. Epstein at The City University of New York.
- GAIuS, developed by Sevak Avakians.
- The H-Cogaff architecture, which is a special case of the CogAff schema. (See Taylor & Sayda, and Sloman refs below).
- CoJACK, an ACT-R inspired extension to the JACK multi-agent system that adds a cognitive architecture to the agents, for eliciting more realistic (human-like) behaviors in virtual environments.
- IDA and LIDA, implementing Global Workspace Theory, developed under Stan Franklin at the University of Memphis.
- PMML.1, developed by Michael S. Gashler at the University of Arkansas.
- PreAct, developed under Dr. Norm Geddes at ASI.
- PRODIGY, by Veloso et al.
- PRS 'Procedural Reasoning System', developed by Michael Georgeff and Amy Lansky at SRI International.
- Psi-Theory developed under Dietrich Dörner at the Otto-Friedrich University in Bamberg, Germany.
- R-CAST, developed at the Pennsylvania State University.
- Soar, developed under Allen Newell and John Laird at Carnegie Mellon University and the University of Michigan.
- Society of Mind and its successor The Emotion Machine, proposed by Marvin Minsky.
- Subsumption architectures, developed e.g. by Rodney Brooks (though it is arguable whether they are cognitive).
See also
- Artificial brain
- Artificial consciousness
- Autonomous agent
- Biologically inspired cognitive architectures
- Cognitive architecture comparison
- Cognitive science
- Intelligent system
- Memristor
- Production system
- Simulated reality
- Social simulation
- Strong AI
- Unified theory of cognition