BDI software agent
A BDI agent is a type of bounded-rational software agent imbued with particular mental attitudes, namely Beliefs, Desires and Intentions (BDI).
The BDI model has some philosophical basis in the Belief-Desire-Intention theory of human practical reasoning, expounded by Michael Bratman.
The model has formal logical descriptions such as Anand Rao and Michael Georgeff's BDICTL, which combines a multi-modal logic (with modalities representing beliefs, desires and intentions) with the temporal logic CTL*. More recently, Michael Wooldridge has extended BDICTL to define the Logic of Rational Agents (LORA).
Finally, there are numerous implementations of architectures for building BDI agents. The original, the Procedural Reasoning System (PRS), was developed by a team led by Georgeff. Later implementations (according to Wooldridge) have followed the PRS model.
BDI Agents
Wooldridge lists four characteristics of intelligent agents which naturally fit the purpose and design of the BDI model:
- Situated - they are embedded in their environment
- Goal directed - they have goals that they try to achieve
- Reactive - they react to changes in their environment
- Social - they can communicate with other agents (including humans)
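As an illustration only, the following Java sketch outlines an agent interface whose operations mirror these four characteristics; the names are hypothetical and are not drawn from any of the implementations listed later in the article.

```java
/**
 * Illustrative sketch only: an interface whose operations mirror the four
 * characteristics above. The names are hypothetical, not taken from PRS,
 * JACK, Jason or any other system mentioned in this article.
 */
public interface IntelligentAgent {
    void perceive(String percept);                      // situated: observes its environment
    void adoptGoal(String goal);                        // goal directed: takes on goals it tries to achieve
    void onEnvironmentChange(String change);            // reactive: responds to changes in its environment
    void receiveMessage(String sender, String content); // social: communicates with other agents (including humans)
}
```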
Beliefs
Beliefs represent the informational state of the agent - in other words, its beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Typically, this information will be stored in a database (sometimes called a belief base), although that is an implementation decision.
Using the term belief - rather than knowledge - recognises that what an agent believes may not necessarily be true (and in fact may change in the future).
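As a minimal illustration of such a belief base, the following Java sketch assumes beliefs are plain strings and inference rules are premise sets with a single conclusion; both are simplifying assumptions made for illustration, not the representation used by any system listed below.

```java
import java.util.*;

/**
 * Illustrative sketch of a belief base with simple forward chaining
 * (not the API of any particular BDI system). Adding a belief re-applies
 * the inference rules until no rule yields a new belief.
 */
public class BeliefBase {
    /** A hypothetical inference rule: if all premises are believed, believe the conclusion. */
    record Rule(Set<String> premises, String conclusion) {}

    private final Set<String> beliefs = new HashSet<>();
    private final List<Rule> rules = new ArrayList<>();

    public void addRule(Set<String> premises, String conclusion) {
        rules.add(new Rule(Set.copyOf(premises), conclusion));
    }

    /** Add a belief and forward-chain to any new beliefs it licenses. */
    public void addBelief(String belief) {
        if (!beliefs.add(belief)) {
            return; // already believed
        }
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule rule : rules) {
                if (beliefs.containsAll(rule.premises()) && beliefs.add(rule.conclusion())) {
                    changed = true;
                }
            }
        }
    }

    public boolean believes(String belief) {
        return beliefs.contains(belief);
    }

    public static void main(String[] args) {
        BeliefBase bb = new BeliefBase();
        bb.addRule(Set.of("raining"), "ground is wet");
        bb.addBelief("raining");
        System.out.println(bb.believes("ground is wet")); // true, inferred by forward chaining
    }
}
```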
Desires
Desires (or goals) represent the motivational state of the agent. They represent objectives or situations that the agent would like to accomplish or bring about. Examples of desires might be: find the best price, go to the party or become rich.
Usage of the term goals adds the further restriction that the set of goals must be consistent. For example, one should not have concurrent goals to go to a party and to stay at home - even though they could both be desirable.
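As a minimal illustration of that restriction, the following Java sketch keeps a goal set consistent by rejecting any desire that conflicts with a goal already adopted; representing goals as strings and declaring conflicts explicitly are assumptions made purely for this example.

```java
import java.util.*;

/**
 * Illustrative sketch only: a goal set that stays consistent by refusing to
 * adopt a desire that conflicts with an already-adopted goal. Goal names and
 * the explicit conflict table are assumptions for this example.
 */
public class GoalSet {
    private final Set<String> goals = new HashSet<>();
    private final Map<String, Set<String>> conflicts = new HashMap<>();

    /** Declare that two goals cannot be held at the same time. */
    public void declareIncompatible(String a, String b) {
        conflicts.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        conflicts.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    /** Adopt a desire as a goal only if it is consistent with the current goals. */
    public boolean adopt(String desire) {
        Set<String> incompatible = conflicts.getOrDefault(desire, Set.of());
        if (goals.stream().anyMatch(incompatible::contains)) {
            return false; // would make the goal set inconsistent
        }
        return goals.add(desire);
    }

    public static void main(String[] args) {
        GoalSet goalSet = new GoalSet();
        goalSet.declareIncompatible("go to the party", "stay at home");
        System.out.println(goalSet.adopt("go to the party")); // true
        System.out.println(goalSet.adopt("stay at home"));    // false: conflicts with an adopted goal
    }
}
```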
Intentions
Intentions represent the deliberative state of the agent: what the agent has chosen to do. Intentions are desires to which the agent has to some extent committed (in implemented systems, this means the agent has begun executing a plan).
Plans
Plans are sequences of actions that an agent can perform to achieve one or more of its intentions. Plans may include other plans: my plan to go for a drive may include a plan to find my car keys. This reflects that in Bratman's model, plans are initially only partially conceived, with details being filled in as they progress.
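A small Java sketch of this hierarchical structure, assuming a plan step is either a primitive action or a named sub-plan whose own steps may still be unrefined; all type names here are hypothetical.

```java
import java.util.List;

/**
 * Illustrative sketch of hierarchical, partially conceived plans
 * (type names are hypothetical, not any framework's plan language).
 */
public class PlanExample {
    sealed interface PlanStep permits Action, SubPlan {}

    /** A primitive action the agent can execute directly. */
    record Action(String name) implements PlanStep {}

    /** A sub-plan: a goal refined into further steps, possibly not yet filled in. */
    record SubPlan(String goal, List<PlanStep> steps) implements PlanStep {}

    public static void main(String[] args) {
        // "Go for a drive" is only partially conceived: the "find car keys"
        // sub-plan has no steps yet and would be refined when it is reached.
        SubPlan drive = new SubPlan("go for a drive", List.of(
                new SubPlan("find car keys", List.of()),
                new Action("start car"),
                new Action("drive to destination")));
        System.out.println(drive);
    }
}
```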
BDI Agent Implementations
'Pure' BDI
- PRS
- IRMA (not implemented but can be considered as PRS with non-reconsideration)
- UM-PRS
- dMARS
- AgentSpeak(L)
- JAM
- JACK
- JADEX
- Jason
- 3APL
Extensions and Hybrid Systems
- JACK Teams
BDI Agent Architectures
Strictly speaking, there is no single software architecture that represents BDI. A very generic model is given by Georgeff and Ingrand (Decision-Making in an Embedded Reasoning System, IJCAI-89, 1989), and it does not address any issues of design or implementation. In fact, Wooldridge states that implemented systems since PRS have followed the PRS model, and so there should be a closer relationship between them than such a generic model describes. Indeed, the core BDI engines in dMARS (written in C++) and JACK (written in Java) are virtually identical in design.
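As an illustration of the generic control cycle that PRS-style systems share, the following Java sketch shows one plausible structure. It is a sketch only: the class and method names are placeholders and do not correspond to the API of PRS, dMARS, JACK or any other implementation listed above.

```java
/**
 * Illustrative sketch of a generic BDI control loop (not the API of any
 * implementation listed above). Each cycle revises beliefs, generates
 * candidate desires, commits to some of them as intentions, and executes
 * one step of an intended plan.
 */
public abstract class BdiInterpreter {
    protected abstract void updateBeliefs();           // revise beliefs from new percepts
    protected abstract void generateOptions();         // derive candidate desires from beliefs and intentions
    protected abstract void deliberate();              // commit to selected options as intentions
    protected abstract void executeIntentionStep();    // perform one step of a plan for a current intention
    protected abstract void dropFinishedIntentions();  // discard intentions that are achieved or impossible
    protected abstract boolean alive();                // whether the agent should keep running

    /** The observe-deliberate-act cycle. */
    public final void run() {
        while (alive()) {
            updateBeliefs();
            generateOptions();
            deliberate();
            executeIntentionStep();
            dropFinishedIntentions();
        }
    }
}
```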
External links
- A Formal Specification of dMARS - Mark d'Inverno, David Kinny, Michael Luck, Michael Wooldridge
References
- Bratman, M. E. [1987] (1999). Intention, Plans, and Practical Reason. CSLI Publications. ISBN 1-57586-192-5.
- Wooldridge, M. (2000). Reasoning About Rational Agents. The MIT Press. ISBN 0-262-23213-8.