Anticipation (artificial intelligence)
In artificial intelligence, anticipation is the concept of an agent making decisions based on predictions, expectations, or beliefs about the future. Anticipation is widely considered to be a vital component of complex natural cognitive systems. As a branch of AI, anticipatory systems is a specialization that still echoes the debates of the 1980s about whether AI requires an internal model.
Reaction, proaction and anticipation
Elementary forms of artificial intelligence can be constructed using a policy based on simple if-then rules. An example of such a system would be an agent following the rule:
If it rains outside, take the umbrella. Otherwise, leave the umbrella at home.
A system such as the one defined above might be viewed as inherently reactive, because its decision making is based on the current state of the environment, with no explicit regard to the future. An agent employing anticipation would instead try to predict the future state of the environment (the weather, in this case) and make use of the prediction in its decision making. For example:
If the sky is cloudy and the air pressure is low, it will probably rain soon, so take the umbrella with you. Otherwise, leave the umbrella at home.
These rules appear more proactive, because they explicitly take possible future events into account. Notice, though, that in terms of representation and reasoning the two rule sets are identical: both act in response to existing conditions. Note too that both systems assume the agent is proactively
- leaving the house, and
- trying to stay dry.
In practice, systems incorporating reactive planning tend to be autonomous systems proactively pursuing at least one, and often many, goals. What defines anticipation in an AI model is the explicit existence, within the anticipatory system, of an inner model of the environment (sometimes including the system itself). For example, if the phrase it will probably rain were computed online, in real time, the system would be seen as anticipatory.
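The distinction can be illustrated with a minimal sketch of the umbrella example (the class names, predicates and toy weather model below are invented for this illustration and are not from the article): the reactive rule consults only the current observation, while the anticipatory agent consults an explicit inner model that is evaluated online.

    # Sketch of the umbrella example: a reactive rule versus an anticipatory
    # agent with an explicit, online-evaluated internal model of the weather.
    # All names and numbers here are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Observation:
        raining_now: bool
        cloudy: bool
        pressure_hpa: float

    def reactive_take_umbrella(obs: Observation) -> bool:
        # "If it rains outside, take the umbrella; otherwise leave it at home."
        return obs.raining_now

    class WeatherModel:
        # Minimal inner model of the environment: predicts whether rain is likely soon.
        def predict_rain_soon(self, obs: Observation) -> float:
            p = 0.1
            if obs.cloudy:
                p += 0.4                  # a cloudy sky makes rain more probable
            if obs.pressure_hpa < 1000.0:
                p += 0.4                  # low pressure makes rain more probable
            return min(p, 1.0)

    def anticipatory_take_umbrella(obs: Observation, model: WeatherModel) -> bool:
        # The decision depends on a prediction about a future state, computed
        # online; this explicit use of an inner model is what marks anticipation.
        return model.predict_rain_soon(obs) > 0.5

    obs = Observation(raining_now=False, cloudy=True, pressure_hpa=995.0)
    print(reactive_take_umbrella(obs))                      # False: not raining yet
    print(anticipatory_take_umbrella(obs, WeatherModel()))  # True: rain is predicted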
In 1985, Robert Rosen defined an anticipatory system as follows:
A system containing a predictive model of itself and/or its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant.
To some extent, this applies to any system incorporating machine learning. At issue is how much of a system's behaviour should, or indeed can, be determined by reasoning over dedicated representations, how much by online planning, and how much must be provided by the system's designers.
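Read schematically, Rosen's definition describes a system that holds a predictive model and changes its state now in accord with what that model says about a later instant. The sketch below is one possible rendering of that structure; the class, the toy heating example and all parameter values are invented for illustration.

    # One schematic reading of Rosen's definition: a predictive model of the
    # environment drives a state change at the current instant, in accord with
    # a prediction pertaining to a later instant. Names are illustrative only.
    from typing import Any, Callable

    class AnticipatorySystem:
        def __init__(self, state: Any, model: Callable, policy: Callable):
            self.state = state
            self.model = model    # predicts the environment at a later instant
            self.policy = policy  # maps (current state, prediction) -> new state

        def step(self, environment_now: Any) -> Any:
            predicted_later = self.model(environment_now)           # look ahead
            self.state = self.policy(self.state, predicted_later)   # change state now
            return self.state

    def forecast(outside_temp_now: float) -> float:
        return outside_temp_now - 2.0       # toy model: it will get colder later

    def decide(heater_on: bool, predicted_temp: float) -> bool:
        return predicted_temp < 18.0        # act now on the predicted future

    agent = AnticipatorySystem(state=False, model=forecast, policy=decide)
    print(agent.step(19.0))  # True: the heater is switched on before it gets cold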
Anticipation in evolution and cognition
The anticipation of future states is a major evolutionary and cognitive advance (Sjolander 1995). The very first steps that lead from reactive systems to anticipatory reasoning systems are yet to be understood from a computational, cognitive and evolutionary perspective. The incorporation of anticipatory functionality into reactive neural systems can greatly enhance the performance of intelligent systems, and research in that direction (MindRACES EU Project, 2004) is trying to shed new light on these problems.

Reactive behaviour for embodied systems has mainly been implemented using behaviour-based architectures and neural network models. The behaviour-based architecture (Brooks 1991) is a popular architecture for controlling robots. It consists of a set of layered modules; each layer interacts with the environment and is on its own sufficient to control the robot. The modules read from sensors that are highly tuned to their specific behaviours, and their output can either directly influence the robot's behaviour or suppress other behaviours. Prem (1998) has shown that behaviour-based architectures have an interesting biological plausibility, since they are connected to the ethology models of some animals proposed by theoretical biologists such as Jakob von Uexküll in 1928.

Reactive agents, by definition, do not anticipate the evolution of the environment or the consequences of their own actions. However, in the majority of cases in which reactive behaviour is acquired through learning, as in reinforcement learning (Sutton & Barto, 1998; Balkenius, 1995), "weak" forms of anticipation are embedded in the system. Two major examples are available. First, the majority of reinforcement learning systems learn to anticipate the future rewards and punishments that derive from actions (Sutton & Barto, 1998). Second, reinforcement learning mechanisms can be used to guide attention, directing perception towards aspects of the environment that are expected to carry important information (Balkenius & Hulth, 1999).

One of the most important properties of neural networks is their capacity to learn. The learning algorithms of neural networks, for example the popular error back-propagation algorithm applied to feedforward networks, have been used to learn models of the environment (Lin, 1992). This has been done for agents that have a stereotyped behaviour or that predict the evolution of the environment without acting on it; in these cases future states do not depend on the agent's actions. In such situations, given the current state of the environment (the input pattern fed into the neural network), the network can learn to yield an output pattern that corresponds to the state the environment will have n time steps into the future (Nolfi & Tani, 1999). To do so, the back-propagation algorithm changes the weights of the network so as to decrease the mismatch between the anticipated state and the state actually experienced later. The same kind of mismatch between expectations and reality has been used to reinforce "curious" exploring agents: in this way agents learn to explore situations where they expect to have novel and "interesting" experiences (Wiering & Schmidhuber, 1998).
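As a rough sketch of this next-state prediction scheme (the toy environment dynamics, network size, learning rate and reset schedule below are invented here and are not taken from the cited works), a small feedforward network can be trained by error back-propagation to reduce the mismatch between the state it anticipates and the state it actually experiences; the remaining mismatch is the kind of signal a curiosity-driven agent could use as an intrinsic reward.

    # Sketch: learning a forward model of an environment that evolves on its own,
    # i.e. future states do not depend on the agent's actions. The dynamics,
    # network size and hyperparameters are illustrative choices only.
    import numpy as np

    rng = np.random.default_rng(0)

    def environment_step(s: float) -> float:
        # Toy autonomous dynamics of the environment.
        return 0.9 * s + 0.5 * np.sin(s)

    # One-hidden-layer feedforward network: state_t -> predicted state_{t+1}
    W1 = rng.normal(scale=0.5, size=(8, 1))
    b1 = np.zeros((8, 1))
    W2 = rng.normal(scale=0.5, size=(1, 8))
    b2 = np.zeros((1, 1))
    lr = 0.05

    state = rng.uniform(-3.0, 3.0)
    for t in range(5000):
        if t % 25 == 0:
            state = rng.uniform(-3.0, 3.0)  # occasional resets keep the data varied
        x = np.array([[state]])
        next_state = environment_step(state)

        # Forward pass: anticipate the next state of the environment.
        h = np.tanh(W1 @ x + b1)
        pred = W2 @ h + b2

        # Mismatch between the anticipated and the actually experienced state.
        # A curiosity-driven agent could use this error as an intrinsic reward.
        error = pred - next_state

        # Back-propagation of the squared prediction error.
        dW2 = error @ h.T
        db2 = error
        dz = (W2.T @ error) * (1.0 - h ** 2)
        dW1 = dz @ x.T
        db1 = dz
        W2 -= lr * dW2
        b2 -= lr * db2
        W1 -= lr * dW1
        b1 -= lr * db1

        state = next_state

    print("final prediction error:", abs(error.item()))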
See also
- Action selection
- Behavior based AI
- Cognition
- History of artificial intelligence
- MindRACES
- Nature-nurture
- Physical symbol system hypothesis
- Strong AI
References
- Rosen, Robert (1985). Anticipatory Systems. Pergamon Press.
- MindRACES: From Reactive to Anticipatory Cognitive Embodied Systems (2004). EU project. http://www.mindraces.org