Automated planning and scheduling

Automated planning and scheduling is a branch of artificial intelligence that concerns the realisation of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex, initially unknown, and have to be discovered and optimised in a multidimensional space.

In known environments with available models, planning can be done offline: solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online, and models and policies must be adapted. Solutions usually resort to iterative trial-and-error processes common in artificial intelligence, including dynamic programming, reinforcement learning and combinatorial optimization.

A typical planner takes three inputs: a description of the initial state of the world, a description of the desired goal, and a set of possible actions, all encoded in a formal language such as STRIPS. The planner produces a sequence of actions that leads from the initial state to a state meeting the goal. An alternative language for describing planning problems is that of hierarchical task networks, in which a set of tasks is given, and each task can be either realized by a primitive action or decomposed into a set of other tasks.
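To make the STRIPS-style representation concrete, here is a minimal sketch in Python. The one-robot "gripper" domain, the fact strings and the helper names are illustrative assumptions, not a standard API: states are sets of ground facts, and each action carries a precondition list, an add list and a delete list.

    # A minimal sketch of a STRIPS-style problem encoding (illustrative,
    # not a standard library API): states are sets of ground facts, and
    # each action has preconditions, an add list and a delete list.
    from collections import namedtuple

    Action = namedtuple("Action", ["name", "preconditions", "add_list", "delete_list"])

    # Hypothetical gripper domain: move a ball from room A to room B.
    initial_state = frozenset({"at-robot A", "at-ball A", "hand-empty"})
    goal = frozenset({"at-ball B"})

    actions = [
        Action("pick-up", {"at-robot A", "at-ball A", "hand-empty"},
               {"holding-ball"}, {"at-ball A", "hand-empty"}),
        Action("move-A-B", {"at-robot A"}, {"at-robot B"}, {"at-robot A"}),
        Action("drop", {"at-robot B", "holding-ball"},
               {"at-ball B", "hand-empty"}, {"holding-ball"}),
    ]

    def applicable(state, action):
        # An action is applicable when all its preconditions hold.
        return action.preconditions <= state

    def apply_action(state, action):
        # Successor state: remove the delete list, then add the add list.
        return frozenset((state - action.delete_list) | action.add_list)

Applying an action removes its delete list from the state and then adds its add list, which is the standard STRIPS state-transition semantics.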

The difficulty of planning depends on the simplifying assumptions employed, e.g. atomic time, deterministic actions, complete observability, etc. Classical planners make all of these assumptions and have been studied most fully. Popular techniques include forward chaining and backward chaining state-space search, possibly enhanced by the use of relationships among conditions (see Graphplan) or heuristics synthesized from the problem; search through plan space; and translation to propositional satisfiability (see SatPlan).
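As an illustration of forward chaining state-space search, the following sketch performs a breadth-first search from the initial state, assuming the toy STRIPS-style encoding above; practical planners add heuristics and pruning on top of this basic scheme.

    # Breadth-first forward state-space search over the STRIPS-style
    # encoding sketched above (a toy illustration, not a real planner).
    from collections import deque

    def forward_search(initial_state, goal, actions):
        # Return a list of action names leading from initial_state to a
        # goal state, or None if no reachable state satisfies the goal.
        frontier = deque([(initial_state, [])])
        visited = {initial_state}
        while frontier:
            state, plan = frontier.popleft()
            if goal <= state:                  # every goal fact holds
                return plan
            for a in actions:
                if a.preconditions <= state:   # action is applicable
                    nxt = frozenset((state - a.delete_list) | a.add_list)
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append((nxt, plan + [a.name]))
        return None

    # With the toy gripper domain above, this yields:
    # ['pick-up', 'move-A-B', 'drop']
    print(forward_search(initial_state, goal, actions))

Backward chaining works analogously but searches regressively from the goal description toward the initial state.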

If the assumption of determinism is dropped and a probabilistic model of uncertainty is adopted, this leads to the problem of policy generation for a Markov decision process (MDP) or, in the general case, a partially observable Markov decision process (POMDP).
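As a concrete illustration, below is a minimal sketch of value iteration, a standard dynamic-programming method for generating an MDP policy. The two-state transition model, rewards and discount factor are invented for illustration; solving POMDPs is substantially more involved.

    # A minimal value-iteration sketch for policy generation in a small
    # MDP (the transition model and rewards here are made up).
    GAMMA = 0.9   # discount factor

    states = ["s0", "s1"]
    actions = ["stay", "go"]

    # P[s][a] -> list of (probability, next_state); R[s][a] -> reward.
    P = {
        "s0": {"stay": [(1.0, "s0")], "go": [(0.8, "s1"), (0.2, "s0")]},
        "s1": {"stay": [(1.0, "s1")], "go": [(1.0, "s0")]},
    }
    R = {"s0": {"stay": 0.0, "go": 1.0}, "s1": {"stay": 2.0, "go": 0.0}}

    def value_iteration(eps=1e-6):
        V = {s: 0.0 for s in states}
        while True:
            # Bellman optimality update for every state.
            V_new = {
                s: max(R[s][a] + GAMMA * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
                for s in states
            }
            delta = max(abs(V_new[s] - V[s]) for s in states)
            V = V_new
            if delta < eps:
                break
        # Greedy policy with respect to the converged value function.
        return {
            s: max(actions,
                   key=lambda a: R[s][a] + GAMMA * sum(p * V[t] for p, t in P[s][a]))
            for s in states
        }

    print(value_iteration())   # e.g. {'s0': 'go', 's1': 'stay'}

Value iteration repeatedly applies the Bellman optimality update until the value function converges; the policy that acts greedily with respect to the converged values is then optimal for the MDP.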
