Pontryagin's minimum principle
Pontryagin's minimum principle is used in optimal control theory to find the best possible control for taking a dynamic system from one state to another, especially in the presence of constraints on the state or the input controls. It was formulated by the Russian mathematician Lev Semenovich Pontryagin and his students.
The principle states, informally, that the Hamiltonian $H$ must be minimized over $\mathcal{U}$, the set of all permissible controls. If $u^* \in \mathcal{U}$ is the optimal control for the problem, then the principle states that

H(x^*(t), u^*(t), \lambda^*(t), t) \le H(x^*(t), u, \lambda^*(t), t) \quad \text{for all } u \in \mathcal{U},\ t \in [t_0, t_f],

where $x^*$ is the optimal state trajectory and $\lambda^*$ is the optimal costate trajectory. The result was first successfully applied to minimum-time problems where the input control is constrained, but it can also be useful in studying state-constrained problems.
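As a brief illustration (not from the article), suppose the dynamics are affine in the control, $f(x, u, t) = a(x, t) + B(x, t)\,u$, and the admissible set is the box $\mathcal{U} = \{u : |u_i| \le 1\}$; these particular choices are hypothetical. The Hamiltonian $H = \lambda^{\mathsf T}(a + Bu) + L(x, t)$ is then affine in $u$, and minimizing it componentwise over the box gives the familiar bang-bang law

u_i^*(t) = -\operatorname{sgn}\!\left( \left[ B(x^*(t), t)^{\mathsf T} \lambda^*(t) \right]_i \right)

whenever the switching function $[B^{\mathsf T}\lambda^*]_i$ is nonzero.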
Special conditions for the Hamiltonian can also be derived. When the final time $t_f$ is fixed and the Hamiltonian does not depend explicitly on time ($\tfrac{\partial H}{\partial t} = 0$), then

H(x^*(t), u^*(t), \lambda^*(t)) = \mathrm{constant},

and if the final time is free, then

H(x^*(t), u^*(t), \lambda^*(t)) = 0.
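A heuristic sketch of why the first condition holds (a standard argument, assuming the optimal control is interior and differentiable, and not spelled out in the article): differentiating $H$ along the optimal trajectory gives

\frac{dH}{dt} = \frac{\partial H}{\partial x}\dot{x}^* + \frac{\partial H}{\partial u}\dot{u}^* + \frac{\partial H}{\partial \lambda}\dot{\lambda}^* + \frac{\partial H}{\partial t}.

The first and third terms cancel because $\dot{x}^* = \partial H / \partial \lambda$ and $\dot{\lambda}^* = -\partial H / \partial x$, and the second vanishes because $\partial H / \partial u = 0$ at an unconstrained minimum over $u$. Hence $dH/dt = \partial H / \partial t$, which is zero when the Hamiltonian does not depend explicitly on time, so $H$ stays constant along the optimal trajectory.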
The result is sometimes also known as Pontryagin's maximum principle; whether the Hamiltonian is minimized or maximized depends on the sign convention used in its definition.
Source: D. E. Kirk, Optimal Control Theory: An Introduction, Prentice Hall, 1970.
Maximizing H
Let $x$ be the state of the system, with input $u$, satisfying

\dot{x} = f(x, u), \qquad x(0) = x_0, \qquad u(t) \in \mathcal{U}, \qquad t \in [0, T].

One wishes to determine the control $u$, defined on $[0, T]$, which maximizes the cost functional $J$:

J = \Psi(x(T)) + \int_0^T L(x(t), u(t)) \, dt.

Along the optimal trajectory, $x^*$, $u^*$ and the costate $\lambda$ must satisfy the following conditions:

\dot{x}^* = \frac{\partial H}{\partial \lambda}, \qquad x^*(0) = x_0,

\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad \lambda(T) = \frac{\partial \Psi}{\partial x}(x(T)).

For each $t$, the Hamiltonian $H$, defined by

H(x, u, \lambda) = L(x, u) + \lambda^{\mathsf T} f(x, u),

is maximized by the optimal value $u^*(t)$ of the control.

Here, the following notation is used:

\frac{\partial \Psi}{\partial x}(x(T)) = \left[ \frac{\partial \Psi}{\partial x_1}(x(T)) \;\; \cdots \;\; \frac{\partial \Psi}{\partial x_n}(x(T)) \right].
If one wishes to minimize the cost instead of maximizing it, the cost function should be multiplied by −1.
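As a numerical companion (not part of the article), the three conditions above can be iterated directly with a forward-backward sweep: integrate the state forward, integrate the costate backward from $\lambda(T) = \partial \Psi / \partial x (x(T))$, and nudge the control in the direction that increases the Hamiltonian. The Python sketch below uses a hypothetical scalar example ($f = x + u$, $L = -(x^2 + u^2)/2$, $\Psi = 0$); the dynamics, payoff, step size and tolerance are illustrative assumptions, not taken from the source.

import numpy as np

# Hypothetical problem data: maximize J = Psi(x(T)) + integral_0^T L(x, u) dt
# subject to dx/dt = f(x, u), x(0) = x0, with
#   f(x, u) = x + u,  L(x, u) = -(x^2 + u^2)/2,  Psi = 0,  T = 1.
T, N = 1.0, 200
dt = T / N
x0 = 1.0

def f(x, u):   return x + u
def f_x(x, u): return 1.0          # df/dx
def f_u(x, u): return 1.0          # df/du
def L(x, u):   return -0.5 * (x**2 + u**2)
def L_x(x, u): return -x           # dL/dx
def L_u(x, u): return -u           # dL/du
def Psi_x(xT): return 0.0          # dPsi/dx, the terminal costate value

u = np.zeros(N)                    # initial guess for the control on the grid
for it in range(500):
    # Forward sweep: dx/dt = dH/dlambda = f(x, u), x(0) = x0 (explicit Euler).
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * f(x[k], u[k])

    # Backward sweep: dlambda/dt = -dH/dx, lambda(T) = dPsi/dx(x(T)).
    lam = np.empty(N + 1)
    lam[N] = Psi_x(x[N])
    for k in reversed(range(N)):
        lam[k] = lam[k + 1] + dt * (L_x(x[k], u[k]) + lam[k + 1] * f_x(x[k], u[k]))

    # Maximize H: move u in the direction of dH/du = dL/du + lambda * df/du.
    dH_du = L_u(x[:-1], u) + lam[:-1] * f_u(x[:-1], u)
    u_new = u + 0.2 * dH_du
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = u_new

J = dt * np.sum(L(x[:-1], u))      # approximate payoff (Psi(x(T)) = 0 here)
print(f"stopped after {it + 1} iterations, J ~ {J:.4f}")

The fixed small step keeps the iteration simple; in practice one would use a line search, or solve $\partial H / \partial u = 0$ in closed form whenever that is possible.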