Hamiltonian (control theory)

The Hamiltonian of optimal control theory was developed by L. S. Pontryagin as part of his minimum principle. It was inspired by, but is distinct from, the Hamiltonian of classical mechanics. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to minimize the Hamiltonian. For details see Pontryagin's minimum principle.

Definition of the Hamiltonian


The Hamiltonian is defined as

H(x,\lambda,u,t)=\lambda^T(t)f(x,u,t)+L(x,u,t)

where λ(t) is a vector of costate variables of the same dimension as the state variables x(t).
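As a concrete sketch (the dynamics f(x,u,t) = u and running cost L(x,u,t) = x² + u² below are hypothetical choices for illustration, not from the article), the minimum principle's pointwise minimization of H over u can be checked numerically:

```python
# Hypothetical scalar example: dynamics f(x,u,t) = u and running cost
# L(x,u,t) = x**2 + u**2, so H(x, lam, u, t) = lam*u + x**2 + u**2.

def f(x, u, t):
    return u                    # illustrative dynamics: x' = u

def L(x, u, t):
    return x**2 + u**2          # illustrative running cost

def hamiltonian(x, lam, u, t):
    return lam * f(x, u, t) + L(x, u, t)   # H = lambda^T f + L (scalar case)

# Pontryagin: the optimal control minimizes H pointwise in u.
# Here dH/du = lam + 2u = 0 gives u* = -lam/2; a grid search agrees.
lam = 1.0
grid = [i / 100.0 for i in range(-200, 201)]
u_star = min(grid, key=lambda u: hamiltonian(0.0, lam, u, 0.0))   # -> -0.5
```

For this choice, ∂H/∂u = λ + 2u vanishes at u* = −λ/2, which the grid search recovers.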

Notation and problem statement

A control u(t) is to be chosen so as to minimize the objective function:


J(u)=\Psi(x(T))+\int^T_0 L(x,u,t) dt

The system state x(t) evolves according to the state equations


\dot{x}=f(x,u,t) \qquad x(0)=x_0 \quad t \in [0,T]

and the control must satisfy the constraints


a \le u(t) \le b \quad t \in [0,T]
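The problem data above can be wired into a quick numerical evaluation of J for a given control, here by forward Euler with left-endpoint quadrature (the particular f, L, Ψ, and control law below are illustrative assumptions, not from the article):

```python
# Sketch: simulate x' = f(x,u,t) from x(0) = x0 on [0, T] by forward Euler,
# clamping the control to [a, b], and accumulate the objective
# J = Psi(x(T)) + integral of L(x,u,t) dt.
# All concrete choices in the example call are illustrative.

def rollout_cost(f, L, Psi, u_of_t, x0, T, n, a, b):
    dt = T / n
    x, J = x0, 0.0
    for k in range(n):
        t = k * dt
        u = min(max(u_of_t(t), a), b)   # enforce a <= u(t) <= b
        J += dt * L(x, u, t)            # left-endpoint quadrature of the integral
        x = x + dt * f(x, u, t)         # Euler step of x' = f(x,u,t)
    return J + Psi(x)                   # add terminal cost Psi(x(T))

# Example: f = u, L = u**2, Psi = x**2, and u(t) = 2 clamped to [-1, 1];
# the effective control is 1, so x(1) is about 1 and J is about 1 + 1 = 2.
J = rollout_cost(lambda x, u, t: u, lambda x, u, t: u**2,
                 lambda x: x**2, lambda t: 2.0, 0.0, 1.0, 1000, -1.0, 1.0)
```

Note how the clamping step is where the constraint a ≤ u(t) ≤ b enters; without it the nominal control u(t) = 2 would be infeasible.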

The Hamiltonian in discrete time

When the problem is formulated in discrete time, the Hamiltonian is defined as:


H(x,\lambda,u,t)=\lambda^T(t+1)f(x,u,t)+L(x,u,t)

and the costate equations are


\lambda(t+1)-\lambda(t)=-\frac{\partial H}{\partial x}

(Note that the discrete-time Hamiltonian at time t involves the costate variable at time t + 1. This detail is essential: when we differentiate with respect to x, the right-hand side of the costate equation picks up a term involving λ(t + 1), so the costates are solved backwards in time. Using the wrong convention here leads to a costate equation that is not a backwards difference equation, and hence to incorrect results.)
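As a sketch of how this backward recursion plays out in code (assuming, as one convention, dynamics written as an increment x(t+1) = x(t) + f(x(t),u(t),t), so the costate equation solved for λ(t) reads λ(t) = λ(t+1) + ∂H/∂x(t); the terminal condition λ(T) = ∂Ψ/∂x(x(T)) and the derivative functions are illustrative):

```python
# Backward costate sweep: lam(t) = lam(t+1) + dH/dx(t), where
# dH/dx(t) = lam(t+1) * df/dx(x(t)) + dL/dx(x(t)) involves lam(t+1),
# so the recursion must run backwards from the terminal condition.
# (Increment-dynamics convention and all derivatives are illustrative.)

def costate_sweep(xs, lam_T, dfdx, dLdx):
    """xs: state trajectory x(0),...,x(T); returns [lam(0),...,lam(T)]."""
    T = len(xs) - 1
    lam = [0.0] * (T + 1)
    lam[T] = lam_T                                     # terminal condition
    for t in range(T - 1, -1, -1):
        dHdx = lam[t + 1] * dfdx(xs[t]) + dLdx(xs[t])  # uses lam(t+1), not lam(t)
        lam[t] = lam[t + 1] + dHdx
    return lam

# Toy check: df/dx = 0 and dL/dx = 1 on a zero trajectory with lam(T) = 0
# gives lam(t) = T - t, i.e. [3.0, 2.0, 1.0, 0.0] for T = 3.
lams = costate_sweep([0.0, 0.0, 0.0, 0.0], 0.0,
                     lambda x: 0.0, lambda x: 1.0)
```

Had the Hamiltonian been (incorrectly) built with λ(t) instead of λ(t + 1), the loop body would need λ(t) before computing it, which is exactly the convention pitfall the note above warns about.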