Singular control

In optimal control, singular control problems are those that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved; perhaps the best known is Merton's portfolio problem in financial economics. A more technical explanation follows.

The most common difficulty in applying Pontryagin's principle arises when the Hamiltonian depends linearly on the control u, i.e., is of the form H(u)=\phi(x,\lambda,t)u+\cdots, and the control is restricted to lie between an upper and a lower bound: a\le u(t)\le b. To minimize H(u), u must be made as large or as small as possible, depending on the sign of the switching function φ(x,λ,t), specifically:

u(t) = \begin{cases} b, & \phi(x,\lambda,t)<0 \\ ?, & \phi(x,\lambda,t)=0 \\ a, & \phi(x,\lambda,t)>0.\end{cases}

If φ is positive at some times, negative at others, and is only zero instantaneously, then the solution is straightforward: it is a bang-bang control that switches from b to a at the times when φ switches from negative to positive (and from a to b when φ switches back).
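As an illustration of this switching law (a sketch, not part of the original article; the function name, the tolerance, and the u_singular placeholder are assumptions made for the example), a direct implementation might look like:

```python
def bang_bang_control(phi, a, b, u_singular=None, tol=1e-9):
    """Choose the u in [a, b] that minimizes a Hamiltonian linear in u.

    phi        -- current value of the switching function phi(x, lambda, t)
    a, b       -- lower and upper control bounds
    u_singular -- control to apply when phi vanishes; on a singular arc it
                  cannot be read off from H and must be supplied separately
    tol        -- tolerance below which phi is treated as zero (illustrative)
    """
    if phi > tol:
        return a          # phi > 0: H increases with u, so take the lower bound
    if phi < -tol:
        return b          # phi < 0: H decreases with u, so take the upper bound
    return u_singular     # phi ~ 0: possible singular arc
```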

The case when φ remains at zero over a finite interval t_1\le t\le t_2 is called the singular control case. Between t_1 and t_2, minimization of the Hamiltonian with respect to u gives no useful information, and the solution on that interval must be found from other considerations. (One approach is to repeatedly differentiate \partial H/\partial u with respect to time until the control u again appears explicitly, which is guaranteed to happen eventually; one can then set that expression to zero and solve for u.)
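As a minimal worked illustration of this procedure (an assumed example, not taken from the article), consider the scalar system \dot{x}=u with running cost x^2 and bounds -1\le u\le 1. The Hamiltonian, switching function and costate equation are

H = x^2 + \lambda u, \qquad \phi = \partial H/\partial u = \lambda, \qquad \dot{\lambda} = -\partial H/\partial x = -2x.

On a singular arc φ = λ vanishes over an interval, so

\dot{\phi} = \dot{\lambda} = -2x = 0 \;\Rightarrow\; x = 0, \qquad \ddot{\phi} = -2\dot{x} = -2u = 0 \;\Rightarrow\; u = 0.

The control reappears after two time-derivatives, and the singular control u = 0 simply holds the state at x = 0 between the bang-bang arcs.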