Talk:Hamilton-Jacobi-Bellman equation

From Wikipedia, the free encyclopedia

WikiProject Systems
This article is within the scope of WikiProject Systems, which collaborates on articles related to Systems science.
Systems rating: Start-Class, Mid-importance. Field: Control theory
Please update this rating as the article progresses, or if the rating is inaccurate. Please also add comments to suggest improvements to the article.

Um, I don't think the Hamilton-Jacobi-Bellman equation is the Hamilton-Jacobi equation any more than, let's say, Shannon information is the thermodynamic entropy. Phys 02:57, 15 Aug 2004 (UTC)

Phys is right. There is some mixing together here of Hamilton-Jacobi-Bellman and Hamilton-Jacobi, of Optimal Control and Physics. The result is confusing. I will rewrite from the point of view of O.C. only. Someone else can add the relation to physics and to the pre-Bellman work. Encyclops July 2005

Bracket notation

Is there a reason for using the notation ⟨a, b⟩ to denote the inner product here? I'd prefer ordinary matrix notation: aᵀb. The latter is less confusing, since it can't be confused with other variations of the scalar product, such as ∫ a′b dx. --PeR 12:08, 14 June 2006 (UTC)

aᵀb is very clear. But when a and b are somewhat messy expressions it becomes less readable. In our case we would have (∂V(x,t)/∂x)ᵀ F(x, u). I don't know whether I like it or not. Encyclops 00:23, 15 June 2006 (UTC)
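For comparison, the term under discussion in the two notations (a sketch using the symbols from this thread; for vectors the two forms are identical componentwise):

```latex
\left\langle \frac{\partial V}{\partial x}(x,t),\, F(x,u) \right\rangle
\;=\;
\left( \frac{\partial V}{\partial x}(x,t) \right)^{T} F(x,u)
\;=\;
\sum_{i=1}^{n} \frac{\partial V}{\partial x_i}(x,t)\, F_i(x,u).
```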

Sufficient condition?

The current article claims that the HJB is a sufficient condition. That sounds wrong to me, because first of all the equation itself is not a sufficient condition: I assume what is meant is that "if V solves HJB, this suffices to conclude that it optimizes the objective". But is this true in general? I know that in discrete-time, infinite-horizon cases, a solution of the Bellman equation only serves to identify a candidate solution for the original sequence problem, that is, solving the Bellman equation is necessary but not sufficient for optimality. (See Stokey-Lucas-Prescott, Recursive Methods in Economic Dynamics.)

Is the sufficiency claim in this article based on the fact that the example given has a finite horizon T? If so, this should be clarified, and it would be helpful to add more general cases too. --Rinconsoleao (talk) 08:07, 30 May 2008 (UTC)
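The discrete-time, infinite-horizon Bellman equation being referred to above, in the style of Stokey-Lucas-Prescott (a sketch in standard notation; β is a discount factor, Γ(x) the feasible set, g the transition law):

```latex
V(x) \;=\; \max_{u \in \Gamma(x)} \Big\{ F(x,u) + \beta\, V\big(g(x,u)\big) \Big\}.
```

A solution V of this functional equation only identifies a candidate value function and policy; additional conditions (e.g. boundedness of V, or a transversality-type condition) are needed to conclude that the candidate solves the original sequence problem.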

The continuous-time/continuous-state case we are looking at here is more complex than the discrete-time case you mention; there are some delicate technical issues that do not arise in discrete-time control. A number of "verification theorems" have been proven under various assumptions. The simplest ones, one of which goes back to Bellman, say that if a control satisfies HJB and the terminal condition, then that control is optimal: HJB => optimality. In this sense HJB is a sufficient condition. However, there could exist solutions that are not smooth (not continuous or not differentiable) and do not satisfy HJB, but are nevertheless optimal. There are also other verification theorems that establish HJB as necessary and sufficient, but those require additional assumptions, so they are more restrictive. We also have to ask what kind of "solutions" of HJB we are talking about: the "classical" PDE solutions that Bellman used, or the modern viscosity solutions. Frankly, my knowledge of this area is not sufficient ;-) to give an overview of all these theorems. Encyclops (talk) 23:49, 30 May 2008 (UTC)
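For concreteness, a sketch of the finite-horizon setup that the simplest verification theorems refer to (symbols are generic, not fixed by this thread: V is the value function, F the dynamics, C the running cost, D the terminal cost):

```latex
\frac{\partial V}{\partial t}(x,t)
+ \min_{u}\left\{ \left(\frac{\partial V}{\partial x}(x,t)\right)^{T} F(x,u) + C(x,u) \right\} = 0,
\qquad V(x,T) = D(x).
```

The sufficiency direction asserts: if a continuously differentiable V satisfies this PDE together with the terminal condition, and u*(x,t) attains the minimum above, then u* is an optimal control and V is the optimal cost-to-go. The converse direction requires extra assumptions (or a move to viscosity solutions).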