Danskin's theorem
From Wikipedia, the free encyclopedia
In convex analysis, Danskin's theorem is a theorem which describes the derivatives of a function of the form

f(x) = \max_{z \in Z} \varphi(x, z).
The theorem has applications in optimization, where it is sometimes used to solve minimax problems.
Statement
The theorem applies to the following situation. Suppose \varphi(x, z) is a continuous function of two arguments, \varphi : \mathbb{R}^n \times Z \to \mathbb{R}, where Z \subset \mathbb{R}^m is a compact set. Further assume that \varphi(x, z) is convex in x for every z \in Z.
Under these conditions, Danskin's theorem provides conclusions regarding the differentiability of the function

f(x) = \max_{z \in Z} \varphi(x, z).

To state these results, we define the set of maximizing points Z_0(x) as

Z_0(x) = \left\{ \bar{z} : \varphi(x, \bar{z}) = \max_{z \in Z} \varphi(x, z) \right\}.
Danskin's theorem then provides the following results.
- Convexity
- f(x) is convex.
- Directional derivatives
- The directional derivative of f(x) in the direction y, denoted f'(x; y), is given by
- f'(x; y) = \max_{z \in Z_0(x)} \varphi'(x, z; y),
- where \varphi'(x, z; y) is the directional derivative of the function \varphi(\cdot, z) at x in the direction y.
- Derivative
- f(x) is differentiable at x if Z_0(x) consists of a single element \bar{z}. In this case, the derivative of f(x) (or the gradient of f(x) if x is a vector) is given by
- \frac{\partial f(x)}{\partial x} = \frac{\partial \varphi(x, \bar{z})}{\partial x}.
- Subdifferential
- If \varphi(x, z) is differentiable with respect to x for all z \in Z, and if \partial \varphi / \partial x is continuous with respect to z for all x, then the subdifferential of f(x) is given by
- \partial f(x) = \mathrm{conv} \left\{ \frac{\partial \varphi(x, z)}{\partial x} : z \in Z_0(x) \right\},
- where conv indicates the convex hull operation.
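The derivative result above can be checked numerically. The sketch below is an illustration with an assumed example function, not part of the theorem's statement: it takes \varphi(x, z) = xz - z^2 (linear, hence convex, in x) with Z = [0, 1] discretized on a grid, and compares a finite-difference derivative of f(x) = \max_{z \in Z} \varphi(x, z) against \partial \varphi / \partial x evaluated at the maximizer, as Danskin's theorem predicts.

```python
import numpy as np

# Assumed example (not from the article): phi(x, z) = x*z - z**2 is
# linear (hence convex) in x, and Z = [0, 1] is compact, here
# approximated by a fine grid.
Z = np.linspace(0.0, 1.0, 2001)

def phi(x, z):
    return x * z - z**2

def f(x):
    # f(x) = max over z in Z of phi(x, z)
    return np.max(phi(x, Z))

def danskin_grad(x):
    # For x in (0, 2), Z0(x) is the singleton {x/2}, so Danskin's theorem
    # gives f'(x) = d/dx phi(x, z*) = z* evaluated at the maximizer z*.
    z_star = Z[np.argmax(phi(x, Z))]
    return z_star

x = 0.7
h = 1e-5
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(numeric, danskin_grad(x))  # both ≈ 0.35, i.e. z* = x/2
```

Here f(x) = x^2/4 in closed form, so f'(0.7) = 0.35 exactly; the numerical derivative and the Danskin expression agree without ever differentiating the max itself.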
References
- Bertsekas, Dimitri P. (1999). Nonlinear Programming. Belmont, MA: Athena Scientific. p. 717. ISBN 1-886529-00-0.