Subgradient method
Subgradient methods are algorithms for solving convex optimization problems. Originally developed by Shor and others in the 1960s and 1970s, subgradient methods can be used with a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.
Although subgradient methods can be much slower than interior-point methods and Newton's method in practice, they can be immediately applied to a far wider variety of problems and require much less memory. Moreover, by combining the subgradient method with primal or dual decomposition techniques, it is sometimes possible to develop a simple distributed algorithm for a problem.
Basic subgradient update
Let $f:\mathbb{R}^n \to \mathbb{R}$ be a convex function with domain $\mathbb{R}^n$. The subgradient method uses the iteration

$x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)},$

where $g^{(k)}$ denotes a subgradient of $f$ at $x^{(k)}$ and $\alpha_k > 0$ is the $k$-th step size. If $f$ is differentiable, its only subgradient is the gradient vector $\nabla f(x^{(k)})$ itself. It may happen that $-g^{(k)}$ is not a descent direction for $f$ at $x^{(k)}$. We therefore maintain a list $f_{\rm best}$ that keeps track of the lowest objective function value found so far, i.e.

$f_{\rm best}^{(k)} = \min\left\{f_{\rm best}^{(k-1)},\, f\!\left(x^{(k)}\right)\right\}.$
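A minimal sketch of this iteration in Python/NumPy is given below; the objective `f`, the subgradient oracle `subgrad_f`, the example problem, and the diminishing step sizes are illustrative choices, not part of the article.

```python
import numpy as np

def subgradient_method(f, subgrad_f, x0, step_sizes, num_iters=1000):
    """Basic subgradient iteration x_{k+1} = x_k - alpha_k * g_k,
    keeping track of the lowest objective value found so far."""
    x = np.asarray(x0, dtype=float)
    f_best, x_best = f(x), x.copy()
    for k in range(num_iters):
        g = subgrad_f(x)                    # any subgradient of f at x
        x = x - step_sizes(k) * g           # not necessarily a descent step
        if f(x) < f_best:                   # record the best value seen so far
            f_best, x_best = f(x), x.copy()
    return x_best, f_best

# Illustrative problem: minimize the nondifferentiable f(x) = ||x - c||_1.
c = np.array([1.0, -2.0, 3.0])
f = lambda x: np.sum(np.abs(x - c))
subgrad_f = lambda x: np.sign(x - c)        # a valid subgradient of the l1-norm
x_best, f_best = subgradient_method(f, subgrad_f, np.zeros(3),
                                    step_sizes=lambda k: 1.0 / (k + 1))
```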
Step size rules
Many different types of step size rules are used in the subgradient method. Five basic step size rules for which convergence is guaranteed are:
- Constant step size, $\alpha_k = \alpha$.
- Constant step length, $\alpha_k = \gamma / \lVert g^{(k)} \rVert_2$, which gives $\lVert x^{(k+1)} - x^{(k)} \rVert_2 = \gamma$.
- Square summable but not summable step size, i.e. any step sizes satisfying $\alpha_k \geq 0,\quad \sum_{k=1}^{\infty} \alpha_k^2 < \infty,\quad \sum_{k=1}^{\infty} \alpha_k = \infty.$
- Nonsummable diminishing, i.e. any step sizes satisfying $\alpha_k \geq 0,\quad \lim_{k\to\infty} \alpha_k = 0,\quad \sum_{k=1}^{\infty} \alpha_k = \infty.$
- Nonsummable diminishing step lengths, i.e. $\alpha_k = \gamma_k / \lVert g^{(k)} \rVert_2$, where $\gamma_k \geq 0,\quad \lim_{k\to\infty} \gamma_k = 0,\quad \sum_{k=1}^{\infty} \gamma_k = \infty.$
Notice that all five rules above are fixed before the algorithm is run: the step sizes (or, for the step-length rules, the sequence of step lengths $\gamma_k$) do not depend on any data computed during the algorithm. This is very different from the step size rules found in standard descent methods, which depend on the current point and search direction.
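As an illustration, the five rules might be coded as follows; the constants `alpha`, `gamma`, and `a`, and the 1-based iteration index `k`, are placeholder choices, and only the functional forms come from the list above.

```python
import numpy as np

# Five basic step size rules; k is the (1-based) iteration index and
# g is the current subgradient. alpha, gamma, and a are fixed in advance.
def constant_step_size(k, g, alpha=0.1):
    return alpha

def constant_step_length(k, g, gamma=0.1):
    return gamma / np.linalg.norm(g)             # gives ||x_{k+1} - x_k|| = gamma

def square_summable_not_summable(k, g, a=1.0):
    return a / k                                 # sum a/k diverges, sum (a/k)^2 converges

def nonsummable_diminishing(k, g, a=1.0):
    return a / np.sqrt(k)                        # alpha_k -> 0, sum alpha_k = infinity

def nonsummable_diminishing_step_length(k, g, a=1.0):
    return (a / np.sqrt(k)) / np.linalg.norm(g)  # step lengths gamma_k = a / sqrt(k)
```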
Convergence results
For constant step size and constant step length, the subgradient algorithm is guaranteed to converge to within some range of the optimal value, i.e.,

$\lim_{k\to\infty} f_{\rm best}^{(k)} - f^{*} < \epsilon,$

where $f^{*}$ denotes the optimal value of the problem and $\epsilon$ is a constant that depends on the chosen step size (or step length). For the square summable but not summable rule and the diminishing step size (or step length) rules, the method converges to the optimal value, i.e. $\lim_{k\to\infty} f_{\rm best}^{(k)} = f^{*}$.
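As a concrete instance of this guarantee, under the standard additional assumptions that every subgradient satisfies $\lVert g^{(k)} \rVert_2 \le G$ and that $\lVert x^{(1)} - x^{*} \rVert_2 \le R$ (assumptions introduced here for illustration, not stated above), the usual analysis gives a bound of the following form:

```latex
% Suboptimality bound for the subgradient method, assuming
% \|g^{(k)}\|_2 \le G for every subgradient and \|x^{(1)} - x^*\|_2 \le R.
\[
  f_{\mathrm{best}}^{(k)} - f^{*}
    \;\le\;
    \frac{R^{2} + G^{2} \sum_{i=1}^{k} \alpha_{i}^{2}}
         {2 \sum_{i=1}^{k} \alpha_{i}} .
\]
% With a constant step size \alpha_i = \alpha, the right-hand side tends to
% G^{2}\alpha / 2 as k grows, which plays the role of \epsilon above.
```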
Constrained optimization
Projected subgradient
One extension of the subgradient method is the projected subgradient method, which solves the constrained optimization problem
- minimize $f(x)$ subject to $x \in \mathcal{C}$

where $\mathcal{C}$ is a convex set. The projected subgradient method uses the iteration

$x^{(k+1)} = P\!\left(x^{(k)} - \alpha_k g^{(k)}\right)$

where $P$ is projection on $\mathcal{C}$ and $g^{(k)}$ is any subgradient of $f$ at $x^{(k)}$.
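A sketch of the projected subgradient iteration, assuming for illustration that $\mathcal{C}$ is the Euclidean unit ball (for which the projection has a simple closed form); the objective and step sizes are again placeholder choices.

```python
import numpy as np

def projected_subgradient(f, subgrad_f, project, x0, step_sizes, num_iters=1000):
    """Projected subgradient iteration x_{k+1} = P(x_k - alpha_k * g_k)."""
    x = project(np.asarray(x0, dtype=float))
    f_best, x_best = f(x), x.copy()
    for k in range(num_iters):
        g = subgrad_f(x)
        x = project(x - step_sizes(k) * g)   # take a step, then project back onto C
        if f(x) < f_best:
            f_best, x_best = f(x), x.copy()
    return x_best, f_best

# Illustrative choice of C: the Euclidean unit ball, whose projection is explicit.
def project_unit_ball(x):
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

c = np.array([2.0, 2.0])
f = lambda x: np.sum(np.abs(x - c))          # nondifferentiable objective
subgrad_f = lambda x: np.sign(x - c)
x_best, f_best = projected_subgradient(f, subgrad_f, project_unit_ball,
                                       np.zeros(2), lambda k: 1.0 / (k + 1))
```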
General constraints
The subgradient method can be extended to solve the inequality constrained problem
- minimize $f_0(x)$ subject to $f_i(x) \leq 0,\quad i = 1,\dots,m$

where $f_i$ are convex. The algorithm takes the same form as the unconstrained case

$x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}$

where $\alpha_k > 0$ is a step size, and $g^{(k)}$ is a subgradient of the objective or one of the constraint functions at $x^{(k)}$. Take

$g^{(k)} \in \begin{cases} \partial f_0\!\left(x^{(k)}\right) & \text{if } f_i\!\left(x^{(k)}\right) \leq 0 \text{ for all } i = 1,\dots,m \\ \partial f_j\!\left(x^{(k)}\right) & \text{for some } j \text{ such that } f_j\!\left(x^{(k)}\right) > 0, \end{cases}$

where $\partial f$ denotes the subdifferential of $f$. If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint.
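A sketch of this constrained variant follows; the representation of the constraints as a list of (function, subgradient) pairs, the example problem, and the diminishing step sizes are illustrative assumptions, not part of the article.

```python
import numpy as np

def constrained_subgradient(f0, subgrad_f0, constraints, x0, step_sizes,
                            num_iters=1000):
    """Subgradient method for: minimize f0(x) subject to fi(x) <= 0.
    `constraints` is a list of (fi, subgrad_fi) pairs. At feasible points an
    objective subgradient is used; at infeasible points a subgradient of a
    violated constraint is used. Tracks the best feasible objective value."""
    x = np.asarray(x0, dtype=float)
    f_best, x_best = np.inf, x.copy()
    for k in range(num_iters):
        violated = [(fi, gi) for fi, gi in constraints if fi(x) > 0]
        if violated:                         # infeasible: use any violated constraint
            _, gi = violated[0]
            g = gi(x)
        else:                                # feasible: use the objective
            g = subgrad_f0(x)
            if f0(x) < f_best:
                f_best, x_best = f0(x), x.copy()
        x = x - step_sizes(k) * g
    return x_best, f_best

# Illustrative problem: minimize x1 + x2 subject to x1^2 + x2^2 - 1 <= 0.
f0 = lambda x: x[0] + x[1]
subgrad_f0 = lambda x: np.array([1.0, 1.0])
f1 = lambda x: x[0]**2 + x[1]**2 - 1.0
subgrad_f1 = lambda x: 2.0 * x
x_best, f_best = constrained_subgradient(f0, subgrad_f0, [(f1, subgrad_f1)],
                                         np.array([2.0, 2.0]),
                                         lambda k: 1.0 / (k + 1))
```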
References
- Bertsekas, D. (1999). Nonlinear Programming. Cambridge, MA: Athena Scientific.
- Shor, N. (1985). Minimization Methods for Non-differentiable Functions. Springer.