Line search
From Wikipedia, the free encyclopedia
In (unconstrained) optimization, the line search strategy is one of two basic iterative approaches to finding a local minimum $\mathbf{x}^*$ of an objective function $f : \mathbb{R}^n \to \mathbb{R}$. The other approach is the trust region method.
Algorithms
Gradient method
1. Set the iteration counter $k = 0$ and make an initial guess $\mathbf{x}_0$ for the minimum
2. Repeat:
3. Compute a descent direction $\mathbf{p}_k$
4. Choose $\alpha_k$ to 'loosely' minimize $h(\alpha) = f(\mathbf{x}_k + \alpha\mathbf{p}_k)$ over $\alpha \in \mathbb{R}_+$
5. Update $\mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k\mathbf{p}_k$ and $k = k + 1$
6. Until $\|\nabla f(\mathbf{x}_k)\|$ < tolerance
In step 4 we can either exactly minimize $h$, by solving $h'(\alpha_k) = 0$, or loosely, by asking for a sufficient decrease in $h$. The latter may be performed in a number of ways, perhaps by doing a backtracking line search or using the Wolfe conditions.
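As a concrete illustration, here is a minimal Python sketch of the loop above, using steepest descent ($\mathbf{p}_k = -\nabla f(\mathbf{x}_k)$) and a backtracking line search that enforces the Armijo sufficient-decrease condition $f(\mathbf{x}_k + \alpha\mathbf{p}_k) \le f(\mathbf{x}_k) + c\,\alpha\,\nabla f(\mathbf{x}_k)^\top \mathbf{p}_k$. The function names, constants and the quadratic test problem are illustrative choices, not taken from the references.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo condition
    f(x + alpha*p) <= f(x) + c*alpha*grad_f(x).p holds.
    Assumes p is a descent direction (grad_f(x).p < 0)."""
    fx = f(x)
    slope = grad_f(x) @ p          # directional derivative, negative for descent
    alpha = alpha0
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho               # backtrack: try a shorter step
    return alpha

def gradient_descent(f, grad_f, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:            # step 6: stopping test
            break
        p = -g                                 # step 3: steepest-descent direction
        alpha = backtracking_line_search(f, grad_f, x, p)   # step 4
        x = x + alpha * p                      # step 5: update the iterate
    return x

# Example: an ill-conditioned quadratic with minimum at the origin.
f = lambda x: x[0]**2 + 10.0 * x[1]**2
grad_f = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])
print(gradient_descent(f, grad_f, [3.0, -2.0]))   # ≈ [0, 0]
```

Replacing the Armijo test with the Wolfe conditions additionally bounds the slope $h'(\alpha_k)$, which rules out steps that are excessively short.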
Like other optimization methods, line search may be combined with simulated annealing to allow it to jump over some local minima.
Direct search methods
First the minimum must be bracketed, that is, it must be known to lie between two points $x_1$ and $x_2$. The interval is then divided by computing $f(x)$ at two internal points, $x_3$ and $x_4$, and rejecting the outer point adjacent to whichever internal point has the higher function value. Subsequently only one extra internal point needs to be calculated at each iteration. Of the various methods of dividing the interval,[2] those that use the golden ratio are particularly simple and effective.
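To make this concrete, here is a minimal Python sketch of golden-section interval division; it assumes $f$ is unimodal on the bracketing interval $[x_1, x_2]$, and the function name and default tolerance are our own choices rather than anything prescribed by the references.

```python
import math

def golden_section_search(f, x1, x2, tol=1e-8):
    """Locate the minimum of f, assumed unimodal and bracketed in [x1, x2]."""
    invphi = (math.sqrt(5) - 1) / 2        # 1/golden ratio, about 0.618
    # Two internal points placed by the golden ratio, with x3 < x4.
    x3 = x2 - invphi * (x2 - x1)
    x4 = x1 + invphi * (x2 - x1)
    f3, f4 = f(x3), f(x4)
    while abs(x2 - x1) > tol:
        if f3 < f4:
            # Minimum lies in [x1, x4]: reject x2; old x3 becomes the new x4.
            x2, x4, f4 = x4, x3, f3
            x3 = x2 - invphi * (x2 - x1)   # the single new evaluation point
            f3 = f(x3)
        else:
            # Minimum lies in [x3, x2]: reject x1; old x4 becomes the new x3.
            x1, x3, f3 = x3, x4, f4
            x4 = x1 + invphi * (x2 - x1)   # the single new evaluation point
            f4 = f(x4)
    return (x1 + x2) / 2

print(golden_section_search(lambda x: (x - 2.0)**2, 0.0, 5.0))   # ≈ 2.0
```

Each iteration shrinks the bracket by the constant factor $1/\varphi \approx 0.618$ while reusing one of the two previous interior evaluations, which is exactly the "one extra internal point" property noted above.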
References
- ^ N. I. M. Gould and S. Leyffer, "An introduction to algorithms for nonlinear optimization". In J. F. Blowey, A. W. Craig and T. Shardlow (eds.), Frontiers in Numerical Analysis, pages 109–197. Springer Verlag, Berlin, 2003.
- ^ M. J. Box, D. Davies and W. H. Swann, Non-Linear Optimisation Techniques, Oliver & Boyd, 1969.