Linesearch

From Wikipedia, the free encyclopedia

In (unconstrained) optimization, the linesearch strategy is one of two basic iterative approaches to finding a local minimum \mathbf{x}^* of an objective function f:\mathbb R^n\to\mathbb R. The other method is that of trust regions.

Algorithm

i) Set iteration counter k = 0, and make an initial guess, \mathbf{x}_0 for the minimum.
ii) Compute a descent direction \mathbf{p}_k.
iii) Choose \alpha_k to 'loosely' minimize \phi(\alpha)=f(\mathbf{x}_k+\alpha\mathbf{p}_k) over \alpha\in\mathbb R.
iv) Update \mathbf{x}_{k+1}=\mathbf{x}_k+\alpha_k\mathbf{p}_k, k = k + 1.
If \|\nabla f(\mathbf{x}_k)\|\leq \text{tolerance}, STOP.
Else, goto ii).
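The steps above can be sketched as a short descent loop. This is a minimal illustration, not part of the original article: the function name `line_search_minimize`, the choice of steepest descent for \mathbf{p}_k, and the pluggable `step` callback are all assumptions made for the example.

```python
import numpy as np

def line_search_minimize(f, grad, x0, step, tol=1e-8, max_iter=1000):
    """Generic linesearch descent loop following steps i)-iv).

    `step(f, grad, x, p)` returns the step length alpha_k (hypothetical
    interface); the descent direction used here is steepest descent,
    p_k = -grad f(x_k).
    """
    x = np.asarray(x0, dtype=float)      # i) initial guess, k = 0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:     # stopping test on ||grad f(x_k)||
            break
        p = -g                           # ii) a descent direction
        alpha = step(f, grad, x, p)      # iii) choose alpha_k
        x = x + alpha * p                # iv) update x_{k+1}, k = k + 1
    return x
```

For instance, minimizing f(x) = (x - 3)^2 with a fixed step length of 0.1 converges to the minimizer x = 3.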

In step iii) we can either minimize \phi exactly, by solving \phi'(\alpha_k) = 0, or minimize it loosely, by asking only for a sufficient decrease in \phi. The latter may be performed in a number of ways, perhaps by doing a backtracking linesearch or by using the Wolfe conditions.
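A backtracking linesearch enforcing a sufficient-decrease (Armijo) condition, which is the first of the Wolfe conditions, can be sketched as follows. The function name and the default constants `rho` and `c` are illustrative assumptions, not prescribed by the article.

```python
import numpy as np

def backtracking(f, grad, x, p, alpha0=1.0, rho=0.5, c=1e-4):
    """Backtracking linesearch: shrink alpha until the Armijo
    sufficient-decrease condition holds,
        f(x + alpha p) <= f(x) + c * alpha * grad f(x) . p,
    assuming p is a descent direction (grad f(x) . p < 0)."""
    alpha = alpha0
    fx = f(x)
    slope = np.dot(grad(x), p)  # directional derivative phi'(0), negative
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= rho            # shrink the trial step geometrically
    return alpha
```

Rather than locating a stationary point of \phi exactly, this accepts the first trial step that decreases f by a fixed fraction of what the local linear model predicts.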

Like other optimization methods, linesearch may be combined with simulated annealing to allow it to jump over some local minima.

