Newton's method in optimization

From Wikipedia, the free encyclopedia

A comparison of gradient descent (green) and Newton's method (red) for minimizing a function (with small step sizes). Newton's method uses curvature information to take a more direct route.

In mathematics, Newton's method is a well-known algorithm for finding roots of equations in one or more dimensions. It can also be used to find local maxima and local minima of functions, by noting that if a real number x^* is a stationary point of a function f(x), then x^* is a root of the derivative f'(x), and therefore one can solve for x^* by applying Newton's method to f'(x). The second-order Taylor expansion of f around x,

\displaystyle f(x+\Delta x) \approx f(x) + f'(x)\,\Delta x + \frac{1}{2} f''(x)\,\Delta x^2,

attains its extremum when Δx solves the linear equation:

\displaystyle f'(x) + f''(x)\,\Delta x = 0

and \displaystyle f''(x) is positive. Thus, provided that \displaystyle f(x) is a twice-differentiable function and the initial guess \displaystyle x_0 is chosen close enough to x^*, the sequence (x_n) defined by

x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}, \ n \ge 0

will converge towards x^*.
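The one-dimensional iteration above can be sketched in plain Python. The objective f(x) = x^4 - 2x^2 (with local minima at x = ±1), the starting point, and the tolerance are arbitrary illustrative choices, not part of the article:

```python
def newton_1d(fprime, fsecond, x0, tol=1e-10, max_iter=100):
    """Minimize f via Newton's method: x_{n+1} = x_n - f'(x_n)/f''(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:  # stop once the Newton step is negligible
            break
    return x

# Example: f(x) = x^4 - 2x^2 has local minima at x = +1 and x = -1.
fprime = lambda x: 4 * x**3 - 4 * x    # f'(x)
fsecond = lambda x: 12 * x**2 - 4      # f''(x)
x_star = newton_1d(fprime, fsecond, x0=2.0)
```

Starting from x_0 = 2.0, the iterates converge rapidly to the local minimum at x = 1.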

This iterative scheme can be generalized to several dimensions by replacing the derivative with the gradient, \nabla f(\mathbf{x}), and the reciprocal of the second derivative with the inverse of the Hessian matrix, H f(\mathbf{x}). One obtains the iterative scheme

\mathbf{x}_{n+1} = \mathbf{x}_n - [H f(\mathbf{x}_n)]^{-1} \nabla f(\mathbf{x}_n), \ n \ge 0.

In practice, Newton's method is often modified to include a small step size γ > 0 instead of the full step γ = 1:

\mathbf{x}_{n+1} = \mathbf{x}_n - \gamma[H f(\mathbf{x}_n)]^{-1} \nabla f(\mathbf{x}_n).

This is often done to ensure that the Wolfe conditions are satisfied at each step \mathbf{x}_n \to \mathbf{x}_{n+1} of the iteration.
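As a concrete sketch, the damped multidimensional iteration can be written with NumPy. The objective f(x, y) = e^(x-1) + e^(1-y) + (x - y)^2, the starting point, and the tolerances are illustrative choices for this example (this f is strictly convex, so its Hessian is positive definite everywhere):

```python
import numpy as np

def grad(v):
    """Gradient of f(x, y) = exp(x-1) + exp(1-y) + (x - y)^2."""
    x, y = v
    return np.array([np.exp(x - 1) + 2 * (x - y),
                     -np.exp(1 - y) - 2 * (x - y)])

def hess(v):
    """Hessian of the same f; symmetric positive definite for all (x, y)."""
    x, y = v
    return np.array([[np.exp(x - 1) + 2, -2.0],
                     [-2.0, np.exp(1 - y) + 2]])

def newton(v0, gamma=1.0, tol=1e-10, max_iter=50):
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        g = grad(v)
        if np.linalg.norm(g) < tol:      # stationary point reached
            break
        # Solve H p = -g rather than forming the inverse explicitly.
        p = np.linalg.solve(hess(v), -g)
        v = v + gamma * p                # damped Newton update
    return v

v_star = newton([0.0, 0.0])
```

At the minimizer the stationarity condition forces y = 2 - x, which the computed point satisfies to high accuracy.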

The geometric interpretation of Newton's method is that at each iteration one approximates f(\mathbf{x}) by a quadratic function around \mathbf{x}_n, and then takes a step towards the maximum/minimum of that quadratic function. (If f(\mathbf{x}) happens to be a quadratic function, then the exact extremum is found in one step.)
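The one-step property for quadratic objectives is easy to check numerically. For f(x) = ½ xᵀAx - bᵀx with A symmetric positive definite, the gradient is Ax - b and the Hessian is A, so a single full Newton step from any starting point lands on the unique minimizer A⁻¹b. The matrix A and vector b below are arbitrary illustrative choices:

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x; gradient A x - b, Hessian A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])

x0 = np.zeros(2)
grad0 = A @ x0 - b
x1 = x0 - np.linalg.solve(A, grad0)      # one full Newton step (gamma = 1)
```

The resulting x1 satisfies A x1 = b, i.e. it is exactly the minimizer.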

Newton's method converges much faster towards a local maximum or minimum than gradient descent. In fact, every local minimum has a neighborhood N such that, if we start with \mathbf{x}_0 \in N, Newton's method with step size γ = 1 converges quadratically (provided the Hessian is invertible and Lipschitz continuous in that neighborhood).

Finding the inverse of the Hessian is an expensive operation, so instead one computes the Newton step \mathbf{p}_n = \mathbf{x}_{n+1} - \mathbf{x}_n by solving the linear system

[H f(\mathbf{x}_n)]\,\mathbf{p}_n = -\nabla f(\mathbf{x}_n), \ n \ge 0,

which is often done approximately (but to high accuracy) using an iterative method such as conjugate gradient. There also exist various quasi-Newton methods, where an approximation for the Hessian (or its inverse) is used instead.
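As an illustration, a minimal linear conjugate-gradient solver can be applied to the Newton system H p = -∇f. The matrix H and gradient g below are illustrative stand-ins for a Hessian and gradient at some iterate, not taken from any particular problem:

```python
import numpy as np

def conjugate_gradient(H, rhs, tol=1e-10, max_iter=None):
    """Solve H p = rhs for symmetric positive-definite H via linear CG."""
    n = len(rhs)
    max_iter = max_iter or 10 * n
    p = np.zeros(n)
    r = rhs - H @ p          # residual
    d = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Hd = H @ d
        alpha = rs / (d @ Hd)    # exact line search along d
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        d = r + (rs_new / rs) * d  # H-conjugate new direction
        rs = rs_new
    return p

# Newton system H p = -g at some iterate (values illustrative):
H = np.array([[10.0, 2.0, 0.0], [2.0, 6.0, 1.0], [0.0, 1.0, 4.0]])
g = np.array([1.0, -3.0, 2.0])
p = conjugate_gradient(H, -g)
```

In exact arithmetic CG solves an n×n system in at most n iterations; in practice it is stopped early, which is what "approximately (but to high accuracy)" refers to above.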

If the Hessian is close to a singular matrix, the inverted Hessian can be numerically unstable and the iteration may diverge. Various workarounds have been tried, with varying success on different problems. One can, for example, modify the Hessian by adding a correction matrix B_n so as to make H f(\mathbf{x}_n) + B_n positive definite. One approach is to diagonalize H f(\mathbf{x}_n) and choose B_n so that H f(\mathbf{x}_n) + B_n has the same eigenvectors as H f(\mathbf{x}_n), but with each negative eigenvalue replaced by some ε > 0.
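The eigenvalue-modification approach can be sketched as follows, assuming a symmetric Hessian (np.linalg.eigh requires symmetry). The 2×2 matrix and the value of ε are illustrative choices:

```python
import numpy as np

def make_positive_definite(H, eps=1e-6):
    """Return H + B: same eigenvectors as H, negative eigenvalues -> eps."""
    eigvals, eigvecs = np.linalg.eigh(H)               # H must be symmetric
    eigvals_mod = np.where(eigvals <= 0.0, eps, eigvals)
    return eigvecs @ np.diag(eigvals_mod) @ eigvecs.T

# An indefinite example Hessian with eigenvalues 4 and -2:
H = np.array([[1.0, 3.0], [3.0, 1.0]])
H_mod = make_positive_definite(H)
```

The modified matrix has eigenvalues {ε, 4}: the positive eigenvalue and the eigenvectors are untouched, while the negative eigenvalue is replaced by ε, making the Newton step a descent direction.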

