Broyden's method
In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in $k$ variables. It was originally described by C. G. Broyden in 1965.[1]
Newton's method for solving $\mathbf{f}(\mathbf{x}) = \mathbf{0}$ uses the Jacobian matrix, $\mathbf{J}$, at every iteration. However, computing this Jacobian is a difficult and expensive operation. The idea behind Broyden's method is to compute the whole Jacobian only at the first iteration, and to do a rank-one update at the other iterations.
In 1979 Gay proved that when Broyden's method is applied to a linear system of size $n \times n$, it terminates in $2n$ steps,[2] although like all quasi-Newton methods, it may not converge for nonlinear systems.
Description of the method
Solving a single-variable equation
In the secant method, we replace the first derivative $f'$ at $x_n$ with the finite-difference approximation:

$$f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}},$$

and proceed similarly to Newton's method ($n$ is the index for the iterations):

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
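A minimal sketch of this secant iteration in Python (the test function, starting points, and tolerance are illustrative assumptions, not part of the original description):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Find a root of f with the secant method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            break
        # Newton step with f'(x_n) replaced by the finite-difference slope.
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# Example: the positive root of x^2 - 2, i.e. sqrt(2) ~ 1.4142.
print(secant(lambda x: x * x - 2.0, 1.0, 2.0))
```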
Solving a set of nonlinear equations
To solve a set of nonlinear equations

$$\mathbf{f}(\mathbf{x}) = \mathbf{0},$$

where the vector $\mathbf{f}$ is a function of the vector $\mathbf{x} = (x_1, \dots, x_k)$ as follows (if we have $k$ equations):

$$\mathbf{f}(\mathbf{x}) = \big( f_1(x_1, \dots, x_k), \; \dots, \; f_k(x_1, \dots, x_k) \big),$$
For such problems, Broyden gives a generalization of the above formula, replacing the derivative with the Jacobian $\mathbf{J}$. The Jacobian matrix is determined iteratively based on the secant equation, using the finite-difference approximation:

$$\mathbf{J}_n \, (\mathbf{x}_n - \mathbf{x}_{n-1}) \simeq \mathbf{f}(\mathbf{x}_n) - \mathbf{f}(\mathbf{x}_{n-1}),$$
where $n$ is the index of iterations. However, the above equation is underdetermined in more than one dimension. Broyden suggests using the current estimate of the Jacobian matrix $\mathbf{J}_{n-1}$ and improving upon it by taking the solution to the secant equation that is a minimal modification to $\mathbf{J}_{n-1}$ (minimal in the sense of minimizing the Frobenius norm $\|\mathbf{J}_n - \mathbf{J}_{n-1}\|_{\rm F}$):

$$\mathbf{J}_n = \mathbf{J}_{n-1} + \frac{\Delta \mathbf{f}_n - \mathbf{J}_{n-1} \, \Delta \mathbf{x}_n}{\|\Delta \mathbf{x}_n\|^2} \, \Delta \mathbf{x}_n^{\rm T},$$

where

$$\Delta \mathbf{x}_n = \mathbf{x}_n - \mathbf{x}_{n-1}, \qquad \Delta \mathbf{f}_n = \mathbf{f}(\mathbf{x}_n) - \mathbf{f}(\mathbf{x}_{n-1}).$$
We then proceed in the Newton direction:

$$\mathbf{x}_{n+1} = \mathbf{x}_n - \mathbf{J}_n^{-1} \, \mathbf{f}(\mathbf{x}_n).$$
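As an illustration, here is a minimal NumPy sketch of the full iteration: the Jacobian is computed once at the start and only rank-one updated afterwards. The example system, starting point, and tolerances are assumptions chosen for demonstration:

```python
import numpy as np

def broyden_good(f, x0, J0, tol=1e-10, max_iter=100):
    """Broyden's method: Jacobian estimated once, then rank-one updated."""
    x = np.array(x0, dtype=float)
    J = np.array(J0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)   # Newton direction: solve J_n dx = -f(x_n)
        x = x + dx
        f_new = f(x)
        df = f_new - fx
        # Minimal Frobenius-norm update satisfying the secant equation J_n dx = df.
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        fx = f_new
        if np.linalg.norm(fx) < tol:
            break
    return x

# Hypothetical example: solve x^2 + y^2 = 4 together with x*y = 1.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
x0 = np.array([2.0, 0.5])
J0 = np.array([[2 * x0[0], 2 * x0[1]],   # exact Jacobian at x0, computed only once
               [x0[1], x0[0]]])
print(broyden_good(f, x0, J0))  # converges to approx. [1.93185, 0.51764]
```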
Broyden also suggested using the Sherman–Morrison formula to directly update the inverse of the Jacobian matrix:

$$\mathbf{J}_n^{-1} = \mathbf{J}_{n-1}^{-1} + \frac{\Delta \mathbf{x}_n - \mathbf{J}_{n-1}^{-1} \, \Delta \mathbf{f}_n}{\Delta \mathbf{x}_n^{\rm T} \, \mathbf{J}_{n-1}^{-1} \, \Delta \mathbf{f}_n} \, \Delta \mathbf{x}_n^{\rm T} \, \mathbf{J}_{n-1}^{-1}.$$
This method is commonly known as the "good Broyden's method". A similar technique can be derived by using a slightly different modification to $\mathbf{J}_{n-1}^{-1}$ (one that minimizes $\|\mathbf{J}_n^{-1} - \mathbf{J}_{n-1}^{-1}\|_{\rm F}$ instead); this yields the so-called "bad Broyden's method" (but see[3]):

$$\mathbf{J}_n^{-1} = \mathbf{J}_{n-1}^{-1} + \frac{\Delta \mathbf{x}_n - \mathbf{J}_{n-1}^{-1} \, \Delta \mathbf{f}_n}{\|\Delta \mathbf{f}_n\|^2} \, \Delta \mathbf{f}_n^{\rm T}.$$
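For comparison, here is a sketch of both inverse updates, maintaining $H \approx \mathbf{J}^{-1}$ and applying the two formulas above directly (the test system is the same hypothetical one as before):

```python
import numpy as np

def good_update(H, dx, df):
    # "Good" Broyden: Sherman-Morrison update of H = J^{-1}.
    return H + np.outer(dx - H @ df, dx @ H) / (dx @ (H @ df))

def bad_update(H, dx, df):
    # "Bad" Broyden: minimal Frobenius-norm change to H itself.
    return H + np.outer(dx - H @ df, df) / (df @ df)

def broyden_inverse(f, x0, H0, update, tol=1e-10, max_iter=100):
    x, H = np.array(x0, dtype=float), np.array(H0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        x_new = x - H @ fx              # Newton-like step with H ~ J^{-1}
        f_new = f(x_new)
        H = update(H, x_new - x, f_new - fx)
        x, fx = x_new, f_new
        if np.linalg.norm(fx) < tol:
            break
    return x

f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
x0 = np.array([2.0, 0.5])
H0 = np.linalg.inv(np.array([[2 * x0[0], 2 * x0[1]],
                             [x0[1], x0[0]]]))  # inverse Jacobian at x0
for update in (good_update, bad_update):
    print(update.__name__, broyden_inverse(f, x0, H0, update))
```

Both variants avoid the linear solve of the Jacobian-based form, at the cost of carrying the dense inverse approximation.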
Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (the gradient, in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, which adds further constraints to its update.
See also
- Secant method
- Newton's method
- Quasi-Newton method
- Newton's method in optimization
- Davidon–Fletcher–Powell formula
- Broyden–Fletcher–Goldfarb–Shanno (BFGS) method
References
- Broyden, C. G. (October 1965). "A Class of Methods for Solving Nonlinear Simultaneous Equations". Mathematics of Computation (American Mathematical Society) 19 (92): 577–593. doi:10.2307/2003941. JSTOR 2003941.
- Gay, D. M. (August 1979). "Some convergence properties of Broyden's method". SIAM Journal on Numerical Analysis (SIAM) 16 (4): 623–630. doi:10.1137/0716047.
- Kvaalen, Eric (November 1991). "A faster Broyden method". BIT Numerical Mathematics 31 (2): 369–372. doi:10.1007/BF01931297.