LU decomposition

In linear algebra, the LU decomposition is a matrix decomposition which writes a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. This decomposition is used in numerical analysis to solve systems of linear equations and to compute the inverse of a matrix.

Definitions

Let A be a square matrix. An LU decomposition is a decomposition of the form

A = LU,

where L and U are lower and upper triangular matrices (of the same size), respectively. This means that L has only zeros above the diagonal and U has only zeros below the diagonal. For a 3 \times 3 matrix, this becomes:

\begin{bmatrix}
   a_{11} & a_{12} & a_{13} \\
   a_{21} & a_{22} & a_{23} \\
   a_{31} & a_{32} & a_{33}
\end{bmatrix}
=
\begin{bmatrix}
   l_{11} & 0      & 0      \\
   l_{21} & l_{22} & 0      \\
   l_{31} & l_{32} & l_{33}
\end{bmatrix}
\begin{bmatrix}
   u_{11} & u_{12} & u_{13} \\
   0      & u_{22} & u_{23} \\
   0      & 0      & u_{33}
\end{bmatrix}

An LDU decomposition is a decomposition of the form

A = LDU,

where D is a diagonal matrix and L and U are unit triangular matrices, meaning that all the entries on the diagonals of L and U are one.
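For instance, if A = LU with L unit lower triangular and the diagonal entries u_{11}, \ldots, u_{nn} of U all non-zero (a short derivation added here for illustration), an LDU decomposition follows by pulling the diagonal out of U:

D = \operatorname{diag}(u_{11}, \ldots, u_{nn}), \qquad A = L \, D \, (D^{-1} U),

where D^{-1}U is unit upper triangular.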

A PLU decomposition is a decomposition of the form

A = PLU,

where L and U are again lower and upper triangular matrices and P is a permutation matrix, i.e., a matrix of zeros and ones that has exactly one entry 1 in each row and column.

Finally, a PLUQ decomposition is a decomposition of the form

A = PLUQ,

where P and Q are permutation matrices and L and U are lower and upper triangular matrices.

Existence and uniqueness

An invertible matrix admits an LU factorization if and only if all its leading principal minors are non-zero. The factorization is unique if we require that the diagonal of L (or U) consist of ones. The matrix has a unique LDU factorization under the same conditions.

If the matrix is singular, an LU factorization may still exist. In fact, a square matrix of rank k has an LU factorization if its first k leading principal minors are non-zero.

The exact necessary and sufficient conditions under which a not necessarily invertible matrix over any field has an LU factorization are known. The conditions are expressed in terms of the ranks of certain submatrices. The Gaussian elimination algorithm for obtaining an LU decomposition has also been extended to this most general case (Okunev & Johnson 1997).

Every invertible matrix admits a PLU factorization. Finally, every square matrix A has a PLUQ factorization.
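As an illustration of why the permutation factors can be necessary (an example added here for concreteness), the invertible matrix

\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}

has a vanishing 1 \times 1 leading principal minor, so it admits no LU factorization; exchanging its two rows with a permutation matrix P, however, turns it into the identity matrix, which trivially factors as LU.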

Positive definite matrices

If the matrix A is Hermitian and positive definite, then we can arrange matters so that U is the conjugate transpose of L. In this case, we have written A as

A = L L^{*}.

This decomposition is called the Cholesky decomposition. The Cholesky decomposition always exists and is unique. Furthermore, computing the Cholesky decomposition is more efficient and numerically more stable than computing the LU decomposition.
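A minimal sketch of computing such a factor in practice, assuming NumPy and a small positive definite matrix chosen here for illustration (not taken from the article):

    import numpy as np

    # A small symmetric (Hermitian) positive definite matrix, chosen for
    # illustration only.
    A = np.array([[4.0, 2.0],
                  [2.0, 3.0]])

    # numpy.linalg.cholesky returns the lower triangular factor L with
    # A = L @ L.conj().T
    L = np.linalg.cholesky(A)

    print(L)
    print(np.allclose(L @ L.conj().T, A))  # True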

Algorithms

The LU decomposition is basically a modified form of Gaussian elimination. We transform the matrix A into an upper triangular matrix U by eliminating the entries below the main diagonal. The Doolittle algorithm does the elimination column by column, starting from the left, by multiplying A on the left by atomic lower triangular matrices. It results in a unit lower triangular matrix and an upper triangular matrix. The Crout algorithm is slightly different and constructs a lower triangular matrix and a unit upper triangular matrix.

Computing the LU decomposition using either of these algorithms requires 2n^3/3 floating point operations, ignoring lower order terms. Partial pivoting adds only a quadratic term and can thus be neglected; this is not the case for full pivoting (Golub & Van Loan 1996).

Doolittle algorithm

Given an N × N matrix

A = (a_{i,j})

we define

A^{(0)} := A

and then we iterate n = 1,...,N-1 as follows.

We eliminate the matrix elements below the main diagonal in the n-th column of A^{(n-1)} by adding to the i-th row of this matrix the n-th row multiplied by

l_{i,n} := -\frac{a_{i,n}^{(n-1)}}{a_{n,n}^{(n-1)}}

for i = n+1,\ldots,N. This can be done by multiplying A^{(n-1)} on the left by the lower triangular matrix

L_n = \begin{pmatrix}
   1 &        &           &        &        & 0 \\
     & \ddots &           &        &        &   \\
     &        &         1 &        &        &   \\
     &        & l_{n+1,n} & \ddots &        &   \\
     &        &    \vdots &        & \ddots &   \\
   0 &        &   l_{N,n} &        &        & 1
\end{pmatrix}.

We set

A^{(n)} := L_n A^{(n-1)}.

After N-1 steps we have eliminated all the matrix elements below the main diagonal, so we obtain an upper triangular matrix A^{(N-1)}. We find the decomposition

A = L_{1}^{-1} L_{1} A^{(0)} = L_{1}^{-1} A^{(1)} = L_{1}^{-1} L_{2}^{-1} L_{2} A^{(1)} =  L_{1}^{-1}L_{2}^{-1} A^{(2)} =\ldots = L_{1}^{-1} \ldots L_{N-1}^{-1} A^{(N-1)}.

Denote the upper triangular matrix A^{(N-1)} by U, and set L = L_{1}^{-1} \cdots L_{N-1}^{-1}. Because the inverse of a lower triangular matrix L_n is again a lower triangular matrix, and the product of two lower triangular matrices is again a lower triangular matrix, it follows that L is a lower triangular matrix. We obtain A = LU.
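A short remark (added here, using the notation above) on why L is easy to write down explicitly: the inverse of an atomic lower triangular matrix is obtained simply by negating its off-diagonal entries,

L_n^{-1} = \begin{pmatrix}
   1 &        &            &        &        & 0 \\
     & \ddots &            &        &        &   \\
     &        &          1 &        &        &   \\
     &        & -l_{n+1,n} & \ddots &        &   \\
     &        &     \vdots &        & \ddots &   \\
   0 &        &   -l_{N,n} &        &        & 1
\end{pmatrix},

and multiplying these inverses in the order L_1^{-1} L_2^{-1} \cdots L_{N-1}^{-1} simply collects the entries -l_{i,n} into the n-th column of L below the diagonal.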

It is clear that in order for this algorithm to work, one needs to have a_{n,n}^{(n-1)} \not= 0 at each step (see the definition of l_{i,n}). If this assumption fails at some point, one needs to interchange the n-th row with another row below it before continuing. This is why the LU decomposition in general looks like P^{-1}A = LU.
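The following is a rough sketch of the procedure just described, in Python with NumPy. The function name lu_doolittle and the return convention P A = L U are choices made here for illustration, not a fixed standard; for robustness the sketch performs the row interchanges mentioned above (partial pivoting) and, following the usual convention, stores the positive multipliers a_{i,n}^{(n-1)}/a_{n,n}^{(n-1)} in L rather than the negated l_{i,n} used in the derivation.

    import numpy as np

    def lu_doolittle(A):
        """Factor a square matrix so that P A = L U, with P a permutation
        matrix, L unit lower triangular and U upper triangular.
        Illustrative sketch only, not a tuned library routine."""
        A = np.array(A, dtype=float)
        N = A.shape[0]
        U = A.copy()
        L = np.eye(N)
        P = np.eye(N)
        for n in range(N - 1):
            # Partial pivoting: move the largest remaining entry of column n
            # onto the diagonal so the division below is safe.
            p = n + np.argmax(np.abs(U[n:, n]))
            if U[p, n] == 0.0:
                continue  # nothing to eliminate in this column
            if p != n:
                U[[n, p], :] = U[[p, n], :]
                P[[n, p], :] = P[[p, n], :]
                L[[n, p], :n] = L[[p, n], :n]
            for i in range(n + 1, N):
                L[i, n] = U[i, n] / U[n, n]      # multiplier for row i
                U[i, :] -= L[i, n] * U[n, :]     # eliminate entry (i, n)
        return P, L, U

    # Example with the 2 x 2 matrix from the section below:
    #   P, L, U = lu_doolittle([[4.0, 3.0], [6.0, 3.0]])
    #   np.allclose(P @ np.array([[4.0, 3.0], [6.0, 3.0]]), L @ U)  -> True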

Crout algorithm

Main article: Crout matrix decomposition (the linked article currently gives only a short description of the algorithm, not the algorithm itself)

Small example

\begin{bmatrix}
   4 & 3 \\
   6 & 3
\end{bmatrix}
=
\begin{bmatrix}
   l_{11} & 0      \\
   l_{21} & l_{22}
\end{bmatrix}
\begin{bmatrix}
   u_{11} & u_{12} \\
   0      & u_{22}
\end{bmatrix}

One way of finding the LU decomposition of this simple matrix is to solve, by inspection, the linear equations obtained by multiplying out the right-hand side. We know that:

l_{11} \cdot u_{11} + 0 \cdot 0 = 4
l_{11} \cdot u_{12} + 0 \cdot u_{22} = 3
l_{21} \cdot u_{11} + l_{22} \cdot 0 = 6
l_{21} \cdot u_{12} + l_{22} \cdot u_{22} = 3

This system is underdetermined: four equations do not pin down all six unknowns. Choosing l_{11} = 1, for instance, the equations give

u_{11} = 4
u_{12} = 3
l_{21} = 1.5
l_{22} \cdot u_{22} = -1.5

which still leaves l_{22} and u_{22} undetermined.

To single out a unique LU decomposition, we instead require all the entries on the main diagonal of the upper triangular matrix to be one, so that U is a unit upper triangular matrix (as produced by the Crout algorithm described above). Solving the linear equations again then gives:

l_{11} = 4
u_{12} = 0.75
l_{21} = 6
l_{22} = -1.5

Substituting these values into the LU decomposition above:

\begin{bmatrix}
   4 & 3 \\
   6 & 3
\end{bmatrix}
=
\begin{bmatrix}
   4 & 0    \\
   6 & -1.5
\end{bmatrix}
\begin{bmatrix}
   1 & 0.75 \\
   0 & 1
\end{bmatrix}
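These factors can be checked numerically; a minimal sketch using NumPy, with the values taken from the example above:

    import numpy as np

    L = np.array([[4.0,  0.0],
                  [6.0, -1.5]])
    U = np.array([[1.0, 0.75],
                  [0.0, 1.0]])

    # Multiplying the factors recovers the original matrix.
    print(L @ U)    # [[4. 3.]
                    #  [6. 3.]]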

Applications

Solving linear equations

Given a matrix equation

Ax = LUx = b

we want to solve the equation for a given A and b. In this case the solution is done in two logical steps:

  1. First, we solve the equation Ly = b for y
  2. Second, we solve the equation Ux = y for x.

Note that in both cases we are dealing with triangular matrices (lower and upper), which can be solved directly by forward and backward substitution, without the Gaussian elimination process (which, or something equivalent, is still needed to compute the LU decomposition itself). The LU decomposition therefore pays off when the same matrix equation must be solved repeatedly for different b: it is faster to do an LU decomposition of the matrix A once and then solve the two triangular systems for each b than to run Gaussian elimination every time.
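A minimal sketch of the two triangular solves in Python with NumPy (the function names are choices made here for illustration; L and U are assumed to come from a factorization such as the one sketched earlier, and pivoting is ignored for simplicity — with the P from that sketch one would solve L y = P b instead):

    import numpy as np

    def forward_substitution(L, b):
        """Solve L y = b for lower triangular L, row by row from the top."""
        N = len(b)
        y = np.zeros(N)
        for i in range(N):
            y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
        return y

    def backward_substitution(U, y):
        """Solve U x = y for upper triangular U, row by row from the bottom."""
        N = len(y)
        x = np.zeros(N)
        for i in range(N - 1, -1, -1):
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    # Solving A x = b once L and U are known:
    #   y = forward_substitution(L, b)
    #   x = backward_substitution(U, y)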

Inverse matrix

The matrices L and U can be used to compute the matrix inverse: each column x_j of A^{-1} satisfies A x_j = e_j, where e_j is the j-th column of the identity matrix, so it can be found with the two triangular solves described above.

Computer implementations that invert matrices often use this approach.
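A minimal sketch of this column-by-column approach, using SciPy's routines (scipy.linalg.lu returns factors with A = P L U; the helper name inverse_via_lu is an illustrative choice, and in practice one would simply call a library inversion routine):

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    def inverse_via_lu(A):
        """Compute the inverse of A by factoring it once and solving
        A x_j = e_j for every column e_j of the identity.
        Illustrative sketch only."""
        A = np.asarray(A, dtype=float)
        N = A.shape[0]
        P, L, U = lu(A)                  # SciPy convention: A = P @ L @ U
        inv = np.empty((N, N))
        for j in range(N):
            e = np.zeros(N)
            e[j] = 1.0
            y = solve_triangular(L, P.T @ e, lower=True)   # L y = P^T e_j
            inv[:, j] = solve_triangular(U, y)             # U x_j = y
        return inv

    # Example: inverse_via_lu([[4.0, 3.0], [6.0, 3.0]]) agrees with np.linalg.inv.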

See also

References

External links

  • LAPACK is a collection of FORTRAN subroutines for solving dense linear algebra problems
  • ALGLIB includes a partial port of LAPACK to C++, C#, Delphi, etc.
  • Online Matrix Calculator performs LU decomposition
  • LU decomposition at Holistic Numerical Methods Institute