Matrix decomposition
In the mathematical discipline of linear algebra, a matrix decomposition is a factorization of a matrix into some canonical form. There are many different matrix decompositions; each finds use among a particular class of problems.
Example
In numerical analysis, different decompositions are used to implement efficient matrix algorithms.
For instance, when solving a system of linear equations Ax = b, the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The resulting triangular systems, L(Ux) = b and Ux = L^−1b, are much easier to solve than the original, since triangular systems can be handled by forward and back substitution.
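As a rough numerical sketch of this workflow (assuming NumPy and SciPy; the matrix and right-hand side below are made-up values), the factorization is computed once and the two triangular systems are then solved by substitution:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

# Made-up 3x3 system Ax = b, used only for illustration.
A = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 2.0],
              [0.0, 1.0, 5.0]])
b = np.array([7.0, 11.0, 6.0])

# Factor A = P L U (P is a permutation matrix, L lower and U upper triangular).
P, L, U = lu(A)

# Solve L y = P^T b by forward substitution, then U x = y by back substitution.
y = solve_triangular(L, P.T @ b, lower=True)
x = solve_triangular(U, y, lower=False)

print(np.allclose(A @ x, b))  # True: the two triangular solves recover x
```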
Decompositions related to solving systems of linear equations
LU decomposition
- Applicable to: square matrix A
- Decomposition: A = LU, where L is lower triangular and U is upper triangular
- Related: the LDU decomposition is A = LDU, where L is lower triangular with ones on the diagonal, U is upper triangular with ones on the diagonal, and D is a diagonal matrix.
- Related: the LUP decomposition is A = LUP, where L is lower triangular, U is upper triangular, and P is a permutation matrix.
- Existence: An LUP decomposition exists for any square matrix A. When P is an identity matrix, the LUP decomposition reduces to the LU decomposition. If the LU decomposition exists, the LDU decomposition does too.
- Comments: The LUP and LU decompositions are useful in solving an n-by-n system of linear equations Ax = b. These decompositions summarize the process of Gaussian elimination in matrix form. Matrix P represents any row interchanges carried out in the process of Gaussian elimination; if the system can be solved by Gaussian elimination without row interchanges, then P=I, so an LU decomposition exists.
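A brief practical sketch of this use (assuming SciPy; the numbers are illustrative): the routines lu_factor and lu_solve compute the pivoted factorization and carry out the row interchanges of P automatically.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[0.0, 2.0, 1.0],   # leading zero forces a row interchange, so P != I
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 3.0]])
b = np.array([3.0, 2.0, 5.0])

# lu_factor returns the packed LU factors plus pivot indices (the permutation P).
lu_and_piv = lu_factor(A)
x = lu_solve(lu_and_piv, b)

print(np.allclose(A @ x, b))  # True
```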
LU Reduction
Block LU decomposition
Cholesky decomposition
- Applicable to: square, symmetric, positive definite matrix A
- Decomposition: A = U^TU, where U is upper triangular with positive diagonal entries
- Comment: the Cholesky decomposition is a special case of the symmetric LU decomposition, with L = U^T.
- Comment: the Cholesky decomposition is unique
- Comment: the Cholesky decomposition also applies to complex Hermitian positive definite matrices
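A minimal numerical sketch (assuming NumPy; the matrix is a made-up positive definite example). Note that NumPy's routine returns the lower-triangular factor L with A = LL^T, so the upper-triangular factor in the notation above is U = L^T:

```python
import numpy as np

# Made-up symmetric positive definite matrix.
A = np.array([[4.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

L = np.linalg.cholesky(A)   # lower-triangular factor, A = L L^T
U = L.T                     # upper-triangular factor in the notation used above

print(np.allclose(U.T @ U, A))  # True
```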
QR decomposition
- Applicable to: m-by-n matrix A
- Decomposition: A = QR where Q is an orthogonal matrix of size m-by-m, and R is an upper triangular matrix of size m-by-n
- Comment: The QR decomposition provides an alternative way of solving the system of equations Ax = b without inverting the matrix A. The fact that Q is orthogonal means that Q^TQ = I, so that Ax = b is equivalent to Rx = Q^Tb, which is easier to solve since R is triangular.
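A short sketch of solving Ax = b this way (assuming NumPy and SciPy; the system is randomly generated purely for illustration):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # illustrative square, (generically) invertible matrix
b = rng.standard_normal(3)

# Factor A = QR, then solve Rx = Q^T b by back substitution.
Q, R = np.linalg.qr(A)
x = solve_triangular(R, Q.T @ b, lower=False)

print(np.allclose(A @ x, b))  # True
```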
Decompositions based on eigenvalues and related concepts
Eigendecomposition (also called spectral decomposition)
- Applicable to: square matrix A
- Decomposition: A = VDV^−1, where D is a diagonal matrix formed from the eigenvalues of A, and the columns of V are the corresponding eigenvectors of A.
- Existence: An n-by-n matrix A always has n eigenvalues, which can be ordered (in more than one way) to form an n-by-n diagonal matrix D and a corresponding matrix of nonzero columns V that satisfies the eigenvalue equation AV = VD. If the n eigenvalues are distinct (that is, none is equal to any of the others), then V is invertible, implying the decomposition A = VDV^−1.
- Comment: The eigendecomposition is useful for understanding the solution of a system of linear ordinary differential equations or linear difference equations. For example, the difference equation x_{t+1} = Ax_t starting from the initial condition x_0 = c is solved by x_t = A^tc, which is equivalent to x_t = VD^tV^−1c, where V and D are the matrices formed from the eigenvectors and eigenvalues of A. Since D is diagonal, raising it to the power t, D^t, just involves raising each element on the diagonal to the power t. This is much easier to do and to understand than raising A to the power t, since A is usually not diagonal.
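A small sketch of this idea (assuming NumPy; the matrix, initial condition, and horizon are made-up values):

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])   # made-up matrix with distinct eigenvalues (1.0 and 0.7)
c = np.array([1.0, 0.0])     # initial condition x_0 = c
t = 5

# Eigendecomposition A = V D V^-1; eig returns the eigenvalues as a vector.
eigvals, V = np.linalg.eig(A)
D_to_t = np.diag(eigvals ** t)              # D^t: raise each diagonal entry to the power t
x_t = V @ D_to_t @ np.linalg.inv(V) @ c     # x_t = V D^t V^-1 c

print(np.allclose(x_t, np.linalg.matrix_power(A, t) @ c))  # True: agrees with A^t c
```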
Jordan decomposition
- Applicable to: square matrix A
- Comment: the Jordan decomposition generalizes the eigendecomposition to cases where there are repeated eigenvalues and the matrix cannot be diagonalized.
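Jordan forms are usually computed symbolically rather than in floating point; a minimal sketch using SymPy's Matrix.jordan_form (an assumed dependency, with a made-up defective matrix):

```python
import sympy as sp

# Made-up matrix with a repeated eigenvalue (2, 2) but only one independent
# eigenvector, so it has no eigendecomposition; its Jordan form has one 2x2 block.
A = sp.Matrix([[1, 1],
               [-1, 3]])

P, J = A.jordan_form()                    # A = P J P^{-1}
print(J)                                  # Matrix([[2, 1], [0, 2]])
print(sp.simplify(P * J * P.inv() - A))   # zero matrix
```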
Schur decomposition
- Applicable to: square matrix A
- Comment: there are two versions of this decomposition: the complex Schur decomposition and the real Schur decomposition.
- Decomposition (complex version): A = UTU^H, where U is a unitary matrix, U^H is the conjugate transpose of U, and T is an upper triangular matrix called the complex Schur form which has the eigenvalues of A along its diagonal.
- Decomposition (real version): A = VSV^T, where A, V, S and V^T are matrices that contain real numbers only. In this case, V is an orthogonal matrix, V^T is the transpose of V, and S is a block upper triangular matrix called the real Schur form. The blocks on the diagonal of S are of size 1×1 (in which case they represent real eigenvalues) or 2×2 (in which case they are derived from complex conjugate eigenvalue pairs).
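A minimal numerical sketch of both forms (assuming SciPy; the matrix is randomly generated purely for illustration):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))      # made-up real matrix

# Real Schur form: A = V S V^T, with V orthogonal and S block upper triangular.
S, V = schur(A, output='real')
print(np.allclose(V @ S @ V.T, A))   # True

# Complex Schur form: A = U T U^H, with U unitary and T upper triangular;
# the eigenvalues of A appear on the diagonal of T.
T, U = schur(A, output='complex')
print(np.allclose(U @ T @ U.conj().T, A))  # True
```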
QZ decomposition (also called generalized Schur decomposition)
- Applicable to: square matrices A and B
- Comment: there are two versions of this decomposition: complex and real.
- Decomposition (complex version): A = QSZ^H and B = QTZ^H, where Q and Z are unitary matrices, the H superscript represents conjugate transpose, and S and T are upper triangular matrices.
- Comment: in the complex QZ decomposition, the ratios of the diagonal elements of S to the corresponding diagonal elements of T, λ_i = S_ii / T_ii, are the generalized eigenvalues that solve the generalized eigenvalue problem Av = λBv (where λ is an unknown scalar and v is an unknown nonzero vector).
- Decomposition (real version): A = QSZ^T and B = QTZ^T, where A, B, Q, Z, S, and T are matrices containing real numbers only. In this case Q and Z are orthogonal matrices, the T superscript represents transposition, and S and T are block upper triangular matrices. The blocks on the diagonal of S and T are of size 1×1 or 2×2.
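A minimal numerical sketch of the complex QZ decomposition (assuming SciPy; the matrix pair is randomly generated purely for illustration):

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))   # made-up matrix pair
B = rng.standard_normal((4, 4))

# Complex QZ: A = Q S Z^H and B = Q T Z^H, with Q, Z unitary and S, T upper triangular.
S, T, Q, Z = qz(A, B, output='complex')
print(np.allclose(Q @ S @ Z.conj().T, A))   # True
print(np.allclose(Q @ T @ Z.conj().T, B))   # True

# Ratios of diagonal entries give the generalized eigenvalues of Av = lambda Bv.
lam = np.diag(S) / np.diag(T)
print(lam)
```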
Singular value decomposition
- Applicable to: m-by-n matrix A.
- Decomposition: A = UDV^H, where D is a nonnegative diagonal matrix, and U and V are unitary matrices, and V^H denotes the conjugate transpose of V (or simply the transpose, if V contains real numbers only).
- Comment: The diagonal elements of D are called the singular values of A.
- Comment: like the eigendecomposition, the singular value decomposition involves finding basis directions along which matrix multiplication is equivalent to scalar multiplication, but it has greater generality since the matrix under consideration need not be square.
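A short sketch on a rectangular matrix (assuming NumPy; the matrix is randomly generated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))     # a non-square example

# Full SVD: A = U D V^H; for a real A, U and V are orthogonal and V^H is just V^T.
# The vector s holds the singular values (the diagonal of D) in decreasing order.
U, s, Vh = np.linalg.svd(A, full_matrices=True)
D = np.zeros((4, 3))
np.fill_diagonal(D, s)

print(np.allclose(U @ D @ Vh, A))   # True
```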
Takagi's factorization
- Applicable to: square, complex, symmetric matrix A.
- Decomposition: A = VDV^T, where D is a real nonnegative diagonal matrix, and V is unitary. V^T denotes the matrix transpose of V.
- Comment: the diagonal elements of D are the nonnegative square roots of the eigenvalues of AA^H.
- Comment: V may be complex even if A is real.
Other decompositions
See also