Square root of a matrix
From Wikipedia, the free encyclopedia
In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices.
Square roots of positive operators
In linear algebra and operator theory, given a bounded positive semidefinite operator T on a complex Hilbert space, B is a square root of T if T = B*B. According to the spectral theorem, the continuous functional calculus can be applied to obtain an operator T^{1/2} such that T^{1/2} is itself positive and (T^{1/2})^2 = T. The operator T^{1/2} is the unique positive square root of T.
A bounded positive operator on a complex Hilbert space is self-adjoint. So T = (T^{1/2})* T^{1/2}. Conversely, it is trivially true that every operator of the form B*B is positive. Therefore, an operator T is positive if and only if T = B*B for some B (equivalently, T = CC* for some C).
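In the finite-dimensional case the positive square root can be computed directly from the spectral decomposition. The following is a minimal pure-Python sketch for a 2 × 2 real symmetric positive semidefinite matrix (the function name and the representation of matrices as nested lists are illustrative choices, not part of the article):

```python
import math

def sqrt_psd_2x2(t):
    """Unique positive square root of a 2x2 real symmetric PSD matrix t."""
    a, b, d = t[0][0], t[0][1], t[1][1]
    if abs(b) < 1e-12:                      # already diagonal
        return [[math.sqrt(a), 0.0], [0.0, math.sqrt(d)]]
    # Eigenvalues from the characteristic polynomial x^2 - tr*x + det = 0.
    tr, det = a + d, a * d - b * b
    gap = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    l1, l2 = (tr + gap) / 2.0, (tr - gap) / 2.0
    s1, s2 = math.sqrt(max(l1, 0.0)), math.sqrt(max(l2, 0.0))
    # A unit eigenvector for l1 is proportional to (b, l1 - a).
    n = math.hypot(b, l1 - a)
    c, s = b / n, (l1 - a) / n
    # T^{1/2} = V diag(s1, s2) V^T, written out entrywise.
    return [[s1 * c * c + s2 * s * s, (s1 - s2) * c * s],
            [(s1 - s2) * c * s, s1 * s * s + s2 * c * c]]

T = [[29.0, 20.0], [20.0, 29.0]]   # eigenvalues 49 and 9
R = sqrt_psd_2x2(T)                # the positive square root [[5, 2], [2, 5]]
```

Since T has eigenvalues 49 and 9, its positive square root has eigenvalues 7 and 3 in the same eigenbasis, giving R = [[5, 2], [2, 5]].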
The Cholesky factorization T = LL*, with L lower triangular, provides a particular example of such a square root.
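As a concrete illustration (a minimal pure-Python sketch for the 2 × 2 real positive definite case; the function name is ours), note that the Cholesky factor L is lower triangular, so it generally differs from the positive square root T^{1/2}:

```python
import math

def cholesky_2x2(t):
    """Lower-triangular L with L L^T = t, for a 2x2 real SPD matrix t."""
    l11 = math.sqrt(t[0][0])
    l21 = t[1][0] / l11
    l22 = math.sqrt(t[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

T = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky_2x2(T)     # [[2, 0], [1, 2]]; not the positive square root of T
```

Here L L^T = T holds exactly, yet L is not symmetric, so it cannot be the (necessarily self-adjoint) positive square root.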
Unitary freedom of square roots
If T is an n × n positive matrix, all square roots of T are related by unitary transformations. More precisely, if T = AA* = BB*, then there exists a unitary U such that A = BU. We now verify this claim.
Let T = ∑_i λ_i e_i e_i* be a spectral decomposition of T and take B to be the matrix whose i-th column is b_i = √λ_i e_i, so that BB* = T. (Indeed B = T^{1/2} E, where E = [e_1 ⋯ e_n] is unitary, so B differs from the unique positive square root T^{1/2} only by a unitary factor.) It suffices to show that, for any square root A of T, A = BU for some unitary U. Let {a_i}, 1 ≤ i ≤ n, and {b_i}, 1 ≤ i ≤ n, be the sets of column vectors of A and B, respectively. By construction, {b_i} is a set of, not necessarily normalized, mutually orthogonal eigenvectors of the Hermitian matrix T, therefore an orthogonal basis for C^n. Notice we include the eigenvectors corresponding to eigenvalue 0 when appropriate. The argument below hinges on the fact that {b_i} is linearly independent and spans C^n.
The equality AA* = BB* can be rewritten as
∑_i a_i a_i* = ∑_i b_i b_i*.
By completeness of {b_i}, for each j there exist n scalars {u_ij}, 1 ≤ i ≤ n (appending zeros if necessary), such that
a_j = ∑_i u_ij b_i,
i.e. A = BU, where U = [u_ij].
We need to show the matrix U = [u_ij] is unitary. Compute directly
AA* = ∑_j a_j a_j* = ∑_j (∑_i u_ij b_i)(∑_k u_kj b_k)*.
So
AA* = ∑_{i,k} (∑_j u_ij (u_kj)*) b_i b_k*.
By assumption,
AA* = BB* = ∑_i b_i b_i*.
Now because {b_i} is a basis of C^n, the set {b_i b_k*} is a basis for the n × n matrices. For linear spaces in general, the expression of an element in terms of a given basis is unique. Therefore
∑_j u_ij (u_kj)* = δ_ik, that is, UU* = I.
In other words, the row vectors of U (and hence, since U is square, its column vectors) form an orthonormal set, U is unitary, and the claim is proved.
The argument extends to the case where A and B are not necessarily square, provided one retains the assumption that {b_i} is linearly independent. In that case, the existence of the rectangular matrix U = [u_ij] follows from the relation
Ran(A) = Ran(AA*) = Ran(BB*) = span{b_i},
rather than from the completeness of {b_i}. The conclusion UU* = I still holds. In general, B is n × m for some m ≤ n, A is n × k where m ≤ k ≤ n, and U is an m × k partial isometry. (By a partial isometry we mean here a rectangular matrix U with UU* = I.)
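A small numerical illustration of the claim, assuming B is invertible so that U = B^{-1}A (a pure-Python sketch; the 2 × 2 matrices and helper functions are illustrative): T = 2I has positive square root B = √2 · I, while A = [[1, 1], [-1, 1]] is another square root, and the resulting U is unitary (orthogonal, in this real example).

```python
import math

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(x):
    return [[x[j][i] for j in range(2)] for i in range(2)]

s = math.sqrt(2.0)
B = [[s, 0.0], [0.0, s]]           # positive square root of T = 2I
A = [[1.0, 1.0], [-1.0, 1.0]]      # another square root: A A^T = 2I

# B is invertible, so the unitary relating the two roots is U = B^{-1} A.
Binv = [[1.0 / s, 0.0], [0.0, 1.0 / s]]
U = matmul(Binv, A)
I2 = matmul(U, transpose(U))       # U U^T equals the identity
```

Here U is the rotation by 45 degrees scaled consistently: A = BU holds and U U^T = I, as the general argument predicts.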
Some applications
The unitary freedom of square roots has applications in linear algebra and quantum information theory.
Kraus operators
By Choi's result, a linear map
Φ : C^{n×n} → C^{m×m}
is completely positive if and only if it is of the form
Φ(A) = ∑_{i=1}^{k} V_i A V_i*,
where k ≤ nm. Let {E_pq} ⊂ C^{n×n} be the n² elementary matrix units. The positive matrix
M_Φ = ∑_{p,q} E_pq ⊗ Φ(E_pq)
is called the Choi matrix of Φ. The Kraus operators correspond to the, not necessarily square, square roots of M_Φ: for any square root B of M_Φ, one can obtain a family of Kraus operators V_i by undoing the vec operation on each column b_i of B. Thus all sets of Kraus operators are related by partial isometries.
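A hedged sketch of this correspondence for the simplest case, the identity channel on C^{2×2} (the column-stacking vec convention and the composite index ordering (p, i) → 2p + i are our choices, made only for illustration): the Choi matrix is rank one, its single square-root column is vec(I), and undoing the vec recovers the Kraus operator I.

```python
# Choi matrix of the identity channel Phi(X) = X on 2x2 matrices,
# with composite index (p, i) -> 2*p + i:  M = sum_{p,q} E_pq (x) Phi(E_pq).
M = [[0.0] * 4 for _ in range(4)]
for p in range(2):
    for q in range(2):
        # E_pq (x) E_pq contributes a 1 at row (p, p), column (q, q).
        M[2 * p + p][2 * q + q] += 1.0

# M = w w^T with w = vec(I) = (1, 0, 0, 1), so the 4x1 column w is a
# rectangular square root of M: w w* = M.
w = [1.0, 0.0, 0.0, 1.0]

# Undo the vec (column-stacking) operation on the single column w:
V = [[w[0], w[2]],
     [w[1], w[3]]]        # the Kraus operator; here V is the identity
```

As expected, the identity channel has the single Kraus operator V = I, and any other square root of M would yield a Kraus family related to {I} by a partial isometry.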
Mixed ensembles
In quantum physics, a density matrix for an n-level quantum system is an n × n complex matrix ρ that is positive semidefinite with trace 1. If ρ can be expressed as
ρ = ∑_i p_i v_i v_i*,
where ∑_i p_i = 1 and the v_i are unit vectors, the set
{p_i, v_i}
is said to be an ensemble that describes the mixed state ρ. Notice {v_i} is not required to be orthogonal. Different ensembles describing the same state ρ are related by unitary operators, via the square roots of ρ. For instance, suppose
ρ = ∑_i a_i a_i*.
The trace 1 condition means
∑_i a_i* a_i = ∑_i ‖a_i‖² = 1.
Let
p_i = ‖a_i‖²,
and let v_i be the normalized a_i. We see that
{p_i, v_i}
gives the mixed state ρ.
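To illustrate (a pure-Python sketch; the helper name and the choice of real vectors are ours), two different ensembles — the computational basis and a rotated basis, each taken with probability 1/2 — describe the same maximally mixed state I/2:

```python
import math

def rho_from_ensemble(ensemble):
    """rho = sum_i p_i v_i v_i^T for real unit vectors v_i (2-level system)."""
    r = [[0.0, 0.0], [0.0, 0.0]]
    for p, v in ensemble:
        for i in range(2):
            for j in range(2):
                r[i][j] += p * v[i] * v[j]
    return r

s = 1.0 / math.sqrt(2.0)
ens1 = [(0.5, (1.0, 0.0)), (0.5, (0.0, 1.0))]   # computational basis
ens2 = [(0.5, (s, s)), (0.5, (s, -s))]          # rotated basis
r1 = rho_from_ensemble(ens1)
r2 = rho_from_ensemble(ens2)                    # both equal I/2
```

The two ensembles give identical density matrices, consistent with the statement that ensembles for the same ρ are related through the square roots of ρ.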
Operators on Hilbert space
In general, if A, B are closed and densely defined operators on a Hilbert space H, and A*A = B*B, then A = UB where U is a partial isometry.
Numerical analysis
One can also define the square root of an n × n, not necessarily positive, matrix A. A matrix B is said to be a square root of A if the matrix product BB is equal to A.
Numerical method
Given a square matrix A, one way to find its square root is the Denman-Beavers square root iteration, described below:
- Given matrix A, the goal is to find A^{1/2}.
- Let Y_0 = A and Z_0 = I, where I is the identity matrix.
- Iterate
Y_{k+1} = (Y_k + Z_k^{-1}) / 2,
Z_{k+1} = (Z_k + Y_k^{-1}) / 2;
- then Y_k converges to A^{1/2} and Z_k converges to A^{-1/2}.
Diagonalizable matrices
An n × n matrix A is diagonalizable if and only if there is a basis of C^n consisting of eigenvectors of A.
When A is diagonalizable, A^{1/2} can be computed in a natural way:
- 1. Find a matrix V whose columns are eigenvectors of A.
- 2. Find the inverse V^{-1} of V.
- 3. Let
A' = V^{-1} A V.
- The matrix A' will be a diagonal matrix whose diagonal elements are eigenvalues of A. (This is the diagonalization of A.)
- 4. Replace each diagonal element of A' by its square root to obtain a matrix B; then
A^{1/2} = V B V^{-1}.
On a graphing calculator, this method is more efficient than the previous one.
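The four steps can be traced on a small diagonalizable (non-symmetric) example with eigenvalues 6 and 1 (the specific matrices and helper functions below are illustrative choices):

```python
import math

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A diagonalizable (non-symmetric) matrix with eigenvalues 6 and 1.
A = [[5.0, 4.0], [1.0, 2.0]]

# Step 1: the columns of V are eigenvectors of A (for 6 and 1 respectively).
V = [[4.0, 1.0], [1.0, -1.0]]

# Step 2: the inverse of V.
detV = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[ V[1][1] / detV, -V[0][1] / detV],
        [-V[1][0] / detV,  V[0][0] / detV]]

# Step 3: A' = V^{-1} A V is diagonal, with the eigenvalues on the diagonal.
Aprime = matmul(Vinv, matmul(A, V))

# Step 4: take scalar square roots of the diagonal, then map back.
B = [[math.sqrt(Aprime[0][0]), 0.0], [0.0, math.sqrt(Aprime[1][1])]]
S = matmul(V, matmul(B, Vinv))     # S is a square root: S*S reproduces A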
This approach works only for diagonalizable matrices. For non-diagonalizable matrices one can calculate the Jordan normal form followed by a series expansion, similar to the approach described in logarithm of a matrix.
References
- Sheung Hun Cheng, Nicholas J. Higham, Charles S. Kenney, and Alan J. Laub, Approximating the Logarithm of a Matrix to Specified Accuracy, SIAM Journal on Matrix Analysis and Applications, vol. 22 (2001), no. 4, pp. 1112–1125.