Smith normal form
From Wikipedia, the free encyclopedia
The Smith normal form is a normal form that can be defined for any matrix (not necessarily square) with entries in a principal ideal domain (PID). The Smith normal form of a matrix is diagonal, and can be obtained from the original matrix by multiplying on the left and right by invertible square matrices. In particular, the integers are a PID, so one can always calculate the Smith normal form of an integer matrix. The Smith normal form is very useful for working with finitely generated modules over a PID, and in particular for deducing the structure of a quotient of a free module.
Let A be a nonzero m×n matrix over a principal ideal domain R, and for a in R \ {0}, write δ(a) for the number of prime factors of a, counted with multiplicity (these exist and are unique up to units and reordering, since any PID is also a unique factorization domain). In particular, R is also a Bézout domain, so it is a gcd domain, and the gcd of any two elements satisfies Bézout's identity.
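Over R = ℤ, for instance, δ simply counts prime factors with multiplicity, so the units ±1 are exactly the elements with δ = 0. A minimal sketch (the function name is ours):

```python
def delta(a: int) -> int:
    """Number of prime factors of a nonzero integer, with multiplicity.

    Over R = Z this is the delta used below; units (+1, -1) have delta = 0.
    """
    a = abs(a)
    if a == 0:
        raise ValueError("delta is only defined for nonzero elements")
    count, p = 0, 2
    while p * p <= a:
        while a % p == 0:
            a //= p
            count += 1
        p += 1
    if a > 1:          # one leftover prime factor remains
        count += 1
    return count

print(delta(12))       # 3, since 12 = 2 * 2 * 3
```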
Algorithm
Our goal will be to find invertible square matrices S and T such that the product S A T is diagonal. This is the hardest part of the algorithm; once we have achieved diagonality it becomes relatively easy to put the matrix in Smith normal form. (Note that invertibility of a matrix with entries in R is the same as saying that its determinant is a unit.) Phrased more abstractly, the goal is to show that, thinking of A as a map from R^n (the free R-module of rank n) to R^m (the free R-module of rank m), there are isomorphisms S : R^m → R^m and T : R^n → R^n such that S · A · T has the simple form of a diagonal matrix. The matrices S and T will be found by repeatedly applying elementary transformations that replace a row (column) with a linear combination of itself and another row (column).
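For instance, over ℤ the transformation "add c times row k to row i" is left-multiplication by an identity matrix carrying one extra entry c; its determinant is 1, so it is invertible over ℤ. A minimal sketch (helper names are ours):

```python
def elementary_row_add(n: int, i: int, k: int, c: int):
    """n x n identity matrix with an extra entry c at (i, k).

    Left-multiplying by it replaces row i with row i + c * row k;
    its determinant is 1, so it is invertible over Z.
    """
    E = [[int(r == s) for s in range(n)] for r in range(n)]
    E[i][k] = c
    return E

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 4], [6, 8]]
E = elementary_row_add(2, 1, 0, -3)   # row 1 <- row 1 - 3 * row 0
print(matmul(E, A))                    # [[2, 4], [0, -4]]
```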
Set t = 1 and choose j_t to be the smallest column index of A with a non-zero entry. One can repeatedly apply the following three cases to put the matrix into Smith normal form.
Case I
If a_{t,j_t} = 0 and a_{k,j_t} ≠ 0 for some k > t, exchange rows t and k.
Case II
If there is an entry at position (k,j_t) such that a_{t,j_t} does not divide a_{k,j_t}, then, letting β = gcd(a_{t,j_t}, a_{k,j_t}), we know by the Bézout property that there exist σ, τ in R such that

    a_{t,j_t} · σ + a_{k,j_t} · τ = β.

By left-multiplication with an appropriate invertible matrix L, it can be achieved that row t of the matrix product is the sum of row t multiplied by σ and row k multiplied by τ. (If σ and τ satisfy the above equation, they must be relativelyly prime; so there exist α and γ in R such that

    σ · α + τ · γ = 1,

or in other words, the determinant of the matrix

    [  σ   τ ]
    [ −γ   α ]

equals one. L can be obtained by fitting this matrix into the diagonal of the identity matrix at the appropriate positions, depending on the values of t and k. That L has determinant one guarantees that L is invertible over R.) After left-multiplying by L we get β at position (t,j_t); since a_{t,j_t} does not divide a_{k,j_t}, β is a proper divisor of a_{t,j_t}, so δ(β) < δ(a_{t,j_t}). Repeating these steps therefore terminates, and one ends up with a matrix having an entry at position (t,j_t) that divides all entries in column j_t.
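Over R = ℤ, σ and τ can be produced by the extended Euclidean algorithm, and a 2×2 matrix with first row (σ, τ) and determinant one can be completed with second row (−b/β, a/β), since σ·(a/β) + τ·(b/β) = 1. A sketch under the convention σ·a + τ·b = β (the names and the sample values 12, 42 are ours):

```python
def extended_gcd(a: int, b: int):
    """Return (g, sigma, tau) with sigma*a + tau*b == g == gcd(a, b)."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

a, b = 12, 42                        # entries a_{t,j_t} and a_{k,j_t}
g, sigma, tau = extended_gcd(a, b)
# 2x2 block of L, with determinant (sigma*a + tau*b) / g == 1:
L_block = [[sigma, tau], [-b // g, a // g]]
det = L_block[0][0] * L_block[1][1] - L_block[0][1] * L_block[1][0]
assert det == 1
# Acting on the column (a, b) puts the gcd at the top and 0 below it:
assert [sigma * a + tau * b, (-b // g) * a + (a // g) * b] == [g, 0]
print(g, sigma, tau)                 # 6 -3 1
```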
Case III
Finally, by adding appropriate multiples of row t to the other rows (equivalently, by left-multiplication with an appropriate matrix), all entries in column j_t except the one at position (t,j_t) can be made zero. However, to make the matrix fully diagonal we need to eliminate the nonzero entries in the row of position (t,j_t) as well. This can be done by repeating the steps in Case II for columns instead of rows, using multiplication on the right. In general this will cause the zero entries from the prior application of Case II to become nonzero again.
However, notice that the ideals generated by the elements at position (t,j_t) form an ascending chain, because entries from a later step always divide entries from a previous step. Therefore, since R is a Noetherian ring (it is a PID), the chain of ideals eventually becomes stationary. This means that at some stage after Case II has been applied, the entry at (t,j_t) will divide all nonzero entries in its row and column before any further steps of Case II are applied. Then we can eliminate the entries in the row or column with nonzero entries while preserving the zeros in the already-zero row or column. At this point, only the block of A to the lower right of (t,j_t) needs to be diagonalized, and the algorithm can be applied recursively, treating this block as a separate matrix.
Results
Applying the steps described above to the remaining non-zero columns of the resulting matrix (if any), we get an m×n matrix with column indices j_1 < j_2 < … < j_r, where r ≤ min(m, n), such that:
- the entry at position (l,j_l) is non-zero;
- all entries below and above position (l,j_l), as well as all entries to the left of (l,j_l), are zero.
Furthermore, all rows below the r-th row are zero.
This is a version of the Gauss algorithm for principal ideal domains which is usually described only for commutative fields.
For short, write α_i for the element at position (i,i). Now we can re-order the columns of this matrix so that α_i is nonzero for 1 ≤ i ≤ r with δ(α_i) ≤ δ(α_{i+1}) for 1 ≤ i < r, and so that all columns to the right of the r-th column (if present) are zero. Since δ takes non-negative integer values, δ(α_1) = 0 is equivalent to α_1 being a unit of R.
If α_i and α_{i+1} differ by a unit factor, nothing needs to be done. If they are relatively prime, one can add column i+1 to column i (which does not change α_i) and then apply appropriate row manipulations to get α_i = 1. More generally, whenever α_i does not divide α_{i+1}, one can apply Case II after adding column i+1 to column i, replacing α_i by gcd(α_i, α_{i+1}).
This diminishes the minimal δ-value among the non-zero entries of the matrix, so by repeating these steps and reordering columns as necessary, we end up with a matrix whose diagonal elements α_i satisfy α_i | α_{i+1} for all 1 ≤ i < r.
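A concrete instance over ℤ of the coprime case: diag(2, 3) is carried to diag(1, 6) by one column addition, one Bézout row step, and one clearing column operation, each given by a unimodular matrix (the matrices below were hand-picked for this illustration):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

M = [[2, 0], [0, 3]]                 # alpha_1 = 2, alpha_2 = 3, coprime
T1 = [[1, 0], [1, 1]]                # right-mult: add column 2 to column 1
L = [[-1, 1], [-3, 2]]               # Bezout step: (-1)*2 + 1*3 = 1, det(L) = 1
T2 = [[1, -3], [0, 1]]               # right-mult: subtract 3 * column 1 from column 2

step1 = matmul(M, T1)                # [[2, 0], [3, 3]]
step2 = matmul(L, step1)             # [[1, 3], [0, 6]]
step3 = matmul(step2, T2)            # [[1, 0], [0, 6]]
print(step3)
```

Note that 1 = α_1 now divides 6 = α_2, as required of a Smith normal form.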
Since all row and column manipulations involved in the process are invertible, this shows that there exist invertible m×m and n×n matrices S, T so that the product S A T is

    diag(α_1, α_2, …, α_r, 0, …, 0),

the (rectangular) diagonal matrix whose first r diagonal entries are the α_i and whose remaining entries are zero.
This is the Smith normal form of the matrix. The elements αi are unique up to associatedness and are called the elementary divisors, invariants, or invariant factors.
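The invariant factors can also be computed without running the reduction: α_k = d_k / d_{k−1}, where d_k is the gcd of all k×k minors of A and d_0 = 1 (a standard characterization via determinantal divisors). A sketch over ℤ (function names are ours; the 3×3 matrix is a sample chosen to have elementary divisors 2, 6 and 12):

```python
from itertools import combinations
from math import gcd

def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the small matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def invariant_factors(A):
    """alpha_k = d_k / d_{k-1}, where d_k = gcd of all k x k minors, d_0 = 1."""
    m, n = len(A), len(A[0])
    d, factors = 1, []
    for k in range(1, min(m, n) + 1):
        minors = [det([[A[i][j] for j in cols] for i in rows])
                  for rows in combinations(range(m), k)
                  for cols in combinations(range(n), k)]
        dk = 0
        for x in minors:
            dk = gcd(dk, x)            # math.gcd handles negative minors
        if dk == 0:
            break                      # all remaining invariant factors are 0
        factors.append(dk // d)
        d = dk
    return factors

print(invariant_factors([[2, 4, 4], [-6, 6, 12], [10, -4, -16]]))  # [2, 6, 12]
```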
Applications
The Smith normal form is useful for computing the homology of a chain complex when the chain modules of the chain complex are finitely generated. For instance, in topology, it can be used to compute the homology of a simplicial complex or CW complex over the integers, because the boundary maps in such a complex are just integer matrices. It can also be used to prove the well known structure theorem for finitely generated modules over a principal ideal domain.
Example
As an example, we will find the Smith normal form of the following matrix over the integers:

    [  2   4    4 ]
    [ -6   6   12 ]
    [ 10  -4  -16 ]

Reducing rows and columns as described above (first producing the pivot 2 and clearing its row and column, then continuing on the lower-right 2×2 block) yields the Smith normal form

    [ 2   0    0 ]
    [ 0   6    0 ]
    [ 0   0   12 ]

and the elementary divisors are 2, 6 and 12.
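The whole procedure over ℤ can be sketched as follows. This is a compact variant of the reduction described above: it repeatedly moves an entry of smallest absolute value to the pivot position, clears the pivot's row and column, and enforces the divisibility condition. The function name and the smallest-magnitude pivot strategy are our choices, not part of the original algorithm statement, and the transforms S and T are not tracked:

```python
def smith_normal_form(A):
    """Smith normal form of a nonzero integer matrix (works on a copy)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # pick a nonzero entry of smallest absolute value as the pivot
        entries = [(abs(A[i][j]), i, j)
                   for i in range(t, m) for j in range(t, n) if A[i][j]]
        if not entries:
            break                      # remaining block is zero
        _, pi, pj = min(entries)
        A[t], A[pi] = A[pi], A[t]      # move the pivot to position (t, t)
        for row in A:
            row[t], row[pj] = row[pj], row[t]
        dirty = False
        for i in range(t + 1, m):      # clear column t (Cases II and III)
            q = A[i][t] // A[t][t]
            for j in range(t, n):
                A[i][j] -= q * A[t][j]
            dirty |= A[i][t] != 0
        for j in range(t + 1, n):      # clear row t (column operations)
            q = A[t][j] // A[t][t]
            for i in range(t, m):
                A[i][j] -= q * A[i][t]
            dirty |= A[t][j] != 0
        if dirty:
            continue                   # smaller remainders appeared; re-pivot
        # ensure the pivot divides every entry of the remaining block
        stuck = next(((i, j) for i in range(t + 1, m)
                      for j in range(t + 1, n) if A[i][j] % A[t][t]), None)
        if stuck is not None:
            i, _ = stuck
            for j in range(t, n):      # pull the offending row into row t
                A[t][j] += A[i][j]
            continue
        if A[t][t] < 0:                # normalize the pivot's sign
            A[t] = [-x for x in A[t]]
        t += 1
    return A

print(smith_normal_form([[2, 4, 4], [-6, 6, 12], [10, -4, -16]]))
# [[2, 0, 0], [0, 6, 0], [0, 0, 12]]
```

Each `continue` strictly decreases the smallest absolute value available as a pivot, which is the concrete counterpart of the ascending-chain argument given above and guarantees termination.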
Similarity
The Smith normal form can be used to determine whether or not matrices with entries over a common field are similar. Specifically two matrices A and B are similar if and only if the characteristic matrices xI − A and xI − B have the same Smith normal form.
For example, one may verify in this way that two matrices A and B are similar, because the Smith normal forms of their characteristic matrices match, but that neither is similar to a third matrix C, because the Smith normal forms of the corresponding characteristic matrices do not match.
See also
- Canonical form
- Henry John Stephen Smith (1826–1883), whose name is attached to the Smith normal form
External links
- Thomas Heye's GFDL Smith normal form article at PlanetMath
- GFDL Example of Smith normal form at PlanetMath