In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows or columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices.
Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns.
Since there are $\binom{m}{k}$ ways to choose k rows from m rows, and $\binom{n}{k}$ ways to choose k columns from n columns, there are a total of $\binom{m}{k}\binom{n}{k}$ minors of size k × k.
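As a quick numerical sketch of this count (not part of the original article; the helper name `all_k_minors`, the example matrix, and the use of NumPy are illustrative assumptions):

```python
from itertools import combinations
from math import comb

import numpy as np


def all_k_minors(A, k):
    """Return every k x k minor of A, keyed by the chosen (rows, columns)."""
    m, n = A.shape
    minors = {}
    for rows in combinations(range(m), k):
        for cols in combinations(range(n), k):
            minors[(rows, cols)] = np.linalg.det(A[np.ix_(rows, cols)])
    return minors


A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0]])        # a 2 x 3 example matrix
k = 2
minors = all_k_minors(A, k)
# There are C(m, k) * C(n, k) minors of size k x k: here C(2, 2) * C(3, 2) = 3.
assert len(minors) == comb(A.shape[0], k) * comb(A.shape[1], k)
```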
The (i, j) minor (often denoted $M_{ij}$) of an n × n square matrix A is defined as the determinant of the (n − 1) × (n − 1) matrix formed by removing from A its ith row and jth column. An (i, j) minor is also referred to as the (i, j)th minor, or simply the i, j minor.
$M_{ij}$ is also called the minor of the element $a_{ij}$ of matrix A.
A minor that is formed by removing only one row and one column from a square matrix A (such as $M_{ij}$) is called a first minor. When two rows and two columns are removed, this is called a second minor.[1]
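A minimal sketch of this definition, assuming NumPy; the helper name `first_minor` and the example matrix are illustrative, not standard:

```python
import numpy as np


def first_minor(A, i, j):
    """M_ij: determinant of A with row i and column j removed (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)


A = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [1.0, 2.0, 6.0]])
print(round(first_minor(A, 0, 0)))   # det([[3, 4], [2, 6]]) = 10
```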
The (i, j) cofactor $C_{ij}$ of a square matrix A is just $(-1)^{i+j}$ times the corresponding (n − 1) × (n − 1) minor $M_{ij}$:
$$C_{ij} = (-1)^{i+j} M_{ij}.$$
The cofactor matrix of A, or matrix of cofactors of A, typically denoted C, is defined as the n × n matrix whose (i, j) entry is the (i, j) cofactor of A.
The transpose of C is called the adjugate or classical adjoint of A. (In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.) Adjugate matrices are used to compute the inverse of square matrices.
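A minimal sketch of these relationships, assuming NumPy; the names `cofactor_matrix` and `adjugate` and the example matrix are illustrative:

```python
import numpy as np


def cofactor_matrix(A):
    """Matrix C with entries C_ij = (-1)^(i+j) * M_ij."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C


def adjugate(A):
    """adj(A): the transpose of the cofactor matrix."""
    return cofactor_matrix(A).T


A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
# For invertible A, the inverse is the adjugate divided by the determinant.
assert np.allclose(np.linalg.inv(A), adjugate(A) / np.linalg.det(A))
```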
For example, given the matrix
$$\begin{pmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{pmatrix}$$
suppose we wish to find the cofactor $C_{23}$. The minor $M_{23}$ is the determinant of the above matrix with row 2 and column 3 removed:
$$M_{23} = \begin{vmatrix} 1 & 4 \\ -1 & 9 \end{vmatrix} = 9 - (-4) = 13,$$
where the vertical bars around the matrix indicate that the determinant should be taken. Thus,
$$C_{23} = (-1)^{2+3} M_{23} = -13.$$
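For a quick numerical check of this example (a sketch assuming NumPy, not part of the original article):

```python
import numpy as np

A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
M23 = np.linalg.det(np.delete(np.delete(A, 1, axis=0), 2, axis=1))  # drop row 2, column 3
C23 = (-1) ** (2 + 3) * M23
print(round(M23), round(C23))   # 13 -13
```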
The complement C of a minor M of a square matrix A is formed by the determinant of the matrix A from which all the rows and columns associated with M have been removed. The complement of the first minor of an element $a_{ij}$ is merely that element.[2]
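A sketch of this definition in the same NumPy style; the helper name `complement_of_minor` is illustrative:

```python
import numpy as np


def complement_of_minor(A, rows, cols):
    """Determinant of A after removing the rows and columns that the minor uses."""
    sub = np.delete(np.delete(A, rows, axis=0), cols, axis=1)
    return np.linalg.det(sub)


A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
# The first minor M_23 uses rows {1, 3} and columns {1, 2} (1-based), i.e.
# 0-based rows [0, 2] and columns [0, 1]; removing them leaves only a_23 = 5.
print(complement_of_minor(A, [0, 2], [0, 1]))   # 5.0
```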
The cofactors feature prominently in Laplace's formula for the expansion of determinants. If all the cofactors of a square matrix A are collected to form a new matrix of the same size and then transposed, one obtains the adjugate of A, which is useful in calculating the inverse of small matrices.
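A small sketch of Laplace expansion along the first row, assuming NumPy; the helper name `cofactor` and the example matrix are illustrative:

```python
import numpy as np


def cofactor(A, i, j):
    """C_ij = (-1)^(i+j) times the (i, j) minor of A (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)


A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
# Laplace expansion along the first row: det(A) = sum over j of a_1j * C_1j.
det_by_expansion = sum(A[0, j] * cofactor(A, 0, j) for j in range(A.shape[1]))
assert np.isclose(det_by_expansion, np.linalg.det(A))
```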
If an m × n matrix has real entries (or entries from any other field) and rank r, then it has at least one non-zero r × r minor, while all larger minors are zero.
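This characterization of rank can be illustrated by brute force (a sketch assuming NumPy; a tolerance is needed because floating-point determinants are rarely exactly zero):

```python
from itertools import combinations

import numpy as np


def rank_via_minors(A, tol=1e-10):
    """Largest r such that some r x r minor of A is non-zero (brute force)."""
    m, n = A.shape
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return r
    return 0


A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice the first row, so the rank is 2
              [0.0, 1.0, 1.0]])
assert rank_via_minors(A) == np.linalg.matrix_rank(A) == 2
```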
We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1,...,m} with k elements and J is a subset of {1,...,n} with k elements, then we write $[A]_{I,J}$ for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J.
Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1,...,m} with k elements and J is a subset of {1,...,p} with k elements. Then
$$[AB]_{I,J} = \sum_{K} [A]_{I,K} [B]_{K,J},$$
where the sum extends over all subsets K of {1,...,n} with k elements. This formula is a straightforward extension of the Cauchy–Binet formula.
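A numerical sketch of this identity in the $[A]_{I,J}$ notation above, assuming NumPy; the helper name `minor_IJ`, the random matrices, and the chosen index sets are illustrative:

```python
from itertools import combinations

import numpy as np


def minor_IJ(A, I, J):
    """[A]_{I,J}: determinant of the submatrix of A with rows I and columns J."""
    return np.linalg.det(A[np.ix_(I, J)])


rng = np.random.default_rng(0)
m, n, p, k = 3, 4, 3, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

I, J = (0, 2), (1, 2)                    # index subsets with |I| = |J| = k
lhs = minor_IJ(A @ B, I, J)
rhs = sum(minor_IJ(A, I, K) * minor_IJ(B, K, J)
          for K in combinations(range(n), k))
assert np.isclose(lhs, rhs)              # [AB]_{I,J} = sum over K of [A]_{I,K} [B]_{K,J}
```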
A more systematic, algebraic treatment of the minor concept is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the kth exterior power map.
If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix
$$\begin{pmatrix} 1 & 4 \\ 3 & -1 \\ 2 & 1 \end{pmatrix}$$
are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). Now consider the wedge product
$$(\mathbf{e}_1 + 3\mathbf{e}_2 + 2\mathbf{e}_3) \wedge (4\mathbf{e}_1 - \mathbf{e}_2 + \mathbf{e}_3),$$
where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and
$$\mathbf{e}_i \wedge \mathbf{e}_i = 0$$
and
$$\mathbf{e}_i \wedge \mathbf{e}_j = -\mathbf{e}_j \wedge \mathbf{e}_i,$$
we can simplify this expression to
$$-13\,\mathbf{e}_1 \wedge \mathbf{e}_2 - 7\,\mathbf{e}_1 \wedge \mathbf{e}_3 + 5\,\mathbf{e}_2 \wedge \mathbf{e}_3,$$
where the coefficients agree with the minors computed earlier.
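The same coefficients can be recovered by computing the 2 × 2 row-minors of the matrix directly (a sketch assuming NumPy): the components of the wedge product with respect to the basis $\mathbf{e}_1 \wedge \mathbf{e}_2$, $\mathbf{e}_1 \wedge \mathbf{e}_3$, $\mathbf{e}_2 \wedge \mathbf{e}_3$ are exactly these minors.

```python
from itertools import combinations

import numpy as np

A = np.array([[1.0, 4.0],
              [3.0, -1.0],
              [2.0, 1.0]])
# 2 x 2 minors taken from pairs of rows, in the order (1,2), (1,3), (2,3).
coeffs = [np.linalg.det(A[list(rows), :]) for rows in combinations(range(3), 2)]
print([round(c) for c in coeffs])   # [-13, -7, 5]
```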