In linear algebra, the transpose of a matrix A is another matrix A^T (also written A^tr, ^tA, or A′) created by any one of the following equivalent actions:
- reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain A^T;
- write the rows of A as the columns of A^T;
- write the columns of A as the rows of A^T.
Formally, the transpose of an m × n matrix A is the n × m matrix A^T defined by [A^T]_ij = [A]_ji for 1 ≤ i ≤ n and 1 ≤ j ≤ m.
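For example, turning the rows of a 2 × 3 matrix into columns gives its 3 × 2 transpose:

$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$$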
For matrices A, B and scalar c we have the following properties of the transpose:
1. (A^T)^T = A, i.e. transposition is an involution.
2. (A + B)^T = A^T + B^T, so transposition respects addition.
3. (cA)^T = c(A^T).
4. (AB)^T = B^T A^T; note that the order of the factors reverses.
5. det(A^T) = det(A).
6. If A is invertible, then so is A^T, and (A^T)^−1 = (A^−1)^T.
A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if A^T = A.
A square matrix whose transpose is also its inverse is called an orthogonal matrix; that is, G is orthogonal if G G^T = G^T G = I, the identity matrix (equivalently, G^T = G^−1).
A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, A is skew-symmetric if A^T = −A.
The conjugate transpose of the complex matrix A, written as A*, is obtained by taking the transpose of A and the complex conjugate of each entry: the (i, j) entry of A* is the complex conjugate of the (j, i) entry of A.
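For example:

$$\begin{bmatrix} 1 + 2i & 3 \\ 4 & 5 - 6i \end{bmatrix}^{*} = \begin{bmatrix} 1 - 2i & 4 \\ 3 & 5 + 6i \end{bmatrix}$$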
If f: V→W is a linear map between vector spaces V and W with nondegenerate bilinear forms, we define the transpose of f to be the linear map ^t f : W→V determined by B_V(v, ^t f(w)) = B_W(f(v), w) for all v ∈ V and w ∈ W.
Here, B_V and B_W are the bilinear forms on V and W respectively. The matrix of the transpose of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms.
Over a complex vector space, one often works with sesquilinear forms (linear in one argument, conjugate-linear in the other) instead of bilinear forms. The transpose of a map between such spaces is defined similarly, and the matrix of the transpose map is given by the conjugate transpose matrix if the bases are orthonormal. In this case, the transpose is also called the Hermitian adjoint.
If V and W do not have bilinear forms, then the transpose of a linear map f: V→W is only defined as a linear map ^t f : W*→V* between the dual spaces of W and V, given by ^t f(φ) = φ ∘ f for every functional φ ∈ W*.
On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
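As a minimal sketch of this interface style, the following computes C = A^T B via the CBLAS routine cblas_dgemm without ever forming A^T in memory; the wrapper name multiply_at_b and the dimension choices are illustrative, and a CBLAS implementation is assumed to be available:

```c
#include <cblas.h>

/* Compute C = A^T * B without materializing A^T.
 * A is stored as a k x m row-major matrix, B as k x n, C as m x n.
 * Passing CblasTrans tells BLAS to read A in transposed order. */
void multiply_at_b(const double *A, const double *B, double *C,
                   int m, int n, int k) {
    cblas_dgemm(CblasRowMajor, CblasTrans, CblasNoTrans,
                m, n, k,
                1.0, A, m,   /* lda = m: row length of A as stored */
                B, n,        /* ldb = n */
                0.0, C, n);  /* ldc = n */
}
```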
However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality.
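A sketch of such a physical reordering in C, assuming row-major storage (the tile size of 32 is an illustrative tuning parameter, not a recommendation):

```c
#include <stddef.h>

/* Out-of-place transpose: dst (n x m) = src^T, where src is m x n,
 * both row-major. Processing the matrix in small tiles keeps both the
 * reads and the writes within a cache-friendly working set. */
void transpose_blocked(const double *src, double *dst, size_t m, size_t n) {
    const size_t B = 32;  /* tile size; a tunable assumption */
    for (size_t ii = 0; ii < m; ii += B)
        for (size_t jj = 0; jj < n; jj += B)
            for (size_t i = ii; i < ii + B && i < m; i++)
                for (size_t j = jj; j < jj + B && j < n; j++)
                    dst[j * m + i] = src[i * n + j];
}
```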
Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an N × M matrix in-place, with O(1) additional storage, or at least with additional storage much less than MN. For N ≠ M, this involves a complicated permutation of the data elements that is non-trivial to implement in-place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
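One classical approach follows the cycles of the underlying permutation: for an m × n row-major matrix, the element at linear index k must move to index (k·m) mod (mn − 1), with indices 0 and mn − 1 as fixed points. The sketch below uses a bitmap to track which positions are already settled; that costs mn bits, so it is not one of the strict O(1)-storage algorithms from the literature, but it is far cheaper than an out-of-place copy:

```c
#include <stdlib.h>

/* In-place transpose of an m x n row-major matrix. The element at
 * linear index k belongs at index (k * m) mod (m*n - 1); indices 0
 * and m*n - 1 stay put. We walk each cycle of this permutation once,
 * carrying the displaced value forward. */
void transpose_inplace(double *a, size_t m, size_t n) {
    size_t total = m * n;
    if (total < 2) return;
    unsigned char *done = calloc((total + 7) / 8, 1);
    if (!done) return;  /* allocation failure: leave input untouched */
    for (size_t start = 1; start + 1 < total; start++) {
        if (done[start / 8] & (1u << (start % 8)))
            continue;  /* already placed by an earlier cycle */
        size_t k = start;
        double carry = a[k];
        do {  /* push each value along the cycle to its destination */
            size_t next = (k * m) % (total - 1);
            double tmp = a[next];
            a[next] = carry;
            carry = tmp;
            done[next / 8] |= (unsigned char)(1u << (next % 8));
            k = next;
        } while (k != start);
    }
    free(done);
}
```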