Transpose

This article is about the matrix transpose operator. For other uses, see Transposition.

In linear algebra, the transpose of a matrix A is another matrix AT (also written Atr, tA, or A′) created by any one of the following equivalent actions:

- reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain AT;
- write the rows of A as the columns of AT;
- write the columns of A as the rows of AT.

Formally, the transpose of an m × n matrix A is the n × m matrix

\left[ \mathbf{A}^\mathrm{T} \right]_{ij} = \mathbf{A}_{ji} \quad \text{for } 1 \le i \le n,\ 1 \le j \le m.
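
As a concrete illustration of this index rule, here is a minimal Python sketch (the function and variable names are our own, not from any particular library) that builds AT entry by entry:

    def transpose(A):
        """Return the transpose of a matrix given as a list of rows.

        If A is m x n, the result B is n x m with B[i][j] = A[j][i].
        """
        m, n = len(A), len(A[0])
        return [[A[j][i] for j in range(m)] for i in range(n)]

    A = [[1, 2, 3],
         [4, 5, 6]]         # a 2 x 3 matrix
    print(transpose(A))     # [[1, 4], [2, 5], [3, 6]], a 3 x 2 matrix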

Examples

\begin{bmatrix} 1 & 2 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}

\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}

\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}

Properties

For matrices A and B of compatible sizes and a scalar c, the transpose has the following properties; several of them are spot-checked numerically in the sketch after the list:

  1. \left( \mathbf{A}^\mathrm{T} \right)^\mathrm{T} = \mathbf{A}
    Taking the transpose is an involution (self-inverse).
  2. (\mathbf{A}+\mathbf{B})^\mathrm{T} = \mathbf{A}^\mathrm{T} + \mathbf{B}^\mathrm{T}
    The transpose is a linear map from the space of m × n matrices to the space of all n × m matrices.
  3. \left( \mathbf{A} \mathbf{B} \right)^\mathrm{T} = \mathbf{B}^\mathrm{T} \mathbf{A}^\mathrm{T}
    Note that the order of the factors reverses. From this one can deduce that a square matrix A is invertible if and only if AT is invertible, in which case (A−1)T = (AT)−1. The result extends readily to products of several matrices: (ABC...XYZ)T = ZTYTXT...CTBTAT.
  4. (c \mathbf{A})^\mathrm{T} = c \mathbf{A}^\mathrm{T}
    The transpose of a scalar is the same scalar.
  5. \det(\mathbf{A}^\mathrm{T}) = \det(\mathbf{A})
    The determinant of a square matrix is the same as that of its transpose.
  6. The dot product of two column vectors a and b can be computed as
     \mathbf{a} \cdot \mathbf{b} = \mathbf{a}^{\mathrm{T}} \mathbf{b},
    which is written as a_i b_i in Einstein summation notation.
  7. If A has only real entries, then ATA is a positive-semidefinite matrix.
  8. If A is a square matrix over a field, then A is similar to AT.
  9. (\mathbf{A}^\mathrm{T})^{-1} = (\mathbf{A}^{-1})^\mathrm{T}
    The transpose of an invertible matrix is also invertible, and its inverse is the transpose of the inverse of the original matrix.
  10. If A is a square matrix, then its eigenvalues are equal to the eigenvalues of its transpose.
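
The identities above are easy to spot-check numerically. The following sketch uses NumPy (assumed available) with arbitrary random matrices; it verifies properties 3, 5, 6, and 9:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))   # random square matrices; invertible with probability 1
    B = rng.standard_normal((3, 3))
    a = rng.standard_normal((3, 1))   # column vectors
    b = rng.standard_normal((3, 1))

    # Property 3: the transpose of a product reverses the order of the factors.
    assert np.allclose((A @ B).T, B.T @ A.T)

    # Property 5: a matrix and its transpose have the same determinant.
    assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

    # Property 6: the dot product of column vectors a and b equals a^T b.
    assert np.isclose((a.T @ b).item(), np.dot(a.ravel(), b.ravel()))

    # Property 9: inversion and transposition commute.
    assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)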

Special transpose matrices

A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, A is symmetric if

\mathbf{A}^{\mathrm{T}} = \mathbf{A}.

A square matrix whose transpose is also its inverse is called an orthogonal matrix; that is, G is orthogonal if

\mathbf{G} \mathbf{G}^\mathrm{T} = \mathbf{G}^\mathrm{T} \mathbf{G} = \mathbf{I}_n, the identity matrix; equivalently, GT = G−1.

A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, A is skew-symmetric if

\mathbf{A}^{\mathrm{T}} = -\mathbf{A}.

The conjugate transpose of the complex matrix A, written as A*, is obtained by taking the transpose of A and the complex conjugate of each entry:

\mathbf{A}^* = (\overline{\mathbf{A}})^{\mathrm{T}} = \overline{(\mathbf{A}^{\mathrm{T}})}.
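
A brief NumPy sketch illustrating these four definitions; the helper functions are ours and the example matrices are arbitrary:

    import numpy as np

    def is_symmetric(A):
        return np.allclose(A, A.T)

    def is_skew_symmetric(A):
        return np.allclose(A.T, -A)

    def is_orthogonal(G):
        I = np.eye(G.shape[0])
        return np.allclose(G @ G.T, I) and np.allclose(G.T @ G, I)

    S = np.array([[1.0, 2.0],
                  [2.0, 3.0]])               # symmetric
    K = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])              # skew-symmetric
    t = 0.3
    G = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])  # a rotation matrix, hence orthogonal
    print(is_symmetric(S), is_skew_symmetric(K), is_orthogonal(G))  # True True True

    # The conjugate transpose: conjugate each entry, then transpose (in either order).
    A = np.array([[1 + 2j, 3j],
                  [0, 4 - 1j]])
    assert np.allclose((A.conj()).T, (A.T).conj())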

Transpose of linear maps

Main article: Dual space#Transpose of a linear map
Main article: Hermitian adjoint

If f : V → W is a linear map between vector spaces V and W with nondegenerate bilinear forms, we define the transpose of f to be the linear map tf : W → V, determined by

B_V(v,{}^tf(w))=B_W(f(v),w) \quad \forall\ v \in V, w \in W.

Here, BV and BW are the bilinear forms on V and W respectively. The matrix of the transpose of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms.
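
In the special case where BV and BW are the standard dot products (so that the standard bases are orthonormal), the matrix of tf is exactly the transposed matrix. A minimal NumPy spot-check of the defining identity, with an arbitrary A, v, and w:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 3))   # matrix of f : R^3 -> R^4 in the standard bases
    v = rng.standard_normal(3)
    w = rng.standard_normal(4)

    # With B_V and B_W both the standard dot product, tf is represented by A.T,
    # and the defining identity B_V(v, tf(w)) = B_W(f(v), w) becomes:
    assert np.isclose(np.dot(v, A.T @ w), np.dot(A @ v, w))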

Over a complex vector space, one often works with sesquilinear forms (conjugate-linear in one argument) instead of bilinear forms. The transpose of a map between such spaces is defined similarly, and the matrix of the transpose map is given by the conjugate transpose matrix if the bases are orthonormal. In this case, the transpose is also called the Hermitian adjoint.
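
Concretely, with the standard Hermitian inner product ⟨x, y⟩ = x*y (conjugate-linear in the first argument, which matches NumPy's vdot), the adjoint is represented by the conjugate transpose; a quick sketch with arbitrary complex data:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

    # np.vdot conjugates its first argument, matching <x, y> = x* y.
    # The adjoint identity <A v, w> = <v, A* w>, with A* = A.conj().T:
    assert np.isclose(np.vdot(A @ v, w), np.vdot(v, A.conj().T @ w))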

If V and W do not have bilinear forms, then the transpose of a linear map f : V → W is only defined as a linear map tf : W* → V* between the dual spaces of W and V.

Implementation of matrix transposition on computers

On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.
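
NumPy is one familiar example of this approach: A.T is a view that swaps the array's strides, so no data is moved (this is documented NumPy behavior):

    import numpy as np

    A = np.arange(6, dtype=np.int64).reshape(2, 3)  # row-major (C-order) storage
    T = A.T                                         # a view: strides are swapped, nothing is copied
    print(A.strides, T.strides)                     # (24, 8) (8, 24) for 8-byte elements
    assert np.shares_memory(A, T)                   # T reuses A's buffer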

However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality.
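
When an explicit (out-of-place) transpose is needed, a standard way to improve locality is to copy tile by tile so that each tile fits in cache. A minimal sketch, with an arbitrary illustrative tile size of 32:

    import numpy as np

    def blocked_transpose(A, b=32):
        """Out-of-place transpose that copies b x b tiles to improve cache locality."""
        m, n = A.shape
        out = np.empty((n, m), dtype=A.dtype)
        for i in range(0, m, b):
            for j in range(0, n, b):
                out[j:j+b, i:i+b] = A[i:i+b, j:j+b].T
        return out

    A = np.random.default_rng(3).standard_normal((100, 70))
    assert np.array_equal(blocked_transpose(A), A.T)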

Main article: In-place matrix transposition

Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an N × M matrix in place, with O(1) additional storage or at least with additional storage much smaller than MN. For N ≠ M, this involves a complicated permutation of the data elements that is non-trivial to implement in place. Consequently, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
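
For a square matrix (N = M) the permutation reduces to independent pairwise swaps across the main diagonal, which is straightforward; the difficulty described above arises only when N ≠ M. A sketch of the easy square case, operating on a list of lists:

    def transpose_square_in_place(A):
        """Transpose a square matrix (a list of lists) using O(1) extra storage."""
        n = len(A)
        for i in range(n):
            for j in range(i + 1, n):            # entries strictly above the diagonal
                A[i][j], A[j][i] = A[j][i], A[i][j]

    A = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
    transpose_square_in_place(A)
    print(A)   # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]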
