Matrix norm

From Wikipedia, the free encyclopedia

In mathematics, the term matrix norm can have two meanings:

  • A vector norm on matrices, that is, any norm on the vector space of m-by-n matrices, obtained by treating a matrix as a vector of its mn entries.
  • A sub-multiplicative matrix norm is a vector norm on square matrices that is compatible with matrix multiplication in the sense that
\|AB\|\le\|A\| \|B\|.
The set of all n-by-n matrices, together with such a sub-multiplicative norm, is a Banach algebra.

In the rest of the article we follow the tradition of matrix theory and use the term "vector norm" for the first definition and "matrix norm" for the second.


Operator norm or induced norm

If norms on K^m and K^n are given (where K is the field of real or complex numbers), then one defines the corresponding induced norm or operator norm on the space of m-by-n matrices as any of the following equivalent maxima:

\|A\|=\max\{\|Ax\| : x\in K^n \mbox{ with }\|x\|\le 1\}
= \max\{\|Ax\| : x\in K^n \mbox{ with }\|x\| = 1\}
= \max\left\{\frac{\|Ax\|}{\|x\|} : x\in K^n \mbox{ with }x\ne 0\right\}.

If m = n and one uses the same norm on domain and range, then these operator norms are all (submultiplicative) matrix norms.
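Submultiplicativity is easy to observe numerically. The sketch below uses NumPy, whose np.linalg.norm returns the induced norm of a matrix for p = 1, 2 and ∞; the random matrices are arbitrary illustrative inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# For a matrix argument, np.linalg.norm with ord = 1, 2, inf
# returns the corresponding induced (operator) norm.
for p in (1, 2, np.inf):
    nA = np.linalg.norm(A, p)
    nB = np.linalg.norm(B, p)
    nAB = np.linalg.norm(A @ B, p)
    assert nAB <= nA * nB + 1e-12   # ||AB|| <= ||A|| ||B||
```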

p-norms and uniform norms

For example, the operator norm corresponding to the p-norm for vectors is:

\left \| A \right \| _p = \operatorname{max} _{x \ne 0} \frac{\left \| A x\right \| _p}{\left \| x\right \| _p}.

In the case of p = 1 and p=\infty, the norms can be computed as:

\left \| A \right \| _1 = \operatorname{max} _{1 \leq j \leq n} \sum _{i=1} ^m | a_{ij} |
\left \| A \right \| _\infty = \operatorname{max} _{1 \leq i \leq m} \sum _{j=1} ^n | a_{ij} | .

These are different from the Schatten p-norms for matrices, which are also usually denoted by \left \| A \right \| _p .
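The two formulas above (maximum absolute column sum for p = 1, maximum absolute row sum for p = ∞) can be checked against NumPy's built-in induced norms; the matrix here is an arbitrary example:

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0], [-5.0, 6.0]])  # a 3x2 example

col_sum = np.abs(A).sum(axis=0).max()   # ||A||_1: maximum absolute column sum
row_sum = np.abs(A).sum(axis=1).max()   # ||A||_inf: maximum absolute row sum

assert np.isclose(col_sum, np.linalg.norm(A, 1))
assert np.isclose(row_sum, np.linalg.norm(A, np.inf))
```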

Spectral norm

In the special case of p = 2 (the Euclidean norm) and m = n (square matrices), the induced matrix norm is the spectral norm. The spectral norm of a matrix A is the largest singular value of A, that is, the square root of the largest eigenvalue of the positive-semidefinite matrix A*A:

\left \| A \right \| _2=\sqrt{\lambda_{\mbox{max}}(A^* A)}

where A* denotes the conjugate transpose of A.

Any induced norm satisfies the inequality

\left \| A \right \| \ge \rho(A),

where ρ(A) is the spectral radius of A. Furthermore, we have

\lim_{r\to\infty}\|A^r\|^{1/r}=\rho(A).
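Both characterizations of the spectral norm, the spectral-radius bound, and the limit (Gelfand's formula) can all be checked numerically; a small sketch with an arbitrary 2-by-2 example:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 1.0]])

sigma_max = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value
lam_max = np.linalg.eigvalsh(A.T @ A).max()        # largest eigenvalue of A*A
rho = np.abs(np.linalg.eigvals(A)).max()           # spectral radius

assert np.isclose(sigma_max, np.sqrt(lam_max))     # ||A||_2 = sqrt(lambda_max(A*A))
assert sigma_max >= rho                            # induced norm bounds spectral radius

# Gelfand's formula: ||A^r||^(1/r) -> rho(A) as r -> infinity
r = 200
approx = np.linalg.norm(np.linalg.matrix_power(A, r), 2) ** (1.0 / r)
assert abs(approx - rho) < 0.05
```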

[edit] "Entrywise" norms

These vector norms treat an m \times n matrix as a vector of size mn, and use one of the familiar vector norms. Most entrywise norms are not (submultiplicative) matrix norms.

For example, using the p-norm for vectors, we get:

\Vert A \Vert_{p} = \Big( \sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^p \Big)^{1/p}. \,
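A sketch illustrating the identification with a vector norm, and a standard counterexample showing why most entrywise norms fail to be submultiplicative (the entrywise max norm applied to an all-ones matrix):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, -4.0]])
p = 3

# Entrywise p-norm = vector p-norm of the flattened matrix
entrywise = (np.abs(A) ** p).sum() ** (1.0 / p)
assert np.isclose(entrywise, np.linalg.norm(A.ravel(), p))

# The entrywise max norm is not submultiplicative: for the all-ones J,
# ||J J||_max = 2 while ||J||_max ||J||_max = 1.
J = np.ones((2, 2))
assert np.abs(J @ J).max() > np.abs(J).max() ** 2
```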

Frobenius norm

For p = 2, this is called the Frobenius norm. This norm can be defined in various ways:

\|A\|_F^2=\sum_{i=1}^m\sum_{j=1}^n |a_{ij}|^2=\operatorname{trace}(A^* A)=\sum_{i=1}^{\min\{m,\,n\}} \sigma_{i}^2

where A^* denotes the conjugate transpose of A and σi are the singular values of A. The Frobenius norm is very similar to the Euclidean norm on K^n and comes from an inner product on the space of all matrices.

The Frobenius norm is submultiplicative and is very useful in numerical linear algebra, where it is often easier to compute than the induced norms.
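The equivalent definitions above (entrywise sum, trace, and singular values) can be compared directly; a minimal sketch with an arbitrary matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

via_entries = np.sqrt((np.abs(A) ** 2).sum())                       # sum of |a_ij|^2
via_trace = np.sqrt(np.trace(A.conj().T @ A))                       # trace(A* A)
via_svd = np.sqrt((np.linalg.svd(A, compute_uv=False) ** 2).sum())  # sum of sigma_i^2

assert np.isclose(via_entries, via_trace)
assert np.isclose(via_entries, via_svd)
assert np.isclose(via_entries, np.linalg.norm(A, 'fro'))
```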

Trace norm

The trace norm (also known as the nuclear norm) is defined as

\|A\|_{tr} =\operatorname{trace}(\sqrt{A^*A})=\sum_{i=1}^{\min\{m,\,n\}} \sigma_{i}.

Clearly, for all A \in K^{m \times n} we have \|A\|_{tr} \ge \|A\|_{F}, since \sum_i \sigma_i \ge \sqrt{\sum_i \sigma_i^2}.
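A sketch computing the trace norm from the singular values and checking the inequality against the Frobenius norm (NumPy also exposes the trace norm directly as ord='nuc'):

```python
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 2.0]])

sigma = np.linalg.svd(A, compute_uv=False)
trace_norm = sigma.sum()             # ||A||_tr: sum of singular values
frob = np.sqrt((sigma ** 2).sum())   # ||A||_F: root of sum of squares

assert trace_norm >= frob                                # ||A||_tr >= ||A||_F
assert np.isclose(trace_norm, np.linalg.norm(A, 'nuc'))  # NumPy's nuclear norm
```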

Consistent norms

A matrix norm \| \cdot \|_{ab} on K^{m \times n} is called consistent with a vector norm \| \cdot \|_{a} on K^n and a vector norm \| \cdot \|_{b} on K^m if:

\|Ax\|_b \leq \|A\|_{ab} \|x\|_a

for all A \in K^{m \times n}, x \in K^n. All induced norms are consistent by definition.
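Since induced norms are consistent by construction, the defining inequality can be spot-checked over random vectors; a sketch (the matrix, vectors, and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

for p in (1, 2, np.inf):
    nA = np.linalg.norm(A, p)            # induced p-norm of A
    for _ in range(100):
        x = rng.standard_normal(3)
        # consistency: ||Ax|| <= ||A|| ||x|| (same p-norm on both spaces)
        assert np.linalg.norm(A @ x, p) <= nA * np.linalg.norm(x, p) + 1e-9
```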

Equivalence of norms

For any two matrix norms ||·||α and ||·||β, we have

r\left\|A\right\|_\alpha\leq\left\|A\right\|_\beta\leq s\left\|A\right\|_\alpha

for some positive numbers r and s, for all matrices A. In other words, they are equivalent norms; they induce the same topology on K^{m \times n}.

Moreover, for every vector norm ||·|| on \mathbb{R}^{n\times n}, there exists a unique positive number k such that c||·|| is a (submultiplicative) matrix norm for every c ≥ k.

A matrix norm ||·||p is said to be minimal if there exists no other matrix norm ||·||q satisfying ||·||q ≤ ||·||p.

Examples of norm equivalence

For a matrix A\in\mathbb{R}^{m\times n} the following inequalities hold:

\|A\|_2\le\|A\|_F\le\sqrt{\min\{m,\,n\}}\;\|A\|_2
\frac{1}{\sqrt{n}}\|A\|_\infty\le\|A\|_2\le\sqrt{m}\;\|A\|_\infty
\frac{1}{\sqrt{m}}\|A\|_1\le\|A\|_2\le\sqrt{n}\;\|A\|_1
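Standard equivalence inequalities, such as ||A||_2 ≤ ||A||_F ≤ √min{m,n}·||A||_2 and the bounds relating the induced 1-, 2- and ∞-norms (Golub & Van Loan, §2.3), can be verified numerically; a sketch with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
A = rng.standard_normal((m, n))

n1 = np.linalg.norm(A, 1)
n2 = np.linalg.norm(A, 2)
ninf = np.linalg.norm(A, np.inf)
nF = np.linalg.norm(A, 'fro')

eps = 1e-12
assert n2 <= nF + eps                         # ||A||_2 <= ||A||_F
assert nF <= np.sqrt(min(m, n)) * n2 + eps    # ||A||_F <= sqrt(min(m,n)) ||A||_2
assert ninf / np.sqrt(n) <= n2 + eps          # ||A||_inf / sqrt(n) <= ||A||_2
assert n2 <= np.sqrt(m) * ninf + eps          # ||A||_2 <= sqrt(m) ||A||_inf
assert n1 / np.sqrt(m) <= n2 + eps            # ||A||_1 / sqrt(m) <= ||A||_2
assert n2 <= np.sqrt(n) * n1 + eps            # ||A||_2 <= sqrt(n) ||A||_1
```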

References

  1. Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations, 3rd ed. Baltimore: The Johns Hopkins University Press, pp. 56–57. ISBN 0-8018-5413-X.
  • Roger Horn and Charles Johnson. Matrix Analysis, Chapter 5, Cambridge University Press, 1985. ISBN 0-521-38632-2.
  • L. Thomas, Norms and Condition Numbers of a Matrix.
  • James W. Demmel, Applied Numerical Linear Algebra, section 1.7, published by SIAM, 1997.