Norm (mathematics)

In linear algebra, functional analysis and related areas of mathematics, a norm is a function that assigns a strictly positive length or size to all vectors in a vector space, other than the zero vector. A seminorm (or pseudonorm), on the other hand, is allowed to assign zero length to some non-zero vectors.

A simple example is the 2-dimensional Euclidean space R2 equipped with the Euclidean norm. Elements in this vector space (e.g., (3, 7) ) are usually drawn as arrows in a 2-dimensional cartesian coordinate system starting at the origin (0, 0). The Euclidean norm assigns to each vector the length of its arrow. Because of this, the Euclidean norm is often known as the magnitude.

A vector space with a norm is called a normed vector space. Similarly, a vector space with a seminorm is called a seminormed vector space.

Definition

Given a vector space V over a subfield F of the complex numbers (such as the complex numbers themselves, or the real or rational numbers), a seminorm on V is a function p: V \to \mathbb{R}, x \mapsto p(x), with the following properties:

For all a in F and all u and v in V,

  1. p(a v) = |a| p(v), (positive homogeneity or positive scalability)
  2. p(u + v) ≤ p(u) + p(v) (triangle inequality or subadditivity).

A simple consequence of these two axioms, positive homogeneity and the triangle inequality, is p(0) = 0 and thus

p(v) ≥ 0 (positivity).

A norm is a seminorm with the additional property

p(v) = 0 if and only if v is the zero vector (positive definiteness).
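As a quick numerical illustration (a minimal sketch using the Euclidean norm on R2; the helper functions `scale` and `add` are ad hoc, not from the article), the two seminorm axioms and their consequence p(0) = 0 can be checked directly:

```python
import math

def p(v):
    # Euclidean norm on R^2, used here only to illustrate the axioms
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

def scale(a, v):
    return (a * v[0], a * v[1])

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

u, v, a = (3.0, 7.0), (-1.0, 2.0), -2.5

# 1. homogeneity: p(a v) = |a| p(v)
assert math.isclose(p(scale(a, v)), abs(a) * p(v))

# 2. triangle inequality: p(u + v) <= p(u) + p(v)
assert p(add(u, v)) <= p(u) + p(v)

# consequence of axiom 1 with a = 0: p(0) = 0
assert p((0.0, 0.0)) == 0.0
```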

Although every vector space is seminormed (e.g., with the trivial seminorm in the Examples section below), it need not be normable. Every vector space V with a seminorm p induces a normed space V/W, called the quotient space, where W is the subspace of V consisting of all vectors v in V with p(v) = 0. The induced norm on V/W is well-defined and is given by:

p(W+v) = p(v).

A topological vector space is called normable (seminormable) if the topology of the space can be induced by a norm (seminorm).

Notation

The norm of a vector v is usually denoted ||v||, and sometimes |v|. However, the latter notation is generally discouraged, because it is also used to denote the absolute value of scalars and the determinant of matrices.

Examples

Euclidean norm

Main article: Euclidean distance

On Rn, the intuitive notion of length of the vector x = [x1, x2, ..., xn] is captured by the formula

\|\mathbf{x}\| := \sqrt{x_1^2 + \cdots + x_n^2}.

This gives the ordinary distance from the origin to the point x, a consequence of the Pythagorean theorem. The Euclidean norm is by far the most commonly used norm on Rn, but there are other norms on this vector space as will be shown below.

On Cn the most common norm is

\|\mathbf{z}\| := \sqrt{|z_1|^2 + \cdots + |z_n|^2}, which is equivalent to the Euclidean norm on R2n.

In each case we can also express the norm as the square root of the inner product of the vector and itself. The Euclidean norm is also called the L2 distance or L2 norm; see Lp space.

\|\mathbf{x}\| := \sqrt{\mathbf{x}^{T}\mathbf{x}}.

The set of vectors whose Euclidean norm is a given constant forms the surface of an n-sphere, with n+1 being the dimension of the Euclidean space.
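The formula above, and its expression as the square root of the inner product of a vector with itself, can be sketched in a few lines (function names here are illustrative, not from any particular library):

```python
import math

def euclidean_norm(x):
    # ||x|| = sqrt(x_1^2 + ... + x_n^2)
    return math.sqrt(sum(xi * xi for xi in x))

def dot(x, y):
    # standard inner product on R^n
    return sum(a * b for a, b in zip(x, y))

x = [3.0, 4.0]
assert euclidean_norm(x) == 5.0

# the norm is the square root of the inner product of x with itself
assert math.isclose(euclidean_norm(x), math.sqrt(dot(x, x)))
```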

Taxicab norm or Manhattan norm

Main article: Taxicab geometry
\|\mathbf{x}\|_1 := \sum_{i=1}^{n} |x_i|.

The name relates to the distance a taxi has to drive in a rectangular street grid to get from the origin to the point x.

The set of vectors in Rn whose 1-norm is a given constant forms the surface of a cross polytope of dimension n − 1.
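The taxicab picture is direct to compute (a minimal sketch; the function name is illustrative):

```python
def taxicab_norm(x):
    # ||x||_1 = |x_1| + ... + |x_n|: the number of blocks a taxi
    # drives on a rectangular grid from the origin to the point x
    return sum(abs(xi) for xi in x)

# from the origin to (3, 7): 3 blocks east plus 7 blocks north
assert taxicab_norm([3, 7]) == 10
```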

p-norm

Main article: Lp space

Let p ≥ 1 be a real number.

\|\mathbf{x}\|_p := \left( \sum_{i=1}^n |x_i|^p \right)^{\frac{1}{p}}.

Note that for p = 1 we get the taxicab norm and for p = 2 we get the Euclidean norm.

This formula is also valid for 0 < p < 1, but the resulting function does not define a norm,[1] because it violates the triangle inequality.

Taking the limit p \to \infty yields the uniform norm, and taking the limit p \to 0 yields the so-called zero norm, which, despite the name, is not a norm.
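Both the p = 1 and p = 2 special cases, and the failure of the triangle inequality for 0 < p < 1, can be checked numerically (a sketch; the standard basis vectors are a convenient counterexample):

```python
def p_norm(x, p):
    # ||x||_p = (sum |x_i|^p)^(1/p); a norm only for p >= 1
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

# p = 1 gives the taxicab norm, p = 2 the Euclidean norm
assert p_norm([3, 4], 1) == 7
assert abs(p_norm([3, 4], 2) - 5) < 1e-12

# for 0 < p < 1 the triangle inequality fails:
# with u = e1, v = e2, ||u + v||_p = 2^(1/p) > 2 = ||u||_p + ||v||_p
r = 0.5
u, v = [1.0, 0.0], [0.0, 1.0]
assert p_norm([1.0, 1.0], r) > p_norm(u, r) + p_norm(v, r)
```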

Infinity norm or maximum norm

Main article: Maximum norm
\|\mathbf{x}\|_\infty := \max \left(|x_1|, \ldots, |x_n| \right).

The set of vectors whose infinity norm is a given constant, c, forms the surface of a hypercube with edge length 2c.
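The maximum norm, and its role as the limit of the p-norms as p grows, can be sketched as follows (the tolerance and the choice p = 100 are illustrative):

```python
def max_norm(x):
    # ||x||_inf = max |x_i|
    return max(abs(xi) for xi in x)

def p_norm(x, p):
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

assert max_norm([3, -7, 2]) == 7

# for large p the p-norm approaches the maximum norm
assert abs(p_norm([3, -7, 2], 100) - 7) < 0.1
```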

Zero norm

In the machine learning and optimization literature, one often finds reference to the zero norm. The zero norm of x is defined as \lim_{p\rightarrow 0} \|x\|_p^p, where \|x\|_p is the p-norm defined above. With the convention 0^0 \ \stackrel{\mathrm{def}}{=}\  0, the zero norm can be written as \sum_{i=1}^n |x_i|^0. It follows that the zero norm of x is simply the number of non-zero elements of x. Despite its name, the zero norm is not a true norm; in particular, it is not positively homogeneous. Such a function can be defined over a vector space over an arbitrary field, not only over the complex numbers. In information theory, over the 2-element field GF(2), the zero norm of the difference of two vectors is the Hamming distance between them.
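A sketch of the counting definition, including a check that positive homogeneity fails (function name illustrative):

```python
def zero_norm(x):
    # the "zero norm": the number of non-zero entries of x
    # (equivalently, sum of |x_i|^0 with the convention 0^0 = 0)
    return sum(1 for xi in x if xi != 0)

x = [0, 3, 0, -2, 0]
assert zero_norm(x) == 2

# not positively homogeneous: scaling x by 2 does not double the value
assert zero_norm([2 * xi for xi in x]) == zero_norm(x)
```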

Other norms

Other norms on Rn can be constructed by combining the above; for example

\|\mathbf{x}\| := 2|x_1| + \sqrt{3|x_2|^2 + \max(|x_3|, 2|x_4|)^2}

is a norm on R4.

For any norm and any bijective linear transformation A we can define a new norm of x, equal to

\|Ax\|.

In 2D, with A a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each A applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size and orientation. In 3D the situation is similar but differs between the 1-norm (which gives octahedra) and the maximum norm (which gives prisms with parallelogram bases).
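The 2D claim can be verified concretely: taking A to be rotation by 45° combined with scaling by 1/√2 (so A has entries ±1/2; this particular matrix is my choice of "suitable scaling"), ||Ax||_1 reproduces the maximum norm of x:

```python
def taxicab(x):
    return sum(abs(xi) for xi in x)

def apply_A(x):
    # A = (rotation by 45 degrees) * (1/sqrt(2)) = [[1/2, -1/2], [1/2, 1/2]]
    return (0.5 * x[0] - 0.5 * x[1], 0.5 * x[0] + 0.5 * x[1])

# ||A x||_1 = (|x1 - x2| + |x1 + x2|) / 2 = max(|x1|, |x2|)
for x in [(3.0, 7.0), (-2.0, 5.0), (4.0, -4.0)]:
    assert abs(taxicab(apply_A(x)) - max(abs(x[0]), abs(x[1]))) < 1e-12
```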

All the above formulas also yield norms on Cn without modification.

Infinite dimensional case

The generalization of the above norms to an infinite number of components leads to the Lp spaces, with norms

 \|x\|_p = \left(\sum_{i\in\mathbb N}|x_i|^p\right)^{\frac1p} resp.  \|f\|_{p,X} = \left(\int_X|f(x)|^p\,\mathrm dx\right)^{\frac1p}

(for complex-valued sequences x resp. functions f defined on X\subset\mathbb R), which can be further generalized (see Haar measure).

Any inner product induces in a natural way the norm \|x\| := \sqrt{\langle x,x\rangle}.

Other examples of infinite dimensional normed vector spaces can be found in the Banach space article.

Properties

Illustrations of unit circles in different norms.

The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm the unit circle in R2 is a square, for the 2-norm (Euclidean norm) it is the well-known unit circle, while for the infinity norm it is a different square. For any p-norm it is a superellipse (with congruent axes). See the accompanying illustration. Note that due to the definition of the norm, the unit circle is always convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle).

In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm.

Two norms ||•||α and ||•||β on a vector space V are called equivalent if there exist positive real numbers C and D such that

C\|x\|_\alpha\leq\|x\|_\beta\leq D\|x\|_\alpha

for all x in V. On a finite-dimensional vector space all norms are equivalent. For instance, the l_1, l_2, and l_\infty norms are all equivalent on \mathbb{R}^n:

\|x\|_2\le\|x\|_1\le\sqrt{n}\|x\|_2
\|x\|_\infty\le\|x\|_2\le\sqrt{n}\|x\|_\infty
\|x\|_\infty\le\|x\|_1\le n\|x\|_\infty

Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. To be more precise, equivalent norms induce the same uniform structure on the vector space, so the resulting uniform spaces are uniformly isomorphic.
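The three pairwise equivalence inequalities above can be spot-checked numerically on random vectors (a sketch; the dimension, sample count, and tolerances are arbitrary choices):

```python
import math
import random

def n1(x):
    return sum(abs(t) for t in x)

def n2(x):
    return math.sqrt(sum(t * t for t in x))

def ninf(x):
    return max(abs(t) for t in x)

random.seed(0)
n = 5
for _ in range(100):
    x = [random.uniform(-10, 10) for _ in range(n)]
    # ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2
    assert n2(x) <= n1(x) <= math.sqrt(n) * n2(x) + 1e-9
    # ||x||_inf <= ||x||_2 <= sqrt(n) ||x||_inf
    assert ninf(x) <= n2(x) <= math.sqrt(n) * ninf(x) + 1e-9
    # ||x||_inf <= ||x||_1 <= n ||x||_inf
    assert ninf(x) <= n1(x) <= n * ninf(x) + 1e-9
```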

Every (semi)-norm is a sublinear function, which implies that every norm is a convex function. As a result, finding a global optimum of a norm-based objective function is often tractable.

Given a finite family of seminorms pi on a vector space the sum

p(x):=\sum_{i=0}^n p_i(x)

is again a seminorm.

For any norm p on a vector space V, we have that for all u and v in V:

p(u ± v) ≥ | p(u) − p(v) |
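This reverse triangle inequality is easy to confirm on sample vectors (a sketch using the Euclidean norm; the vectors are arbitrary):

```python
import math

def p(v):
    # Euclidean norm, used as a concrete instance of a norm
    return math.sqrt(sum(t * t for t in v))

u, v = [3.0, 7.0], [-1.0, 2.0]
diff = [a - b for a, b in zip(u, v)]
s = [a + b for a, b in zip(u, v)]

# p(u - v) >= |p(u) - p(v)| and p(u + v) >= |p(u) - p(v)|
assert p(diff) >= abs(p(u) - p(v))
assert p(s) >= abs(p(u) - p(v))
```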

For the lp norms, we have Hölder's inequality[2]

|x^\top y|\le\| x\|_p\|y\|_q\qquad \frac{1}{p}+\frac{1}{q}=1

A special case of the above property is the Cauchy–Schwarz inequality:[2]

|x^\top y|\le\|x\|_2\|y\|_2
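Both inequalities can be checked numerically (a sketch; the vectors and the conjugate pair p = 3, q = 3/2 are arbitrary choices satisfying 1/p + 1/q = 1):

```python
def p_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

x, y = [1.0, -2.0, 3.0], [4.0, 0.5, 1.0]

# Hölder's inequality with p = 3, q = 3/2
p, q = 3.0, 1.5
assert abs(dot(x, y)) <= p_norm(x, p) * p_norm(y, q) + 1e-12

# special case p = q = 2: the Cauchy-Schwarz inequality
assert abs(dot(x, y)) <= p_norm(x, 2) * p_norm(y, 2) + 1e-12
```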

Classification of seminorms: Absolutely convex absorbing sets

All seminorms on a vector space V can be classified in terms of absolutely convex absorbing sets in V. To each such set, A, corresponds a seminorm pA called the gauge of A, defined as

pA(x) := inf{α : α > 0, x ∈ α A}

with the property that

{x : pA(x) < 1} ⊆ A ⊆ {x : pA(x) ≤ 1}.

Conversely, if a norm p is given and A is its open (or closed) unit ball, then A is an absolutely convex absorbing set, and p = pA.
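The gauge can be computed numerically for a concrete set by bisection, since x ∈ αA holds for all α above the gauge value when A is absolutely convex and absorbing. The sketch below (the `gauge` helper and its membership-test interface are hypothetical, not a standard API) recovers the maximum norm as the gauge of its own unit ball:

```python
def gauge(contains_A, x, lo=1e-9, hi=1e9, iters=100):
    # p_A(x) = inf{alpha > 0 : x in alpha*A}, found by bisection.
    # Assumes A is absolutely convex and absorbing, so membership of
    # x/alpha in A is monotone in alpha. (Hypothetical helper.)
    for _ in range(iters):
        mid = (lo + hi) / 2
        # x in mid*A  <=>  x/mid in A
        if contains_A([t / mid for t in x]):
            hi = mid
        else:
            lo = mid
    return hi

# A = closed unit ball of the maximum norm
in_A = lambda v: max(abs(t) for t in v) <= 1.0

# the gauge of a norm's unit ball recovers the norm itself
assert abs(gauge(in_A, [3.0, -7.0]) - 7.0) < 1e-6
```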

Any locally convex topological vector space has a local basis consisting of absolutely convex absorbing sets. A common method to construct such a basis is to use a family of seminorms. Typically this family is infinite, and there are enough seminorms to distinguish between elements of the vector space, creating a Hausdorff space.

Notes

  1. Except in R1, where it coincides with the Euclidean norm, and R0, where it is trivial.
  2. Golub, Gene; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: The Johns Hopkins University Press. p. 53. ISBN 0-8018-5413-X.
