Dimension theorem for vector spaces

In mathematics, the dimension theorem for vector spaces states that every vector space has a well-defined dimension. This dimension may be finite or an infinite cardinal number.

Formally, the dimension theorem for vector spaces states that

Given a vector space V, any two linearly independent generating sets (in other words, any two bases) have the same cardinality.

If V is finitely generated, the result says that any two bases have the same number of elements.

The cardinality of a basis is called the dimension of the vector space.
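For a concrete illustration, here is a minimal numerical sketch (using NumPy; the two matrices are hypothetical examples chosen for this note, not part of the theorem). It checks that two different bases of R^2 each have full rank and, as the theorem guarantees, the same cardinality.

    import numpy as np

    # Two candidate bases of R^2, written as the rows of a matrix.
    standard = np.array([[1.0, 0.0],
                         [0.0, 1.0]])
    tilted = np.array([[1.0, 1.0],
                       [1.0, -1.0]])

    # Each pair is a basis exactly when its matrix has full rank 2.
    assert np.linalg.matrix_rank(standard) == 2
    assert np.linalg.matrix_rank(tilted) == 2

    # Both bases have the same cardinality: 2, the dimension of R^2.
    print(len(standard), len(tilted))  # 2 2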

While the proof of the existence of a basis for any vector space in the general case requires Zorn's lemma and is in fact equivalent to the axiom of choice, the uniqueness of the cardinality of the basis requires only the ultrafilter lemma, which is strictly weaker. The theorem can be generalized to arbitrary R-modules for rings R having invariant basis number.

For the finitely generated case, the proof uses only elementary arguments of linear algebra and requires no form of the axiom of choice.

Proof

Assume that \{ a_i : i \in I \} and \{ b_j : j \in J \} are both bases, with the cardinality of I bigger than the cardinality of J. From this assumption we will derive a contradiction.

Case 1

Assume that I is infinite.

Every b_j can be written as a finite sum

b_j = \sum_{i\in E_j} \lambda_{i,j} a_i , where E_j is a finite subset of I.

Since the cardinality of I is greater than that of J and the E_j's are finite subsets of I, the cardinality of I is also bigger than the cardinality of \bigcup_{j\in J} E_j: if J is finite, this union of finitely many finite sets is finite, and if J is infinite, its cardinality is at most |J| \cdot \aleph_0 = |J|; in either case it is smaller than |I|. (Note that this argument works only for infinite I.) So there is some i_0\in I which does not appear in any E_j. The corresponding a_{i_0} can be expressed as a finite linear combination of b_j's, which in turn can be expressed as a finite linear combination of a_i's not involving a_{i_0}. Hence a_{i_0} is linearly dependent on the other a_i's, contradicting the linear independence of the basis \{ a_i : i \in I \}.
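Spelled out, write a_{i_0} = \sum_{j\in F} c_j b_j for some finite subset F of J and scalars c_j (notation introduced here only for this step). Substituting the expansion of each b_j gives

a_{i_0} = \sum_{j\in F} c_j \sum_{i\in E_j} \lambda_{i,j} a_i = \sum_{i\in \bigcup_{j\in F} E_j} \Bigl(\sum_{j\in F} c_j \lambda_{i,j}\Bigr) a_i ,

and a_{i_0} does not occur on the right-hand side, since i_0 lies in no E_j.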

Case 2

Now assume that I is finite and of cardinality bigger than the cardinality of J. Write m and n for the cardinalities of I and J, respectively. Every a_i can be written as a sum

a_i = \sum_{j\in J} \mu_{i,j} b_j

The matrix (\mu_{i,j}: i\in I, j\in J) has n columns (the j-th column is the m-tuple (\mu_{i,j}: i\in I)), so it has rank at most n. Since m > n, its m rows cannot be linearly independent. Write r_i = (\mu_{i,j}: j\in J) for the i-th row; then there is a nontrivial linear combination

 \sum_{i\in I}  \nu_i r_i = 0

But then also \sum_{i\in I} \nu_i a_i = \sum_{i\in I} \nu_i \sum_{j\in J} \mu_{i,j} b_j = \sum_{j\in J} \biggl(\sum_{i\in I} \nu_i\mu_{i,j} \biggr) b_j = 0, so the a_i are linearly dependent, contradicting the assumption that they form a basis.
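The row-dependence argument can be checked numerically. In the following sketch (using NumPy; the coefficient matrix is a hypothetical example with m = 3 and n = 2), a nontrivial \nu annihilating the rows is read off from the singular value decomposition.

    import numpy as np

    # Hypothetical coefficients (mu_{i,j}) with m = 3 rows and n = 2 columns,
    # mirroring the case m > n: any 3x2 matrix has rank at most 2, so its
    # three rows must be linearly dependent.
    mu = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [4.0, 6.0]])  # third row = first + second

    # A nontrivial nu with sum_i nu_i r_i = 0 lies in the null space of the
    # transpose; the last right-singular vector of mu^T supplies one, since
    # rank(mu^T) <= 2 < 3.
    _, _, vt = np.linalg.svd(mu.T)
    nu = vt[-1]

    assert np.allclose(mu.T @ nu, 0.0)  # sum_i nu_i mu_{i,j} = 0 for every j
    assert not np.allclose(nu, 0.0)     # and nu is nontrivial
    print(nu)                           # proportional to (1, 1, -1)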


Kernel extension theorem for vector spaces

This application of the dimension theorem is sometimes itself called the dimension theorem. Let

T : U → V

be a linear transformation. Then

dim(range(T)) + dim(kernel(T)) = dim(U),

that is, the dimension of U is equal to the dimension of the transformation's range plus the dimension of the kernel. See rank-nullity theorem for a fuller discussion.
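As a numerical check (the matrix below is a hypothetical example, and the tolerance 1e-10 is an assumption for detecting zero singular values), NumPy can verify the identity for a map T : R^4 → R^3 by computing the rank and a kernel basis directly from the singular value decomposition.

    import numpy as np

    # A hypothetical linear map T : R^4 -> R^3, given by its matrix.
    T = np.array([[1.0, 0.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0, 0.0],
                  [1.0, 1.0, 5.0, 0.0]])  # third row = first + second

    dim_U = T.shape[1]               # dim(U) = 4
    rank = np.linalg.matrix_rank(T)  # dim(range(T)) = 2

    # Right-singular vectors whose singular value is (numerically) zero span
    # the kernel; vt has 4 rows but only 3 singular values are returned, so
    # pad with zeros before comparing.
    _, s, vt = np.linalg.svd(T)
    s = np.concatenate([s, np.zeros(vt.shape[0] - len(s))])
    kernel_basis = vt[s < 1e-10]

    assert np.allclose(T @ kernel_basis.T, 0.0)  # each kernel vector maps to 0
    assert rank + len(kernel_basis) == dim_U     # rank + nullity = dim(U)
    print(rank, len(kernel_basis))               # 2 2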

