Theorems and definitions in linear algebra

This article collects the main theorems and definitions in linear algebra.

Vector Spaces

Let V be a set on which two operations (vector addition and scalar multiplication) are defined. If the listed axioms are satisfied for all \vec u, \vec v, and \vec w in V and all scalars c and d, then V is called a vector space:

Addition:

  1. \vec u + \vec v\text{ is in }V\text{.}
  2. \vec u + \vec v = \vec v + \vec u
  3. \vec u + (\vec v + \vec w) = (\vec u + \vec v) + \vec w
  4. V\text{ has a }\mathbf{zero}\text{ }\mathbf{vector}\text{ }\vec 0\text{ such that for every }\vec u\text{ in }V\text{, }\vec u + \vec 0 = \vec u
  5. \text{For every }\vec u\text{ in }V\text{, there is a vector in }V\text{ denoted by }-\vec u\text{ such that }\vec u + (-\vec u) = \vec 0\text{.}

Scalar Multiplication:

  1. c\vec u\text{ is in }V\text{.}
  2. c(\vec u + \vec v) = c\vec u + c\vec v
  3. (c + d)\vec u = c\vec u + d\vec u
  4. c(d\vec u) = (cd)\vec u
  5. 1(\vec u) = \vec u

Subspaces

If W is a nonempty subset of a vector space V, then W is a subspace of V if and only if the following closure conditions hold:

  1. \text{If }\vec u\text{ and }\vec v\text{ are in }W\text{, then }\vec u + \vec v\text{ is in }W\text{.}
  2. \text{If }\vec u\text{ is in }W\text{ and }c\text{ is any scalar, then }c\vec u\text{ is in }W\text{.}

Linear combinations

A vector \vec v in a vector space V is called a linear combination of the vectors \vec u_1, \vec u_2, \cdots , \vec u_k in V if \vec v can be written in the form \vec v = c_1\vec u_1 + c_2 \vec u_2 + \cdots + c_k\vec u_k, where c_1, c_2, \cdots , c_k are scalars.

Systems of linear equations

A system of linear equations (or linear system) is a collection of linear equations involving the same set of variables.

Cramer's Rule

If a system of n linear equations in n variables has a coefficient matrix with a nonzero determinant |A|, then the solution of the system is given by

x_1 = \frac{\det(A_1)}{\det(A)}, \qquad x_2 = \frac{\det(A_2)}{\det(A)} , \qquad \dots , \qquad x_n = \frac{\det(A_n)}{\det(A)},

where A_i is the matrix A with its i-th column replaced by the column of constants in the system of equations.
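
A minimal NumPy sketch of Cramer's rule (the function name cramer_solve and the 2×2 example system are illustrative choices, not from the original text):

    import numpy as np

    def cramer_solve(A, b):
        # Solve Ax = b by Cramer's rule, assuming det(A) != 0.
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        det_A = np.linalg.det(A)
        x = np.empty(A.shape[0])
        for i in range(A.shape[0]):
            A_i = A.copy()
            A_i[:, i] = b                     # replace the i-th column by the constants
            x[i] = np.linalg.det(A_i) / det_A
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(cramer_solve(A, b))                 # [0.8 1.4], matching np.linalg.solve(A, b)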

Linear dependence

A set of vectors \{\vec{v_1}, \vec{v_2}, \cdots, \vec{v_k}\} in a vector space V is linearly dependent if there exist scalars x_1, x_2, \ldots, x_k, not all zero, such that x_1\vec{v_1}+x_2\vec{v_2}+\cdots+x_k\vec{v_k} = \vec{0}.

Linear independence

A set of vectors \{\vec{v_1}, \vec{v_2}, \cdots, \vec{v_k}\} in a vector space V is linearly independent if the vector equation x_1\vec{v_1}+x_2\vec{v_2}+\cdots+x_k\vec{v_k} = \vec{0} has only the trivial solution, x_1 = x_2 = \cdots = x_k = 0.
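
As a quick numerical illustration (a sketch; the three example vectors are chosen here), independence can be tested by checking whether the matrix whose columns are the vectors has rank equal to the number of vectors:

    import numpy as np

    vectors = [np.array([1.0, 0.0, 1.0]),
               np.array([0.0, 1.0, 1.0]),
               np.array([1.0, 1.0, 2.0])]   # third = first + second, so dependent

    M = np.column_stack(vectors)
    print(np.linalg.matrix_rank(M) == len(vectors))   # False: a nontrivial solution exists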

Bases

A set of vectors S = \{\vec v_1, \vec v_2, \cdots , \vec v_n\} in a vector space V is called a basis if the following conditions are true:

  1. S spans V.
  2. S is linearly independent.

Linear transformations and matrices

Change of coordinate matrix
Clique
Coordinate vector relative to a basis
Dimension theorem
Dominance relation
Identity matrix
Identity transformation
Incidence matrix
Inverse of a linear transformation
Inverse of a matrix
Invertible linear transformation
Isomorphic vector spaces
Isomorphism
Kronecker delta
Left-multiplication transformation
Linear operator
Linear transformation
Matrix representing a linear transformation
Nullity of a linear transformation
Null space
Ordered basis
Product of matrices
Projection on a subspace
Projection on the x-axis
Range
Rank of a linear transformation
Reflection about the x-axis
Rotation
Similar matrices
Standard ordered basis for F_n
Standard representation of a vector space with respect to a basis
Zero transformation

Additional terms: coefficient of a differential equation, differentiability of a complex function, vector space of functions, differential operator, auxiliary polynomial, e raised to a complex power, exponential function.

Definition of a Linear Transformation

Let V and W be vector spaces. The function T:V\to W is called a linear transformation of V into W if the following two properties are true for all \vec u and \vec v in V and for any scalar c.

  1. T(\vec u + \vec v) = T(\vec u) + T(\vec v)
  2. T(c\vec u) = cT(\vec u)


{\color{Blue}~2.1} N(T) and R(T) are subspaces

Let V and W be vector spaces and T: V→W be linear. Then N(T) and R(T) are subspaces of V and W, respectively.

{\color{Blue}~2.2} R(T)= span of T(basis in V)

Let V and W be vector spaces, and let T: V→W be linear. If \beta=\{v_1,v_2,\ldots,v_n\} is a basis for V, then

\mathrm{R}(T)=\mathrm{span}(T(\beta))=\mathrm{span}(\{T(v_1),T(v_2),\ldots,T(v_n)\}).

{\color{Blue}~2.3} Dimension theorem

Let V and W be vector spaces, and let T: V → W be linear. If V is finite-dimensional, then

\mathrm{nullity}(T)+\mathrm{rank}(T)=\dim(V).
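
A numerical sketch of the dimension theorem for T = L_A on R^3 (the example matrix is chosen here); the rank and a basis of the null space are read off from the singular value decomposition:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])          # T = L_A : R^3 -> R^2, so dim(V) = 3
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))            # dim R(T)
    null_basis = Vt[rank:]                   # remaining rows of V^T span N(T)
    print(rank, null_basis.shape[0])         # 1 2, and 1 + 2 = 3 = dim(V)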

{\color{Blue}~2.4} one-to-one ⇔ N(T) = {0}

Let T:V\to W be a linear transformation. Then T is one-to-one if and only if \operatorname{ker}(T) = \{\vec 0\}.

{\color{Blue}~2.5} one-to-one ⇔ onto ⇔ rank(T) = dim(V)

Let V and W be vector spaces of equal (finite) dimension, and let T:V→W be linear. Then the following are equivalent.

(a) T is one-to-one.
(b) T is onto.
(c) rank(T) = dim(V).

{\color{Blue}~2.6} exactly one T with T(v_i)=w_i on a basis

Let V and W be vector spaces over F, and suppose that \{v_1, v_2,\ldots,v_n\} is a basis for V. For w_1, w_2,\ldots,w_n in W, there exists exactly one linear transformation T: V→W such that T(v_i)=w_i for i=1,2,\ldots,n.
Corollary. Let V and W be vector spaces, and suppose that V has a finite basis \{v_1,v_2,\ldots,v_n\}. If U, T: V→W are linear and U(v_i)=T(v_i) for i=1,2,\ldots,n, then U=T.

{\color{Blue}~2.7} \mathcal{L}(V,W) is a vector space

Let V and W be vector spaces over a field F, and let T, U: V→W be linear.

(a) For all a ∈ F, aT+U is linear.
(b) Using the operations of addition and scalar multiplication in the preceding definition, the collection of all linear transformations from V to W is a vector space over F.

{\color{Blue}~2.8} linearity of matrix representation of linear transformation

Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively, and let T, U: V→W be linear transformations. Then

(a) [T+U]_\beta^\gamma=[T]_\beta^\gamma+[U]_\beta^\gamma and
(b) [aT]_\beta^\gamma=a[T]_\beta^\gamma for all scalars a.

{\color{Blue}~2.9} composition law of linear operators

Let V, W, and Z be vector spaces over the same field F, and let T:V→W and U:W→Z be linear. Then UT:V→Z is linear.

{\color{Blue}~2.10} law of linear operator

Let V be a vector space. Let T, U_1, U_2 ∈ \mathcal{L}(V). Then
(a) T(U_1+U_2)=TU_1+TU_2 and (U_1+U_2)T=U_1T+U_2T
(b) T(U_1U_2)=(TU_1)U_2
(c) TI=IT=T
(d) a(U_1U_2)=(aU_1)U_2=U_1(aU_2) for all scalars a.

{\color{Blue}~2.11} [UT]_\alpha^\gamma=[U]_\beta^\gamma[T]_\alpha^\beta

Let V, W, and Z be finite-dimensional vector spaces with ordered bases α, β, and γ, respectively. Let T: V→W and U: W→Z be linear transformations. Then

[UT]_\alpha^\gamma=[U]_\beta^\gamma[T]_\alpha^\beta.

Corollary. Let V be a finite-dimensional vector space with an ordered basis β. Let T,U∈\mathcal{L}(V). Then [UT]_\beta=[U]_\beta[T]_\beta.

{\color{Blue}~2.12} law of matrix

Let A be an m×n matrix, B and C be n×p matrices, and D and E be q×m matrices. Then

(a) A(B+C)=AB+AC and (D+E)A=DA+EA.
(b) a(AB)=(aA)B=A(aB) for any scalar a.
(c) I_mA=A=AI_n.
(d) If V is an n-dimensional vector space with an ordered basis β, then [I_V]_\beta=I_n.

Corollary. Let A be an m×n matrix, B_1,B_2,\ldots,B_k be n×p matrices, C_1,C_2,\ldots,C_k be q×m matrices, and a_1,a_2,\ldots,a_k be scalars. Then

A\Bigg(\sum_{i=1}^k a_iB_i\Bigg)=\sum_{i=1}^k a_iAB_i

and

\Bigg(\sum_{i=1}^k a_iC_i\Bigg)A=\sum_{i=1}^k a_iC_iA.

{\color{Blue}~2.13} law of column multiplication

Let A be an m×n matrix and B be an n×p matrix. For each j (1\le j\le p) let u_j and v_j denote the jth columns of AB and B, respectively. Then
(a) u_j=Av_j
(b) v_j=Be_j, where e_j is the jth standard vector of F^p.

{\color{Blue}~2.14} [T(u)]_\gamma=[T]_\beta^\gamma[u]_\beta

Let V and W be finite-dimensional vector spaces having ordered bases β and γ, respectively, and let T: V→W be linear. Then, for each u ∈ V, we have

[T(u)]_\gamma=[T]_\beta^\gamma[u]_\beta.
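
For a concrete sketch of this identity, take T = d/dx from P_3(R) to P_2(R) with the standard ordered bases β = {1, x, x^2, x^3} and γ = {1, x, x^2} (an example chosen here, not from the text):

    import numpy as np

    # Column j of [T]_beta^gamma holds the gamma-coordinates of T applied to the
    # j-th basis vector of beta: T(1)=0, T(x)=1, T(x^2)=2x, T(x^3)=3x^2.
    T = np.array([[0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 2.0, 0.0],
                  [0.0, 0.0, 0.0, 3.0]])

    u = np.array([5.0, 1.0, 4.0, 2.0])       # u = 5 + x + 4x^2 + 2x^3 in beta-coordinates
    print(T @ u)                             # [1. 8. 6.] = coordinates of u' = 1 + 8x + 6x^2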

{\color{Blue}~2.15} properties of L_A

Let A be an m×n matrix with entries from F. Then the left-multiplication transformation L_A: F^n→F^m is linear. Furthermore, if B is any other m×n matrix (with entries from F) and β and γ are the standard ordered bases for F^n and F^m, respectively, then we have the following properties.
(a) [L_A]_\beta^\gamma=A.
(b) L_A=L_B if and only if A=B.
(c) L_{A+B}=L_A+L_B and L_{aA}=aL_A for all a∈F.
(d) If T:F^n→F^m is linear, then there exists a unique m×n matrix C such that T=L_C. In fact, C=[T]_\beta^\gamma.
(e) If E is an n×p matrix, then L_{AE}=L_AL_E.
(f) If m=n, then L_{I_n}=I_{F^n}.

{\color{Blue}~2.16} A(BC)=(AB)C

Let A,B, and C be matrices such that A(BC) is defined. Then A(BC)=(AB)C; that is, matrix multiplication is associative.

{\color{Blue}~2.17} T^{-1} is linear

Let V and W be vector spaces, and let T:V→W be linear and invertible. Then T^{-1}: W→V is linear.

{\color{Blue}~2.18} [T^{-1}]_\gamma^\beta=([T]_\beta^\gamma)^{-1}

Let V and W be finite-dimensional vector spaces with ordered bases β and γ, respectively. Let T:V→W be linear. Then T is invertible if and only if [T]_\beta^\gamma is invertible. Furthermore, [T^{-1}]_\gamma^\beta=([T]_\beta^\gamma)^{-1}

Lemma. Let T be an invertible linear transformation from V to W. Then V is finite-dimensional if and only if W is finite-dimensional. In this case, dim(V)=dim(W).

Corollary 1. Let V be a finite-dimensional vector space with an ordered basis β, and let T:V→V be linear. Then T is invertible if and only if [T]_\beta is invertible. Furthermore, [T^{-1}]_\beta=([T]_\beta)^{-1}.

Corollary 2. Let A be an n×n matrix. Then A is invertible if and only if L_A is invertible. Furthermore, (L_A)^{-1}=L_{A^{-1}}.

{\color{Blue}~2.19} V is isomorphic to W ⇔ dim(V)=dim(W)

Let V and W be finite-dimensional vector spaces (over the same field). Then V is isomorphic to W if and only if dim(V)=dim(W).

Corollary. Let V be a vector space over F. Then V is isomorphic to F^n if and only if dim(V)=n.

{\color{Blue}~2.20} \mathcal{L}(V,W) is isomorphic to Mm×n(F)

Let V and W be finite-dimensional vector spaces over F of dimensions n and m, respectively, and let β and γ be ordered bases for V and W, respectively. Then the function ~\Phi: \mathcal{L}(V,W)→Mm×n(F), defined by ~\Phi(T)=[T]_\beta^\gamma for T∈\mathcal{L}(V,W), is an isomorphism.

Corollary. Let V and W be finite-dimensional vector spaces of dimension n and m, respectively. Then \mathcal{L}(V,W) is finite-dimensional of dimension mn.

{\color{Blue}~2.21} Φ_β is an isomorphism

For any finite-dimensional vector space V with ordered basis β, Φ_β is an isomorphism.

{\color{Blue}~2.22} change of coordinate matrix Q=[I_V]_{\beta'}^\beta

Let β and β' be two ordered bases for a finite-dimensional vector space V, and let Q=[I_V]_{\beta'}^\beta. Then
(a) Q is invertible.
(b) For any v\in V, ~[v]_\beta=Q[v]_{\beta'}.

{\color{Blue}~2.23} [T]_{\beta'}=Q^{-1}[T]_\beta Q

Let T be a linear operator on a finite-dimensional vector space V, and let β and β' be two ordered bases for V. Suppose that Q is the change of coordinate matrix that changes β'-coordinates into β-coordinates. Then

~[T]_{\beta'}=Q^{-1}[T]_\beta Q.

Corollary. Let A∈Mn×n(F), and let γ be an ordered basis for F^n. Then [L_A]_\gamma=Q^{-1}AQ, where Q is the n×n matrix whose jth column is the jth vector of γ.
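
A small numerical sketch of the change of coordinate formula (the matrices [T]_β and Q below are examples chosen here):

    import numpy as np

    T_beta = np.array([[2.0, 1.0],
                       [0.0, 3.0]])          # [T]_beta
    Q = np.array([[1.0, 1.0],
                  [1.0, 2.0]])               # columns: beta-coordinates of the beta' vectors

    T_beta_prime = np.linalg.inv(Q) @ T_beta @ Q
    print(T_beta_prime)                      # [T]_{beta'}, similar to [T]_beta
    print(np.linalg.eigvals(T_beta), np.linalg.eigvals(T_beta_prime))   # same spectrum {2, 3}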

Principal Axes Theorem

For a conic whose equation is ax^2 + bxy + cy^2 + dx + ey + f = 0, the rotation given by X=PX' eliminates the xy-term if P is an orthogonal matrix, with \left\vert P\right\vert = 1, that diagonalizes the matrix of the quadratic form, A = \begin{bmatrix} a & b/2 \\ b/2 & c \end{bmatrix}. That is,

P^TAP = \begin{bmatrix}
\lambda_1 & 0 \\
0 & \lambda_2
\end{bmatrix},

where \lambda_1 and \lambda_2 are eigenvalues of A. The equation of the rotated conic is given by

\lambda_1(x')^2 + \lambda_2(y')^2 + \begin{bmatrix}
d & e\\
\end{bmatrix}PX' + f = 0.
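
As a numerical sketch (the conic 5x^2 + 4xy + 5y^2 - 9 = 0 is an example chosen here), NumPy's symmetric eigensolver produces the rotation P and the eigenvalues of A:

    import numpy as np

    a, b, c, d, e, f = 5.0, 4.0, 5.0, 0.0, 0.0, -9.0
    A = np.array([[a, b / 2.0],
                  [b / 2.0, c]])             # matrix of the quadratic form
    eigvals, P = np.linalg.eigh(A)           # orthogonal P with P^T A P diagonal
    if np.linalg.det(P) < 0:
        P[:, 0] = -P[:, 0]                   # force det(P) = 1, so P is a rotation
    print(eigvals)                           # [3. 7.]: rotated conic 3(x')^2 + 7(y')^2 - 9 = 0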

{\color{Blue}~2.27} p(D)(x)=0 (constant coefficients) ⇒ x^{(k)} exists for all k∈N

Any solution to a homogeneous linear differential equation with constant coefficients has derivatives of all orders; that is, if x is a solution to such an equation, then x^{(k)} exists for every positive integer k.

{\color{Blue}~2.28} {solutions}= N(p(D))

The set of all solutions to a homogeneous linear differential equation with constant coefficients coincides with the null space of p(D), where p(t) is the auxiliary polynomial associated with the equation.

Corollary. The set of all solutions to a homogeneous linear differential equation with constant coefficients is a subspace of \mathrm{C}^\infty.

{\color{Blue}~2.29} derivative of exponential function

For any exponential function f(t)=e^{ct}, f'(t)=ce^{ct}.

{\color{Blue}~2.30} \{e^{-a_0t}\} is a basis of N(D+a_0I)

The solution space for the differential equation,

y'+a_0y=0

is of dimension 1 and has \{e^{-a_0t}\} as a basis.

Corollary. For any complex number c, the null space of the differential operator D-cI has \{e^{ct}\} as a basis.

{\color{Blue}~2.31} e^{ct} is a solution

Let p(t) be the auxiliary polynomial for a homogeneous linear differential equation with constant coefficients. For any complex number c, if c is a zero of p(t), then e^{ct} is a solution to the differential equation.

{\color{Blue}~2.32} dim(N(p(D)))=n

For any differential operator p(D) of order n, the null space of p(D) is an n-dimensional subspace of \mathrm{C}^\infty.

Lemma 1. The differential operator D-cI: \mathrm{C}^\infty\to\mathrm{C}^\infty is onto for any complex number c.

Lemma 2. Let V be a vector space, and suppose that T and U are linear operators on V such that U is onto and the null spaces of T and U are finite-dimensional. Then the null space of TU is finite-dimensional, and

dim(N(TU))=dim(N(T))+dim(N(U)).

Corollary. The solution space of any nth-order homogeneous linear differential equation with constant coefficients is an n-dimensional subspace of \mathrm{C}^\infty.

{\color{Blue}~2.33} the e^{c_it} are linearly independent when the c_i are distinct

Given n distinct complex numbers c_1, c_2,\ldots,c_n, the set of exponential functions \{e^{c_1t},e^{c_2t},\ldots,e^{c_nt}\} is linearly independent.

Corollary. For any nth-order homogeneous linear differential equation with constant coefficients, if the auxiliary polynomial has n distinct zeros c_1, c_2, \ldots, c_n, then \{e^{c_1t},e^{c_2t},\ldots,e^{c_nt}\} is a basis for the solution space of the differential equation.

Lemma. For a given complex number c and positive integer n, suppose that (t-c)^n is the auxiliary polynomial of a homogeneous linear differential equation with constant coefficients. Then the set

\beta=\{e^{ct},te^{ct},\ldots,t^{n-1}e^{ct}\}

is a basis for the solution space of the equation.

{\color{Blue}~2.34} general solution of homogeneous linear differential equation

Given a homogeneous linear differential equation with constant coefficients and auxiliary polynomial

(t-c_1)^{n_1}(t-c_2)^{n_2}\cdots(t-c_k)^{n_k},

where n_1, n_2,\ldots,n_k are positive integers and c_1, c_2, \ldots, c_k are distinct complex numbers, the following set is a basis for the solution space of the equation:

\{e^{c_1t}, te^{c_1t},\ldots,t^{n_1-1}e^{c_1t},\ldots,e^{c_kt},te^{c_kt},\ldots,t^{n_k-1}e^{c_kt}\}.
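
A short SymPy sketch of this recipe (the auxiliary polynomial t^3 - 3t + 2 = (t-1)^2(t+2), i.e. the equation y''' - 3y' + 2y = 0, is an example chosen here):

    import sympy as sp

    t, x = sp.symbols('t x')
    p = sp.Poly(t**3 - 3*t + 2, t)           # auxiliary polynomial of y''' - 3y' + 2y = 0

    basis = []
    for c, mult in sp.roots(p).items():      # zeros with their multiplicities
        basis += [x**j * sp.exp(c*x) for j in range(mult)]
    print(basis)                             # e.g. [exp(x), x*exp(x), exp(-2*x)]

    # each basis function solves the differential equation
    print([sp.simplify(g.diff(x, 3) - 3*g.diff(x) + 2*g) for g in basis])   # [0, 0, 0]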

Definition of an Orthogonal Matrix

A square matrix P is called orthogonal if it is invertible and if

P^{-1} = P^T.

Real Spectral Theorem

If A is an n\times n symmetric matrix, then the following properties are true:

  1. A is diagonalizable.
  2. All eigenvalues of A are real.
  3. If \lambda is an eigenvalue of A with multiplicity k, then \lambda has k linearly independent eigenvectors. That is, the eigenspace of \lambda has dimension k.

Also, the set of eigenvalues of A is called the spectrum of A.
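
A NumPy sketch of these properties for an example symmetric matrix (chosen here):

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])                 # symmetric
    lam, Q = np.linalg.eigh(A)                      # real eigenvalues, orthonormal eigenvectors
    print(lam)                                      # spectrum of A: [1. 3. 3.], all real
    print(np.allclose(Q @ np.diag(lam) @ Q.T, A))   # True: A = Q D Q^T, so A is diagonalizable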

Elementary matrix operations and systems of linear equations

Elementary matrix operations

The three elementary row operations are the following:

  1. Interchange two rows.
  2. Multiply a row by a nonzero constant.
  3. Add a multiple of a row to another row.

Elementary matrix

An n \times n matrix is called an elementary matrix if it can be obtained from the identity matrix I_n by a single elementary row operation.

Rank of a matrix

The rank of a matrix A is the number of pivot columns in the reduced row echelon form of A.
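
A sketch of this description of rank (the example matrix is chosen here), comparing SymPy's reduced row echelon form with NumPy's numerical rank:

    import numpy as np
    import sympy as sp

    A = sp.Matrix([[1, 2, 1],
                   [2, 4, 0],
                   [3, 6, 1]])
    rref, pivot_cols = A.rref()
    print(pivot_cols)                        # (0, 2): two pivot columns
    print(len(pivot_cols),
          np.linalg.matrix_rank(np.array(A.tolist(), dtype=float)))   # 2 2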

Invertible Matrices

\text{If }A\text{ is }n\text{ × }n\text{, then the following statements are equivalent:}

  1. A \text{ is invertible.}
  2. A\vec x = \vec b\text{ has a unique solution for every }n \times 1\text{ column matrix }\vec b\text{.}
  3. A\vec x = \vec 0\text{ has only the trivial solution.}
  4. A\text{ is row-equivalent to }I_{n}\text{.}
  5. A\text{ can be written as the product of elementary matrices.}
  6. \det(A) \ne 0
  7. \operatorname{rk}(A) = n\text{, the number of columns of }A\text{.}
  8. \operatorname{nul}(A) = 0
  9. \text{All }n\text{ row vectors of }A\text{ are linearly independent.}
  10. \text{All }n\text{ column vectors of }A\text{ are linearly independent.}

Determinants

If

A = \begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix}

is a 2×2 matrix with entries from a field F, then we define the determinant of A, denoted det(A) or |A|, to be the scalar ad-bc.


*Theorem 1: linear function for a single row.
*Theorem 2: nonzero determinant ⇔ invertible matrix

Theorem 1: The function det: M2×2(F) → F is a linear function of each row of a 2×2 matrix when the other row is held fixed. That is, if u,v, and w are in F² and k is a scalar, then

\det\begin{pmatrix}
u + kv\\
w\\
\end{pmatrix}
=\det\begin{pmatrix}
u\\
w\\
\end{pmatrix}
+ k\det\begin{pmatrix}
v\\
w\\
\end{pmatrix}

and

\det\begin{pmatrix}
w\\
u + kv\\
\end{pmatrix}
=\det\begin{pmatrix}
w\\
u\\
\end{pmatrix}
+ k\det\begin{pmatrix}
w\\
v\\
\end{pmatrix}

Theorem 2: Let A \in M2×2(F). Then the determinant of A is nonzero if and only if A is invertible. Moreover, if A is invertible, then

A^{-1}=\frac{1}{\det(A)}\begin{pmatrix}
A_{22}&-A_{12}\\
-A_{21}&A_{11}\\
\end{pmatrix}
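
A small sketch of the 2×2 determinant and the inverse formula of Theorem 2 (the example matrix is chosen here):

    import numpy as np

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]               # ad - bc = 10
    A_inv = (1.0 / det) * np.array([[ A[1, 1], -A[0, 1]],
                                    [-A[1, 0],  A[0, 0]]])
    print(np.allclose(A_inv, np.linalg.inv(A)))               # True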

Diagonalization

Characteristic polynomial of a linear operator/matrix

{\color{Blue}~5.1} diagonalizable ⇔ basis of eigenvectors

A linear operator T on a finite-dimensional vector space V is diagonalizable if and only if there exists an ordered basis β for V consisting of eigenvectors of T. Furthermore, if T is diagonalizable, \beta=\{v_1,v_2,\ldots,v_n\} is an ordered basis of eigenvectors of T, and D = [T]_\beta, then D is a diagonal matrix and D_{jj} is the eigenvalue corresponding to v_j for 1\le j \le n.

{\color{Blue}~5.2} eigenvalue ⇔ det(A-λI_n)=0

Let A∈Mn×n(F). Then a scalar λ is an eigenvalue of A if and only if det(A-λI_n)=0.

{\color{Blue}~5.3} characteristic polynomial

Let A∈Mn×n(F).
(a) The characteristic polynomial of A is a polynomial of degree n with leading coefficient (-1)^n.
(b) A has at most n distinct eigenvalues.

{\color{Blue}~5.4} υ corresponds to λ ⇔ υ≠0 and υ∈N(T-λI)

Let T be a linear operator on a vector space V, and let λ be an eigenvalue of T.
A vector υ∈V is an eigenvector of T corresponding to λ if and only if υ≠0 and υ∈N(T-λI).

{\color{Blue}~5.5} eigenvectors v_i for distinct λ_i are linearly independent

Let T be a linear operator on a vector space V, and let \lambda_1,\lambda_2,\ldots,\lambda_k be distinct eigenvalues of T. If v_1,v_2,\ldots,v_k are eigenvectors of T such that \lambda_i corresponds to v_i (1\le i\le k), then \{v_1,v_2,\ldots,v_k\} is linearly independent.

{\color{Blue}~5.6} characteristic polynomial splits

The characteristic polynomial of any diagonalizable linear operator splits.

{\color{Blue}~5.7} 1 ≤ dim(Eλ) ≤ m

Let T be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of T having multiplicity m. Then 1 \le\dim(E_{\lambda})\le m.

{\color{Blue}~5.8} S = S_1 ∪ S_2 ∪ ... ∪ S_k is linearly independent

Let T be a linear operator on a vector space V, and let \lambda_1,\lambda_2,\ldots,\lambda_k, be distinct eigenvalues of T. For each i=1,2,\ldots,k, let S_i be a finite linearly independent subset of the eigenspace E_{\lambda_i}. Then S=S_1\cup S_2 \cup\cdots\cup S_k is a linearly independent subset of V.

{\color{Blue}~5.9} ⇔T is diagonalizable

Let T be a linear operator on a finite-dimensional vector space V such that the characteristic polynomial of T splits. Let \lambda_1,\lambda_2,\ldots,\lambda_k be the distinct eigenvalues of T. Then
(a) T is diagonalizable if and only if the multiplicity of \lambda_i is equal to \dim(E_{\lambda_i}) for all i.
(b) If T is diagonalizable and \beta_i is an ordered basis for E_{\lambda_i} for each i, then \beta=\beta_1\cup\beta_2\cup\cdots\cup\beta_k is an ordered basis for V consisting of eigenvectors of T.

Test for diagonalization
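
A NumPy sketch of the test suggested by 5.9(a) for a matrix over C: for each eigenvalue, compare its multiplicity with dim(E_λ) = n - rank(A - λI). The helper name and the crude rounding used to group numerically equal eigenvalues are choices made here, not part of the original text.

    import numpy as np
    from collections import Counter

    def is_diagonalizable(A, tol=1e-8):
        n = A.shape[0]
        # group numerically equal eigenvalues to estimate multiplicities
        mults = Counter(np.round(np.linalg.eigvals(A), 6))
        for lam, m in mults.items():
            dim_eigenspace = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
            if dim_eigenspace != m:
                return False
        return True

    print(is_diagonalizable(np.array([[3.0, 1.0], [0.0, 3.0]])))   # False (a Jordan block)
    print(is_diagonalizable(np.array([[2.0, 0.0], [0.0, 3.0]])))   # True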

Inner product spaces

Inner product, standard inner product on Fn, conjugate transpose, adjoint, Frobenius inner product, complex/real inner product space, norm, length, conjugate linear, orthogonal, perpendicular, unit vector, orthonormal, normalization.

{\color{Blue}~6.1} properties of the inner product

Let V be an inner product space. Then for x,y,z ∈ V and c ∈ F, the following statements are true.
(a) \langle x,y+z\rangle=\langle x,y\rangle+\langle x,z\rangle.
(b) \langle x,cy\rangle=\bar{c}\langle x,y\rangle.
(c) \langle x,\mathit{0}\rangle=\langle\mathit{0},x\rangle=0.
(d) \langle x,x\rangle=0 if and only if x=\mathit{0}.
(e) If \langle x,y\rangle=\langle x,z\rangle for all x\in V, then y=z.

{\color{Blue}~6.2} law of norm

Let V be an inner product space over F. Then for all x,y ∈ V and c ∈ F, the following statements are true.
(a) \|cx\|=|c|\cdot\|x\|.
(b) \|x\|=0 if and only if x=0. In any case, \|x\|\ge0.
(c) (Cauchy–Schwarz Inequality) |\langle x,y\rangle|\le\|x\|\cdot\|y\|.
(d) (Triangle Inequality) \|x+y\|\le\|x\|+\|y\|.

orthonormal basis, Gram–Schmidt process, Fourier coefficients, orthogonal complement, orthogonal projection

{\color{Blue}~6.3} span of orthogonal subset

Let V be an inner product space and S=\{v_1,v_2,\ldots,v_k\} be an orthogonal subset of V consisting of nonzero vectors. If y∈span(S), then

y=\sum_{i=1}^k{\langle y,v_i \rangle \over \|v_i\|^2}v_i.

{\color{Blue}~6.4} Gram-Schmidt process

Let V be an inner product space and S=\{w_1,w_2,\ldots,w_n\} be a linearly independent subset of V. Define S'=\{v_1,v_2,\ldots,v_n\}, where v_1=w_1 and, for 2\le k\le n,

v_k=w_k-\sum_{j=1}^{k-1}{\langle w_k, v_j\rangle\over\|v_j\|^2}v_j

Then S' is an orthogonal set of nonzero vectors such that span(S')=span(S).
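
A direct NumPy transcription of the process above (a sketch; the two example vectors are chosen here):

    import numpy as np

    def gram_schmidt(S):
        # orthogonalize a linearly independent list of vectors w_1, ..., w_n
        S_prime = []
        for w in S:
            v = w.astype(float)
            for vj in S_prime:
                v = v - (np.dot(w, vj) / np.dot(vj, vj)) * vj   # subtract the projections
            S_prime.append(v)
        return S_prime

    S = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
    print(gram_schmidt(S))   # [array([1., 1., 0.]), array([ 0.5, -0.5,  1. ])], orthogonal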

{\color{Blue}~6.5} orthonormal basis

Let V be a nonzero finite-dimensional inner product space. Then V has an orthonormal basis β. Furthermore, if β =\{v_1,v_2,\ldots,v_n\} and x∈V, then

x=\sum_{i=1}^n\langle x,v_i\rangle v_i.

Corollary. Let V be a finite-dimensional inner product space with an orthonormal basis β =\{v_1,v_2,\ldots,v_n\}. Let T be a linear operator on V, and let A=[T]_\beta. Then for any i and j, A_{ij}=\langle T(v_j), v_i\rangle.

{\color{Blue}~6.6} W by orthonormal basis

Let W be a finite-dimensional subspace of an inner product space V, and let y∈V. Then there exist unique vectors u∈W and z∈W^\perp such that y=u+z. Furthermore, if \{v_1,v_2,\ldots,v_k\} is an orthonormal basis for W, then

u=\sum_{i=1}^k\langle y,v_i\rangle v_i.

Corollary. In the notation of Theorem 6.6, the vector u is the unique vector in W that is "closest" to y; that is, for any x∈W, \|y-x\|\ge\|y-u\|, and this inequality is an equality if and only if x=u.

{\color{Blue}~6.7} properties of orthonormal set

Suppose that S=\{v_1,v_2,\ldots,v_k\} is an orthonormal set in an n-dimensional inner product space V. Then
(a) S can be extended to an orthonormal basis \{v_1, v_2, \ldots,v_k,v_{k+1},\ldots,v_n\} for V.
(b) If W=span(S), then S_1=\{v_{k+1},v_{k+2},\ldots,v_n\} is an orthonormal basis for W^\perp (using the preceding notation).
(c) If W is any subspace of V, then dim(V)=dim(W)+dim(W^\perp).

Least squares approximation, Minimal solutions to systems of linear equations

{\color{Blue}~6.8} linear functional representation inner product

Let V be a finite-dimensional inner product space over F, and let g:V→F be a linear transformation. Then there exists a unique vector y∈ V such that \rm{g}(x)=\langle x, y\rangle for all x∈ V.

{\color{Blue}~6.9} definition of T*

Let V be a finite-dimensional inner product space, and let T be a linear operator on V. Then there exists a unique function T*:V→V such that \langle\rm{T}(x),y\rangle=\langle x, \rm{T}^*(y)\rangle for all x,y ∈ V. Furthermore, T* is linear

{\color{Blue}~6.10} [T*]β=[T]*β

Let V be a finite-dimensional inner product space, and let β be an orthonormal basis for V. If T is a linear operator on V, then

[T^*]_\beta=[T]^*_\beta.

{\color{Blue}~6.11} properties of T*

Let V be an inner product space, and let T and U be linear operators on V. Then
(a) (T+U)*=T*+U*;
(b) (cT)*=\bar c T* for any c∈ F;
(c) (TU)*=U*T*;
(d) T**=T;
(e) I*=I.

Corollary. Let A and B be n×n matrices. Then
(a) (A+B)*=A*+B*;
(b) (cA)*=\bar c A* for any c∈ F;
(c) (AB)*=B*A*;
(d) A**=A;
(e) I*=I.

{\color{Blue}~6.12} Least squares approximation

Let A ∈ Mm×n(F) and y∈F^m. Then there exists x_0 ∈ F^n such that (A*A)x_0=A*y and \|Ax_0-y\|\le\|Ax-y\| for all x∈F^n.

Lemma 1. Let A ∈ Mm×n(F), x∈F^n, and y∈F^m. Then

\langle Ax, y\rangle_m =\langle x, A^*y\rangle_n.

Lemma 2. Let A ∈ Mm×n(F). Then rank(A*A)=rank(A).

Corollary (of Lemma 2). If A is an m×n matrix such that rank(A)=n, then A*A is invertible.
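
A NumPy sketch of 6.12 via the normal equations A*A x_0 = A*y (the overdetermined example system is chosen here):

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])
    y = np.array([6.0, 0.0, 0.0])

    x0 = np.linalg.solve(A.conj().T @ A, A.conj().T @ y)           # (A*A) x0 = A*y
    print(x0)                                                      # [ 5. -3.]
    print(np.allclose(x0, np.linalg.lstsq(A, y, rcond=None)[0]))   # True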

{\color{Blue}~6.13} Minimal solutions to systems of linear equations

Let A ∈ Mm×n(F) and b∈ Fm. Suppose that Ax=b is consistent. Then the following statements are true.
(a) There exists exactly one minimal solution s of Ax=b, and s∈R(L_{A^*}).
(b) The vector s is the only solution to Ax=b that lies in R(L_{A^*}); that is, if u satisfies (AA^*)u=b, then s=A^*u.
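
A NumPy sketch of 6.13 for an underdetermined example system (chosen here): the minimal solution is recovered as s = A*u with (AA*)u = b.

    import numpy as np

    A = np.array([[1.0, 1.0, 1.0]])                # one equation x1 + x2 + x3 = 3
    b = np.array([3.0])

    u = np.linalg.solve(A @ A.conj().T, b)         # (A A*) u = b
    s = A.conj().T @ u                             # minimal solution
    print(s)                                       # [1. 1. 1.]
    print(np.allclose(s, np.linalg.pinv(A) @ b))   # True: agrees with the pseudoinverse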
