Transformation matrix

In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping Rn to Rm and \vec x is a column vector with n entries, then

T( \vec x ) = \mathbf{A} \vec x

for some m×n matrix A, called the transformation matrix of T. There are alternative expressions of transformation matrices involving row vectors that are preferred by some authors.

Uses

Matrices allow arbitrary linear transformations to be represented in a consistent format, suitable for computation.[1] This also allows transformations to be concatenated easily (by multiplying their matrices).

Linear transformations are not the only ones that can be represented by matrices. Some transformations that are non-linear on an n-dimensional Euclidean space Rn can be represented as linear transformations on the (n+1)-dimensional space Rn+1. These include both affine transformations (such as translation) and projective transformations. For this reason, 4×4 transformation matrices are widely used in 3D computer graphics. These (n+1)-dimensional transformation matrices are called, depending on their application, affine transformation matrices, projective transformation matrices, or, more generally, non-linear transformation matrices. With respect to an n-dimensional matrix, an (n+1)-dimensional matrix can be described as an augmented matrix.

In the physical sciences, an active transformation is one which actually changes the physical position of a system and makes sense even in the absence of a coordinate system, whereas a passive transformation is a change in the coordinate description of the physical system (a change of basis). The distinction between active and passive transformations is important: by "transformation", mathematicians usually mean an active transformation, while physicists may mean either.

Put differently, a passive transformation refers to the description of the same object as viewed from two different coordinate frames.

Finding the matrix of a transformation

If one has a linear transformation T(x) in functional form, it is easy to determine the transformation matrix A by applying T to each vector of the standard basis and inserting the results into the columns of a matrix. In other words,

\mathbf{A} = \begin{bmatrix} T( \vec e_1 ) & T( \vec e_2 ) & \cdots & T( \vec e_n ) \end{bmatrix}

For example, the function T(x) = 5x is a linear transformation. Applying the above process (taking n = 2 here) reveals that

T( \vec{x} ) = 5 \vec{x} = 5 \mathbf{I} \vec{x} = \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix} \vec{x}
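This procedure can be sketched numerically; the following NumPy snippet (the helper name T is illustrative) assembles the matrix column by column from the images of the standard basis vectors:

```python
import numpy as np

def T(v):
    # The example map T(x) = 5x; any linear map of R^n works here.
    return 5 * v

n = 2
basis = np.eye(n)  # columns are the standard basis vectors e_1, ..., e_n
A = np.column_stack([T(basis[:, j]) for j in range(n)])
# A is 5 times the identity, matching the worked example above.
```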

Note that the matrix representation of vectors and operators depends on the chosen basis; a different basis yields a similar matrix (similar in the technical sense of matrix similarity). Nevertheless, the method for finding the components remains the same.

To elaborate, a vector \vec v can be represented in terms of basis vectors E = [\vec e_1 \; \vec e_2 \; \ldots \; \vec e_n] with coordinates [v]_E = [v_1 \; v_2 \; \ldots \; v_n]^T:

\vec v = v_1 \vec e_1 + v_2 \vec e_2 + \ldots + v_n \vec e_n = \sum v_i \vec e_i = E [v]_E

Now, express the result of the transformation matrix A upon \vec v, in the given basis:

A(\vec v) = A\left(\sum_i v_i \vec e_i\right) = \sum_i v_i A(\vec e_i) = \begin{bmatrix} A(\vec e_1) & A(\vec e_2) & \cdots & A(\vec e_n) \end{bmatrix} [v]_E
= \begin{bmatrix} \vec e_1 & \vec e_2 & \cdots & \vec e_n \end{bmatrix}
\begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}

The elements a_{i,j} of matrix A are determined for a given basis E by applying A to every basis vector \vec e_j = [0 \; 0 \; \ldots \; (v_j = 1) \; \ldots \; 0]^T and observing the response vector A \vec e_j = a_{1,j} \vec e_1 + a_{2,j} \vec e_2 + \ldots + a_{n,j} \vec e_n = \sum a_{i,j} \vec e_i. This equation defines the desired elements a_{i,j} of the j-th column of the matrix A.[2]

Eigenbasis and diagonal matrix

For some operators there is a special basis, an eigenbasis, in which the components form a diagonal matrix, so that the cost of multiplication reduces to n scalar products. Being diagonal means that all coefficients a_{i,j} except a_{i,i} are zero, leaving only one term in the sum \sum a_{i,j} \vec e_i above. The surviving diagonal elements, a_{i,i}, are known as eigenvalues and are designated \lambda_i in the defining equation, which reduces to A \vec e_i = \lambda_i \vec e_i. The resulting equation is known as the eigenvalue equation.[3] The eigenvectors and eigenvalues are derived from it via the characteristic polynomial.

When the matrix is diagonalizable, one can translate coordinates to and from the eigenbasis by a change of basis.
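As an illustrative NumPy sketch (the matrix A here is an arbitrary example), numpy.linalg.eig returns the eigenvalues together with a matrix P whose columns are eigenvectors, so that A = P D P⁻¹ with D diagonal:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigvals)           # A acts as this diagonal matrix in the eigenbasis

# Change of basis back from the eigenbasis recovers A.
reconstructed = P @ D @ np.linalg.inv(P)
```

For this symmetric example the eigenvalues are 1 and 3.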

Examples in 2D computer graphics

Most common geometric transformations that keep the origin fixed are linear, including rotation, scaling, shearing, reflection, and orthogonal projection; if an affine transformation is not a pure translation it keeps some point fixed, and that point can be chosen as origin to make the transformation linear. In two dimensions, linear transformations can be represented using a 2×2 transformation matrix.

Rotation

For rotation by an angle θ clockwise about the origin the functional form is x' = x \cos \theta + y \sin \theta and y' =  -x \sin \theta + y \cos \theta. Written in matrix form, this becomes:[4]


\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos \theta &  \sin\theta \\ -\sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}

Similarly, for a rotation counter-clockwise about the origin, the functional form is x' = x \cos \theta - y \sin \theta and y' = x \sin \theta + y \cos \theta and the matrix form is:


\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos \theta &  - \sin\theta \\ \sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}
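The counter-clockwise case can be sketched in NumPy (the helper name rotation_ccw is illustrative):

```python
import numpy as np

def rotation_ccw(theta):
    # Counter-clockwise rotation about the origin by angle theta.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# Rotating (1, 0) by 90 degrees counter-clockwise gives (0, 1).
v = rotation_ccw(np.pi / 2) @ np.array([1.0, 0.0])
```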

These formulae assume that the x axis points right and the y axis points up. In formats such as SVG, where the y axis points down, the two matrices above exchange roles.

Shearing

For shear mapping (visually similar to slanting), there are two possibilities.

A shear parallel to the x axis has x' = x + ky and y' = y. Written in matrix form, this becomes:


\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}

A shear parallel to the y axis has x' = x and y' = y + kx, which has matrix form:


\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}
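A minimal NumPy sketch of the x-axis shear (the helper name shear_x is illustrative):

```python
import numpy as np

def shear_x(k):
    # Shear parallel to the x axis: x' = x + k*y, y' = y.
    return np.array([[1.0, k],
                     [0.0, 1.0]])

# With k = 2, the point (1, 1) is carried to (3, 1); the y coordinate is unchanged.
v = shear_x(2.0) @ np.array([1.0, 1.0])
```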

Reflection

To reflect a vector about a line that goes through the origin, let \scriptstyle \vec{l} = (l_x, l_y) be a vector in the direction of the line. Then use the transformation matrix:

\mathbf{A} = \frac{1}{\lVert\vec{l}\rVert^2} \begin{bmatrix} l_x^2 - l_y^2 & 2 l_x l_y \\ 2 l_x l_y & l_y^2 - l_x^2 \end{bmatrix}
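This matrix can be sketched directly in NumPy (the helper name reflection is illustrative); reflecting twice returns every vector to itself, so the matrix squares to the identity:

```python
import numpy as np

def reflection(l):
    # Reflection about the line through the origin in direction l = (l_x, l_y).
    lx, ly = l
    return np.array([[lx**2 - ly**2, 2 * lx * ly],
                     [2 * lx * ly, ly**2 - lx**2]]) / (lx**2 + ly**2)

# Reflecting (1, 0) about the line y = x gives (0, 1).
v = reflection((1.0, 1.0)) @ np.array([1.0, 0.0])
```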

Orthogonal projection

To project a vector orthogonally onto a line that goes through the origin, let \scriptstyle \vec{u} \,=\, (u_x, u_y) be a vector in the direction of the line. Then use the transformation matrix:


\mathbf{A} = \frac{1}{\lVert\vec{u}\rVert^2} \begin{bmatrix} u_x^2 & u_x u_y \\ u_x u_y & u_y^2 \end{bmatrix}

As with reflections, the orthogonal projection onto a line that does not pass through the origin is an affine, not linear, transformation.
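Since the matrix above equals \vec u \vec u^T / \lVert \vec u \rVert^2, it can be built with an outer product; a short NumPy sketch (the helper name projection is illustrative):

```python
import numpy as np

def projection(u):
    # Orthogonal projection onto the line through the origin in direction u.
    u = np.asarray(u, dtype=float)
    return np.outer(u, u) / (u @ u)

P = projection((1.0, 2.0))
# P is idempotent: projecting twice is the same as projecting once.
```

Idempotence (P @ P equal to P) is the defining property of a projection, and vectors already on the line are left fixed.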

Parallel projections are also linear transformations and can be represented simply by a matrix. However, perspective projections are not, and to represent these with a matrix, homogeneous coordinates must be used.

Examples in 3D computer graphics

Rotation

The matrix to rotate an angle θ about the axis defined by unit vector (l,m,n) is[5]

\begin{bmatrix}
ll(1-\cos \theta)+\cos\theta & ml(1-\cos\theta)-n\sin\theta & nl(1-\cos\theta)+m\sin\theta\\
lm(1-\cos\theta)+n\sin\theta & mm(1-\cos\theta)+\cos\theta & nm(1-\cos\theta)-l\sin\theta \\
ln(1-\cos\theta)-m\sin\theta & mn(1-\cos\theta)+l\sin\theta & nn(1-\cos\theta)+\cos\theta
\end{bmatrix}.
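This axis-angle matrix (Rodrigues' rotation formula) can be transcribed term by term; a NumPy sketch (the helper name rotation_axis_angle is illustrative):

```python
import numpy as np

def rotation_axis_angle(axis, theta):
    # Rotation by angle theta about the axis (l, m, n), normalized to unit length.
    l, m, n = axis / np.linalg.norm(axis)
    c, s = np.cos(theta), np.sin(theta)
    C = 1.0 - c
    return np.array([
        [l*l*C + c,   m*l*C - n*s, n*l*C + m*s],
        [l*m*C + n*s, m*m*C + c,   n*m*C - l*s],
        [l*n*C - m*s, m*n*C + l*s, n*n*C + c],
    ])

# Rotation about the z axis reduces to the familiar 2D counter-clockwise rotation.
Rz = rotation_axis_angle(np.array([0.0, 0.0, 1.0]), np.pi / 2)
```

A rotation matrix is orthogonal, so Rz times its transpose gives the identity.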

Reflection

To reflect a point through a plane ax + by + cz = 0 (which goes through the origin), one can use \mathbf{A} = \mathbf{I} - 2\mathbf{N}\mathbf{N}^T, where \mathbf{I} is the 3×3 identity matrix and \mathbf{N} is the unit normal vector of the plane. If (a, b, c) is itself a unit vector, the transformation matrix can be written out as:

\mathbf{A} = \begin{bmatrix} 1 - 2 a^2  & - 2 a b & - 2 a c \\ - 2 a b  & 1 - 2 b^2 & - 2 b c  \\ - 2 a c & - 2 b c & 1 - 2c^2 \end{bmatrix}

Note that these are particular cases of a Householder reflection in two and three dimensions. A reflection about a line or plane that does not go through the origin is not a linear transformation; it is an affine transformation.
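The Householder form \mathbf{I} - 2\mathbf{N}\mathbf{N}^T translates directly into NumPy (the helper name reflect_plane is illustrative):

```python
import numpy as np

def reflect_plane(normal):
    # Reflection through the plane through the origin with the given normal.
    N = np.asarray(normal, dtype=float)
    N = N / np.linalg.norm(N)          # normalize, so N is a unit vector
    return np.eye(3) - 2.0 * np.outer(N, N)

# Reflecting through the z = 0 plane flips the z coordinate.
A = reflect_plane([0.0, 0.0, 1.0])
```

As with the 2D case, applying the reflection twice gives the identity.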

Composing and inverting transformations

One of the main motivations for using matrices to represent linear transformations is that transformations can then be easily composed (combined) and inverted.

Composition is accomplished by matrix multiplication. If A and B are the matrices of two linear transformations, then the effect of applying first A and then B to a vector x is given by:

\mathbf{B}(\mathbf{A} \vec{x} ) = (\mathbf{BA}) \vec{x}

(This is called the associative property.) In other words, the matrix of the combined transformation A followed by B is simply the product of the individual matrices. Note that the multiplication is done in the opposite order from the English sentence: the matrix of "A followed by B" is BA, not AB.

A consequence of the ability to compose transformations by multiplying their matrices is that transformations can also be inverted by simply inverting their matrices. So, A−1 represents the transformation that "undoes" A.
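Both points can be checked numerically; in this NumPy sketch (the matrices are arbitrary examples), a scaling followed by a rotation is composed by multiplying in reverse order, and the inverse matrix recovers the original vector:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # scaling by (2, 3)
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # 90-degree counter-clockwise rotation
x = np.array([1.0, 1.0])

combined = B @ A                      # "A followed by B" is the product BA
y = combined @ x
x_back = np.linalg.inv(combined) @ y  # inverting the matrix undoes the transformation
```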

Other kinds of transformations

Affine transformations

[Figure: Effect of applying various 2D affine transformation matrices on a unit square; the reflection matrices are special cases of the scaling matrix.]
[Figure: Affine transformations on the 2D plane performed as linear transformations in three dimensions; translation corresponds to a shear along the z axis, and rotation is performed around the z axis.]

To represent affine transformations with matrices, we can use homogeneous coordinates. This means representing a 2-vector (x, y) as a 3-vector (x, y, 1), and similarly for higher dimensions. Using this system, translation can be expressed with matrix multiplication. The functional form x' = x + t_x; y' = y + t_y becomes:


\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.
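In NumPy, the homogeneous translation matrix looks like this (the helper name translation is illustrative):

```python
import numpy as np

def translation(tx, ty):
    # Translation by (tx, ty) in homogeneous coordinates.
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

p = np.array([2.0, 3.0, 1.0])         # the point (2, 3) written as (x, y, 1)
p_moved = translation(5.0, -1.0) @ p  # the point (7, 2) as (7, 2, 1)
```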

All ordinary linear transformations are included in the set of affine transformations, so any linear transformation can also be represented by a general (augmented) transformation matrix. The latter is obtained by expanding the corresponding linear transformation matrix by one row and one column, filling the extra space with zeros except for the lower-right corner, which must be set to 1. For example, the clockwise rotation matrix from above becomes:

\begin{bmatrix} \cos \theta &  \sin \theta & 0 \\ -\sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{bmatrix}

Using transformation matrices containing homogeneous coordinates, translations can be seamlessly intermixed with all other types of transformations. The reason is that the real plane is mapped to the w = 1 plane in real projective space, and so translation in real Euclidean space can be represented as a shear in real projective space. Although a translation is a non-linear transformation in a 2-D or 3-D Euclidean space described by Cartesian coordinates, it becomes, in a 3-D or 4-D projective space described by homogeneous coordinates, a simple linear transformation (a shear).

More affine transformations can be obtained by composition of two or more affine transformations. For example, given a translation T' with vector (t'_x, t'_y), a rotation R by an angle θ counter-clockwise, a scaling S with factors (s_x, s_y) and a translation T of vector (t_x, t_y), the result M of T'RST is:[6]


\begin{bmatrix}
s_x \cos \theta & - s_y \sin \theta & t_x s_x \cos \theta - t_y s_y \sin \theta + t'_x      \\
s_x  \sin \theta & s_y \cos \theta & t_x s_x \sin \theta + t_y s_y \cos \theta + t'_y \\ 
0      & 0 & 1
\end{bmatrix}
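The closed form can be verified against an explicit product of the four factor matrices; a NumPy sketch (the parameter values are arbitrary):

```python
import numpy as np

def trans(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def scale(sx, sy):
    return np.diag([sx, sy, 1.0])

theta, sx, sy, tx, ty, tpx, tpy = 0.3, 2.0, 3.0, 1.0, -2.0, 4.0, 5.0
M = trans(tpx, tpy) @ rot(theta) @ scale(sx, sy) @ trans(tx, ty)  # M = T'RST

# The closed-form matrix from the text, filled in with the same parameters.
c, s = np.cos(theta), np.sin(theta)
M_closed = np.array([
    [sx * c, -sy * s, tx * sx * c - ty * sy * s + tpx],
    [sx * s,  sy * c, tx * sx * s + ty * sy * c + tpy],
    [0, 0, 1],
])
```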


When using affine transformations, the homogeneous component of a coordinate vector (normally called w) will never be altered. One can therefore safely assume that it is always 1 and ignore it. However, this is not true when using perspective projections.

Perspective projection

Another type of transformation, of importance in 3D computer graphics, is the perspective projection. Whereas parallel projections are used to project points onto the image plane along parallel lines, the perspective projection projects points onto the image plane along lines that emanate from a single point, called the center of projection. This means that an object has a smaller projection when it is far away from the center of projection and a larger projection when it is closer.

The simplest perspective projection uses the origin as the center of projection, and z = 1 as the image plane. The functional form of this transformation is then x' = x / z; y' = y / z. We can express this in homogeneous coordinates as:


\begin{bmatrix} x_c \\ y_c \\ z_c \\ w_c \end{bmatrix} = 
 \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}

After carrying out the matrix multiplication, the homogeneous component wc will, in general, not be equal to 1. Therefore, to map back into the real plane we must perform the homogeneous divide or perspective divide by dividing each component by wc:


\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \frac{1}{w_c} \begin{bmatrix} x_c \\ y_c \\ z_c \\ w_c \end{bmatrix}

More complicated perspective projections can be composed by combining this one with rotations, scales, translations, and shears to move the image plane and center of projection wherever they are desired.
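The projection and the subsequent perspective divide can be sketched together in NumPy (the point chosen is arbitrary):

```python
import numpy as np

# Perspective projection with the center at the origin and image plane z = 1.
P = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 1, 0]], dtype=float)

p = np.array([4.0, 2.0, 2.0, 1.0])  # the point (4, 2, 2) in homogeneous form
clip = P @ p                        # homogeneous result; here w_c = z = 2
projected = clip / clip[3]          # perspective divide maps it to (2, 1, 1, 1)
```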

References

  1. Gentle, James E. (2007). "Matrix Transformations and Factorizations". Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer. ISBN 9780387708737.
  2. Nearing, James (2010). "Chapter 7.3 Examples of Operators" (PDF). Mathematical Tools for Physics. ISBN 048648212X. Retrieved January 1, 2012.
  3. Nearing, James (2010). "Chapter 7.9: Eigenvalues and Eigenvectors" (PDF). Mathematical Tools for Physics. ISBN 048648212X. Retrieved January 1, 2012.
  4. MIT OpenCourseWare (2009). 16.07 Dynamics, Fall 2009, Lecture 3 notes (PDF). http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-07-dynamics-fall-2009/lecture-notes/MIT16_07F09_Lec03.pdf
  5. Szymanski, John E. (1989). Basic Mathematics for Electronic Engineers:Models and Applications. Taylor & Francis. p. 154. ISBN 0278000681.
  6. 2D transformation matrices baking.

This article is issued from Wikipedia (version of Monday, January 18, 2016). The text is available under the Creative Commons Attribution/Share-Alike license, but additional terms may apply for the media files.