Transformation matrix

In linear algebra, linear transformations can be represented by matrices. If $T$ is a linear transformation mapping $\mathbb{R}^n$ to $\mathbb{R}^m$ and $\vec{x}$ is a column vector with $n$ entries, then

T(\vec{x}) = \mathbf{A}\vec{x}

for some $m \times n$ matrix $\mathbf{A}$, called the transformation matrix of $T$. There is an alternative expression of transformation matrices, involving row vectors, that is preferred by some authors.

Uses

Matrices allow arbitrary linear transformations to be represented in a consistent format, suitable for computation.[1] This also allows transformations to be concatenated easily (by multiplying their matrices).

Linear transformations are not the only ones that can be represented by matrices. Some transformations that are non-linear on an $n$-dimensional Euclidean space $\mathbb{R}^n$ can be represented as linear transformations on the $(n+1)$-dimensional space $\mathbb{R}^{n+1}$. These include both affine transformations (such as translation) and projective transformations. For this reason, 4×4 transformation matrices are widely used in 3D computer graphics. These $(n+1)$-dimensional transformation matrices are called, depending on their application, affine transformation matrices, projective transformation matrices, or more generally non-linear transformation matrices. With respect to an $n$-dimensional matrix, an $(n+1)$-dimensional matrix can be described as an augmented matrix.

In the physical sciences, an active transformation is one that actually changes the physical position of a system and makes sense even in the absence of a coordinate system, whereas a passive transformation is a change in the coordinate description of the physical system (a change of basis). The distinction between active and passive transformations is important: mathematicians usually mean active transformations when they speak of transformations, while physicists could mean either.

Put differently, a passive transformation refers to observation of the same event from two different coordinate frames.

Finding the matrix of a transformation

If one has a linear transformation $T(\vec{x})$ in functional form, it is easy to determine the transformation matrix $\mathbf{A}$ by transforming each of the vectors of the standard basis by $T$, then inserting the results into the columns of a matrix. In other words,

\mathbf{A} = \begin{bmatrix} T(\vec{e}_1) & T(\vec{e}_2) & \cdots & T(\vec{e}_n) \end{bmatrix}

For example, the function $T(\vec{x}) = 5\vec{x}$ is a linear transformation. Applying the above process (suppose that $n = 2$ in this case) reveals that

T(\vec{x}) = 5\vec{x} = 5\mathbf{I}\vec{x} = \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix} \vec{x}
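
This construction translates directly into code. The following sketch (using NumPy; the helper name matrix_of is ours) builds the matrix of a linear map by applying it to each standard basis vector:

    import numpy as np

    def matrix_of(T, n):
        # Matrix of a linear map T: R^n -> R^m, built column by column
        # from the images of the standard basis vectors.
        return np.column_stack([T(e) for e in np.eye(n)])

    # The example T(x) = 5x with n = 2 recovers 5I:
    print(matrix_of(lambda x: 5 * x, 2))  # [[5. 0.], [0. 5.]]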

Note that the matrix representation of vectors and operators depends on the chosen basis; an alternate basis yields a similar matrix (in the sense of matrix similarity). Nevertheless, the method for finding the components remains the same.

To elaborate, a vector $\vec{v}$ can be represented in terms of basis vectors $E = [\vec{e}_1\ \vec{e}_2\ \ldots\ \vec{e}_n]$ with coordinates $[\vec{v}]_E = [v_1\ v_2\ \ldots\ v_n]^{\mathrm{T}}$:

\vec{v} = v_1 \vec{e}_1 + v_2 \vec{e}_2 + \ldots + v_n \vec{e}_n = \sum_i v_i \vec{e}_i = E\,[\vec{v}]_E

Now, express the result of applying the transformation matrix $\mathbf{A}$ to $\vec{v}$ in the given basis:

A(\vec{v}) = A\Bigl(\sum_i v_i \vec{e}_i\Bigr) = \sum_i v_i A(\vec{e}_i) = [A(\vec{e}_1)\ A(\vec{e}_2)\ \ldots\ A(\vec{e}_n)]\,[\vec{v}]_E = A \cdot [\vec{v}]_E = [\vec{e}_1\ \vec{e}_2\ \ldots\ \vec{e}_n] \begin{bmatrix} a_{1,1} & a_{1,2} & \ldots & a_{1,n} \\ a_{2,1} & a_{2,2} & \ldots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \ldots & a_{n,n} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}

The elements $a_{i,j}$ of matrix $\mathbf{A}$ are determined for a given basis $E$ by applying $\mathbf{A}$ to every $\vec{e}_j = [0\ 0\ \ldots\ (v_j = 1)\ \ldots\ 0]^{\mathrm{T}}$ and observing the response vector $A\vec{e}_j = a_{1,j}\vec{e}_1 + a_{2,j}\vec{e}_2 + \ldots + a_{n,j}\vec{e}_n = \sum_i a_{i,j}\vec{e}_i$. This equation defines the wanted elements, $a_{i,j}$, of the $j$-th column of the matrix $\mathbf{A}$.[2]

Eigenbasis and diagonal matrix

There is, however, a special basis for an operator in which the components form a diagonal matrix, so that multiplication complexity reduces to $n$. Being diagonal means that all coefficients $a_{i,j}$ except $a_{i,i}$ are zero, leaving only one term in the sum $\sum_i a_{i,j}\vec{e}_i$ above. The surviving diagonal elements, $a_{i,i}$, are known as eigenvalues and designated $\lambda_i$ in the defining equation, which reduces to $A\vec{e}_i = \lambda_i \vec{e}_i$. The resulting equation is known as the eigenvalue equation.[3] The eigenvectors and eigenvalues are derived from it via the characteristic polynomial.

With diagonalization, it is often possible to translate to and from an eigenbasis.
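
As a concrete illustration, here is a minimal NumPy sketch (the example matrix is ours) of the eigenvalue equation and of an operator becoming diagonal in its eigenbasis:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    lam, V = np.linalg.eig(A)  # eigenvalues and eigenvectors (as columns)

    # The eigenvalue equation A e_i = lambda_i e_i holds for each column of V:
    print(np.allclose(A @ V[:, 0], lam[0] * V[:, 0]))  # True

    # In the eigenbasis, the operator's matrix is diagonal:
    print(np.round(np.linalg.inv(V) @ A @ V, 10))  # diag(lam)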

Examples in 2D graphics

Most common geometric transformations that keep the origin fixed are linear, including rotation, scaling, shearing, reflection, and orthogonal projection; if an affine transformation is not a pure translation it keeps some point fixed, and that point can be chosen as origin to make the transformation linear. In two dimensions, linear transformations can be represented using a 2×2 transformation matrix.

Rotation

For rotation by an angle θ clockwise about the origin, the functional form is $x' = x\cos\theta + y\sin\theta$ and $y' = -x\sin\theta + y\cos\theta$. (Note that this definition of clockwise assumes that the x axis points right and the y axis points up; in SVG, for example, where the y axis points down, the two matrices below are swapped.) Written in matrix form, this becomes:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}

Similarly, for a rotation counterclockwise about the origin, the functional form is $x' = x\cos\theta - y\sin\theta$ and $y' = x\sin\theta + y\cos\theta$, and the matrix form is:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}
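
A minimal NumPy sketch of the counterclockwise case (the helper name rotation is ours):

    import numpy as np

    def rotation(theta):
        # Counterclockwise rotation by theta (radians) about the origin.
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s],
                         [s,  c]])

    # Rotating the unit x vector by 90 degrees sends it to the unit y vector:
    print(np.round(rotation(np.pi / 2) @ np.array([1.0, 0.0]), 10))  # [0. 1.]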

Scaling

For scaling (that is, enlarging or shrinking), we have $x' = s_x x$ and $y' = s_y y$. The matrix form is:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}

When $s_x s_y = 1$, the matrix is a squeeze mapping, which preserves areas in the plane.

If $s_x$ or $s_y$ is greater than 1 in absolute value, the transformation stretches figures in the corresponding direction; if less than 1, it shrinks them in that direction. Negative values of $s_x$ or $s_y$ also flip (mirror) the points in that direction.

Applying this sort of scaling $k$ times is equivalent to applying a single scaling with factors $s_x^k$ and $s_y^k$.
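
A quick NumPy check of that claim (the helper name scaling is ours):

    import numpy as np

    def scaling(sx, sy):
        return np.array([[sx, 0.0],
                         [0.0, sy]])

    # Applying the scaling k times equals a single scaling by sx**k, sy**k:
    S, k = scaling(2.0, 0.5), 3
    print(np.allclose(np.linalg.matrix_power(S, k),
                      scaling(2.0**k, 0.5**k)))  # True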

More generally, any symmetric 2×2 matrix defines a scaling along two perpendicular axes (the eigenvectors of the matrix) by equal or distinct factors (the eigenvalues corresponding to those eigenvectors).

Shearing

For shear mapping (visually similar to slanting), there are two possibilities.

A shear parallel to the x axis has $x' = x + ky$ and $y' = y$. Written in matrix form, this becomes:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}

A shear parallel to the y axis has $x' = x$ and $y' = y + kx$, which has matrix form:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}
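
A short NumPy sketch of the first case (the helper name shear_x is ours):

    import numpy as np

    def shear_x(k):
        # Shear parallel to the x axis: (x, y) -> (x + k*y, y).
        return np.array([[1.0, k],
                         [0.0, 1.0]])

    # Points on the x axis stay fixed; points above it slide horizontally:
    print(shear_x(2.0) @ np.array([0.0, 1.0]))  # [2. 1.]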

Reflection

To reflect a vector about a line that goes through the origin, let $\vec{l} = (l_x, l_y)$ be a vector in the direction of the line:

\mathbf{A} = \frac{1}{\lVert \vec{l} \rVert^2} \begin{bmatrix} l_x^2 - l_y^2 & 2 l_x l_y \\ 2 l_x l_y & l_y^2 - l_x^2 \end{bmatrix}
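
A minimal NumPy sketch of this matrix (the helper name reflection is ours):

    import numpy as np

    def reflection(lx, ly):
        # Reflection about the line through the origin with direction (lx, ly).
        n2 = lx * lx + ly * ly
        return np.array([[lx * lx - ly * ly, 2 * lx * ly],
                         [2 * lx * ly, ly * ly - lx * lx]]) / n2

    # Reflecting about the line y = x swaps the coordinates:
    print(reflection(1.0, 1.0) @ np.array([3.0, 0.0]))  # [0. 3.]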

Orthogonal projection

To project a vector orthogonally onto a line that goes through the origin, let $\vec{u} = (u_x, u_y)$ be a vector in the direction of the line. Then use the transformation matrix:

\mathbf{A} = \frac{1}{\lVert \vec{u} \rVert^2} \begin{bmatrix} u_x^2 & u_x u_y \\ u_x u_y & u_y^2 \end{bmatrix}
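
A NumPy sketch (the helper name projection is ours), which also checks the defining property that projecting twice changes nothing:

    import numpy as np

    def projection(ux, uy):
        # Orthogonal projection onto the line through the origin along (ux, uy).
        u = np.array([ux, uy])
        return np.outer(u, u) / (u @ u)

    P = projection(1.0, 1.0)
    print(P @ np.array([2.0, 0.0]))  # [1. 1.], the foot of the perpendicular
    print(np.allclose(P @ P, P))     # True: projections are idempotent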

As with reflections, the orthogonal projection onto a line that does not pass through the origin is an affine, not linear, transformation.

Parallel projections are also linear transformations and can be represented simply by a matrix. However, perspective projections are not, and to represent these with a matrix, homogeneous coordinates must be used.

Examples in 3D graphics

Rotation

The matrix to rotate by an angle θ about the axis defined by the unit vector $(l, m, n)$ is[4]

\begin{bmatrix} l^2(1-\cos\theta)+\cos\theta & ml(1-\cos\theta)-n\sin\theta & nl(1-\cos\theta)+m\sin\theta \\ lm(1-\cos\theta)+n\sin\theta & m^2(1-\cos\theta)+\cos\theta & nm(1-\cos\theta)-l\sin\theta \\ ln(1-\cos\theta)-m\sin\theta & mn(1-\cos\theta)+l\sin\theta & n^2(1-\cos\theta)+\cos\theta \end{bmatrix}.
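
A NumPy sketch of this axis-angle matrix (the helper name axis_angle is ours):

    import numpy as np

    def axis_angle(l, m, n, theta):
        # Rotation by theta about the unit axis (l, m, n), per the matrix above.
        c, s, C = np.cos(theta), np.sin(theta), 1.0 - np.cos(theta)
        return np.array([
            [l*l*C + c,   m*l*C - n*s, n*l*C + m*s],
            [l*m*C + n*s, m*m*C + c,   n*m*C - l*s],
            [l*n*C - m*s, m*n*C + l*s, n*n*C + c],
        ])

    # Rotating about the z axis reduces to the familiar 2D rotation:
    print(np.round(axis_angle(0, 0, 1, np.pi / 2), 10))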

Reflection

To reflect a point through a plane $ax + by + cz = 0$ (which goes through the origin), one can use $\mathbf{A} = \mathbf{I} - 2\mathbf{N}\mathbf{N}^{\mathrm{T}}$, where $\mathbf{I}$ is the 3×3 identity matrix and $\mathbf{N}$ is the three-dimensional unit vector normal to the plane. If the vector $(a, b, c)$ has unit length, the transformation matrix can be expressed as:

\mathbf{A} = \begin{bmatrix} 1-2a^2 & -2ab & -2ac \\ -2ab & 1-2b^2 & -2bc \\ -2ac & -2bc & 1-2c^2 \end{bmatrix}
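
A NumPy sketch of this construction (the helper name reflect_through_plane is ours; it normalizes the normal vector first):

    import numpy as np

    def reflect_through_plane(normal):
        # Householder reflection I - 2 N N^T through the plane with normal N.
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)
        return np.eye(3) - 2.0 * np.outer(n, n)

    # Reflecting through the plane z = 0 flips the z coordinate:
    print(reflect_through_plane([0, 0, 1]) @ np.array([1.0, 2.0, 3.0]))
    # [ 1.  2. -3.]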

Note that these are particular cases of a Householder reflection in two and three dimensions. A reflection about a line or plane that does not go through the origin is not a linear transformation; it is an affine transformation.

Composing and inverting transformations

One of the main motivations for using matrices to represent linear transformations is that transformations can then be easily composed (combined) and inverted.

Composition is accomplished by matrix multiplication. If $\mathbf{A}$ and $\mathbf{B}$ are the matrices of two linear transformations, then the effect of applying first $\mathbf{A}$ and then $\mathbf{B}$ to a vector $\vec{x}$ is given by:

\mathbf{B}(\mathbf{A}\vec{x}) = (\mathbf{B}\mathbf{A})\vec{x}

(This uses the associativity of matrix multiplication.) In other words, the matrix of the combined transformation A followed by B is simply the product of the individual matrices. Note that the multiplication is done in the opposite order from the English sentence: the matrix of "A followed by B" is $\mathbf{B}\mathbf{A}$, not $\mathbf{A}\mathbf{B}$.

A consequence of the ability to compose transformations by multiplying their matrices is that transformations can also be inverted by simply inverting their matrices. So $\mathbf{A}^{-1}$ represents the transformation that "undoes" $\mathbf{A}$.
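
A NumPy sketch of both facts, composing a rotation with a uniform scaling (the example matrices are ours):

    import numpy as np

    theta = np.pi / 4
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # rotate first...
    B = 2.0 * np.eye(2)                               # ...then scale by 2

    x = np.array([1.0, 0.0])
    print(np.allclose(B @ (A @ x), (B @ A) @ x))       # composition: True
    print(np.allclose(np.linalg.inv(A) @ (A @ x), x))  # inversion undoes A: True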

Other kinds of transformations

Affine transformations

To represent affine transformations with matrices, we can use homogeneous coordinates. This means representing a 2-vector $(x, y)$ as a 3-vector $(x, y, 1)$, and similarly for higher dimensions. Using this system, translation can be expressed with matrix multiplication. The functional form $x' = x + t_x$; $y' = y + t_y$ becomes:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}.

All ordinary linear transformations are included in the set of affine transformations and can be described as a simplified form of affine transformation. Therefore, any linear transformation can also be represented by a general transformation matrix. The latter is obtained by expanding the corresponding linear transformation matrix by one row and column, filling the extra space with zeros except for the lower-right corner, which must be set to 1. For example, the clockwise rotation matrix from above becomes:

\begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
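
A NumPy sketch of mixing translation with this rotation via homogeneous coordinates (the helper names translate and rotate_cw are ours):

    import numpy as np

    def translate(tx, ty):
        return np.array([[1.0, 0.0, tx],
                         [0.0, 1.0, ty],
                         [0.0, 0.0, 1.0]])

    def rotate_cw(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c,   s, 0.0],
                         [-s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

    # Rotate a point 90 degrees clockwise, then translate by (3, 1),
    # all in one homogeneous matrix:
    M = translate(3.0, 1.0) @ rotate_cw(np.pi / 2)
    print(np.round(M @ np.array([0.0, 1.0, 1.0]), 10))  # [4. 1. 1.] -> (4, 1)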

Using transformation matrices containing homogeneous coordinates, translations can be seamlessly intermixed with all other types of transformations. The reason is that the real plane is mapped to the w = 1 plane in real projective space, and so translation in real Euclidean space can be represented as a shear in real projective space. Although a translation is a non-linear transformation in a 2-D or 3-D Euclidean space described by Cartesian coordinates, it becomes, in a 3-D or 4-D projective space described by homogeneous coordinates, a simple linear transformation (a shear).

When using affine transformations, the homogeneous component of a coordinate vector (normally called w) will never be altered. One can therefore safely assume that it is always 1 and ignore it. However, this is not true when using perspective projections.

Perspective projection

Another type of transformation, of importance in 3D computer graphics, is the perspective projection. Whereas parallel projections are used to project points onto the image plane along parallel lines, the perspective projection projects points onto the image plane along lines that emanate from a single point, called the center of projection. This means that an object has a smaller projection when it is far away from the center of projection and a larger projection when it is closer.

The simplest perspective projection uses the origin as the center of projection and $z = 1$ as the image plane. The functional form of this transformation is then $x' = x/z$; $y' = y/z$. We can express this in homogeneous coordinates as:

\begin{bmatrix} x_c \\ y_c \\ z_c \\ w_c \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix}

After carrying out the matrix multiplication, the homogeneous component $w_c$ will, in general, not be equal to 1. Therefore, to map back into the real plane we must perform the homogeneous divide (or perspective divide) by dividing each component by $w_c$:

\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \frac{1}{w_c} \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix}
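
A NumPy sketch of this projection and the perspective divide (the sample point is ours):

    import numpy as np

    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])  # last row sets w_c = z

    point = np.array([4.0, 2.0, 2.0, 1.0])  # (x, y, z) = (4, 2, 2), w = 1
    clip = P @ point
    print(clip[:3] / clip[3])  # perspective divide -> [2. 1. 1.]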

More complicated perspective projections can be composed by combining this one with rotations, scales, translations, and shears to move the image plane and center of projection wherever they are desired.

References

  1. Gentle, James E. (2007). "Matrix Transformations and Factorizations". Matrix Algebra: Theory, Computations, and Applications in Statistics. Springer. ISBN 9780387708737. 
  2. Nearing, James (2010). "Chapter 7.3: Examples of Operators". Mathematical Tools for Physics. ISBN 048648212X. Retrieved January 1, 2012.
  3. Nearing, James (2010). "Chapter 7.9: Eigenvalues and Eigenvectors". Mathematical Tools for Physics. ISBN 048648212X. Retrieved January 1, 2012. 
  4. Szymanski, John E. (1989). Basic Mathematics for Electronic Engineers: Models and Applications. Taylor & Francis. p. 154. ISBN 0278000681.
