Levi-Civita symbol


The Levi-Civita symbol, also called the permutation symbol or antisymmetric symbol, is a mathematical symbol used in particular in tensor calculus. It is named after the Italian mathematician and physicist Tullio Levi-Civita.

Definition

Visualization of a Levi-Civita symbol (figure caption).

In three dimensions, the Levi-Civita symbol is defined as follows:

\varepsilon_{ijk} =  \begin{cases} +1 & \mbox{if } (i,j,k) \mbox{ is } (1,2,3), (2,3,1) \mbox{ or } (3,1,2), \\ -1 & \mbox{if } (i,j,k) \mbox{ is } (3,2,1), (1,3,2) \mbox{ or } (2,1,3), \\ 0 & \mbox{otherwise: }i=j \mbox{ or } j=k \mbox{ or } k=i, \end{cases}

i.e. it is 1 if (i, j, k) is an even permutation of (1,2,3), −1 if it is an odd permutation, and 0 if any index is repeated.
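
A minimal Python sketch (the helper name levi_civita_3d is an illustrative choice, not a standard routine) tabulates the symbol directly from this definition:

    # Sketch: the three-dimensional Levi-Civita symbol straight from the definition.
    def levi_civita_3d(i, j, k):
        """Return epsilon_{ijk} for indices i, j, k taken from {1, 2, 3}."""
        if (i, j, k) in ((1, 2, 3), (2, 3, 1), (3, 1, 2)):   # even permutations of (1, 2, 3)
            return 1
        if (i, j, k) in ((3, 2, 1), (1, 3, 2), (2, 1, 3)):   # odd permutations of (1, 2, 3)
            return -1
        return 0                                             # some index is repeated

    assert levi_civita_3d(1, 2, 3) == 1    # even permutation
    assert levi_civita_3d(2, 1, 3) == -1   # odd permutation (one transposition)
    assert levi_civita_3d(1, 1, 2) == 0    # repeated index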

For example, in linear algebra, the determinant of a 3×3 matrix A can be written

\det A = \sum_{i,j,k=1}^3 \varepsilon_{ijk} a_{1i} a_{2j} a_{3k}

(and similarly for a square matrix of any size; see below)

and the cross product of two vectors can be written as a determinant:

\mathbf{a \times b} =  \begin{vmatrix}   \mathbf{e_1} & \mathbf{e_2} & \mathbf{e_3} \\  a_1 & a_2 & a_3 \\  b_1 & b_2 & b_3 \\  \end{vmatrix} = \sum_{i,j,k=1}^3 \varepsilon_{ijk} \mathbf{e_i} a_j b_k

or more simply:

\mathbf{a \times b} = \mathbf{c},\ c_i = \sum_{j,k=1}^3 \varepsilon_{ijk} a_j b_k

In Einstein notation, the summation symbols may be omitted, since summation over repeated indices is implied.
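
The two formulas above can be spot-checked numerically; the following sketch assumes NumPy and uses an arbitrary test matrix and test vectors:

    # Sketch: check det A = sum_{ijk} eps_ijk a_1i a_2j a_3k and c_i = sum_{jk} eps_ijk a_j b_k.
    import numpy as np

    # Levi-Civita symbol as a 3x3x3 array (0-based indices).
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])   # arbitrary test matrix
    a = np.array([1.0, 2.0, 3.0])     # arbitrary test vectors
    b = np.array([4.0, -1.0, 0.5])

    det_eps = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])   # sum eps_ijk a_1i a_2j a_3k
    cross_eps = np.einsum('ijk,j,k->i', eps, a, b)              # c_i = sum eps_ijk a_j b_k

    assert np.isclose(det_eps, np.linalg.det(A))
    assert np.allclose(cross_eps, np.cross(a, b))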

The tensor whose components are given by the Levi-Civita symbol (a tensor of covariant rank n) is sometimes called the permutation tensor. It is in fact a pseudotensor, because under an orthogonal transformation of Jacobian determinant −1 (i.e., a rotation composed with a reflection) its components acquire a factor of −1. Because the Levi-Civita symbol is a pseudotensor, the result of taking a cross product is a pseudovector, not a vector.
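
This sign change can be seen in a short numerical sketch (assuming NumPy; the reflection R is one illustrative choice): transforming the components with an orthogonal matrix of determinant −1 yields −ε rather than ε.

    # Sketch: sum_{abc} R_ia R_jb R_kc eps_abc = det(R) * eps_ijk for orthogonal R.
    import numpy as np

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

    R = np.diag([1.0, 1.0, -1.0])     # reflection in the xy-plane, det(R) = -1
    eps_transformed = np.einsum('ia,jb,kc,abc->ijk', R, R, R, eps)

    assert np.isclose(np.linalg.det(R), -1.0)
    assert np.allclose(eps_transformed, -eps)   # the components pick up the factor det(R) = -1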

Relation to Kronecker delta

The Levi-Civita symbol is related to the Kronecker delta. In three dimensions, the relationship is given by the following equations:

\varepsilon_{ijk}\varepsilon_{lmn} = \begin{vmatrix} \delta_{il} & \delta_{im} & \delta_{in} \\ \delta_{jl} & \delta_{jm} & \delta_{jn} \\ \delta_{kl} & \delta_{km} & \delta_{kn} \\ \end{vmatrix}
= \delta_{il}\left( \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}\right) - \delta_{im}\left( \delta_{jl}\delta_{kn} - \delta_{jn}\delta_{kl} \right) + \delta_{in} \left( \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl} \right)
\sum_{i=1}^3 \varepsilon_{ijk}\varepsilon_{imn} = \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km} ("contracted epsilon identity")
\sum_{i,j=1}^3 \varepsilon_{ijk}\varepsilon_{ijn} = 2\delta_{kn}
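
These identities can be verified numerically over all index values; the sketch below assumes NumPy:

    # Sketch: verify the epsilon-delta identities in three dimensions.
    import numpy as np

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    d = np.eye(3)   # Kronecker delta

    # full product eps_ijk eps_lmn = 3x3 determinant of deltas (checked entrywise)
    full = np.einsum('ijk,lmn->ijklmn', eps, eps)
    dets = (np.einsum('il,jm,kn->ijklmn', d, d, d) - np.einsum('il,jn,km->ijklmn', d, d, d)
            - np.einsum('im,jl,kn->ijklmn', d, d, d) + np.einsum('im,jn,kl->ijklmn', d, d, d)
            + np.einsum('in,jl,km->ijklmn', d, d, d) - np.einsum('in,jm,kl->ijklmn', d, d, d))
    assert np.allclose(full, dets)

    # contracted epsilon identity: sum_i eps_ijk eps_imn = delta_jm delta_kn - delta_jn delta_km
    assert np.allclose(np.einsum('ijk,imn->jkmn', eps, eps),
                       np.einsum('jm,kn->jkmn', d, d) - np.einsum('jn,km->jkmn', d, d))

    # double contraction: sum_ij eps_ijk eps_ijn = 2 delta_kn
    assert np.allclose(np.einsum('ijk,ijn->kn', eps, eps), 2 * d)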

Generalization to n dimensions

The Levi-Civita symbol can be generalized to higher dimensions:

\varepsilon_{ijk\ell\dots} = \left\{ \begin{matrix} +1 & \mbox{if }(i,j,k,\ell,\dots) \mbox{ is an even permutation of } (1,2,3,4,\dots) \\ -1 & \mbox{if }(i,j,k,\ell,\dots) \mbox{ is an odd permutation of } (1,2,3,4,\dots) \\ 0 & \mbox{if any two labels are the same} \end{matrix} \right.

Thus, it equals the sign of the permutation when (i,j,k,\ell,\dots) is a permutation of (1,2,3,4,\dots), and zero otherwise.

Furthermore, it can be shown that

\sum_{i,j,k,\dots=1}^n \varepsilon_{ijk\dots}\varepsilon_{ijk\dots} = n!

is always fulfilled in n dimensions. In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of the Hodge dual.
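
A short sketch of the generalized symbol, assuming NumPy and the Python standard library (the function name levi_civita_nd is a hypothetical helper), builds the n-index array from permutation parities and checks the n! identity above:

    # Sketch: the n-dimensional Levi-Civita symbol from permutation parities.
    import math
    import numpy as np
    from itertools import permutations

    def levi_civita_nd(n):
        """Return epsilon as an n-index array of shape (n, ..., n), with 0-based indices."""
        eps = np.zeros((n,) * n, dtype=int)
        for perm in permutations(range(n)):
            # parity = (-1)^(number of inversions)
            inversions = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
            eps[perm] = (-1) ** inversions
        return eps

    for n in range(2, 6):
        eps = levi_civita_nd(n)
        assert int(np.sum(eps * eps)) == math.factorial(n)   # sum of eps^2 over all indices is n!
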
In general, in n dimensions the product of two Levi-Civita symbols can be written as:

\varepsilon_{ijk\dots}\varepsilon_{mnl\dots} = \begin{vmatrix} \delta_{im} & \delta_{in} & \delta_{il} & \dots \\ \delta_{jm} & \delta_{jn} & \delta_{jl} & \dots \\ \delta_{km} & \delta_{kn} & \delta_{kl} & \dots \\ \vdots & \vdots & \vdots & \ddots \\ \end{vmatrix}.

Contracting m of the indices (summing them over 1,\dots,n) multiplies the result by a factor of m! and removes the corresponding rows and columns of Kronecker deltas from the determinant.
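
One concrete case of this contraction rule, assuming NumPy and reusing the levi_civita_nd helper sketched above: in n = 4 dimensions, contracting the first two indices leaves a 2×2 determinant of deltas multiplied by 2! = 2.

    # Sketch: in four dimensions, sum_ij eps_ijkl eps_ijmn = 2! (delta_km delta_ln - delta_kn delta_lm).
    import numpy as np

    eps4 = levi_civita_nd(4)   # helper from the previous sketch
    d = np.eye(4)

    lhs = np.einsum('ijkl,ijmn->klmn', eps4, eps4)
    rhs = 2 * (np.einsum('km,ln->klmn', d, d) - np.einsum('kn,lm->klmn', d, d))
    assert np.allclose(lhs, rhs)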

Properties

(superscripts should be considered equivalent to subscripts)

1. When n = 2, we have for all i,j,m,n in {1,2},

\varepsilon_{ij} \varepsilon^{mn} = \delta_i^m \delta_j^n - \delta_i^n \delta_j^m, (1)
\varepsilon_{ij} \varepsilon^{in} = \delta_j^n, (2)
\varepsilon_{ij} \varepsilon^{ij}=2. (3)

2. When n = 3, we have for all i,j,k,m,n in {1,2,3},

\varepsilon_{jmn} \varepsilon^{imn}=2\delta^i_j, (4)
\varepsilon_{ijk} \varepsilon^{ijk}=6. (5)
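
A minimal sketch, assuming NumPy, spot-checks identities (1)–(5) as array equalities (Cartesian indices, so upper and lower positions coincide):

    # Sketch: verify identities (1)-(5) numerically.
    import numpy as np

    eps2 = np.array([[0.0, 1.0],
                     [-1.0, 0.0]])   # eps_12 = +1, eps_21 = -1
    d2 = np.eye(2)

    # (1) eps_ij eps_mn = delta_im delta_jn - delta_in delta_jm
    assert np.allclose(np.einsum('ij,mn->ijmn', eps2, eps2),
                       np.einsum('im,jn->ijmn', d2, d2) - np.einsum('in,jm->ijmn', d2, d2))
    # (2) sum_i eps_ij eps_in = delta_jn
    assert np.allclose(np.einsum('ij,in->jn', eps2, eps2), d2)
    # (3) sum_ij eps_ij eps_ij = 2
    assert np.isclose(np.einsum('ij,ij->', eps2, eps2), 2.0)

    eps3 = np.zeros((3, 3, 3))
    eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
    eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0

    # (4) sum_mn eps_jmn eps_imn = 2 delta_ij     (5) sum_ijk eps_ijk eps_ijk = 6
    assert np.allclose(np.einsum('jmn,imn->ij', eps3, eps3), 2 * np.eye(3))
    assert np.isclose(np.einsum('ijk,ijk->', eps3, eps3), 6.0)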

Proofs

For equation 1, both sides are antisymmetric with respect to ij and mn. We therefore only need to consider the case i\neq j and m\neq n. By substitution, we see that the equation holds for \varepsilon_{12} \varepsilon^{12}, i.e., for i = m = 1 and j = n = 2 (both sides are then one). Since the equation is antisymmetric in ij and mn, any set of values for these can be reduced to the above case (which holds). The equation thus holds for all values of ij and mn. Using equation 1, we have for equation 2

\varepsilon_{ij}\varepsilon^{in} = \delta_i^i \delta_j^n - \delta^n_i \delta^i_j = 2 \delta_j^n - \delta^n_j = \delta_j^n.

Here we used the Einstein summation convention with i going from 1 to 2. Equation 3 follows similarly from equation 2. To establish equation 4, let us first observe that both sides vanish when i\neq j. Indeed, if i\neq j, then one cannot choose m and n such that both permutation symbols on the left are nonzero. Then, with i = j fixed, there are only two ways to choose m and n from the remaining two indices. For any such indices, we have \varepsilon_{jmn} \varepsilon^{imn} = (\varepsilon^{imn})^2 = 1 (no summation), and the result follows. The last property follows since 3! = 6 and for any distinct indices i,j,k in {1,2,3}, we have \varepsilon_{ijk} \varepsilon^{ijk}=1 (no summation). \Box

Examples

1. The determinant of an n\times n matrix A = (a_{ij}) can be written as

\det A = \varepsilon_{i_1\cdots i_n} a_{1i_1} \cdots a_{ni_n},

where each i_l should be summed over 1,\ldots, n.

Equivalently, it may be written as

\det A = \frac{1}{n!} \varepsilon_{i_1\cdots i_n} \varepsilon_{j_1\cdots j_n} a_{i_1 j_1} \cdots a_{i_n j_n},

where now each i_l and each j_l should be summed over 1,\ldots, n.
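
Both formulas can be implemented directly by summing over permutations, since only permutations of (1,\ldots,n) contribute nonzero terms; the sketch below assumes NumPy, and det_via_epsilon and det_via_double_epsilon are hypothetical helper names:

    # Sketch: det A via the single- and double-epsilon formulas, compared with NumPy.
    import math
    import numpy as np
    from itertools import permutations

    def parity(perm):
        """Sign (-1)^(number of inversions) of a permutation of 0..n-1."""
        n = len(perm)
        return (-1) ** sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])

    def det_via_epsilon(A):
        # det A = sum_sigma sign(sigma) * a_{1 sigma(1)} ... a_{n sigma(n)}
        n = A.shape[0]
        return sum(parity(p) * np.prod([A[r, p[r]] for r in range(n)])
                   for p in permutations(range(n)))

    def det_via_double_epsilon(A):
        # det A = (1/n!) sum_{sigma,tau} sign(sigma) sign(tau) * a_{sigma(1) tau(1)} ... a_{sigma(n) tau(n)}
        n = A.shape[0]
        total = sum(parity(p) * parity(q) * np.prod([A[p[r], q[r]] for r in range(n)])
                    for p in permutations(range(n)) for q in permutations(range(n)))
        return total / math.factorial(n)

    A = np.random.default_rng(0).normal(size=(4, 4))   # arbitrary 4x4 test matrix
    assert np.isclose(det_via_epsilon(A), np.linalg.det(A))
    assert np.isclose(det_via_double_epsilon(A), np.linalg.det(A))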

2. If A = (A^1,A^2,A^3) and B = (B^1,B^2,B^3) are vectors in \mathbb{R}^3 (represented in some right-handed orthonormal basis), then the ith component of their cross product equals

(A\times B)^i = \varepsilon^{ijk} A^j B^k.

For instance, the first component of A\times B is A^2B^3 - A^3B^2. From the above expression for the cross product, it is clear that A\times B = -B\times A. Further, if C = (C^1,C^2,C^3) is a vector like A and B, then the triple scalar product equals

A\cdot(B\times C) = \varepsilon^{ijk} A^i B^j C^k.

From this expression, it can be seen that the triple scalar product is antisymmetric when exchanging any adjacent arguments. For example, A\cdot(B\times C)= -B\cdot(A\times C).
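
A numerical sketch of the triple-product formula and its antisymmetry, assuming NumPy and using arbitrary test vectors:

    # Sketch: A . (B x C) = eps_ijk A_i B_j C_k, which changes sign when two arguments are swapped.
    import numpy as np

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

    A = np.array([1.0, 2.0, 3.0])
    B = np.array([0.5, -1.0, 2.0])
    C = np.array([-2.0, 0.0, 1.0])

    triple = np.einsum('ijk,i,j,k->', eps, A, B, C)
    assert np.isclose(triple, A @ np.cross(B, C))                       # matches A . (B x C)
    assert np.isclose(triple, np.linalg.det(np.vstack([A, B, C])))      # equals det with rows A, B, C
    assert np.isclose(np.einsum('ijk,i,j,k->', eps, B, A, C), -triple)  # B . (A x C) = -A . (B x C)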

3. Suppose F = (F^1,F^2,F^3) is a vector field defined on some open set of \mathbb{R}^3 with Cartesian coordinates x = (x^1,x^2,x^3). Then the ith component of the curl of F equals

(\nabla \times F)^i(x) = \varepsilon^{ijk}\frac{\partial}{\partial x^j} F^k(x).
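
A brief sketch of this formula, assuming NumPy; the test field F(x) = (−x^2, x^1, 0) and the finite-difference step h are illustrative choices:

    # Sketch: (curl F)_i = sum_jk eps_ijk dF_k/dx_j, with derivatives by central differences.
    import numpy as np

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

    def F(x):
        # illustrative test field F = (-x2, x1, 0), whose curl is (0, 0, 2) everywhere
        return np.array([-x[1], x[0], 0.0])

    def curl(F, x, h=1e-6):
        J = np.zeros((3, 3))   # J[j, k] approximates dF_k / dx_j
        for j in range(3):
            step = np.zeros(3)
            step[j] = h
            J[j] = (F(x + step) - F(x - step)) / (2.0 * h)
        return np.einsum('ijk,jk->i', eps, J)

    print(curl(F, np.array([0.3, -1.2, 2.0])))   # approximately [0. 0. 2.]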

Notation

Anti-symmetrization over a set of indices is denoted by a pair of square brackets. For example, for an n \times n matrix M,

M_{[ab]} = \frac{1}{2}\varepsilon_{ab} \varepsilon^{cd} M_{cd} = \frac{1}{2}(M_{ab} - M_{ba})

and for a rank 3 tensor T,

T_{[abc]} = \, \frac{1}{3!}(T_{abc}-T_{acb}+T_{bca}-T_{bac}+T_{cab}-T_{cba})
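
This bracket operation can be sketched for arbitrary rank, assuming NumPy (antisymmetrize is a hypothetical helper): sum the tensor over all permutations of its index positions, weighted by the permutation signs, and divide by the number of permutations.

    # Sketch: T_[ab...] as the signed average of T over all permutations of its indices.
    import math
    import numpy as np
    from itertools import permutations

    def antisymmetrize(T):
        r = T.ndim
        out = np.zeros_like(T, dtype=float)
        for perm in permutations(range(r)):
            sign = (-1) ** sum(1 for a in range(r) for b in range(a + 1, r) if perm[a] > perm[b])
            out += sign * np.transpose(T, perm)
        return out / math.factorial(r)

    rng = np.random.default_rng(1)
    M = rng.normal(size=(3, 3))
    assert np.allclose(antisymmetrize(M), 0.5 * (M - M.T))        # matches M_[ab]

    T = rng.normal(size=(3, 3, 3))
    T_anti = antisymmetrize(T)
    assert np.allclose(T_anti, -np.transpose(T_anti, (1, 0, 2)))  # antisymmetric in the first two indices
    assert np.allclose(T_anti, -np.transpose(T_anti, (0, 2, 1)))  # antisymmetric in the last two indices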


This article incorporates material from Levi-Civita permutation symbol on PlanetMath, which is licensed under the GFDL.