Ricci calculus

In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields.[1][2][3] It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), developed by Gregorio Ricci-Curbastro in 1887–96 and subsequently popularized in a paper[4] written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its application to general relativity and differential geometry in the early twentieth century.[5]

A component of a tensor is a real number which is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows the most efficient expressions of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays.

A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the order of the tensor.

For compactness and convenience, the notational convention implies certain things, notably that of summation over indices repeated within a term and of universal quantification over free indices (those not so summed). Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.
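
As an illustration (not from the original text), the following Python/NumPy sketch shows how a single indexed expression with an implied summation unpacks into a set of simultaneous ordinary equations; the matrix M and vector x are hypothetical components chosen only for the demonstration.

    import numpy as np

    # Implied summation: y^a = M^a{}_b x^b sums over the repeated index b,
    # while the free index a ranges over every dimension.
    M = np.arange(9.0).reshape(3, 3)   # hypothetical components M^a{}_b
    x = np.array([1.0, 2.0, 3.0])      # hypothetical components x^b

    y = np.einsum('ab,b->a', M, x)

    # The same result written out as the three simultaneous equations the
    # notation abbreviates, one per value of the free index a.
    y_explicit = np.array([sum(M[a, b] * x[b] for b in range(3)) for a in range(3)])
    assert np.allclose(y, y_explicit)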

Notation for indices

Basis-related distinctions

Space–time split

Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows:[6]

  • Lowercase Latin letters a, b, c, ... indicate restriction to 3-dimensional Euclidean space and take the values 1, 2, 3 for the spatial components; the time-like element, indicated by 0, is shown separately.
  • Lowercase Greek letters α, β, γ, ... are used for 4-dimensional spacetime and typically take the value 0 for the time component and 1, 2, 3 for the spatial components.

Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.

Coordinate and index notation

The author(s) will usually make it clear whether a subscript is intended as an index or as a label.

For example, in 3-dimensional Euclidean space using Cartesian coordinates, the coordinate vector A = (A_1, A_2, A_3) = (A_x, A_y, A_z) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression A_i, i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are not variable indices but rather "names" for the components. In the context of spacetime, the index value 0 conventionally corresponds to the label t.

Reference to coordinate systems

Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′):

X_{\hat{\phi}}\,,\quad Y_{\bar{\lambda}}\,,\quad Z_{\tilde{\eta}}\,,\quad T_{\mu'}\,\cdots

to denote a possibly different basis (and hence coordinate system) for that index. An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in:

v^{\mu'} = v^{\nu} L_{\nu}{}^{\mu'}\,.

This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor.

Raised and lowered indices

Covariant tensor components

A lower index (subscript) indicates covariance of the components with respect to that index: A_{\alpha\beta\gamma\cdots}

Contravariant tensor components

An upper index (superscript) indicates contravariance of the components with respect to that index: A^{\alpha\beta\gamma\cdots}

Mixed-variance tensor components

A tensor may have both upper and lower indices: A_{\alpha}{}^{\beta}{}_{\gamma}{}^{\delta\cdots}

Summation

Two indices (one raised and one lowered) with the same symbol within a term are summed over:

A_{\alpha} B^{\alpha} \equiv \sum_{\alpha} A_{\alpha} B^{\alpha} \quad \text{or} \quad A^{\alpha} B_{\alpha} \equiv \sum_{\alpha} A^{\alpha} B_{\alpha}\,.

The operation implied by such a summation is called tensor contraction:

A_{\alpha} B^{\beta} \rightarrow A_{\alpha} B^{\alpha} \equiv \sum_{\alpha} A_{\alpha} B^{\alpha}\,.

More than one pair of indices may be summed within a term, but each summed index may occur only twice in that term, for example:

A_{\alpha}{}^{\gamma} B^{\alpha} C_{\gamma}{}^{\beta} \equiv \sum_{\alpha} \sum_{\gamma} A_{\alpha}{}^{\gamma} B^{\alpha} C_{\gamma}{}^{\beta}\,.

An expression in which an index appears more than twice in a single term, such as

A_{\alpha\gamma}{}^{\gamma} B^{\alpha} C_{\gamma}{}^{\beta}\,,

is not considered well-formed; that is, it is meaningless.
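
As a numerical sketch of the double contraction above (with hypothetical random components in 4 dimensions), np.einsum sums each index that appears exactly twice and leaves the free index β untouched:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))   # A_alpha^gamma
    B = rng.normal(size=4)        # B^alpha
    C = rng.normal(size=(4, 4))   # C_gamma^beta

    # alpha and gamma each occur twice and are summed; beta remains free.
    D = np.einsum('ag,a,gb->b', A, B, C)

    D_explicit = np.array([sum(A[a, g] * B[a] * C[g, b]
                               for a in range(4) for g in range(4))
                           for b in range(4)])
    assert np.allclose(D, D_explicit)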

Multi-index notation

If a tensor has a list of indices all raised or lowered, one shorthand is to use a capital letter for the list:[7]

A_{i_1 \cdots i_n} B^{i_1 \cdots i_n j_1 \cdots j_m} C_{j_1 \cdots j_m} \equiv A_I B^{IJ} C_J

where I = i_1 i_2 \cdots i_n and J = j_1 j_2 \cdots j_m.

Sequential summation

Two vertical bars | | around a set of indices (with a contraction):[8]

A_{|\alpha\beta\gamma|\cdots} B^{\alpha\beta\gamma\cdots} = \sum_{\alpha} \sum_{\beta} \sum_{\gamma} A_{\alpha\beta\gamma\cdots} B^{\alpha\beta\gamma\cdots}

denotes a summation restricted to index values in strictly increasing order:

\alpha < \beta < \gamma\,.

Only one group of the repeated set of indices has the vertical bars around it (the other contracted indices do not). More than one group can be summed in this way:

A_{|\alpha\beta\gamma|}{}^{|\delta\epsilon\cdots\lambda|} B^{\alpha\beta\gamma}{}_{\delta\epsilon\cdots\lambda|\mu\nu\cdots\zeta|} C^{\mu\nu\cdots\zeta} = \sum_{\alpha}\sum_{\beta}\sum_{\gamma}\;\sum_{\delta}\sum_{\epsilon}\cdots\sum_{\lambda}\;\sum_{\mu}\sum_{\nu}\cdots\sum_{\zeta} A_{\alpha\beta\gamma}{}^{\delta\epsilon\cdots\lambda} B^{\alpha\beta\gamma}{}_{\delta\epsilon\cdots\lambda\mu\nu\cdots\zeta} C^{\mu\nu\cdots\zeta}

where

\alpha < \beta < \gamma\,, \quad \delta < \epsilon < \cdots < \lambda\,, \quad \mu < \nu < \cdots < \zeta\,.

This is useful to prevent over-counting in some summations, when tensors are symmetric or antisymmetric.
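
The following sketch (an assumed example, not from the original text) illustrates the over-counting point: for antisymmetric A_{ab} and B^{ab}, the sequential sum over a < b counts each independent pair once, so it equals half of the unrestricted contraction.

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.normal(size=(4, 4))
    A = M - M.T                      # antisymmetric A_{ab}
    N = rng.normal(size=(4, 4))
    B = N - N.T                      # antisymmetric B^{ab}

    full = np.einsum('ab,ab->', A, B)                     # unrestricted sum over a, b
    sequential = sum(A[a, b] * B[a, b]
                     for a, b in itertools.combinations(range(4), 2))  # a < b only
    assert np.isclose(full, 2.0 * sequential)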

Alternatively, using the capital letter convention for multi-indices, an underarrow is placed underneath the block of indices:[9]

A_{\underset{\rightharpoondown}{P}}{}^{\underset{\rightharpoondown}{Q}} B^{P}{}_{Q\,\underset{\rightharpoondown}{R}} C^{R} = \sum_{\underset{\rightharpoondown}{P}} \sum_{\underset{\rightharpoondown}{Q}} \sum_{\underset{\rightharpoondown}{R}} A_{P}{}^{Q} B^{P}{}_{QR} C^{R}

where

\underset{\rightharpoondown}{P} = |\alpha\beta\gamma|\,, \quad \underset{\rightharpoondown}{Q} = |\delta\epsilon\cdots\lambda|\,, \quad \underset{\rightharpoondown}{R} = |\mu\nu\cdots\zeta|

Raising and lowering indices

By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa:

B^{\gamma}{}_{\beta\cdots} = g^{\gamma\alpha} A_{\alpha\beta\cdots} \quad \text{and} \quad A_{\alpha\beta\cdots} = g_{\alpha\gamma} B^{\gamma}{}_{\beta\cdots}

The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.
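
A minimal numerical sketch of this operation, assuming a Minkowski metric diag(−1, 1, 1, 1) and hypothetical components A_{αβ}: raising an index with the inverse metric and then lowering it again recovers the original components.

    import numpy as np

    g = np.diag([-1.0, 1.0, 1.0, 1.0])     # assumed metric g_{alpha beta}
    g_inv = np.linalg.inv(g)               # g^{gamma alpha}

    rng = np.random.default_rng(2)
    A_low = rng.normal(size=(4, 4))        # hypothetical A_{alpha beta}

    B_mixed = np.einsum('ga,ab->gb', g_inv, A_low)   # B^gamma{}_beta = g^{gamma alpha} A_{alpha beta}
    A_back  = np.einsum('ag,gb->ab', g, B_mixed)     # g_{alpha gamma} B^gamma{}_beta
    assert np.allclose(A_back, A_low)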

Correlations between index positions and invariance

This table summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a passive transformation between bases, with the components of each basis expressed in terms of the other. The barred indices refer to the final coordinate system after the transformation.[10]

The Kronecker delta is used; see also below.

Covector (covariant vector, dual vector, 1-form):
  Basis transformation: e^{\bar{\alpha}} = L^{\bar{\alpha}}{}_{\beta} e^{\beta}
  Component transformation: a_{\bar{\alpha}} = a_{\gamma} L^{\gamma}{}_{\bar{\alpha}}
  Invariance: a_{\bar{\alpha}} e^{\bar{\alpha}} = a_{\gamma} L^{\gamma}{}_{\bar{\alpha}} L^{\bar{\alpha}}{}_{\beta} e^{\beta} = a_{\gamma} \delta^{\gamma}{}_{\beta} e^{\beta} = a_{\beta} e^{\beta}

Vector (contravariant vector):
  Basis transformation: e_{\bar{\alpha}} = L^{\gamma}{}_{\bar{\alpha}} e_{\gamma}
  Component transformation: a^{\bar{\alpha}} = a^{\beta} L^{\bar{\alpha}}{}_{\beta}
  Invariance: a^{\bar{\alpha}} e_{\bar{\alpha}} = a^{\beta} L^{\bar{\alpha}}{}_{\beta} L^{\gamma}{}_{\bar{\alpha}} e_{\gamma} = a^{\beta} \delta^{\gamma}{}_{\beta} e_{\gamma} = a^{\gamma} e_{\gamma}

General outlines for index notation and operations

Tensors are equal if and only if every corresponding component is equal, e.g. tensor A equals tensor B if and only if

A^{\alpha}{}_{\beta\gamma} = B^{\alpha}{}_{\beta\gamma}

for all α, β and γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis).

Free and dummy indices

Indices not in contractions are called free indices.

Indices in contractions are termed dummy indices, or summation indices.

A tensor equation represents many ordinary (real-valued) equations

The components of tensors (like A^{\alpha}, B_{\beta}{}^{\gamma}, etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m, the equality represents m^n equations: each has a specific set of index values.

For instance, if

A^{\alpha} B_{\beta}{}^{\gamma} C_{\gamma\delta} + D^{\alpha}{}_{\beta} E_{\delta} = T^{\alpha}{}_{\beta\delta}

is in 4 dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (α, β, δ), there are 4^3 = 64 equations. Three of these are:

A^{0} B_{1}{}^{0} C_{00} + A^{0} B_{1}{}^{1} C_{10} + A^{0} B_{1}{}^{2} C_{20} + A^{0} B_{1}{}^{3} C_{30} + D^{0}{}_{1} E_{0} = T^{0}{}_{10}
A^{1} B_{0}{}^{0} C_{00} + A^{1} B_{0}{}^{1} C_{10} + A^{1} B_{0}{}^{2} C_{20} + A^{1} B_{0}{}^{3} C_{30} + D^{1}{}_{0} E_{0} = T^{1}{}_{00}
A^{1} B_{2}{}^{0} C_{02} + A^{1} B_{2}{}^{1} C_{12} + A^{1} B_{2}{}^{2} C_{22} + A^{1} B_{2}{}^{3} C_{32} + D^{1}{}_{2} E_{2} = T^{1}{}_{22}\,.

This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.
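
The same counting can be checked numerically; in this sketch (with hypothetical components) the single tensor equation is evaluated in one einsum call, and one of its 64 scalar equations is written out by hand for comparison.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=4)          # A^alpha
    B = rng.normal(size=(4, 4))     # B_beta^gamma
    C = rng.normal(size=(4, 4))     # C_{gamma delta}
    D = rng.normal(size=(4, 4))     # D^alpha_beta
    E = rng.normal(size=4)          # E_delta

    # T^alpha_{beta delta} = A^alpha B_beta^gamma C_{gamma delta} + D^alpha_beta E_delta
    T = np.einsum('a,bg,gd->abd', A, B, C) + np.einsum('ab,d->abd', D, E)

    # One of the 4**3 = 64 scalar equations, for (alpha, beta, delta) = (0, 1, 0):
    lhs = sum(A[0] * B[1, g] * C[g, 0] for g in range(4)) + D[0, 1] * E[0]
    assert np.isclose(lhs, T[0, 1, 0])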

Indices are replaceable labels

Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is:

A^{\alpha} B_{\beta}{}^{\gamma} C_{\gamma\delta} + D^{\alpha}{}_{\beta} E_{\delta} \rightarrow A^{\lambda} B_{\beta}{}^{\mu} C_{\mu\delta} + D^{\lambda}{}_{\beta} E_{\delta}

while an example of an erroneous change is:

A^{\alpha} B_{\beta}{}^{\gamma} C_{\gamma\delta} + D^{\alpha}{}_{\beta} E_{\delta} \nrightarrow A^{\lambda} B_{\beta}{}^{\gamma} C_{\mu\delta} + D^{\alpha}{}_{\beta} E_{\delta}\,.

In the first replacement, λ replaced α and μ replaced γ everywhere, so the expression still has the same meaning. In the second, λ did not fully replace α, and μ did not fully replace γ (incidentally, the contraction on the γ index became a tensor product), which is entirely inconsistent for reasons shown next.

Indices are the same in every term

The same indices on each side of a tensor equation always appear in the same (upper or lower) position throughout every term, except for indices repeated in a term (which implies a summation over that index), for example:

A^{\alpha} B_{\beta}{}^{\gamma} C_{\gamma\delta} + D^{\alpha}{}_{\beta} E_{\delta} = T^{\alpha}{}_{\beta\delta}

while an example of an erroneous expression is:

A^{\alpha} B_{\beta}{}^{\gamma} C_{\gamma\delta} + D_{\alpha\beta}{}^{\gamma} E^{\delta}\,.

In other words, non-repeated indices must be of the same type in every term of the equation. In the valid identity above, α, β, δ line up throughout and γ occurs twice in one term due to a contraction (correctly once as an upper index and once as a lower index), so it is a valid expression. In the invalid expression, while β lines up, α and δ do not, and γ appears twice in one term (a contraction) and only once in another term, which is inconsistent.

Brackets and punctuation used once where implied

When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply.

If the brackets enclose covariant indices, the rule applies only to the covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed between the brackets.

Similarly, if brackets enclose contravariant indices, the rule applies only to the enclosed contravariant indices, not to covariant indices placed between them.

Symmetric and antisymmetric parts

Symmetric part of tensor

Parentheses ( ) around multiple indices denote the symmetrized part of the tensor. When symmetrizing p indices, with σ ranging over the permutations of the numbers 1 to p, one sums over the permutations of those indices \alpha_{\sigma(i)} for i = 1, 2, 3, ..., p, and then divides by the number of permutations:

A_{(\alpha_1 \alpha_2 \cdots \alpha_p) \alpha_{p+1} \cdots \alpha_q} = \dfrac{1}{p!} \sum_{\sigma} A_{\alpha_{\sigma(1)} \cdots \alpha_{\sigma(p)} \alpha_{p+1} \cdots \alpha_q}\,.

For example, two symmetrizing indices mean there are two indices to permute and sum over:

A_{(\alpha\beta)\gamma\cdots} = \dfrac{1}{2!} \left( A_{\alpha\beta\gamma\cdots} + A_{\beta\alpha\gamma\cdots} \right)

while for three symmetrizing indices, there are three indices to sum over and permute:

A_{(\alpha\beta\gamma)\delta\cdots} = \dfrac{1}{3!} \left( A_{\alpha\beta\gamma\delta\cdots} + A_{\gamma\alpha\beta\delta\cdots} + A_{\beta\gamma\alpha\delta\cdots} + A_{\alpha\gamma\beta\delta\cdots} + A_{\gamma\beta\alpha\delta\cdots} + A_{\beta\alpha\gamma\delta\cdots} \right)

The symmetrization is distributive over addition:

A_{(\alpha} \left( B_{\beta)\gamma\cdots} + C_{\beta)\gamma\cdots} \right) = A_{(\alpha} B_{\beta)\gamma\cdots} + A_{(\alpha} C_{\beta)\gamma\cdots}

Indices are not part of the symmetrization when they are:

  • not on the same level, for example:
A_{(\alpha} B^{\beta}{}_{\gamma)} = \dfrac{1}{2!} \left( A_{\alpha} B^{\beta}{}_{\gamma} + A_{\gamma} B^{\beta}{}_{\alpha} \right)
  • within the parentheses and between vertical bars (i.e. |⋯|), modifying the previous example:
A_{(\alpha} B_{|\beta|\gamma)} = \dfrac{1}{2!} \left( A_{\alpha} B_{\beta\gamma} + A_{\gamma} B_{\beta\alpha} \right)

Here the α and γ indices are symmetrized, β is not.
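
A sketch of the basic symmetrization rule, applied to the first two indices of a hypothetical third-order array: sum over the permutations of the chosen axes and divide by the number of permutations (2! here).

    import itertools
    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal(size=(4, 4, 4))   # hypothetical A_{alpha beta gamma}

    def symmetrize(T, axes):
        """Average T over all permutations of the given axes."""
        perms = list(itertools.permutations(axes))
        out = np.zeros_like(T)
        for p in perms:
            order = list(range(T.ndim))
            for old, new in zip(axes, p):
                order[old] = new
            out += np.transpose(T, order)
        return out / len(perms)

    A_sym = symmetrize(A, (0, 1))    # A_{(alpha beta) gamma}
    assert np.allclose(A_sym, 0.5 * (A + A.transpose(1, 0, 2)))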

Antisymmetric or alternating part of tensor

Square brackets [ ] around multiple indices denote the antisymmetrized part of the tensor. For p antisymmetrizing indices, the sum over the permutations of those indices \alpha_{\sigma(i)}, multiplied by the signature of the permutation \operatorname{sgn}(\sigma), is taken, then divided by the number of permutations:

\begin{aligned}
A_{[\alpha_1 \cdots \alpha_p] \alpha_{p+1} \cdots \alpha_q} &= \dfrac{1}{p!} \sum_{\sigma} \operatorname{sgn}(\sigma)\, A_{\alpha_{\sigma(1)} \cdots \alpha_{\sigma(p)} \alpha_{p+1} \cdots \alpha_q} \\
&= \dfrac{1}{(n-p)!} \varepsilon_{\alpha_1 \dots \alpha_p\, \beta_1 \dots \beta_{n-p}} \dfrac{1}{p!} \varepsilon^{\gamma_1 \dots \gamma_p\, \beta_1 \dots \beta_{n-p}} A_{\gamma_1 \dots \gamma_p \alpha_{p+1} \cdots \alpha_q}
\end{aligned}

where n is the dimensionality of the underlying vector space and \varepsilon_{\alpha_1 \dots \alpha_n} is the Levi-Civita symbol.

For example, two antisymmetrizing indices imply:

A_{[\alpha\beta]\gamma\cdots} = \dfrac{1}{2!} \left( A_{\alpha\beta\gamma\cdots} - A_{\beta\alpha\gamma\cdots} \right)

while three antisymmetrizing indices imply:

A_{[\alpha\beta\gamma]\delta\cdots} = \dfrac{1}{3!} \left( A_{\alpha\beta\gamma\delta\cdots} + A_{\gamma\alpha\beta\delta\cdots} + A_{\beta\gamma\alpha\delta\cdots} - A_{\alpha\gamma\beta\delta\cdots} - A_{\gamma\beta\alpha\delta\cdots} - A_{\beta\alpha\gamma\delta\cdots} \right)

As a more specific example, if F represents the electromagnetic tensor, then the equation

0 = F_{[\alpha\beta,\gamma]} = \dfrac{1}{3!} \left( F_{\alpha\beta,\gamma} + F_{\gamma\alpha,\beta} + F_{\beta\gamma,\alpha} - F_{\beta\alpha,\gamma} - F_{\alpha\gamma,\beta} - F_{\gamma\beta,\alpha} \right)

represents Gauss's law for magnetism and Faraday's law of induction.

As before, the antisymmetrization is distributive over addition:

A_{[\alpha} \left( B_{\beta]\gamma\cdots} + C_{\beta]\gamma\cdots} \right) = A_{[\alpha} B_{\beta]\gamma\cdots} + A_{[\alpha} C_{\beta]\gamma\cdots}

As with symmetrization, indices are not antisymmetrized when they are:

  • not on the same level, for example:
A_{[\alpha} B^{\beta}{}_{\gamma]} = \dfrac{1}{2!} \left( A_{\alpha} B^{\beta}{}_{\gamma} - A_{\gamma} B^{\beta}{}_{\alpha} \right)
  • within the square brackets and between vertical bars (i.e. |⋯|), modifying the previous example:
A_{[\alpha} B_{|\beta|\gamma]} = \dfrac{1}{2!} \left( A_{\alpha} B_{\beta\gamma} - A_{\gamma} B_{\beta\alpha} \right)

Here the α and γ indices are antisymmetrized, β is not.
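
A corresponding sketch for antisymmetrization over two indices of a hypothetical third-order array; swapping the antisymmetrized indices flips the sign of the result.

    import numpy as np

    rng = np.random.default_rng(5)
    A = rng.normal(size=(4, 4, 4))                 # hypothetical A_{alpha beta gamma}

    A_antisym = 0.5 * (A - A.transpose(1, 0, 2))   # A_{[alpha beta] gamma}

    # Antisymmetry in the first two indices: exchanging them changes the sign.
    assert np.allclose(A_antisym, -A_antisym.transpose(1, 0, 2))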

Symmetry and antisymmetry sum

Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices:

A_{\alpha\beta\gamma\cdots} = A_{(\alpha\beta)\gamma\cdots} + A_{[\alpha\beta]\gamma\cdots}

as can be seen by adding the above expressions for A_{(\alpha\beta)\gamma\cdots} and A_{[\alpha\beta]\gamma\cdots}. This decomposition does not hold, in general, for more than two indices.
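
A quick numerical check of this decomposition on two indices, again with hypothetical components:

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.normal(size=(4, 4, 4))                 # hypothetical A_{alpha beta gamma}

    A_sym     = 0.5 * (A + A.transpose(1, 0, 2))   # A_{(alpha beta) gamma}
    A_antisym = 0.5 * (A - A.transpose(1, 0, 2))   # A_{[alpha beta] gamma}
    assert np.allclose(A, A_sym + A_antisym)       # the two parts add back up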

Differentiation

For compactness, derivatives may be indicated by adding indices after a comma or semicolon.[11][12]

Partial derivative

To indicate partial differentiation of a tensor field with respect to a coordinate variable x^{\gamma }, a comma is placed before an added lower index of the coordinate variable.

A_{\alpha\beta\cdots,\gamma} = \partial_{\gamma} A_{\alpha\beta\cdots} = \dfrac{\partial}{\partial x^{\gamma}} A_{\alpha\beta\cdots}

This may be repeated (without adding further commas):

A_{\alpha_1 \alpha_2 \cdots \alpha_p\,,\,\alpha_{p+1} \cdots \alpha_q} = \partial_{\alpha_q} \cdots \partial_{\alpha_{p+2}} \partial_{\alpha_{p+1}} A_{\alpha_1 \alpha_2 \cdots \alpha_p} = \dfrac{\partial^{q-p}}{\partial x^{\alpha_q} \cdots \partial x^{\alpha_{p+2}} \partial x^{\alpha_{p+1}}} A_{\alpha_1 \alpha_2 \cdots \alpha_p}\,.

These components do not transform covariantly. This derivative is characterized by the product rule and the derivatives of the coordinates

x^{\alpha}{}_{,\gamma} = \delta^{\alpha}{}_{\gamma}

where δ is the Kronecker delta.
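
As a rough numerical sketch of the comma notation (an assumed example), the partial derivatives of a scalar field f(x, y) = x²y can be approximated on a grid by finite differences and compared with the exact expressions:

    import numpy as np

    x = np.linspace(0.0, 1.0, 201)
    y = np.linspace(0.0, 1.0, 201)
    X, Y = np.meshgrid(x, y, indexing='ij')   # axis 0 <-> x, axis 1 <-> y
    f = X**2 * Y                              # hypothetical scalar field

    df_dx, df_dy = np.gradient(f, x, y)       # numerical f_{,1} and f_{,2}
    assert np.allclose(df_dx, 2 * X * Y, atol=1e-2)   # exact: 2xy
    assert np.allclose(df_dy, X**2, atol=1e-2)        # exact: x**2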

Covariant derivative

To indicate covariant differentiation of any tensor field, a semicolon ( ; ) is placed before an added lower (covariant) index. Less common alternatives to the semicolon include a forward slash ( / )[13] or in three-dimensional curved space just one vertical bar ( | ).[14]

For a contravariant vector: A^{\alpha}{}_{;\beta} = A^{\alpha}{}_{,\beta} + \Gamma^{\alpha}{}_{\gamma\beta} A^{\gamma}, where \Gamma^{\alpha}{}_{\beta\gamma} is a Christoffel symbol of the second kind.

For a covariant vector: A_{\alpha;\beta} = A_{\alpha,\beta} - \Gamma^{\gamma}{}_{\alpha\beta} A_{\gamma}\,.

For an arbitrary tensor:[15]

\begin{aligned}
T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s;\gamma} = T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s,\gamma} &+ \Gamma^{\alpha_1}{}_{\delta\gamma} T^{\delta\alpha_2 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s} + \cdots + \Gamma^{\alpha_r}{}_{\delta\gamma} T^{\alpha_1 \cdots \alpha_{r-1}\delta}{}_{\beta_1 \cdots \beta_s} \\
&- \Gamma^{\delta}{}_{\beta_1\gamma} T^{\alpha_1 \cdots \alpha_r}{}_{\delta\beta_2 \cdots \beta_s} - \cdots - \Gamma^{\delta}{}_{\beta_s\gamma} T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_{s-1}\delta}\,.
\end{aligned}

The components of this derivative of a tensor field transform covariantly, and hence form another tensor field. This derivative is characterized by the product rule and by the fact that the covariant derivative of the metric tensor g_{\mu\nu} is zero:

g_{\mu\nu;\gamma} = 0\,.

The covariant formulation of the directional derivative of any tensor field along a vector v^{\gamma} may be expressed as its contraction with the covariant derivative, e.g.:

v^{\gamma} A_{\alpha;\gamma}\,.

One alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol \nabla_{\beta}. For the case of a vector field A^{\alpha}:[16]

\nabla_{\beta} A^{\alpha} = \frac{\partial A^{\alpha}}{\partial x^{\beta}} + \Gamma^{\alpha}{}_{\gamma\beta} A^{\gamma}\,.
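
The defining property g_{\mu\nu;\gamma} = 0 can be verified symbolically in a simple assumed setting: plane polar coordinates (r, θ) with metric g = diag(1, r²) and its standard Christoffel symbols. This is only a sketch of the formula above, not a general implementation.

    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    x = [r, th]
    n = 2
    g = sp.Matrix([[1, 0], [0, r**2]])           # g_{mu nu} in polar coordinates

    # Standard Christoffel symbols of the second kind for this metric.
    Gamma = [[[sp.Integer(0) for _ in range(n)] for _ in range(n)] for _ in range(n)]
    Gamma[0][1][1] = -r                          # Gamma^r_{theta theta}
    Gamma[1][0][1] = Gamma[1][1][0] = 1 / r      # Gamma^theta_{r theta} = Gamma^theta_{theta r}

    # g_{mu nu ; gamma} = g_{mu nu , gamma} - Gamma^d_{mu gamma} g_{d nu} - Gamma^d_{nu gamma} g_{mu d}
    for mu in range(n):
        for nu in range(n):
            for gam in range(n):
                cov = (sp.diff(g[mu, nu], x[gam])
                       - sum(Gamma[d][mu][gam] * g[d, nu] for d in range(n))
                       - sum(Gamma[d][nu][gam] * g[mu, d] for d in range(n)))
                assert sp.simplify(cov) == 0
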
Lie derivative

The Lie derivative is another derivative that is covariant, but which should not be confused with the covariant derivative. It is defined even in the absence of a metric. The Lie derivative of a type (r,s) tensor field T along (the flow of) a contravariant vector field X^{\rho } may be expressed as[17]

\begin{aligned}
(\mathcal{L}_X T)^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s} = X^{\gamma} T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s,\gamma} &- X^{\alpha_1}{}_{,\gamma} T^{\gamma\alpha_2 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s} - \cdots - X^{\alpha_r}{}_{,\gamma} T^{\alpha_1 \cdots \alpha_{r-1}\gamma}{}_{\beta_1 \cdots \beta_s} \\
&+ X^{\gamma}{}_{,\beta_1} T^{\alpha_1 \cdots \alpha_r}{}_{\gamma\beta_2 \cdots \beta_s} + \cdots + X^{\gamma}{}_{,\beta_s} T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_{s-1}\gamma}\,.
\end{aligned}

This derivative is characterized by the product rule and by the fact that the Lie derivative of the given contravariant vector field X^{\rho} along itself is zero:

(\mathcal{L}_X X)^{\rho} = [X, X]^{\rho} = 0\,.

The Lie derivative of a type (r, s) relative tensor field \Lambda of weight w along (the flow of) a contravariant vector field X^{\rho} may be expressed as[18]

\begin{aligned}
(\mathcal{L}_X \Lambda)^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s} = X^{\gamma} \Lambda^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s,\gamma} &- X^{\alpha_1}{}_{,\gamma} \Lambda^{\gamma\alpha_2 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s} - \cdots - X^{\alpha_r}{}_{,\gamma} \Lambda^{\alpha_1 \cdots \alpha_{r-1}\gamma}{}_{\beta_1 \cdots \beta_s} \\
&+ X^{\gamma}{}_{,\beta_1} \Lambda^{\alpha_1 \cdots \alpha_r}{}_{\gamma\beta_2 \cdots \beta_s} + \cdots + X^{\gamma}{}_{,\beta_s} \Lambda^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_{s-1}\gamma} \\
&+ w\, X^{\gamma}{}_{,\gamma} \Lambda^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s}\,.
\end{aligned}
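
For the special case of a vector field Y, the Lie derivative along X reduces to the Lie bracket, (\mathcal{L}_X Y)^{\alpha} = X^{\gamma} Y^{\alpha}{}_{,\gamma} - X^{\alpha}{}_{,\gamma} Y^{\gamma} = [X, Y]^{\alpha}. The sketch below checks the antisymmetry of the bracket, and in particular \mathcal{L}_X X = 0, for two arbitrarily chosen (hypothetical) fields in 2-D Cartesian coordinates.

    import sympy as sp

    x, y = sp.symbols('x y')
    coords = [x, y]
    X = [x*y, sp.sin(x)]     # hypothetical components X^a
    Y = [y**2, x + y]        # hypothetical components Y^a

    def lie_bracket(X, Y, coords):
        """[X, Y]^a = X^g dY^a/dx^g - Y^g dX^a/dx^g."""
        return [sum(X[g] * sp.diff(Y[a], coords[g]) - Y[g] * sp.diff(X[a], coords[g])
                    for g in range(len(coords)))
                for a in range(len(coords))]

    LXY = lie_bracket(X, Y, coords)
    assert all(sp.simplify(a + b) == 0 for a, b in zip(LXY, lie_bracket(Y, X, coords)))
    assert all(sp.simplify(c) == 0 for c in lie_bracket(X, X, coords))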

Notable tensors

Kronecker delta

The Kronecker delta is like the identity matrix

\delta^{\alpha}{}_{\beta}\, A^{\beta} = A^{\alpha}
\delta^{\mu}{}_{\nu}\, B_{\mu} = B_{\nu}

when multiplied and contracted. The components \delta^{\alpha}{}_{\beta} are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant.[19] The dimensionality of spacetime is its trace:

\delta^{\rho}{}_{\rho} = \delta^{0}{}_{0} + \delta^{1}{}_{1} + \delta^{2}{}_{2} + \delta^{3}{}_{3} = 4

in four-dimensional spacetime.
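
A trivial numerical sketch of these properties in 4 dimensions:

    import numpy as np

    delta = np.eye(4)                                         # delta^alpha_beta
    A = np.array([1.0, 2.0, 3.0, 4.0])                        # hypothetical A^beta
    assert np.allclose(np.einsum('ab,b->a', delta, A), A)     # delta^a_b A^b = A^a
    assert np.einsum('aa->', delta) == 4.0                    # its trace is the dimensionality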

Metric tensor

The metric tensor gives the length of any space-like curve

\text{Length} = \int_{y_1}^{y_2} \sqrt{ g_{\alpha\beta}\, \frac{dx^{\alpha}}{dy} \frac{dx^{\beta}}{dy} }\; dy

where y is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve

\text{Duration} = \int_{t_1}^{t_2} \sqrt{ \frac{-1}{c^2}\, g_{\alpha\beta}\, \frac{dx^{\alpha}}{dt} \frac{dx^{\beta}}{dt} }\; dt

where t is any smooth strictly monotone parameterization of the trajectory. See also line element.
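
As a sketch of the length formula under simple assumptions (plane polar coordinates (r, θ) with metric g = diag(1, r²)), the arc length of a circle of radius R comes out as 2πR when the integral is evaluated numerically:

    import numpy as np

    R = 2.0
    y = np.linspace(0.0, 2.0 * np.pi, 1001)       # curve parameter
    r = np.full_like(y, R)                        # x^1(y) = R (constant radius)
    theta = y                                     # x^2(y) = y
    dx = np.stack([np.gradient(r, y), np.gradient(theta, y)])   # dx^a/dy

    integrand = np.empty_like(y)
    for i in range(len(y)):
        g = np.diag([1.0, r[i] ** 2])             # g_{alpha beta} at the point
        integrand[i] = np.sqrt(np.einsum('ab,a,b->', g, dx[:, i], dx[:, i]))

    length = np.trapz(integrand, y)
    assert np.isclose(length, 2.0 * np.pi * R, rtol=1e-3)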

The inverse matrix (also indicated with a g) of the metric tensor is another important tensor:

g^{\alpha\beta} g_{\beta\gamma} = \delta^{\alpha}{}_{\gamma}\,.

Riemann curvature tensor

If this tensor is defined as

R^{\rho}{}_{\sigma\mu\nu} = \Gamma^{\rho}{}_{\nu\sigma,\mu} - \Gamma^{\rho}{}_{\mu\sigma,\nu} + \Gamma^{\rho}{}_{\mu\lambda} \Gamma^{\lambda}{}_{\nu\sigma} - \Gamma^{\rho}{}_{\nu\lambda} \Gamma^{\lambda}{}_{\mu\sigma}\,,

then it is the commutator of the covariant derivative with itself:[20][21]

A_{\nu;\rho\sigma} - A_{\nu;\sigma\rho} = A_{\beta} R^{\beta}{}_{\nu\rho\sigma}\,,

since the connection \Gamma^{\alpha}{}_{\beta\mu} is torsionless, which means that the torsion tensor \Gamma^{\lambda}{}_{\mu\nu} - \Gamma^{\lambda}{}_{\nu\mu} vanishes.
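
The definition above can be exercised symbolically in an assumed example: the unit 2-sphere with coordinates (θ, φ) and metric g = diag(1, sin²θ). The sketch builds the Christoffel symbols from the metric, assembles R^ρ_{σμν} with the formula above, and checks the well-known component R^θ_{φθφ} = sin²θ.

    import sympy as sp

    th, ph = sp.symbols('theta phi', positive=True)
    x = [th, ph]
    n = 2
    g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # metric of the unit 2-sphere
    g_inv = g.inv()

    # Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
    Gamma = [[[sp.simplify(sp.Rational(1, 2) * sum(
        g_inv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
        for d in range(n))) for c in range(n)] for b in range(n)] for a in range(n)]

    # R^rho_{sigma mu nu} = Gamma^rho_{nu sigma, mu} - Gamma^rho_{mu sigma, nu}
    #                      + Gamma^rho_{mu lam} Gamma^lam_{nu sigma} - Gamma^rho_{nu lam} Gamma^lam_{mu sigma}
    def riemann(rho, sig, mu, nu):
        return sp.simplify(
            sp.diff(Gamma[rho][nu][sig], x[mu]) - sp.diff(Gamma[rho][mu][sig], x[nu])
            + sum(Gamma[rho][mu][lam] * Gamma[lam][nu][sig]
                  - Gamma[rho][nu][lam] * Gamma[lam][mu][sig] for lam in range(n)))

    assert sp.simplify(riemann(0, 1, 0, 1) - sp.sin(th)**2) == 0   # R^theta_{phi theta phi}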

Ricci identities

This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows

\begin{aligned}
T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s;\gamma\delta} - T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s;\delta\gamma} = &- R^{\alpha_1}{}_{\rho\gamma\delta} T^{\rho\alpha_2 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_s} - \cdots - R^{\alpha_r}{}_{\rho\gamma\delta} T^{\alpha_1 \cdots \alpha_{r-1}\rho}{}_{\beta_1 \cdots \beta_s} \\
&+ R^{\sigma}{}_{\beta_1\gamma\delta} T^{\alpha_1 \cdots \alpha_r}{}_{\sigma\beta_2 \cdots \beta_s} + \cdots + R^{\sigma}{}_{\beta_s\gamma\delta} T^{\alpha_1 \cdots \alpha_r}{}_{\beta_1 \cdots \beta_{s-1}\sigma}
\end{aligned}

which are often referred to as the Ricci identities.[22]

References

  1. Synge J.L., Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. pp. 6–108. 
  2. J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 85–86, §3.5. ISBN 0-7167-0344-0. 
  3. R. Penrose (2007). The Road to Reality. Vintage books. ISBN 0-679-77631-1. 
  4. Ricci, Gregorio; Levi-Civita, Tullio (March 1900), "Méthodes de calcul différentiel absolu et leurs applications", Mathematische Annalen (Springer) 54 (1–2): 125–201, doi:10.1007/BF01454201 
  5. Schouten, Jan A. (1924). R. Courant, ed. Der Ricci-Kalkül – Eine Einführung in die neueren Methoden und Probleme der mehrdimensionalen Differentialgeometrie (Ricci Calculus – An introduction to the latest methods and problems in multi-dimensional differential geometry). Grundlehren der mathematischen Wissenschaften (in German) 10. Berlin: Springer Verlag.
  6. C. Møller (1952), The Theory of Relativity, p. 234  is an example of a variation: 'Greek indices run from 1 to 3, Latin indices from 1 to 4'
  7. T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 67, ISBN 978-1107-602601 
  8. Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973, ISBN 0-7167-0344-0
  9. T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 67, ISBN 978-1107-602601 
  10. J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 61, 202–203, 232. ISBN 0-7167-0344-0. 
  11. G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. ISBN 978-0-521-57507-2. 
  12. Covariant derivative – Mathworld, Wolfram
  13. T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 298, ISBN 978-1107-602601 
  14. J.A. Wheeler, C. Misner, K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 510, §21.5. ISBN 0-7167-0344-0. 
  15. T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, p. 299, ISBN 978-1107-602601 
  16. D. McMahon (2006). Relativity. Demystified. McGraw Hill. p. 67. ISBN 0-07-145545-0. 
  17. Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds, p. 130 
  18. Lovelock, David; Hanno Rund (1989). Tensors, Differential Forms, and Variational Principles. p. 123. 
  19. Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds, p. 85 
  20. Synge J.L., Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. pp. 83, p. 107. 
  21. P. A. M. Dirac. General Theory of Relativity. pp. 20–21. 
  22. Lovelock, David; Hanno Rund (1989). Tensors, Differential Forms, and Variational Principles. p. 84. 

Books

  • Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds (First Dover 1980 ed.), The Macmillan Company, ISBN 0-486-64039-6 
  • Danielson, Donald A. (2003). Vectors and Tensors in Engineering and Physics (2/e ed.). Westview (Perseus). ISBN 978-0-8133-4080-7. 
  • Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Kluwer Academic Publishers (Springer). ISBN 1-4020-1015-X. 
  • Lovelock, David; Hanno Rund (1989) [1975]. Tensors, Differential Forms, and Variational Principles. Dover. ISBN 978-0-486-65840-7. 
  • C. Møller (1952), The Theory of Relativity (3rd ed.), Oxford University Press 
  • Synge J.L., Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. ISBN 978-0-486-63612-2. 
  • J.R. Tyldesley (1975), An introduction to Tensor Analysis: For Engineers and Applied Scientists, Longman, ISBN 0-582-44355-5 
  • D.C. Kay (1988), Tensor Calculus, Schaum’s Outlines, McGraw Hill (USA), ISBN 0-07-033484-6 
  • T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, ISBN 978-1107-602601 