Symmetric function
In mathematics, a symmetric function of multiple variables is one that is invariant under permutation of its variables; that is, the value of the function does not depend on the order of the n-tuple of arguments.
In most contexts, the term refers to a polynomial with this property: a symmetric polynomial. The theory of symmetric polynomials is part of the theory of polynomial equations, and also a substantial chapter of combinatorics. Examples of symmetric polynomials are
- $X_1 + X_2 + \cdots + X_n$,
- $X_1^3 + X_2^3 + \cdots + X_n^3$,
and
- $X_1 X_2 \cdots X_n$.
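A quick numerical check of this invariance (the helper functions and the sample arguments below are illustrative only, not part of the classical material): each of the three example polynomials takes the same value on every ordering of its arguments.

```python
from itertools import permutations

def elem_sum(*xs):       # X1 + X2 + ... + Xn
    return sum(xs)

def cube_sum(*xs):       # X1^3 + X2^3 + ... + Xn^3
    return sum(x**3 for x in xs)

def product(*xs):        # X1 X2 ... Xn
    out = 1
    for x in xs:
        out *= x
    return out

sample = (2, 5, 7)       # arbitrary sample arguments
for f in (elem_sum, cube_sum, product):
    # a symmetric function takes a single value on all orderings of its arguments
    assert len({f(*p) for p in permutations(sample)}) == 1
```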
If P(x) is a polynomial with roots
- $\alpha_1, \alpha_2, \ldots, \alpha_n$,
a symmetric function of the roots of P means
- $S(\alpha_1, \alpha_2, \ldots, \alpha_n)$
where S is a symmetric function of n variables.
There is a direct relation between certain symmetric functions of the roots and the coefficients in the polynomial P. Assume for simplicity that P is monic, so that
- $P(x) = x^n + a_{n-1}x^{n-1} + \cdots + a_0 = (x - \alpha_1)(x - \alpha_2)\cdots(x - \alpha_n),$
where the $a_i$ are in a field K and the $\alpha_i$ are in the algebraic closure of K. The sum of all the $\alpha_i$ equals $-a_{n-1}$ and their product equals $(-1)^n a_0$. In fact, it is classical algebra (Viète's formulas) that the intermediate coefficients of P are plus or minus the sums of the products of the roots taken j at a time, for $1 < j < n$. (The sign alternates between − and +.) These formulae are the basis of the traditional theory of equations. In symbols, they say:
- $a_{n-j} = (-1)^{j} \sum \alpha_{k(1)} \cdots \alpha_{k(j)}$
with the summation taken over all index sequences
- $1 \le k(1) < \cdots < k(j) \le n$.
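A minimal numerical sketch of these formulae, with roots 1, 2, 3 chosen purely for illustration: expanding the monic polynomial and comparing its coefficients with the elementary symmetric functions of the roots confirms $a_{n-j} = (-1)^j \sum \alpha_{k(1)} \cdots \alpha_{k(j)}$.

```python
from itertools import combinations
from math import prod

roots = [1, 2, 3]                  # illustrative roots
n = len(roots)

# expand (x - 1)(x - 2)(x - 3); coefficients are stored lowest degree first
coeffs = [1]
for r in roots:
    shifted = [0] + coeffs                      # x * (current polynomial)
    scaled = [-r * c for c in coeffs] + [0]     # -r * (current polynomial)
    coeffs = [s + t for s, t in zip(shifted, scaled)]

print(coeffs)   # [-6, 11, -6, 1], i.e. x^3 - 6x^2 + 11x - 6

# check a_{n-j} = (-1)^j * (sum of products of the roots taken j at a time)
for j in range(1, n + 1):
    e_j = sum(prod(c) for c in combinations(roots, j))
    assert coeffs[n - j] == (-1) ** j * e_j
```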
These sums for j = 1, 2, ..., n are called the elementary symmetric functions of the roots, because the jth elementary symmetric polynomial, written $\sigma_j$, is given by the same formula, but in the indeterminates $X_i$. A basic theorem states that any symmetric polynomial function S of n variables can be expressed as a polynomial in the elementary symmetric functions. Consequently, in the solution of polynomial equations, any symmetric polynomial in the roots is a polynomial in the coefficients of P and therefore lies in K.
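A small worked instance of this theorem: the sum of squares, which is symmetric, can be written in terms of $\sigma_1$ and $\sigma_2$ alone,

$$X_1^2 + X_2^2 + \cdots + X_n^2 = \sigma_1^2 - 2\sigma_2,$$

since $\sigma_1^2 = \sum_i X_i^2 + 2\sum_{i<j} X_i X_j$ and $\sigma_2 = \sum_{i<j} X_i X_j$.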
The polynomial relations underlying that assertion are universal (independent of the choice of P); and, if we work with the symmetric polynomials obtained by symmetrizing a monomial, we can eliminate the dependence on K as well, to get formulae with integer coefficients. Putting this more algebraically, we can define a subring Symm(n) of $\mathbb{Z}[X_1, X_2, \ldots, X_n]$ consisting of the integral symmetric polynomials (those invariant under the action of the symmetric group on the indices); and then assert that the polynomials $\sigma_j$, for which we retain the notation, are ring generators of Symm(n). What is more, they are independent generators (no algebraic relations hold among them), so that Symm(n) is abstractly also a polynomial ring on n generators. Older algebra textbooks devoted a great deal of attention to algorithms for carrying out this rewriting explicitly: the statement above is an existence theorem, but it has genuine computational content.
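The sketch below shows such a rewriting done by machine. It assumes SymPy and its symmetrize helper (located in sympy.polys.polyfuncs in recent versions), neither of which is part of the classical material; the names s1, s2, s3 are the library's formal names for the elementary symmetric polynomials.

```python
# A minimal sketch, assuming a recent SymPy in which symmetrize lives in
# sympy.polys.polyfuncs; it rewrites a symmetric polynomial in terms of the
# elementary symmetric polynomials (returned formally as s1, s2, s3).
from sympy import symbols
from sympy.polys.polyfuncs import symmetrize

x, y, z = symbols('x y z')

sym_part, remainder, defs = symmetrize(x**3 + y**3 + z**3, x, y, z, formal=True)
print(sym_part)   # expected (up to term ordering): s1**3 - 3*s1*s2 + 3*s3
print(remainder)  # 0, since the input polynomial is already symmetric
print(defs)       # [(s1, x + y + z), (s2, x*y + x*z + y*z), (s3, x*y*z)]
```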
The most important single application is to the power sums $\alpha_1^k + \alpha_2^k + \cdots + \alpha_n^k$, expressed in terms of the $a_j$. The formulae for doing this are attributed to Isaac Newton (Newton's identities). They were later encountered in K-theory, where they underlie the Adams operations.
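A minimal sketch of those formulae (Newton's identities), computing the power sums directly from the coefficients $a_j$ with no root-finding; the function name and the sample polynomial are illustrative choices.

```python
def power_sums_from_coeffs(a, m):
    """Power sums p_1, ..., p_m of the roots of the monic polynomial
    x^n + a[n-1]*x^(n-1) + ... + a[0], via Newton's identities."""
    n = len(a)
    # elementary symmetric functions of the roots: e_j = (-1)^j * a_{n-j}, and e_j = 0 for j > n
    e = [0] * (m + 1)
    for j in range(1, m + 1):
        if j <= n:
            e[j] = (-1) ** j * a[n - j]
    p = [0] * (m + 1)
    for k in range(1, m + 1):
        # p_k = e_1 p_{k-1} - e_2 p_{k-2} + ... + (-1)^k e_{k-1} p_1 + (-1)^(k-1) k e_k
        s = (-1) ** (k - 1) * k * e[k]
        for i in range(1, k):
            s += (-1) ** (i - 1) * e[i] * p[k - i]
        p[k] = s
    return p[1:]

# example: P(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6, so a = [-6, 11, -6]
print(power_sums_from_coeffs([-6, 11, -6], 4))   # [6, 14, 36, 98]
```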
They also underlie the theory of the Newton polygon, which is part of the theory of ramification. In Newton's case the point was to work with the coefficients $a_j$ in a formal power series ring; there, passage to the algebraic closure is handled by the theory of Puiseux expansions in fractional powers, and the Newton polygon is a device for computing the required exponents.
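A minimal sketch of that device, under the standard convention that the Newton polygon is the lower convex hull of the points $(i, v(a_i))$, v being the valuation (for power series, the order of vanishing in the series variable); the function name and the example polynomial are illustrative assumptions.

```python
from fractions import Fraction

def newton_polygon_slopes(points):
    """Slopes of the lower convex hull of the points (i, v(a_i)).
    Under the usual convention, a segment of slope -s and horizontal
    length L signals L roots of valuation s (with multiplicity)."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # pop the last hull vertex while it does not lie strictly below
        # the segment joining its predecessor to the new point
        while len(hull) >= 2 and (
            (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-2][0])
            >= (p[1] - hull[-2][1]) * (hull[-1][0] - hull[-2][0])
        ):
            hull.pop()
        hull.append(p)
    return [Fraction(q[1] - p[1], q[0] - p[0]) for p, q in zip(hull, hull[1:])]

# example: x^2 - t*x + t^3 over a power series field in t has coefficient
# valuations v(a_0) = 3, v(a_1) = 1, v(a_2) = 0
print(newton_polygon_slopes([(0, 3), (1, 1), (2, 0)]))   # [Fraction(-2, 1), Fraction(-1, 1)]
# slopes -2 and -1: one root of valuation 2 and one of valuation 1
```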