Formal power series

In mathematics, a formal power series is a generalization of a polynomial, where the number of terms is allowed to be infinite; this implies giving up the possibility of replacing the variable in the polynomial with an arbitrary number. Thus a formal power series differs from a polynomial in that it may have infinitely many terms, and differs from a power series, whose variables can take on numerical values. One way to view a formal power series is as an infinite ordered sequence of numbers. In this case, the powers of the variable are used only to indicate the order of the coefficients. The coefficient of x^5 is just the fifth term in the series, while the coefficient of x^0 is the zeroth term. In combinatorics, formal power series provide representations of numerical sequences and of multisets, and for instance allow concise expressions for recursively defined sequences regardless of whether the recursion can be explicitly solved; this is known as the method of generating functions. More generally, formal power series can include series with any finite number of variables, and with coefficients in an arbitrary ring.

Introduction

A formal power series can be loosely thought of as an object that is like a polynomial, but with infinitely many terms. Alternatively, for those familiar with power series (or Taylor series), one may think of a formal power series as a power series in which we ignore questions of convergence by not assuming that the variable X denotes any numerical value (not even an unknown value). For example, consider the series

A = 1 - 3X + 5X^2 - 7X^3 + 9X^4 - 11X^5 + \cdots.

If we studied this as a power series, its properties would include, for example, that its radius of convergence is 1. However, as a formal power series, we may ignore this completely; all that is relevant is the sequence of coefficients [1, −3, 5, −7, 9, −11, ...]. In other words, a formal power series is an object that just records a sequence of coefficients. It is perfectly acceptable to consider a formal power series with the factorials [1, 1, 2, 6, 24, 120, 720, 5040, … ] as coefficients, even though the corresponding power series diverges for any nonzero value of X.

Arithmetic on formal power series is carried out by simply pretending that the series are polynomials. For example, if

B = 2X + 4X^3 + 6X^5 + \cdots,

then we add A and B term by term:

A + B = 1 - X + 5X^2 - 3X^3 + 9X^4 - 5X^5 + \cdots.

We can multiply formal power series, again just by treating them as polynomials (see in particular Cauchy product):

AB = 2X - 6X^2 + 14X^3 - 26X^4 + 44X^5 + \cdots.

Notice that each coefficient in the product AB only depends on a finite number of coefficients of A and B. For example, the X^5 term is given by

44X^5 = (1\times 6X^5) + (5X^2 \times 4X^3) + (9X^4 \times 2X).

For this reason, one may multiply formal power series without worrying about the usual questions of absolute, conditional and uniform convergence which arise in dealing with power series in the setting of analysis.
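
To make the finiteness of each product coefficient concrete, the following short Python sketch (not part of the original article; the function name is illustrative) multiplies truncated series, stored as lists of coefficients, by the Cauchy product, and reproduces the X^5 coefficient 44 computed above.

def cauchy_product(a, b, num_terms):
    """Return the first num_terms coefficients of the product of two series
    given as coefficient lists (index n holds the coefficient of X^n)."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)
                if k < len(a) and n - k < len(b))
            for n in range(num_terms)]

# Coefficients of A = 1 - 3X + 5X^2 - 7X^3 + 9X^4 - 11X^5 + ...
A = [1, -3, 5, -7, 9, -11]
# Coefficients of B = 2X + 4X^3 + 6X^5 + ...
B = [0, 2, 0, 4, 0, 6]

AB = cauchy_product(A, B, 6)
print(AB)            # [0, 2, -6, 14, -26, 44]
assert AB[5] == 44   # matches the X^5 term worked out above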

Once we have defined multiplication for formal power series, we can define multiplicative inverses as follows. The multiplicative inverse of a formal power series A is a formal power series C such that AC = 1, provided that such a formal power series exists. It turns out that if A has a multiplicative inverse, it is unique, and we denote it by A−1. Now we can define division of formal power series by defining B/A to be the product BA−1, provided that the inverse of A exists. For example, one can use the definition of multiplication above to verify the familiar formula

\frac{1}{1 + X} = 1 - X + X^2 - X^3 + X^4 - X^5 + \cdots.

An important operation on formal power series is coefficient extraction. In its most basic form, the coefficient extraction operator for a formal power series in one variable extracts the coefficient of X^n, and is written e.g. [X^n]A, so that [X^2]A=5 and [X^5]A=-11. Other examples include

\begin{align}
\left[X^3\right] (B) &= 4, \\ 
\left[X^2 \right] (X + 3 X^2 Y^3 + 10 Y^6) &= 3Y^3, \\
\left[X^2Y^3 \right] ( X + 3 X^2 Y^3 + 10 Y^6) &= 3, \\
\left[X^n \right] \left(\frac{1}{1+X} \right) &= (-1)^n, \\  
\left[X^n \right] \left(\frac{X}{(1-X)^2} \right) &= n.
\end{align}

Similarly, many other operations that are carried out on polynomials can be extended to the formal power series setting, as explained below.

The ring of formal power series

The set of all formal power series in X with coefficients in a commutative ring R forms another ring that is written R[[X]], and called the ring of formal power series in the variable X over R.

Definition of the formal power series ring

One can characterize R[[X]] abstractly as the completion of the polynomial ring R[X] equipped with a particular metric. This automatically gives R[[X]] the structure of a topological ring (and even of a complete metric space). But the general construction of a completion of a metric space is more involved than what is needed here, and would make formal power series seem more complicated than they are. It is possible to describe R[[X]] more explicitly, and define the ring structure and topological structure separately, as follows.

Ring structure

As a set, R[[X]] can be constructed as the set R^\N of all infinite sequences of elements of R, indexed by the natural numbers (taken to include 0). Designating a sequence whose term at index n is a_n by (a_n), one defines addition of two such sequences by

(a_n)_{n\in\N} + (b_n)_{n\in\N} = \left( a_n + b_n \right)_{n\in\N}

and multiplication by

(a_n)_{n\in\N} \times (b_n)_{n\in\N} = \left( \sum_{k=0}^n a_k b_{n-k} \right)_{n\in\N}.

This type of product is called the Cauchy product of the two sequences of coefficients, and is a sort of discrete convolution. With these operations, R^\N becomes a commutative ring with zero element (0,0,0,\cdots) and multiplicative identity (1,0,0,\cdots).

The product is in fact the same one used to define the product of polynomials in one indeterminate, which suggests using a similar notation. One embeds R into R[[X]] by sending any (constant) a \in R to the sequence (a,0,0,\cdots) and designates the sequence (0,1,0,0,\cdots) by X; then using the above definitions every sequence with only finitely many nonzero terms can be expressed in terms of these special elements as

(a_0, a_1, a_2, \cdots, a_n, 0, 0, \ldots) = a_0 + a_1 X + \cdots + a_n X^n = \sum_{i=0}^n a_i X^i;

these are precisely the polynomials in X. Given this, it is quite natural and convenient to designate a general sequence (a_n)_{n\in\N} by the formal expression \textstyle\sum_{i\in\N}a_i X^i, even though the latter is not an expression formed by the operations of addition and multiplication defined above (from which only finite sums can be constructed). This notational convention allows reformulating the above definitions as

\textstyle \left(\sum_{i\in\N} a_i X^i\right)+\left(\sum_{i\in\N} b_i X^i\right) = \sum_{i\in\N}(a_i+b_i) X^i

and

\textstyle \left(\sum_{i\in\N} a_i X^i\right) \times \left(\sum_{i\in\N} b_i X^i\right) = \sum_{n\in\N} \left(\sum_{k=0}^n a_k b_{n-k}\right) X^n.

This is quite convenient, but one must be aware of the distinction between formal summation (a mere convention) and actual addition.

Topological structure

Having stipulated conventionally that

(a_0, a_1, a_2, a_3, \ldots) = \sum_{i=0}^\infty a_i X^i, \qquad (1)

one would like to interpret the right hand side as a well-defined infinite summation. To that end, a notion of convergence in R^\N is defined and a topology on R^\N is constructed. There are several equivalent ways to define the desired topology; one of the simplest is via the metric

d((a_n), (b_n)) = 2^{-k},
where k is the smallest natural number such that a_k\neq b_k; the distance between two equal sequences is of course zero.
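
As an illustration, the following small Python sketch (illustrative only; it operates on finite truncations of the coefficient sequences, padded with zeros) computes this distance.

def distance(a, b):
    """d((a_n), (b_n)) = 2^(-k), with k the first index where the sequences
    differ; 0 if they agree on the whole (truncated) range shown."""
    for k in range(max(len(a), len(b))):
        ak = a[k] if k < len(a) else 0
        bk = b[k] if k < len(b) else 0
        if ak != bk:
            return 2.0 ** (-k)
    return 0.0

print(distance([1, -3, 5], [1, -3, 7]))      # sequences first differ at k = 2, so 0.25
print(distance([1, 1, 2, 6], [1, 1, 2, 6]))  # 0.0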

Informally, two sequences \{a_n\} and \{b_n\} become closer and closer if and only if more and more of their terms agree exactly. Formally, the sequence of partial sums of some infinite summation converges if for every fixed power of X the coefficient stabilizes: there is a point beyond which all further partial sums have the same coefficient. This is clearly the case for the right hand side of (1), regardless of the values a_n, since inclusion of the term for i=n gives the last (and in fact only) change to the coefficient of X^n. It is also obvious that the limit of the sequence of partial sums is equal to the left hand side.

This topological structure, together with the ring operations described above, forms a topological ring. This is called the ring of formal power series over R and is denoted by R[[X]]. The topology has the useful property that an infinite summation converges if and only if the sequence of its terms converges to 0, which just means that any fixed power of X occurs in only finitely many terms.

The topological structure allows much more flexible use of infinite summations. For instance the rule for multiplication can be restated simply as

\textstyle \left(\sum_{i\in\N} a_i X^i\right) \times \left(\sum_{i\in\N} b_i X^i\right) = \sum_{i,j\in\N} a_i b_j X^{i+j},

since only finitely many terms on the right affect any fixed X^n. Infinite products are also defined by the topological structure; it can be seen that an infinite product converges if and only if the sequence of its factors converges to 1.

Alternative topologies

The above topology is the finest topology for which \textstyle\sum_{i=0}^\infty a_i X^i always converges as a summation to the formal power series designated by the same expression, and it often suffices to give a meaning to infinite sums and products, or other kinds of limits that one wishes to use to designate particular formal power series. It can however happen occasionally that one wishes to use a coarser topology, so that certain expressions become convergent that would otherwise diverge. This applies in particular when the base ring R already comes with a topology other than the discrete one, for instance if it is also a ring of formal power series.

Consider, for instance, the ring of formal power series

\Z[[X]][[Y]]

The topology in the above construction then relates only to the indeterminate Y, since the topology that was put on \Z[[X]] has been replaced by the discrete topology when defining the topology of the whole ring. So

\sum_{i\in\N}XY^i

converges to the power series suggested, which can be written as \textstyle\frac{X}{1-Y}; however the summation

\sum_{i\in\N}X^iY

would be considered to be divergent, since every term affects the coefficient of Y (which coefficient is itself a power series in X). This asymmetry disappears if the power series ring in Y is given the product topology where each copy of \Z[[X]] is given its topology as a ring of formal power series rather than the discrete topology. As a consequence, for convergence of a sequence of elements of \Z[[X]][[Y]] it then suffices that the coefficient of each power of Y converges to a formal power series in X, a weaker condition than stabilizing entirely; for instance, in the second example given here the coefficient of Y converges to \textstyle\frac{1}{1-X}, so the whole summation converges to \textstyle\frac{Y}{1-X}.

This way of defining the topology is in fact the standard one for repeated constructions of rings of formal power series, and gives the same topology as one would get by taking formal power series in all indeterminates at once. In the above example that would mean constructing \Z[[X,Y]], and here a sequence converges if and only if the coefficient of every monomial X^iY^j stabilizes. This topology, which is also the I-adic topology, where I=(X,Y) is the ideal generated by X and Y, still enjoys the property that a summation converges if and only if its terms tend to 0.

The same principle could be used to make other divergent limits converge. For instance in \R[[X]] the limit

\lim_{n\to\infty}\left(1+\frac{X}{n}\right)^n

does not exist, so in particular it does not converge to \textstyle\exp(X)=\sum_{n\in\N}\frac{X^n}{n!}. This is because for i\geq 2 the coefficient \tbinom{n}{i}/n^i of X^i does not stabilize as n\to \infty. It does however converge in the usual topology of \R, and in fact to the coefficient \textstyle\frac1{i!} of \exp(X). Therefore, if one were to give \R[[X]] the product topology of \R^\N where the topology of \R is the usual topology rather than the discrete one, then the above limit would converge to \exp(X). This more permissive approach is not, however, the standard one when considering formal power series, as it would lead to convergence considerations that are as subtle as they are in analysis, while the philosophy of formal power series is on the contrary to make convergence questions as trivial as they can possibly be. With this topology it would not be the case that a summation converges if and only if its terms tend to 0.

Universal property

The ring R[[X]] may be characterized by the following universal property. If S is a commutative associative algebra over R, if I is an ideal of S such that the I-adic topology on S is complete, and if x is an element of I, then there is a unique \Phi: R[[X]]\to S with the following properties: \Phi is an R-algebra homomorphism, \Phi is continuous, and \Phi(X) = x.

Operations on formal power series

One can perform algebraic operations on power series to generate new power series.[1][2] Besides the ring structure operations defined above, we have the following.

Power series raised to powers

If n is a natural number we have

 \left( \sum_{k=0}^\infty a_k X^k \right)^n =  \sum_{m=0}^\infty c_m X^m,

where

 c_0 = a_0^n, \quad c_m = \frac{1}{m a_0} \sum_{k=1}^m (kn - m+k) a_{k} c_{m-k},

for m ≥ 1. (This formula can only be used if m and a0 are invertible in the ring of scalars.)
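
The recurrence can be carried out mechanically; here is a short Python sketch (an illustration only, assuming the coefficients lie in a field so that the required divisions exist, modelled here as exact rationals).

from fractions import Fraction

def series_power(a, n, num_terms):
    """First num_terms coefficients of (a_0 + a_1 X + ...)^n via the recurrence
    c_0 = a_0^n,  c_m = (1/(m a_0)) * sum_{k=1}^m (k n - m + k) a_k c_{m-k}."""
    a = [Fraction(x) for x in a] + [Fraction(0)] * num_terms   # pad with zeros
    c = [a[0] ** n]
    for m in range(1, num_terms):
        s = sum((k * n - m + k) * a[k] * c[m - k] for k in range(1, m + 1))
        c.append(s / (m * a[0]))
    return c

# (1 + X)^3 = 1 + 3X + 3X^2 + X^3
print([int(x) for x in series_power([1, 1], 3, 5)])   # [1, 3, 3, 1, 0]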

In the case of formal power series with complex coefficients, the complex powers are well defined at least for series f with constant term equal to 1. In this case, fα can be defined either by composition with the binomial series (1+x)α, or by composition with the exponential and the logarithmic series, fα := exp(αlog(f)), or as the solution of the differential equation f(fα)′ = αfαf′ with constant term 1, the three definitions being equivalent. The rules of calculus (fα)β = fαβ and fαgα = (fg)α easily follow.

Inverting series

The series

A = \sum_{n=0}^\infty a_n X^n

in R[[X]] is invertible in R[[X]] if and only if its constant coefficient a_0 is invertible in R. This condition is necessary, for the following reason: if we suppose that A has an inverse B = b_0 + b_1 X + \cdots then the constant term a_0b_0 of A \cdot B is the constant term of the identity series, i.e., it is 1. This condition is also sufficient; we may compute the coefficients of the inverse series B via the explicit recursive formula

\begin{align}
b_0 &= \frac{1}{a_0}\\
b_n &= -\frac{1}{a_0} \sum_{i=1}^n a_i b_{n-i}\qquad \text{for } n \ge 1.
\end{align}
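
A short Python sketch of this recursion (illustrative only; coefficients are modelled as exact rationals so that division by a_0 is possible):

from fractions import Fraction

def series_inverse(a, num_terms):
    """First num_terms coefficients of 1 / (a_0 + a_1 X + ...), requiring a_0 invertible."""
    a = [Fraction(x) for x in a] + [Fraction(0)] * num_terms
    b = [1 / a[0]]
    for n in range(1, num_terms):
        b.append(-sum(a[i] * b[n - i] for i in range(1, n + 1)) / a[0])
    return b

# 1 / (1 - X) = 1 + X + X^2 + ...   (the geometric series noted below)
print([int(x) for x in series_inverse([1, -1], 6)])   # [1, 1, 1, 1, 1, 1]
# 1 / (1 + X) = 1 - X + X^2 - ...   (the formula from the introduction)
print([int(x) for x in series_inverse([1, 1], 6)])    # [1, -1, 1, -1, 1, -1]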

An important special case is that the geometric series formula is valid in R[[X]]:

(1 - X)^{-1} = \sum_{n=0}^\infty X^n.

If R=K is a field, then a series is invertible if and only if the constant term is non-zero, i.e., if and only if the series is not divisible by X. This says that K[[X]] is a discrete valuation ring with uniformizing parameter X.

Dividing series

The computation of a quotient f/g=h

 \frac{\sum_{n=0}^\infty b_n X^n }{\sum_{n=0}^\infty a_n X^n } =\sum_{n=0}^\infty c_n X^n,

assuming the denominator is invertible (that is, a_0 is invertible in the ring of scalars), can be performed as a product of f and the inverse of g, or by directly equating the coefficients in f=gh:

c_n = \frac{1}{a_0}\left(b_n - \sum_{k=1}^n a_k c_{n-k}\right).
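
The same recurrence in a brief Python sketch (again with rational coefficients for illustration); the example reproduces X/(1-X)^2 = X + 2X^2 + 3X^3 + ..., consistent with the coefficient-extraction example given earlier.

from fractions import Fraction

def series_divide(b, a, num_terms):
    """First num_terms coefficients of (b_0 + b_1 X + ...) / (a_0 + a_1 X + ...), a_0 invertible."""
    a = [Fraction(x) for x in a] + [Fraction(0)] * num_terms
    b = [Fraction(x) for x in b] + [Fraction(0)] * num_terms
    c = []
    for n in range(num_terms):
        c.append((b[n] - sum(a[k] * c[n - k] for k in range(1, n + 1))) / a[0])
    return c

# X / (1 - X)^2 = X + 2X^2 + 3X^3 + ...
print([int(x) for x in series_divide([0, 1], [1, -2, 1], 6)])   # [0, 1, 2, 3, 4, 5]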

Extracting coefficients

The coefficient extraction operator applied to a formal power series

f(X) = \sum_{n=0}^\infty a_n X^n

in X is written

 \left[ X^m \right] f(X)

and extracts the coefficient of X^m, so that

 \left[ X^m \right] f(X) = \left[ X^m \right] \sum_{n=0}^\infty a_n X^n = a_m.

Composition of series

Given formal power series

f(X) = \sum_{n=1}^\infty a_n X^n = a_1 X + a_2 X^2 + \cdots
g(X) = \sum_{n=0}^\infty b_n X^n = b_0 + b_1 X + b_2 X^2 + \cdots,

one may form the composition

g(f(X)) = \sum_{n=0}^\infty b_n (f(X))^n = \sum_{n=0}^\infty c_n X^n,

where the coefficients c_n are determined by "expanding out" the powers of f(X):

c_n:=\sum_{k\in\N, \vert j\vert=n} b_k a_{j_1} a_{j_2} \cdots a_{j_k}.

Here the sum is extended over all (k, j) with k in N and j\in\N_+^k with |j|:=j_1+\cdots+j_k=n.

A more explicit description of these coefficients is provided by Faà di Bruno's formula, at least in the case where the coefficient ring is a field of characteristic 0.

A point here is that this operation is only valid when f(X) has no constant term, so that each c_n depends on only a finite number of coefficients of f(X) and g(X). In other words, the series for g(f(X)) converges in the topology of R[[X]].
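
The following Python sketch (an illustration; the helper names are not from the article) computes a truncated composition by expanding out the powers of f(X) as described above. As a usage example it reproduces the expansion of exp(exp(X) - 1) given in the example below.

from fractions import Fraction
import math

def multiply(a, b, num_terms):
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(num_terms)]

def compose(g, f, num_terms):
    """First num_terms coefficients of g(f(X)), requiring f to have no constant term."""
    assert f[0] == 0, "composition needs a series with zero constant term"
    f = [Fraction(x) for x in f] + [Fraction(0)] * num_terms
    result = [Fraction(0)] * num_terms
    f_power = [Fraction(1)] + [Fraction(0)] * (num_terms - 1)   # f^0 = 1
    for n in range(num_terms):      # powers f^n with n >= num_terms cannot contribute
        for m in range(num_terms):
            result[m] += Fraction(g[n]) * f_power[m]
        f_power = multiply(f_power, f, num_terms)
    return result

N = 5
exp_series = [Fraction(1, math.factorial(n)) for n in range(N)]   # exp(X), truncated
expm1_series = [Fraction(0)] + exp_series[1:]                     # exp(X) - 1
print(compose(exp_series, expm1_series, N))   # coefficients 1, 1, 1, 5/6, 5/8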

Example

Assume that the ring R has characteristic 0. If we denote by \exp(X) the formal power series

\exp(X) = 1 + X + \frac{X^2}{2!} + \frac{X^3}{3!} + \frac{X^4}{4!} + \cdots,

then the expression

\exp(\exp(X) - 1) = 1 + X + X^2  + \frac{5X^3}6 + \frac{5X^4}8 + \cdots

makes perfect sense as a formal power series. However, the statement

\exp(\exp(X)) = e \exp(\exp(X) - 1) = e + eX + eX^2 + \frac{5eX^3}{6} + \cdots

is not a valid application of the composition operation for formal power series. Rather, it is confusing the notions of convergence in R[[X]] and convergence in R; indeed, the ring R may not even contain any number e with the appropriate properties.

Composition inverse

Whenever a formal series \textstyle f(X)=\sum_k f_k X^k \in R[[X]] has f_0 = 0 and f_1 an invertible element of R, there exists a series \textstyle g(X)=\sum_k g_k X^k that is the composition inverse of f, meaning that composing f with g gives the series representing the identity function (whose first coefficient is 1 and all other coefficients are zero). The coefficients of g may be found recursively by using the above formula for the coefficients of a composition, equating them with those of the composition identity X (that is, 1 at degree 1 and 0 at every other degree). In the case when the coefficient ring is a field of characteristic 0, the Lagrange inversion formula provides a powerful tool to compute the coefficients of g, as well as the coefficients of the (multiplicative) powers of g.

Formal differentiation of series

Given a formal power series

f = \sum_{n\geq 0} a_n X^n

in R[[X]], we define its formal derivative, denoted Df or f′, by

 Df = \sum_{n \geq 1} a_n n X^{n-1}.

The symbol D is called the formal differentiation operator. The motivation behind this definition is that it simply mimics term-by-term differentiation of a polynomial.
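
A minimal Python sketch of this definition on coefficient lists (illustrative only):

def formal_derivative(a):
    """Coefficients of Df for f = a_0 + a_1 X + a_2 X^2 + ...: Df = sum n a_n X^(n-1)."""
    return [n * a[n] for n in range(1, len(a))]

# D(1 + 2X + 3X^2 + 4X^3) = 2 + 6X + 12X^2
print(formal_derivative([1, 2, 3, 4]))   # [2, 6, 12]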

This operation is R-linear:

D(af + bg) = a \cdot Df + b \cdot Dg

for any a, b in R and any f, g in R[[X]]. Additionally, the formal derivative has many of the properties of the usual derivative of calculus. For example, the product rule is valid:

D(fg) = f \cdot (Dg) + (Df) \cdot g,

and the chain rule works as well:

D(f\circ g ) = \left( Df\circ g\right) \cdot Dg,

whenever the appropriate compositions of series are defined (see above under composition of series).

Thus, in these respects formal power series behave like Taylor series. Indeed, for the f defined above, we find that

(D^k f)(0) = k! a_k,

where Dk denotes the kth formal derivative (that is, the result of formally differentiating k times).

Properties

Algebraic properties of the formal power series ring

R[[X]] is an associative algebra over R which contains the ring R[X] of polynomials over R; the polynomials correspond to the sequences which end in zeros.

The Jacobson radical of R[[X]] is the ideal generated by X and the Jacobson radical of R; this is implied by the element invertibility criterion discussed above.

The maximal ideals of R[[X]] all arise from those in R in the following manner: an ideal M of R[[X]] is maximal if and only if M\cap R is a maximal ideal of R and M is generated as an ideal by X and M\cap R.

Several algebraic properties of R are inherited by R[[X]]: for instance, if R is an integral domain, a Noetherian ring, or a local ring, then so is R[[X]].

Topological properties of the formal power series ring

The metric space (R[[X]], d) is complete.

The ring R[[X]] is compact if and only if R is finite. This follows from Tychonoff's theorem and the characterisation of the topology on R[[X]] as a product topology.

Applications

Formal power series can be used to solve recurrences occurring in number theory and combinatorics. For an example involving finding a closed form expression for the Fibonacci numbers, see the article on Examples of generating functions.

One can use formal power series to prove several relations familiar from analysis in a purely algebraic setting. Consider for instance the following elements of Q[[X]]:

 \sin(X)  := \sum_{n \ge 0} \frac{(-1)^n} {(2n+1)!} X^{2n+1}
 \cos(X) := \sum_{n \ge 0} \frac{(-1)^n} {(2n)!} X^{2n}

Then one can show that

\sin^2(X) + \cos^2(X) = 1,
\frac{\partial}{\partial X} \sin(X) = \cos(X),
\sin (X+Y) = \sin(X) \cos(Y) + \cos(X) \sin(Y).

The last identity is valid in the ring Q[[X,Y]].
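
As a concrete check, the following Python sketch (illustrative; exact rational coefficients, finite truncations) verifies the first of these identities up to the truncation order.

from fractions import Fraction
import math

def multiply(a, b, num_terms):
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(num_terms)]

N = 8
sin_series = [Fraction((-1) ** (n // 2), math.factorial(n)) if n % 2 == 1 else Fraction(0)
              for n in range(N)]
cos_series = [Fraction((-1) ** (n // 2), math.factorial(n)) if n % 2 == 0 else Fraction(0)
              for n in range(N)]

identity = [s + c for s, c in zip(multiply(sin_series, sin_series, N),
                                  multiply(cos_series, cos_series, N))]
print(identity)   # 1 followed by zeros: sin^2(X) + cos^2(X) = 1 up to order X^7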

For K a field, the ring K[[X1, ..., Xr]] is often used as the "standard, most general" complete local ring over K in algebra.

Interpreting formal power series as functions

In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series can also be interpreted as functions, but one has to be careful with the domain and codomain. If f = \sum a_n X^n is an element of R[[X]], S is a commutative associative algebra over R, I is an ideal in S such that the I-adic topology on S is complete, and x is an element of I, then we can define

f(x) = \sum_{n\ge 0} a_n x^n.

This latter series is guaranteed to converge in S given the above assumptions on x. Furthermore, we have

 (f+g)(x) = f(x) + g(x)

and

 (fg)(x) = f(x) g(x).

Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved.

Since the topology on R[[X]] is the (X)-adic topology and R[[X]] is complete, we can in particular apply power series to other power series, provided that the arguments don't have constant coefficients (so that they belong to the ideal (X)): f(0), f(X^2-X) and f((1-X)^{-1}-1) are all well defined for any formal power series f\in R[[X]].

With this formalism, we can give an explicit formula for the multiplicative inverse of a power series f whose constant coefficient a = f(0) is invertible in R:

f^{-1} = \sum_{n \ge 0} a^{-n-1} (a-f)^n.
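
A Python sketch (illustrative; rational coefficients stand in for a general R) checking this formula on a truncation. Since a - f has no constant term, (a - f)^n has order at least n, so only the first few terms of the sum influence the low-order coefficients.

from fractions import Fraction

def multiply(a, b, num_terms):
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(num_terms)]

def inverse_by_formula(f, num_terms):
    """Truncation of f^{-1} = sum_{n>=0} a^{-n-1} (a - f)^n, with a = f(0) invertible."""
    a = Fraction(f[0])
    f = [Fraction(x) for x in f] + [Fraction(0)] * num_terms
    a_minus_f = [a - f[0]] + [-f[k] for k in range(1, num_terms)]   # no constant term
    result = [Fraction(0)] * num_terms
    power = [Fraction(1)] + [Fraction(0)] * (num_terms - 1)         # (a - f)^0 = 1
    for n in range(num_terms):       # terms with n >= num_terms cannot contribute
        for m in range(num_terms):
            result[m] += power[m] / a ** (n + 1)
        power = multiply(power, a_minus_f, num_terms)
    return result

# f = 2 + X, whose inverse is 1/2 - X/4 + X^2/8 - X^3/16 + ...
print(inverse_by_formula([2, 1], 5))   # coefficients 1/2, -1/4, 1/8, -1/16, 1/32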

If the formal power series g with g(0) = 0 is given implicitly by the equation

f(g) = X

where f is a known power series with f(0) = 0, then the coefficients of g can be explicitly computed using the Lagrange inversion formula.

Generalizations

Formal Laurent series

A formal Laurent series over a ring R is defined in a similar way to a formal power series, except that we also allow finitely many terms of negative degree (this is different from the classical Laurent series); that is, series of the form

f = \sum_{n\in\Z} a_n X^n

where a_n=0 for all but finitely many negative indices n. Multiplication of such series can be defined. Indeed, similarly to the definition for formal power series, the coefficient of X^k in the product of two series with respective sequences of coefficients \{a_n\} and \{b_n\} is

\sum_{i\in\Z}a_ib_{k-i},

a sum which is effectively finite because of the assumed vanishing of coefficients at sufficiently negative indices, and which is zero for sufficiently negative k for the same reason.

For a non-zero formal Laurent series, the minimal integer n such that  a_n\neq 0 is called the order of f, denoted ord(f). (The order of the zero series is +\infty.) The formal Laurent series form the ring of formal Laurent series over R, denoted by R((X)). It is equal to the localization of R[[X]] with respect to the set of positive powers of X. It is a topological ring with the metric d(f,g)=2^{-\mathrm{ord}(f-g)}.

If R=K is a field, then K((X)) is in fact a field, which may alternatively be obtained as the field of fractions of the integral domain K[[X]].

One may define formal differentiation for formal Laurent series in a natural way (term-by-term). Precisely, the formal derivative of the formal Laurent series f above is

f' = Df = \sum_{n\in\Z} na_n X^{n-1}

which is again a formal Laurent series. Notice that if f is a non-constant formal Laurent series over a field K of characteristic 0, then one has

\mathrm{ord}(f')=\mathrm{ord}(f)-1.

However, in general this is not the case since the factor n for the lowest order term could be equal to 0 in R.

Formal residue

Assume that K is a field of characteristic 0. Then the map

D\colon K(\!(X)\!)\to K(\!(X)\!)

is a K-derivation that verifies

\ker D=K
\mathrm{im}\,D=\{f\in K((X)) : [X^{-1}]f=0\}.

The latter shows that the coefficient of X^{-1} in f is of particular interest; it is called the formal residue of f and denoted \mathrm{Res}(f). The map

\mathrm{Res}\colon K(\!(X)\!)\to K

is K-linear, and by the above observation one has an exact sequence

0 \rightarrow K \rightarrow  K(\!(X)\!) \xrightarrow{D} K(\!(X)\!) \;\xrightarrow{ \mathrm{Res}    }\; K \rightarrow 0.

Some rules of calculus. As a quite direct consequence of the above definition, and of the rules of formal derivation, one has, for any f, g\in K((X))

i. \mathrm{Res}(f')=0;
ii. \mathrm{Res}(fg')=-\mathrm{Res}(f'g);
iii. \mathrm{Res}(f'/f)=\mathrm{ord}(f),\qquad \forall f\neq0;
iv. \mathrm{Res}\left(( f\circ g) g'\right) = \mathrm{ord}(g)\mathrm{Res}(f), if \mathrm{ord}(g)>0;
v. [X^n]f(X)=\mathrm{Res}\left(X^{-n-1}f(X)\right).

Property (i) is part of the exact sequence above. Property (ii) follows from (i) as applied to (fg)'=f'g+fg'. Property (iii): any f can be written in the form f=X^mg, with m=\mathrm{ord}(f) and \mathrm{ord}(g)=0; then f'/f = mX^{-1}+g'/g. Since \mathrm{ord}(g)=0, the element g is invertible in K[[X]], so g'/g \in K[[X]]\subset \mathrm{im}(D) = \mathrm{ker}(\mathrm{Res}), whence \mathrm{Res}(f'/f)=m. Property (iv): Since \mathrm{im}(D) = \mathrm{ker}(\mathrm{Res}), we can write f=f_{-1}X^{-1}+F', with F \in K((X)). Consequently, (f\circ g)g'= f_{-1}g^{-1}g'+(F'\circ g)g' = f_{-1}g'/g + (F \circ g)', and (iv) follows from (i) and (iii). Property (v) is clear from the definition.

The Lagrange inversion formula

As mentioned above, any formal series f \in K[[X]] with f_0 = 0 and f_1 ≠ 0 has a composition inverse g in K[[X]]. The following relation between the coefficients of g^n and f^{-k} holds ("Lagrange inversion formula"):

\textstyle k[X^k] g^n=n[X^{-n}]f^{-k}.

In particular, for n = 1 and all k ≥ 1,

[X^k] g=\frac{1}{k}  \mathrm{Res}\left( f^{-k}\right).

Since the proof of the Lagrange inversion formula is a very short computation, it is worth reporting it here. By the above rules of calculus,

\begin{align}
k[X^k] g^n & =k\mathrm{Res}\left(g^n X^{-k-1}  \right) =k\mathrm{Res}\left(X^n f^{-k-1}f\,'\right) =-\mathrm{Res}\left(X^n (f^{-k})'\right) \\[6pt]
& =\mathrm{Res}\left(\left(X^n\right)' f^{-k}\right) =n\mathrm{Res}\left(X^{n-1}f^{-k}\right) =n[X^{-n}]f^{-k}.
\end{align}
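
The formula can also be checked mechanically. The following Python sketch (illustrative; not from the article) uses the equivalent form Res(f^{-k}) = [X^{k-1}] (X/f)^k, obtained from f^{-k} = X^{-k} (X/f)^k, so that only ordinary power series arithmetic is needed; for f = X/(1+X) it recovers the compositional inverse g = X/(1-X), all of whose coefficients equal 1.

from fractions import Fraction

def multiply(a, b, num_terms):
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(num_terms)]

def inverse(a, num_terms):
    """1 / (a_0 + a_1 X + ...) by the recursion from the section on inverting series."""
    b = [1 / Fraction(a[0])]
    for n in range(1, num_terms):
        b.append(-sum(Fraction(a[i]) * b[n - i] for i in range(1, n + 1)) / Fraction(a[0]))
    return b

def lagrange_coefficient(f, k):
    """[X^k] of the compositional inverse of f = f_1 X + f_2 X^2 + ... (f_1 invertible)."""
    f_over_x = [Fraction(c) for c in f[1:]] + [Fraction(0)] * k   # f / X
    x_over_f = inverse(f_over_x, k)                               # X / f
    power = [Fraction(1)] + [Fraction(0)] * (k - 1)
    for _ in range(k):
        power = multiply(power, x_over_f, k)                      # (X/f)^k, truncated
    return power[k - 1] / k                                       # (1/k) Res(f^(-k))

# f = X/(1+X) = X - X^2 + X^3 - ...; its compositional inverse is X/(1-X).
f = [0, 1, -1, 1, -1, 1, -1]
print([int(lagrange_coefficient(f, k)) for k in range(1, 6)])     # [1, 1, 1, 1, 1]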

Generalizations. One may observe that the above computation can be repeated verbatim in more general settings than K((X)): a generalization of the Lagrange inversion formula is already available working in the C((X))-modules X^\alpha C((X)), where \alpha is a complex exponent. As a consequence, if f and g are as above, with f_1=g_1=1, we can relate the complex powers of f/X and g/X: precisely, if α and β are non-zero complex numbers with negative integer sum, m=−α−β ∈ N, then

\frac{1}{\alpha}[X^m]\left( \frac{f}{X} \right)^\alpha=-\frac{1}{\beta}[X^m]\left( \frac{g}{X} \right)^\beta.

For instance, this way one finds the power series for complex powers of the Lambert W function.

Power series in several variables

Formal power series in any number of indeterminates (even infinitely many) can be defined. If I is an index set and XI is the set of indeterminates Xi for i in I, then a monomial Xα is any finite product of elements of XI (repetitions allowed); a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials Xα to a corresponding coefficient cα, and is denoted \textstyle\sum_\alpha c_\alpha X^\alpha. The set of all such formal power series is denoted R[[XI]], and it is given a ring structure by defining

\left(\sum_\alpha c_\alpha X^\alpha\right)+\left(\sum_\alpha d_\alpha X^\alpha\right)=\sum_\alpha(c_\alpha+d_\alpha)X^\alpha

and

\left(\sum_\alpha c_\alpha X^\alpha\right)\times\left(\sum_\alpha d_\alpha X^\alpha\right)=\sum_{\alpha,\beta} c_\alpha d_\beta X^{\alpha+\beta}

Topology

The topology on R[[XI]] is such that a sequence of its elements converges only if for each monomial Xα the corresponding coefficient stabilizes. If I is finite, then this is the J-adic topology, where J is the ideal of R[[XI]] generated by all the indeterminates in XI. This does not hold if I is infinite. For example, if I = N, then the sequence (f_n)_{n\in\N} with  f_n = X_n + X_{n+1} + X_{n+2} + \ldots does not converge with respect to any J-adic topology on R[[XI]], but clearly for each monomial the corresponding coefficient stabilizes.

As remarked above, the topology on a repeated formal power series ring like R[[X]][[Y]] is usually chosen in such a way that it becomes isomorphic as a topological ring to R[[X,Y]].

Operations

All of the operations defined for series in one variable may be extended to the several variables case.

In the case of the formal derivative, there are now separate partial derivative operators, which differentiate with respect to each of the indeterminates. They all commute with each other.

Universal property

In the several variables case, the universal property characterizing R[[X1, ..., Xr]] becomes the following. If S is a commutative associative algebra over R, if I is an ideal of S such that the I-adic topology on S is complete, and if x1, ..., xr are elements of I, then there is a unique Φ : R[[X1, ..., Xr]] → S with the following properties: Φ is an R-algebra homomorphism, Φ is continuous, and Φ(Xi) = xi for i = 1, ..., r.

Non-commuting variables

The several variable case can be further generalised by taking non-commuting variables Xi for i in I, where I is an index set and then a monomial Xα is any word in the XI; a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials Xα to a corresponding coefficient cα, and is denoted \textstyle\sum_\alpha c_\alpha X^\alpha. The set of all such formal power series is denoted R«XI», and it is given a ring structure by defining addition pointwise

\left(\sum_\alpha c_\alpha X^\alpha\right)+\left(\sum_\alpha d_\alpha X^\alpha\right)=\sum_\alpha(c_\alpha+d_\alpha)X^\alpha

and multiplication by

\left(\sum_\alpha c_\alpha X^\alpha\right)\times\left(\sum_\alpha d_\alpha X^\alpha\right)=\sum_{\alpha,\beta} c_\alpha d_\beta X^{\alpha} \cdot X^{\beta}

where · denotes concatenation of words. These formal power series over R form the Magnus ring over R.[3][4]

On a semiring

In theoretical computer science, the following definition of a formal power series is given: let Σ be an alphabet (a finite set) and S be a semiring. In this context, a formal power series is any mapping r from the set of strings over Σ (denoted Σ*) to the semiring S. The values of such a mapping r are (somewhat idiosyncratically) denoted as (r, w), where w ∈ Σ*. Then the mapping r itself is conventionally written as r = \sum_{w \in \Sigma^*} (r,w)w. Given this notation, the values (r, w) are also called the coefficients of the series. Similarly to the case of non-commuting variables discussed in the section above this, the collection of all power series over a fixed alphabet and semiring is denoted S⟨⟨Σ*⟩⟩.[5]

Replacing the index set by an ordered abelian group

Main article: Hahn series

Suppose G is an ordered abelian group, meaning an abelian group with a total ordering < respecting the group's addition, so that a<b if and only if a+c<b+c for all c. Let I be a well-ordered subset of G, meaning I contains no infinite descending chain. Consider the set consisting of

\sum_{i \in I} a_i X^i

for all such I, with a_i in a commutative ring R, where we assume that for any index set, if all of the a_i are zero then the sum is zero. Then R((G)) is the ring of formal power series on G; because of the condition that the indexing set be well-ordered the product is well-defined, and we of course assume that two elements which differ by zero are the same.

Various properties of R transfer to R((G)). If R is a field, then so is R((G)). If R is an ordered field, we can order R((G)) by setting any element to have the same sign as its leading coefficient, defined as the least element of the index set I associated to a non-zero coefficient. Finally if G is a divisible group and R is a real closed field, then R((G)) is a real closed field, and if R is algebraically closed, then so is R((G)).

This theory is due to Hans Hahn, who also showed that one obtains subfields when the number of (non-zero) terms is bounded by some fixed infinite cardinality.

Notes

  1. Sec 0.313, I.S. Gradshteyn (И.С. Градштейн), I.M. Ryzhik (И.М. Рыжик); Alan Jeffrey, Daniel Zwillinger, editors. Table of Integrals, Series, and Products, seventh edition. Academic Press, 2007. ISBN 978-0-12-373637-6. Errata. (Several previous editions as well.)
  2. Ivan Niven, "Formal Power Series", American Mathematical Monthly, volume 76, number 8, October 1969, pages 871–889.
  3. Koch, Helmut (1997). Algebraic Number Theory. Encycl. Math. Sci. 62 (2nd printing of 1st ed.). Springer-Verlag. p. 167. ISBN 3-540-63003-1. Zbl 0819.11044.
  4. Moran, Siegfried (1983). The Mathematical Theory of Knots and Braids: An Introduction. North-Holland Mathematics Studies 82. Elsevier. p. 211. ISBN 0-444-86714-7. Zbl 0528.57001.
  5. Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28. doi:10.1007/978-3-642-01492-5_1, p. 12

