Properties of polynomial roots

In mathematics, a polynomial is a function of the form


  p(x) = a_0 + a_1 x + \cdots + a_n x^n, \quad x\in \mathbb{C},

where the coefficients a_0, \ldots, a_n are complex numbers and a_n \neq 0. The fundamental theorem of algebra states that a polynomial of degree n has exactly n complex roots, counted with multiplicity. The aim of this page is to list various properties of these roots.

Continuous dependence on coefficients

The n roots of a polynomial of degree n depend continuously on the coefficients: there are n continuous functions r_1,\ldots, r_n of the coefficients that parametrize the roots with the correct multiplicities.

This result implies that the eigenvalues of a matrix depend continuously on the matrix. A proof can be found in Tyrtyshnikov (1997).

Continuity does not mean good conditioning, however: the problem of approximating the roots given the coefficients is ill-conditioned, and a tiny perturbation of the coefficients can move some roots substantially. See, for example, Wilkinson's polynomial.
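
A minimal sketch of this ill-conditioning in Python with NumPy (Wilkinson's example, as referenced above; numpy.roots computes the roots as eigenvalues of a companion matrix, which also illustrates the eigenvalue connection):

  import numpy as np

  # Wilkinson's polynomial: p(x) = (x - 1)(x - 2) ... (x - 20)
  coeffs = np.poly(np.arange(1, 21))       # coefficients, highest degree first

  # Perturb the coefficient of x^19 by a relative amount of 1e-10
  perturbed = coeffs.copy()
  perturbed[1] *= 1 + 1e-10

  shift = np.max(np.abs(np.sort_complex(np.roots(coeffs))
                        - np.sort_complex(np.roots(perturbed))))
  print(shift)   # many orders of magnitude larger than the perturbation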

Complex conjugate root theorem

The complex conjugate root theorem states that if the coefficients of a polynomial are real, then the non-real roots appear in conjugate pairs of the type a\pm ib.

For example, the equation x^2 + 1 = 0 has roots \pm i.
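
A quick numerical illustration (Python with NumPy; the real test polynomial is an arbitrary choice for this sketch):

  import numpy as np

  # A real polynomial, coefficients highest degree first
  roots = np.roots([1.0, -2.0, 3.0, -4.0, 5.0])
  print(roots)

  # Every non-real root is accompanied by its conjugate (up to rounding)
  for r in roots:
      assert np.min(np.abs(roots - np.conj(r))) < 1e-8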

Bounds on polynomial roots

General complex roots

A very general class of bounds on the magnitude of the roots is implied by Rouché's theorem. If there is a positive real number R and a coefficient index k such that

|a_k|\,R^k > |a_0|+\dots+|a_{k-1}|\,R^{k-1}+|a_{k+1}|\,R^{k+1}+\dots+|a_n|\,R^n

then there are exactly k roots (counted with multiplicity) of absolute value less than R. For k = 0 and k = n there is always a solution to this inequality; for example:

  • for k=n,
R=1+\frac1{|a_n|}\max\{|a_0|,|a_1|,\dots, |a_{n-1}|\} or
R=\max\left(1,\,\frac1{|a_n|}\left(|a_0|+|a_1|+\dots+|a_{n-1}|\right)\right)
are upper bounds for the size of all roots,
  • for k=0,
R=\frac{|a_0|}{|a_0|+\max\{|a_1|,|a_2|,\dots, |a_{n}|\}} or
R=\frac{|a_0|}{\max(|a_0|,\,|a_1|+|a_2|+\dots+|a_{n}|)}
are lower bounds for the size of all the roots (these closed-form bounds for k=n and k=0 are evaluated in the code sketch after this list),

  • for all other indices, the function
h(R)=|a_0|\,R^{-k}+\dots+|a_{k-1}|\,R^{-1}-|a_k|+|a_{k+1}|\,R+\dots+|a_n|\,R^{n-k}
is convex on the positive real numbers, so its minimizing point is easy to determine numerically. If the minimal value is negative, one has found additional information on the location of the roots.
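
Here is a hedged sketch in Python with NumPy that evaluates the first closed-form bound for each of k=n and k=0 (the helper name root_bounds, the coefficient ordering a_0, \ldots, a_n, and nonzero a_0 and a_n are assumptions of this snippet):

  import numpy as np

  def root_bounds(a):
      # a[k] is the coefficient of x^k; assumes a[0] != 0 and a[-1] != 0.
      a = np.abs(np.asarray(a, dtype=complex))
      upper = 1 + np.max(a[:-1]) / a[-1]      # k = n, first formula above
      lower = a[0] / (a[0] + np.max(a[1:]))   # k = 0, first formula above
      return lower, upper

  # Example: p(x) = -6 + 11x - 6x^2 + x^3 = (x - 1)(x - 2)(x - 3)
  lower, upper = root_bounds([-6, 11, -6, 1])
  roots = np.roots([1, -6, 11, -6])           # np.roots expects a_n first
  assert all(lower <= abs(r) <= upper for r in roots)
  print(lower, upper)                         # about 0.353 and 12.0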

One can increase the separation of the roots, and thus the ability to find additional separating circles from the coefficients, by applying the root-squaring operation of the Dandelin-Graeffe iteration to the polynomial.
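
A sketch of one root-squaring step in Python with NumPy (the helper name graeffe_step is hypothetical; the sign factor keeps the result monic when p is monic):

  import numpy as np

  def graeffe_step(c):
      # c: coefficients of p, highest degree first (NumPy convention).
      # Returns q with q(x^2) = (-1)^n p(x) p(-x); the roots of q are
      # the squares of the roots of p.
      n = len(c) - 1
      c_neg = c * (-1.0) ** np.arange(n, -1, -1)   # coefficients of p(-x)
      prod = np.polymul(c, c_neg)                  # even polynomial of degree 2n
      return prod[::2] * (-1.0) ** n               # keep the even powers

  # p(x) = (x - 1)(x - 2)(x - 3): after one step the roots are 1, 4, 9
  print(np.roots(graeffe_step(np.poly([1.0, 2.0, 3.0]))))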

A different approach uses the Gershgorin circle theorem applied to a companion matrix of the polynomial, as in the Weierstrass (Durand-Kerner) method. From initial estimates of the roots, which may be quite arbitrary, one obtains unions of circles that contain the roots of the polynomial.
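
A minimal sketch of the simultaneous Weierstrass (Durand-Kerner) iteration itself, in Python with NumPy (the commonly used non-real starting values are assumed; the Gershgorin inclusion disks are not computed here):

  import numpy as np

  def durand_kerner(c, iters=60):
      # c: coefficients of a monic polynomial, highest degree first.
      n = len(c) - 1
      z = (0.4 + 0.9j) ** np.arange(n)   # initial estimates, deliberately non-real
      for _ in range(iters):
          for i in range(n):
              others = np.prod(z[i] - np.delete(z, i))
              z[i] -= np.polyval(c, z[i]) / others
      return z

  print(np.sort_complex(durand_kerner(np.poly([1.0, 2.0, 3.0]))))
  # approximately [1, 2, 3]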

Gauss-Lucas theorem

Main article: Gauss-Lucas theorem

The Gauss-Lucas theorem states that the convex hull of the roots of a polynomial contains the roots of the derivative of the polynomial.

A sometimes useful corollary is that if all roots of a polynomial have positive real part, then so do the roots of all derivatives of the polynomial.
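
A hedged numerical illustration of the theorem, in Python with NumPy and SciPy (scipy.spatial.Delaunay serves here only as a point-in-convex-hull test, and the sample roots are an arbitrary choice):

  import numpy as np
  from scipy.spatial import Delaunay

  roots = np.array([0, 4, 1 + 3j, -2 + 2j, 3 - 1j], dtype=complex)
  crit = np.roots(np.polyder(np.poly(roots)))       # roots of the derivative

  hull = Delaunay(np.column_stack([roots.real, roots.imag]))
  inside = hull.find_simplex(np.column_stack([crit.real, crit.imag])) >= 0
  print(inside)   # all True, as the theorem predicts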

A related result is Bernstein's inequality. It states that for a polynomial P of degree n with derivative P′ we have

\max_{|z| \leq 1} \big|P'(z)\big| \le n \max_{|z| \leq 1} \big|P(z)\big|.
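
A quick numerical check in Python with NumPy (by the maximum modulus principle the maxima over |z| \leq 1 are attained on the unit circle, which is sampled here; the test polynomial is arbitrary):

  import numpy as np

  c = np.array([3, -2j, 0, 1 + 1j])   # a degree-3 polynomial, a_n first
  n = len(c) - 1
  z = np.exp(2j * np.pi * np.arange(2000) / 2000)   # points on the unit circle
  lhs = np.max(np.abs(np.polyval(np.polyder(c), z)))
  rhs = n * np.max(np.abs(np.polyval(c, z)))
  print(lhs <= rhs)   # True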

References

  • E. E. Tyrtyshnikov, A Brief Introduction to Numerical Analysis, Birkhäuser Boston, 1997.