Graeffe's method


In mathematics, Graeffe's method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Karl Heinrich Gräffe, Dandelin, and Lobachevsky. The method separates the roots of a polynomial by squaring them repeatedly, and uses the Vieta relations in order to approximate the roots.


Graeffe iteration

Let p(x) be an nth degree polynomial.

p(x) = (x-x_1)(x-x_2)\dots(x-x_n)

Then

p(-x) = (-1)^n(x+x_1)(x+x_2)\dots(x+x_n)

Hence

q(x^2) = p(x)p(-x) = (-1)^n(x^2-x_1^2)(x^2-x_2^2)\dots(x^2-x_n^2)

The roots of q(x^2), viewed as a polynomial in the variable x^2, are x_1^2, x_2^2,...,x_n^2. We have squared the roots of our original polynomial p(x). Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating k times gives a polynomial of degree n in the variable y := x^{2^k}. Write this polynomial as

q_k(y) = y^n + a_1 y^{n-1} + \dots + a_n

with roots y_1=x_1^{2^k},\,y_2=x_2^{2^k},\,...,\,y_n=x_n^{2^k}.
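
For illustration, a minimal Python sketch of one such root-squaring step (the helper name graeffe_step, the use of NumPy, and the example polynomial are illustrative choices, not part of the method as described above):

import numpy as np

def graeffe_step(coeffs):
    # One Graeffe (root-squaring) step.  `coeffs` are the coefficients of a
    # monic polynomial p, highest degree first; the result is the monic
    # polynomial whose roots are the squares of the roots of p, read off
    # from the even part of p(x)*p(-x).
    coeffs = np.asarray(coeffs, dtype=float)
    n = len(coeffs) - 1
    signs = np.array([(-1.0) ** (n - i) for i in range(n + 1)])   # signs of p(-x)
    prod = np.convolve(coeffs, coeffs * signs)   # coefficients of p(x)*p(-x)
    q = prod[::2]                                # even powers only: polynomial in y = x^2
    return q / q[0]                              # normalise away the factor (-1)^n

# Example: p(x) = (x - 1)(x - 2)(x - 4) = x^3 - 7x^2 + 14x - 8
print(np.roots(graeffe_step([1.0, -7.0, 14.0, -8.0])))   # approximately 16, 4, 1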

Classical Graeffe's method

Next, the Vieta relations are used:

a_1 = -(y_1 + y_2 + \dots + y_n)
a_2 = y_1 y_2 + y_1 y_3 + \dots + y_{n-1} y_n
\vdots
a_n = (-1)^n (y_1 y_2 \dots y_n)

If the roots x_1,\dots,x_n are sufficiently separated, say by a factor c > 1, |x_m|\ge c|x_{m+1}|, then the iterated powers y_1, y_2,..., y_n of the roots are separated by the factor c^{2^k}, which quickly becomes very large.

The coefficients of the iterated polynomial can then be approximated by their dominant term,

a_1 \approx -y_1
a_2 \approx y_1 y_2

and so on, so that y_m \approx -a_m/a_{m-1}. Finally, logarithms are used to undo the repeated squaring and recover the magnitudes of the roots of the original polynomial, |x_m| = |y_m|^{2^{-k}} \approx |a_m/a_{m-1}|^{2^{-k}}.
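
A minimal sketch of this recovery in Python, reusing graeffe_step from the sketch above (the helper name, the fixed number of stages k, and the example are illustrative assumptions; only the magnitudes of well-separated roots are estimated):

import numpy as np

def graeffe_root_magnitudes(coeffs, k=6):
    # Estimate |x_1| >= ... >= |x_n| from k classical Graeffe stages.
    # After k squarings the iterate y^n + a_1 y^(n-1) + ... + a_n has roots
    # y_m = x_m^(2^k) and a_m/a_(m-1) ~ -y_m, so dividing the differences of
    # log|a_m| by 2^k and exponentiating gives the root magnitudes.
    q = np.asarray(coeffs, dtype=float) / coeffs[0]
    for _ in range(k):
        q = graeffe_step(q)            # root-squaring step sketched above
    log_a = np.log(np.abs(q))          # log|a_0|, ..., log|a_n|, with a_0 = 1
    return np.exp(np.diff(log_a) / 2 ** k)

# p(x) = (x - 1)(x - 2)(x - 4): the estimates come out close to 4, 2, 1
print(graeffe_root_magnitudes([1.0, -7.0, 14.0, -8.0]))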

Graeffe's method works best for polynomials with simple real roots, though it can be adapted for polynomials with complex roots and double roots. The method is problematic because the coefficients of the iterated polynomials very quickly span many orders of magnitude, which leads to serious numerical errors. A second concern is that many different polynomials lead to the same Graeffe iterates.


Tangential Graeffe method

If \varepsilon is an "algebraic infinitesimal" with \varepsilon^2=0, then the polynomial p(x+\varepsilon)=p(x)+\varepsilon\,p'(x) has roots x_m-\varepsilon, with powers

(x_m-\varepsilon)^{2^k}=x_m^{2^k}-\varepsilon\,{2^k}\,x_m^{2^k-1}=y_m+\varepsilon\,\dot y_m.

Thus the value of x_m is easily obtained as the fraction x_m=-\tfrac{2^k\,y_m}{\dot y_m}.

This kind of computation with infinitesimals is easy to implement, analogous to computation with complex numbers. If one allows complex coordinates, or applies an initial shift by some randomly chosen complex number, then all roots of the polynomial will be distinct and consequently recoverable with the iteration.
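
A possible realisation of this tangential iteration in Python, representing each coefficient as a pair of value and dual part (the function name, the fixed number of stages k, and the example polynomial are illustrative assumptions; simple roots with well-separated absolute values are assumed):

import numpy as np

def tangential_graeffe(coeffs, k=5):
    # Tangential Graeffe sketch with dual numbers a + eps*b, eps^2 = 0.
    # The iteration is applied to p(x + eps) = p(x) + eps*p'(x); after k
    # steps the ratio of consecutive dual coefficients approximates
    # -(y_m + eps*ydot_m), and each root is recovered as
    # x_m = -2^k * y_m / ydot_m.
    a = np.asarray(coeffs, dtype=float) / coeffs[0]   # value parts: p (monic)
    b = np.zeros_like(a)
    b[1:] = np.polyder(a)                             # dual parts: p'(x)
    n = len(a) - 1
    for _ in range(k):
        signs = np.array([(-1.0) ** (n - i) for i in range(n + 1)])
        val = np.convolve(a, a * signs)                              # value part of p(x)p(-x)
        der = np.convolve(a, b * signs) + np.convolve(b, a * signs)  # dual part, using eps^2 = 0
        a, b = val[::2] / val[0], der[::2] / val[0]   # even powers, re-normalised to monic
    y = -a[1:] / a[:-1]                               # y_m ~ -a_m / a_(m-1)
    ydot = -(b[1:] * a[:-1] - a[1:] * b[:-1]) / a[:-1] ** 2
    return -(2 ** k) * y / ydot

# p(x) = (x - 1)(x - 2)(x - 4): recovers roots close to 4, 2, 1
print(tangential_graeffe([1.0, -7.0, 14.0, -8.0]))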

Renormalization

Every polynomial can be scaled in domain and range such that in the result the first and the last coefficient have size one and all intermediate coefficients have size smaller than one. This implies that all roots are located in a ring between the radii 1/2 and 2. If the size of the inner coefficients is bounded by M, then the size of the inner coefficients after one stage of the Graeffe iteration is bounded by nM^2. After k stages one gets the bound n^{2^k-1}M^{2^k} for the inner coefficients.

To overcome the limit posed by the growth of the powers, Malajovich/Zubelli propose to represent coefficients and intermediate results in the kth stage of the algorithm by a scaled polar form

c=\alpha\,e^{-2^k\,r}

where \alpha=\frac{c}{|c|} is a complex number of unit length and r = -2^{-k}\log|c| is a positive real. Splitting off the factor 2^k in the exponent reduces the absolute value of c to the corresponding dyadic root. Since this results in preserving the magnitude of the (representation of the) initial coefficients, this process was named renormalization.

Multiplication of two numbers of this type is straightforward, whereas addition is performed following the factorization c_3=c_1+c_2=|c_1|\cdot\left(\alpha_1+\alpha_2\tfrac{|c_2|}{|c_1|}\right), where c_1 is chosen as the number of larger absolute value, that is, r_1 \le r_2. Thus

\alpha_3=\tfrac{s}{|s|} and r_3=r_1-2^{-k}\,\log|s| with s=\alpha_1+\alpha_2\,e^{2^k(r_1-r_2)}.
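
A small Python sketch of this renormalized arithmetic (the pair representation, the helper names, and the fixed stage k = 8 are illustrative assumptions):

import numpy as np

K = 8
SCALE = 2 ** K    # the factor 2^k of the current stage

def encode(c):
    # Represent a nonzero complex number c as (alpha, r) with c = alpha*exp(-2^k*r).
    return c / abs(c), -np.log(abs(c)) / SCALE

def decode(alpha, r):
    return alpha * np.exp(-SCALE * r)

def renorm_mul(x, y):
    # c_1 * c_2: multiply the unit parts, add the exponents r.
    return x[0] * y[0], x[1] + y[1]

def renorm_add(x, y):
    # c_1 + c_2 via the factorization |c_1|*(alpha_1 + alpha_2*|c_2|/|c_1|);
    # assumes the sum is nonzero.
    if x[1] > y[1]:                          # make |c_1| >= |c_2|, i.e. r_1 <= r_2
        x, y = y, x
    (a1, r1), (a2, r2) = x, y
    s = a1 + a2 * np.exp(SCALE * (r1 - r2))  # the exponent is <= 0, so no overflow
    return s / abs(s), r1 - np.log(abs(s)) / SCALE

# Quick check against ordinary complex arithmetic
u, v = 3e-4 + 2e-4j, 1e-5 - 4e-5j
print(decode(*renorm_add(encode(u), encode(v))), u + v)
print(decode(*renorm_mul(encode(u), encode(v))), u * v)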

The coefficients a_0,a_1,\dots,a_n of the final stage k of the Graeffe iteration, for some reasonably large value of k, are represented by pairs (\alpha_m,r_m), m=0,\dots,n. By identifying the corners of the convex envelope of the point set \{(m,r_m):\;m=0,\dots,n\}

one can determine the multiplicities of the roots of the polynomial. Combining this renormalization with the tangent iteration, one can extract the roots of the original polynomial directly from the coefficients at the corners of the envelope.
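
A possible way to locate those corners in Python (a standard lower-convex-hull scan; the function name is illustrative, and the values r_m are assumed to come from the renormalized coefficients described above):

def envelope_corners(r):
    # Corners of the lower convex envelope of the points (m, r_m), m = 0,...,n,
    # found by a monotone-chain scan.  The gap between consecutive corners is
    # the number of roots sharing one magnitude, and the slope of each
    # envelope segment estimates -log of that common magnitude.
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for p in enumerate(r):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return [m for m, _ in hull]

# Example: three roots of one magnitude and two of a smaller magnitude
# give corners at m = 0, 3, 5.
print(envelope_corners([0.0, 2.0, 4.0, 6.0, 11.0, 16.0]))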
