Talk:Durand-Kerner method

From Wikipedia, the free encyclopedia

WikiProject Mathematics
This article is within the scope of WikiProject Mathematics, which collaborates on articles related to mathematics.
Mathematics rating: Start-Class, Low-priority. Field: Applied mathematics

See Talk:Root-finding algorithm for some discussion relating to this method. -- Jitse Niesen (talk) 12:58, 10 January 2006 (UTC)

Jitse, thanks! Moving the method here was the right thing to do! Oleg Alexandrov (talk) 16:30, 10 January 2006 (UTC)
PS: The title is not perfectly correct. It should have been not Jacoby's method, but rather Bo Jacoby's method. :) Oleg Alexandrov (talk) 16:30, 10 January 2006 (UTC)

This seems to be the same method: The method of Weierstrass (a.k.a the Durand-Kerner method). Fredrik Johansson - talk - contribs 18:21, 25 January 2006 (UTC)

That's great Fredrik! Go ahead and include the new reference and edit the title, preserving the links. Bo Jacoby 10:07, 26 January 2006 (UTC)
I moved this to "Durand-Kerner method" because that name turns up the most Google hits by far. So does anyone know what each of Weierstrass, Durand and Kerner has to do with it? I've been unsuccessful in finding any historical information. Fredrik Johansson - talk - contribs 11:58, 26 January 2006 (UTC)
There is something at http://netlib3.cs.utk.edu/cgi-bin/mfs/02/89/v89n10.html . I've been planning to go to the library to chase references but I haven't found the time yet. -- Jitse Niesen (talk) 14:00, 26 January 2006 (UTC)

Your good work, Fredrik, and the authority of the name Weierstrass have finally saved this article from the risk of being deleted by Oleg. Thank you! Bo Jacoby 13:11, 26 January 2006 (UTC)

The references to Durand and Kerner ought to be sufficient, but I'm curious about the Weierstrass link. My calculus professor said he recognized "Durand-Kerner" when I asked today, but that seemed to be about it. There are some experts in numerical analysis on campus who probably know more; I'll try to find out. Mailing the author of the page I linked to above might also be a good idea. Fredrik Johansson - talk - contribs 16:21, 26 January 2006 (UTC)
In unrelated work, I came across this article in ACM ToMS, which seems to be freely available (at least, I could download it). It discusses the method and mentions the link to Weierstrass. -- Jitse Niesen (talk) 13:25, 31 January 2006 (UTC)

Thank you Jitse. The new reference talks about how well the method finds multiple roots. Well, I don't think that multiple roots should be found numerically (and approximately), because they can be isolated from the polynomial algebraically (and exactly). The point is this. If f has a multiple root, p, then p is also a root of the derivative f', which can be computed algebraically from f. Use the Euclidean algorithm to compute f2 := the greatest common divisor of f and f'. The roots of f2 are exactly the multiple roots of f. Now any triple root of f will be a double root of f2, so f3 is calculated as the greatest common divisor of f2 and f2', and so on until the degree is 1. Let fn have only simple roots. Substitute fm−k := fm−k / fm^k, for k = 1..m−1, for m = n..2. All these polynomial divisions give zero remainder. Now all the fi have only simple roots, and f = f1·f2^2·…·fn^n. The polynomials fi are now subject to numerical root-finding. Bo Jacoby 15:15, 31 January 2006 (UTC)
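Bo's gcd chain can be sketched in exact rational arithmetic, so there is no round-off in the gcd computation. The helper names and the example polynomial below are illustrative, not from any particular library; coefficients are listed lowest degree first.

```python
from fractions import Fraction
from functools import reduce

def trim(p):
    # drop trailing zero coefficients
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def pdiv(a, b):
    # exact rational long division: returns (quotient, remainder)
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    r = trim(list(a))
    while len(r) >= len(b) and any(c != 0 for c in r):
        k = len(r) - len(b)
        c = r[-1] / b[-1]
        q[k] += c
        for i in range(len(b)):
            r[k + i] -= c * b[i]
        r = trim(r)
    return trim(q), r

def deriv(a):
    return trim([Fraction(i) * a[i] for i in range(1, len(a))] or [Fraction(0)])

def pgcd(a, b):
    # Euclid's algorithm; the result is normalized to be monic
    a, b = trim(list(a)), trim(list(b))
    while any(c != 0 for c in b):
        a, b = b, pdiv(a, b)[1]
    return [c / a[-1] for c in a]

# example: a triple root at 1, a double root at 2, a simple root at 3
factor = lambda r: [Fraction(-r), Fraction(1)]            # the factor (x - r)
f = reduce(mul, [factor(r) for r in (1, 1, 1, 2, 2, 3)])

chain = [f]                     # f, f2 = gcd(f, f'), f3 = gcd(f2, f2'), ...
while len(chain[-1]) > 1:
    g = chain[-1]
    chain.append(pgcd(g, deriv(g)))

# divide out the chain so each remaining piece is square-free
d = [pdiv(chain[i], chain[i + 1])[0] for i in range(len(chain) - 1)]
q = [pdiv(d[i], d[i + 1])[0] for i in range(len(d) - 1)] + [d[-1]]
# q[i] collects exactly the roots of f of multiplicity i+1, each now simple:
# q == [x-3, x-2, x-1]
```

Each q[i] then has only simple roots and can be handed to the numerical root-finder separately.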

This may be okay if you know in advance that the polynomial has multiple roots. If you do not, then you can theoretically use Euclid's algorithm to find the gcd, but I'm not sure this works in the presence of round-off errors (for essentially the same reason that you can't compare to zero in a numerical algorithm). Furthermore, understanding how the algorithm performs with multiple roots helps you understand how it performs when roots are clustered. -- Jitse Niesen (talk) 18:52, 1 February 2006 (UTC)
The polynomial gcd is done in integer arithmetic without round-off errors. If the polynomial has no double roots, then f2 has degree zero. A cluster of roots slows down convergence until it is infiltrated by p, q, r, s. The real problem is the precise evaluation of the polynomial inside a cluster. Polynomials with reasonably small integer coefficients do not have clusters of roots, as small fractions have big denominators. Bo Jacoby 08:56, 2 February 2006 (UTC)
True if your polynomial has small integer coefficients; these are the easiest to handle. -- Jitse Niesen (talk) 13:52, 2 February 2006 (UTC)


Initialization

The recent references have expensive initializations.

  1. Random numbers should be used with care, noting that one cannot reproduce a calculation that depends on random numbers.
  2. Computing points evenly distributed on the unit circle is as costly as computing the roots.

So stick to the simple suggestion of the article. But the references support the method. Bo Jacoby 13:17, 30 January 2006 (UTC)

The simple way seems practical enough, but the advantages and disadvantages of more complex initialization schemes should probably be discussed in the article. Fredrik Johansson - talk - contribs 13:31, 30 January 2006 (UTC)


Connection to Newton's method

I've now been informed that this is a quasi-Newton method, the product in the denominator being an approximation of the derivative. I've also been shown that vanilla Newton can be used for simultaneous root-finding in an identical manner, and received the question of in which way Durand-Kerner is better in this setting. Fredrik Johansson - talk - contribs 16:50, 1 February 2006 (UTC)

Great. Let's have an article on vanilla Newton. (d/dx)((x-p)(x-q)(x-r)(x-s)) = (x-q)(x-r)(x-s) + (x-p)(x-r)(x-s) + (x-p)(x-q)(x-s) + (x-p)(x-q)(x-r), and surely if x=p this reduces to (x-q)(x-r)(x-s). Bo Jacoby 08:56, 2 February 2006 (UTC)
To answer the question posed to Fredrik, quasi-Newton is generally used because it's faster. As the expression given by Bo shows, the exact derivative is given by a rather long formula, and the approximation (x-q)(x-r)(x-s) is easier to evaluate. On the downside, it might slow down convergence.
The ToMS article says that Durand-Kerner can also be seen as a Newton method (and not just quasi-Newton), but in a rather complicated way and I haven't studied the details. -- Jitse Niesen (talk) 13:52, 2 February 2006 (UTC)
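To put a rough number on the trade-off Jitse describes, here is a small sketch (the values of the approximations and of the evaluation point are illustrative): the exact derivative of (x-p)(x-q)(x-r)(x-s) against the single product (x-q)(x-r)(x-s), evaluated near p.

```python
from math import prod

zs = [1.0, 2.0, 3.0, 4.0]   # stand-ins for the current approximations p, q, r, s
x = 1.001                   # an evaluation point close to the first root

# exact derivative of (x-p)(x-q)(x-r)(x-s): sum of leave-one-out products
exact = sum(prod(x - z for j, z in enumerate(zs) if j != i)
            for i in range(len(zs)))
# the cheap Durand-Kerner stand-in near p: the single product (x-q)(x-r)(x-s)
approx = prod(x - z for z in zs[1:])
# the omitted terms each carry a factor (x - p) ~ 0, so the two nearly agree
```

Near p the relative difference is on the order of |x-p|, which is why the cheap denominator costs little accuracy exactly where it is used.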

substitutions parallel or sequential

The article is not clear whether the new roots should be updated sequentially or in parallel. I've tried both and haven't found any significant advantage either way, but I'm wondering whether someone can give reasonable justification for either. --njh 00:12, 14 June 2006 (UTC)

My observations confirm yours: no significant advantage either way. In the article I inserted semicolons after the substitutions, just to fix that the algorithm as stated was meant to be sequential, but it doesn't matter. If you use the APL programming language or the J programming language or another array programming language you would prefer to do it in parallel. If you use the C programming language or Fortran or the Pascal programming language or another scalar programming language you would prefer to do it sequentially. The article root-finding algorithm says that global convergence properties are still under investigation, so please share your experience with the curious world. Bo Jacoby 19:14, 14 June 2006 (UTC)
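For what it's worth, the two update orders being compared can be sketched side by side; the cubic and the iteration count below are illustrative, and the initial guesses are the powers of 0.4+0.9i from the article.

```python
def dk(f, n, sequential, iters=100):
    z = [(0.4 + 0.9j) ** k for k in range(n)]   # the article's initial guesses
    for _ in range(iters):
        # sequential: read the vector being updated; parallel: read a frozen copy
        w = z if sequential else list(z)
        for i in range(n):
            den = 1 + 0j
            for j in range(n):
                if j != i:
                    den *= w[i] - w[j]
            z[i] = w[i] - f(w[i]) / den
    return z

cubic = lambda x: ((x - 6) * x + 11) * x - 6    # (x-1)(x-2)(x-3)
par = sorted(dk(cubic, 3, sequential=False), key=lambda r: r.real)
seq = sorted(dk(cubic, 3, sequential=True), key=lambda r: r.real)
# both orders settle on 1, 2, 3; neither shows a clear advantage here
```

The sequential variant is the Gauss-Seidel-style version fixed by the semicolons in the article; the parallel one is what an array language would do naturally.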

computational cost

I was going to also ask, though it is perhaps off topic: is there any theoretical knowledge about the computational cost of complex polynomial factorization? For example, we can compute the roots of a linear, quadratic, cubic or quartic polynomial in a fixed sequence of root operations and basic arithmetic, so if we can perform all of these in roughly linear time we can find the roots of any such polynomial in linear time. Also note that the 'big O' factor of these increases in a non-linear way with increasing degree (the quartic is only a little more work than the cubic). So is there any known bound on the complexity of single-variable polynomial root finding? (Or alternatively, eigenvalue computation.) There were a few results on undecidability of simultaneous polynomials, and on Diophantine quadratics, but I don't know of any results for complex-domain, single-variable polynomials. I do know that all existing algorithms give no guarantees at all for convergence on higher-degree polynomials.

My personal experience with global convergence has been on relatively low-order polynomials (up to around degree 10) used in graphics applications. My program generates thousands of polynomials whilst interacting, and occasionally one of these stops and thinks for a bit. So I'm looking for algorithms that have predictable worst-case performance, perhaps being able to estimate the number of steps for a required root tolerance. --njh 08:46, 16 June 2006 (UTC)

The important question on worst-case behaviour of root-finding algorithms has a pessimistic answer: for any root-finding algorithm there exist polynomial equations that cannot be solved. Why is it so? Well, consider a simple case, the square root of e^(2·π·i·x) where x is a real number, 0 ≤ x ≤ 1. The equation is z² − e^(2·π·i·x) = 0. The roots are z1 = +e^(π·i·x) and z2 = −e^(π·i·x). A 'good' root-finding algorithm is a continuous function, f, (because it is a finite number of iterations of the initial guesses, and each iteration is a continuous function), with a value f(x) approximating (+e^(π·i·x), −e^(π·i·x)). When x moves from x = 0 to x = 1, the continuous approximation of (+e^(π·i·x), −e^(π·i·x)) moves from about (+1,−1) to about (−1,+1). But on the other hand, f(0) = f(1), because solving the same equation, z² − 1 = 0, must give exactly the same approximate solution. So no good root-finding algorithm exists! There will always exist some number x in the interval 0 ≤ x ≤ 1 for which f(x) approximates neither (+e^(π·i·x), −e^(π·i·x)) nor (−e^(π·i·x), +e^(π·i·x)). This is the sad truth. It does not mean, however, that no useful root-finding algorithm exists. The unsolvable equations are exceptional. The Durand-Kerner method solves almost all nth-degree equations (in the sense of Lebesgue), but it makes no sense to ask for worst-case behaviour.
Note that the root operations required in the classical formulas for equations of low degree (2, 3, 4, but not 5) are not easier than the general case. The above example shows that there is not even a good algorithm for finding the square root. Bo Jacoby 18:13, 31 August 2006 (UTC)
Ok, that makes sense, and confirms my vague feelings. Does this mean that even for IEEE 754 floating point there are some inputs to sqrt that are slow or inexact? --njh 18:40, 31 August 2006 (UTC)
Hi njh! You need to take the square root of a complex number. An algorithm that gives sqrt(1) = +1 is insufficient for complex numbers. You must also find the other square root: sqrt(1) = −1. So a good algorithm must return a vector containing both roots: sqrt(1) = (+1,−1). Another vector containing both roots is sqrt(1) = (−1,+1). The equation x² − 1 = 0 does not care, so the algorithm must take the decision. The two vectors represent the same multiset (namely {+1,−1}, with curly brackets), but they are not the same complex vector, and they don't even approximate one another. A root-finding algorithm will compute approximations to one of the two vectors, but not to the other vector. Now a continuous change of a variable a, starting at 1, following the unit circle around 0, and ending back at 1, will have the square root sqrt(a) moving continuously from the vector (+1,−1) to (−1,+1). Do you see the disaster? If the root-finding algorithm returns sqrt(1) = (+1,−1), then it cannot also return sqrt(1) = (−1,+1). So a unique square root is not continuous. It must discontinuously switch the components of the vector. Such a discontinuity cannot be constructed by means of a finite number of continuous iterations. The composition of continuous functions is continuous, you know.
Another problem is that the precision of floating point arithmetic may be insufficient for the precise evaluation of polynomials of moderately high degree. See Wilkinson's polynomial. Bo Jacoby 19:36, 31 August 2006 (UTC)
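A tiny numeric illustration of the jump being described, using Python's principal-branch complex square root (the sample points 0.499 and 0.501 are arbitrary, chosen to sit on either side of the branch cut):

```python
import cmath

# sqrt(e^(2*pi*i*x)) as x crosses 1/2: the principal branch jumps
# where e^(2*pi*i*x) crosses the negative real axis
left = cmath.sqrt(cmath.exp(2j * cmath.pi * 0.499))   # close to +i
right = cmath.sqrt(cmath.exp(2j * cmath.pi * 0.501))  # close to -i
gap = abs(left - right)   # close to 2, although the inputs nearly coincide
```

The inputs differ by a tiny arc on the unit circle, yet the outputs are nearly opposite: the single-valued square root has to be discontinuous somewhere.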

I realised you meant complex numbers after I had posted and gone to bed :). My reading of your followup is that you are mainly concerned that the csqrt function has discontinuities? Any program that relies on csqrt giving predictable results is thus in for a surprise.

Ok, so continuity of the solution function is impossible. A program can handle discontinuity quite happily. So we accept that and simply require finding all the roots in some order (say lexicographical order). Does that problem then go away? Accuracy is certainly a problem still, but we could give error estimates at the same time as evaluating them. I guess I'm now looking for a practical solution - I'd be happy with an algorithm that could find me the most likely solution with a pessimistic estimate of the condition number, error radius, or similar.--njh 00:08, 1 September 2006 (UTC)

The problem does not go away. A finite number of rational iterations produces a continuous, single-valued function, while the square root is either discontinuous or multivalued. A continuous function cannot approximate a discontinuous function close to the discontinuity. The program cannot detect the discontinuity. It either stops without having found the roots, or goes on forever. (Try for example Newton's method for the equation x² + 1 = 0, starting with x = 1. You will never find the roots +i and −i.) For any algorithm and for any initial guess there exist unsolvable equations. There is no bound on the complexity of single-variable polynomial root finding. However, unsolvable equations are extremely rare. You may construct them from the algorithm and initial guesses, but you do not find them by chance. Bo Jacoby 12:07, 4 September 2006 (UTC)
Hi, you had better change the image in your explanation. Consider the set of possible initial tuples instead of parametrized polynomials. Each iteration step is a continuous map of one tuple to the next iterate. The combination of a sufficiently high number of iterations is still a continuous map. But by the hypothesis of convergence it has only values close to the n! permutations of the solution tuple. And initial tuples close to a permutation surely converge to this permutation (conditions for convergence were given at least by Weierstrass, Presic and Carstensen). So all permutations are approximated, the iterated map is nearly everywhere constant, hence the conclusion of "bad" initial tuples. The image for this situation should be a multidimensional analogue of the Newton fractal.--LutzL 15:02, 4 September 2006 (UTC)
Yes, you state that "For any algorithm and for any equation there exist useless initial guesses". I state that "For any algorithm and for any initial guess there exist unsolvable equations". The realistic situation is that someone else delivers a computer program including both algorithm and initial guesses, and the user provides the equations to be solved. Bo Jacoby 12:41, 11 September 2006 (UTC)
As to the initial question: Factorizing a polynomial of degree n in precision s in linear factors for simple isolated zeros and higher degree factors for multiple zeros or clusters (wrt. the desired precision) costs O(n log(n+s)loglog(n)) bit operations, as one can learn in papers by Victor Pan. This algorithm is, as far as I know, unimplemented. There is an implementation of Schönhage's original splitting circle method in pari-gp, this method has O(n² log(n+s)loglog(n)).--LutzL 15:02, 4 September 2006 (UTC)
The strict upper limit for the computational cost of factoring polynomials is doubtful considering the continuity argument above. Test Schönhage's program by solving x² = e^(2·π·i·a). Use bisection to find a value of a, 0 < a < 1, for which the algorithm breaks down. Bo Jacoby 12:41, 11 September 2006 (UTC)
I think that this points to an important distinction between sequential and factoring algorithms that could reintroduce what has been deleted in root-finding algorithm: in sequential algorithms, even in a border case such as Durand-Kerner, the approximation in each step depends continuously on the problem, including the initial approximations. So you get your breakdown cases, at least theoretically. Side remark: I doubt very much that they will occur in practice. One will only get approximate breakdown behaviour, which is a very long chaotic part of the iteration. One caveat: in the pure Newton's method there are cases with stable oscillatory behaviour. I don't know of any result that this could happen with the W(D-K) method. Circle-splitting methods are problem-adapted and non-continuous. The choice of the splitting circle is one of a finite number (a finite number of center points to choose from, a finite number of numbers of points inside the circle), and thus it has jumps by design. Side remark: an analogy is that in computing the square root of a complex number z = x + iy, one chooses the first root depending on the sign of the imaginary part, √((|z|+x)/2) + i·sign⁺(y)·√((|z|−x)/2) with sign⁺(0) := 1, and thus gets a non-continuous function (as it has to be). W(D-K) is a border case of the splitting circle method because in the case of well-isolated zeros the last step deals just with degree-one factors, which again results in the Weierstrass iteration as a special realization of Newton's method for the refinement of a numerical factorization of a polynomial.
--LutzL 13:29, 11 September 2006 (UTC)
I don't think that the theoretical difficulty can be avoided at all. A simple case is the equation x² + 1 = 0, using real initial guesses. The roots are non-real but the iterated approximations are all real. So convergence is impossible. This situation occurs often in practice. It applies to the Durand-Kerner method as well as to Newton's method. Choosing non-real initial guesses makes it disappear from practice, but not from theory. (Just choose an equation of degree 2, each root having the same distance to the two initial guesses.) Bo Jacoby 15:53, 23 September 2006 (UTC)
The splitting circle method guarantees a correct result in the given time bounds. It just allows the result to contain clusters of zeros. It does not use initial guesses. It chooses the best splitting circle from a collection of four midpoints. These are redundant and are shown to contain at least one working splitting circle. In your example it corresponds to checking 4 sets of starting values. Schönhage's tech report from 1982 is finally online; please make the effort and read it.--LutzL 12:04, 26 September 2006 (UTC)
What does Schönhage's splitting circle algorithm say about the solution to x² + 1 = 0? Is it +i or −i or (+i,−i) or (−i,+i)? How can the algorithm choose between roots without using some kind of initial guesses? Bo Jacoby 14:03, 28 September 2006 (UTC)
I still don't understand the problem - why can't it just answer (-i, i), the only guarantees being a) that all the roots are found, b) that the roots are in lexicographical order. It seems to me that the splitting circle algorithm says 'all the roots are inside this disk', then splits that into smaller regions, counting the number of roots in each. Either the roots are separable at the given precision (hence no problem), or they are not, in which case they are considered the same. Does your argument still apply to finding roots on a segment of the real line (say [0,1])? --njh 23:03, 28 September 2006 (UTC)
The continuity argument applies. Consider the equation x² = a where a is a point on the unit circle, a = e^(2·π·i·v), 0 ≤ v ≤ 1. The two solutions are x = e^(2·π·i·w) where w = v/2 or w = (v+1)/2. (Draw the two lines w = v/2 and w = (v+1)/2 for 0 ≤ v ≤ 1 in the v-w-plane). Consider an algorithm that computes one of the roots. Which one is computed? For v = 0 it computes the same value as for v = 1, because the two values of v lead to the same equation x² = 1. Now a continuous change 0 ≤ v ≤ 1 should lead to a continuous change 0 ≤ w ≤ 1/2 or 1/2 ≤ w ≤ 1, but there is no continuous change 0 ≤ w ≤ 1 when w = v/2 or w = (v+1)/2. So there is a point of discontinuity. At this point the algorithm cannot produce an approximation to a correct root. Perhaps it returns an incorrect result, or perhaps it never returns. Am I clear? Bo Jacoby 13:05, 10 October 2006 (UTC)
This argument only applies if the algorithm is continuous in its input (polynomial and additional structure). The splitting circle method is not continuous, as I said several times above. So you just get a jump in the order of the roots if you vary a. The same would apply to the W(D-K) method if one started from a fixed set of different initial configurations, computed the same number of steps for each one, and then chose the one that best satisfies the conditions for sure convergence. To determine the number and kind of initial configurations and the number of steps would be a task for the future. The choice of the best configuration introduces a discontinuity that makes it possible to avoid the situation that you describe.--LutzL 14:36, 10 October 2006 (UTC)
The discontinuity of the splitting circle method does not save us when a root is on the boundary of a circle. Then one cannot decide whether it is inside or outside the circle. As one cannot cover the complex plane with disjoint open sets, either the sets are not open or else they are not disjoint. Of course the situation that the roots are on the boundaries of the circles is improbable. Nevertheless it is possible, and so no strict limit on the computational cost exists. Bo Jacoby 13:51, 12 October 2006 (UTC)
Please stop talking nonsense and read a little about this topic. Since you are interested in this area, this is not a too heavy demand. Roots on or close to splitting circle boundaries are excluded, this is a basic property of a splitting circle.--LutzL 15:38, 12 October 2006 (UTC)
OK. Please provide a link to the proof that you suggest me to read. Also please explore the behaviour of your favorite algorithm in the vicinity of a point of discontinuity. You seem to believe that the computing cost is bounded, and I believe that it is not. It should be possible to explain either point of view. Bo Jacoby 19:45, 18 October 2006 (UTC)
Sorry, I thought that wiki-editors would be good at assembling information parts to get an answer. Please go to Splitting circle method, down to the literature, and follow the links to Schönhage 1982 and Pan 2002. If you find it in your university's library, or have web access to it, then Pan 1997 also makes a good read. As a bounded bit-cost algorithm there is also the bisection-exclusion algorithm, which is explained in Pan 1996 and had a thorough examination by Yakoubsohn in 2005
--LutzL 07:41, 19 October 2006 (UTC)

Relation to Newton's method

I do not understand the new subsections relating the Durand-Kerner method to other methods. Is the D-K method a special case of Newton's method, or an approximation to N's method? LutzL's work is hard to understand. Please explain. Bo Jacoby 18:42, 31 August 2006 (UTC)

It's a special case of the n-dimensional Newton's method, where n is the degree. Not, as I interpret your question, of the one-dimensional Newton iteration that is used to find one root at a time. I'll try to expand it. As I understand the history, the derivation as Newton's method was also the way Durand and Kerner took. I'm not sure about Weierstrass, in his article he just gives the iteration formula without derivation.--LutzL 07:30, 1 September 2006 (UTC)

I fail to understand. The problem solved by the Durand-Kerner method is finding n solutions to one polynomial equation having one variable and degree n, while the problem solved by the n-dimensional Newton's method is finding one solution to n polynomial equations having n variables and any degrees. Bo Jacoby 11:16, 4 September 2006 (UTC)

Hi, did you read the modified text in the article? There it is now explained that one solves a system of n identities between coefficients of monic or normalized degree-n polynomials. One is the given polynomial, the other is the product of n linear factors. After several transformations the DK method results.--LutzL 14:48, 4 September 2006 (UTC)

Thanks. Now I read the new text and I understand. I was not aware of this interpretation of the D-K-method, and it is interesting.

The subscript is used in two ways in the formula

g_{\vec z}(X)=X^n+g_{n-1}(\vec z)X^{n-1}+\dots+g_0(\vec z).

Perhaps it is nicer to write

g({\vec z},X)=X^n+g_{n-1}(\vec z)X^{n-1}+\dots+g_0(\vec z)

restricting subscripts to mean indexing by integers.

The alphas and c's in the following formula are neither defined nor used.

\begin{matrix}
c_0&=&g_0(\vec z)&=&(-1)^n\alpha_n(\vec z)&=&(-1)^nz_1\ldots z_n\\
c_1&=&g_1(\vec z)&=&(-1)^{n-1}\alpha_{n-1}(\vec z)\\
&\vdots&\\
c_{n-1}&=&g_{n-1}(\vec z)&=&-\alpha_1(\vec z)&=&-(z_1+z_2+\ldots+z_n)
\end{matrix}
.

The message seems simply to be this.

\begin{matrix}
g_0(\vec z)&=&(-1)^nz_1\ldots z_n\\
&\vdots&\\
g_{n-1}(\vec z)&=&-(z_1+z_2+\ldots+z_n)
\end{matrix}
.

Bo Jacoby 19:32, 7 September 2006 (UTC)

The alphas come from the notation in elementary symmetric polynomial; I'm more familiar with sigmas in this place. The c's are the coefficients of the polynomial f, defined directly above. The notation of the g was chosen in this way to have an easy way to indicate the derivative with respect to X. In your proposal this would have to look like \textstyle g(\vec{z})'(X) or \textstyle \frac{\partial}{\partial X}g(\vec z,X).
One tries to make the \textstyle g_k(z) as close to the \textstyle c_k as possible. Weierstrass showed that if the discriminant of f does not vanish, then this is always possible via a homotopy method. The more recent results show that if the initial Weierstrass updates are small enough, \textstyle \max_{1\le k\le n} |w_k|\le\frac1{5n}\min_{1\le j<k\le n}|z_k-z_j|, then at least linear convergence follows (Presic 1980, Carstensen ca. 1985), by Newton the convergence will be quadratic from some point on.--LutzL 07:15, 8 September 2006 (UTC)

There is an unnecessary change of notation beginning with this subsection. The general and exact notation reads like a computer program, which is difficult to follow. The dot notation, ···, for et cetera leaves the reader to guess. A readable special case giving a sufficient clue to the general case is preferable. So the formula should be

(x − P)(x − Q)(x − R)(x − S) = x⁴ + ax³ + bx² + cx + d,

for all x, implying

P + Q + R + S = −a
PQ + PR + PS + QR + QS + RS = +b
PQR + PQS + PRS + QRS = −c
PQRS = +d

These equations in the unknowns P,Q,R,S are to be solved by Newton's method.

The results regarding speed of convergence depend on assumptions, (that the initial Weierstrass updates are small enough). As the assumptions are hard to verify in practice, the results are not really useful. Bo Jacoby 12:08, 11 September 2006 (UTC)
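The degree-4 system above can be tried at degree 2, where the Jacobian is small enough to write by hand. The quadratic x² − 3x + 2 and the starting pair below are illustrative; the iteration is plain n-dimensional Newton on the two coefficient equations.

```python
# Newton's method on the Vieta system for x^2 - 3x + 2 = (x-1)(x-2):
#   P + Q = 3   and   P*Q = 2
P, Q = 0.4 + 0.9j, (0.4 + 0.9j) ** 2     # two distinct complex starting values
for _ in range(40):
    g1, g2 = P + Q - 3, P * Q - 2        # residuals of the two equations
    det = P - Q                          # determinant of the Jacobian [[1, 1], [Q, P]]
    dP = (g1 * P - g2) / det             # Cramer's rule for J * (dP, dQ) = (g1, g2)
    dQ = (g2 - g1 * Q) / det
    P, Q = P - dP, Q - dQ
# the pair {P, Q} converges to the root pair {1, 2}
```

Note that the linear equation P + Q = 3 is satisfied exactly after the first Newton step, after which the iteration reduces to a one-variable Newton iteration for the remaining unknown.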


Yes, one could copy the short formula into "your" section to motivate what is going on in the next section. - The use of three dots is a very old mathematical tradition, so anyone should be familiar with it. However, it is not in its most usual and "intuitive" form yet. - The assumptions given are very easy to verify. It is only difficult to find initial approximations that satisfy them. But to steer a homotopy method à la Weierstrass they are very practical. And they are the best recently known results regarding local convergence, which is reason enough to mention them.--LutzL 12:43, 11 September 2006 (UTC)

Roots with multiplicity

A small point. As a casual consumer of math wiki pages, I think it would be nice if you could mention that the algorithm as given doesn't work if the polynomial has duplicate roots. It doesn't seem to be mentioned anywhere on the page. Thanks for the exceptional quality of these and similar pages. Robin Davies --74.104.47.37 04:19, 2 December 2006 (UTC)

The algorithm even works in the presence of multiple roots or clusters of nearby roots, provided the approximations at the beginning are all different. It's only that the convergence to the multiple roots is linear instead of quadratic, and the last steps become even more numerically unstable, because one has to divide a small quantity by a small quantity with a result that is a small quantity.--LutzL 10:15, 4 December 2006 (UTC)
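This behaviour, still converging at a multiple root but only linearly and less accurately, can be checked with a small sketch; the cubic, starting values and iteration count are illustrative.

```python
def dk_step(z, f):
    # one parallel Weierstrass/Durand-Kerner step on the whole vector
    out = []
    for i, zi in enumerate(z):
        den = 1 + 0j
        for j, zj in enumerate(z):
            if j != i:
                den *= zi - zj
        out.append(zi - f(zi) / den)
    return out

f = lambda x: (x - 1) ** 2 * (x - 2)          # double root at 1, simple root at 2
z = [(0.4 + 0.9j) ** k for k in range(1, 4)]  # three distinct starting values
for _ in range(40):
    z = dk_step(z, f)
z.sort(key=lambda r: abs(r - 1))
# z[0], z[1] straddle the double root 1, closing in only linearly;
# z[2] is the simple root 2, found to roughly machine precision
```

The two approximations of the double root approach each other as they converge, which is exactly the small-over-small division mentioned above.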

An Ada implementation

After helpful discussions with Bo Jacoby, I have implemented this algorithm as a generic library package in Ada. My results are here. I'd be grateful for any comments. Thanks! John Matthews JMatthews 18:54, 7 November 2007 (UTC)

Hi, on my machine (Pentium(R) 4 CPU 3.00GHz, gnat-4.2) it only starts breaking at degree 49.--LutzL 15:57, 9 November 2007 (UTC)

Lutz: Thanks for this result. My initial tests were on PowerPC, but my results on Athlon are similar to yours. I'll update my page. John Matthews JMatthews 05:42, 12 November 2007 (UTC)

LutzL's edit

Hi LutzL. Your addition reads: "One could also choose the initial values for p,q,r,s randomly in a way that they are inside some circle containing also the roots of f(X), e.g. the circle around the origin with radius 1 + max(|a|, |b|, |c|, |d|) (where 1,a,b,c,d are the coefficients of f(X)), but are not too close to each other, which becomes increasingly a concern as the degree of the polynomial increases". What is the point? Computing random numbers takes more time and space than computing the powers of (0.4+0.9*i). Do random numbers have any advantage in general? Random numbers outside the circle can also be used. The high powers of (0.4+0.9*i) are close to 0, because |0.4+0.9*i| < 1, but not equal to 0, nor equal to one another, so they are perfectly suitable as initial guesses. Choosing powers of some complex constant k, where |k| > 1, gives the risk of floating-point overflow. If you know anything about the roots a priori you can probably make better initial guesses, but the reason why you are using a root-finding algorithm is of course that you do not know the roots. So let's not confuse the readers. Bo Jacoby 00:35, 11 November 2007 (UTC).
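For concreteness, the radius mentioned in the quoted addition (a Cauchy-type root bound) can be computed next to the article's fixed guesses; the coefficients below are a worked example, not from the article.

```python
# all roots of the monic x^4 + a x^3 + b x^2 + c x + d lie in |z| < 1 + max |coeff|
a, b, c, d = -10, 35, -50, 24                 # f(x) = (x-1)(x-2)(x-3)(x-4)
R = 1 + max(abs(t) for t in (a, b, c, d))     # R = 51: valid but very generous
roots_inside = all(abs(r) < R for r in (1, 2, 3, 4))

# the article's fixed initial guesses: distinct, reproducible, cheap to compute
guesses = [(0.4 + 0.9j) ** k for k in range(4)]
```

The bound is cheap to compute but typically far larger than the actual root moduli, which is part of why the fixed powers of 0.4+0.9i are attractive in comparison.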