Talk:Horner scheme
A notation for the Horner scheme
I posted the following in Talk:Ruffini's rule, but got no response. I'm reposting the content here, since it may be more relevant here:
My professor showed me this algorithm for evaluating polynomials (more) quickly, and it looks very much like synthetic division:
Suppose f(x) = 3x^6 + 5x^3 + 2x^2 + 1 (I just made it up), and I want to know what f(2) is. I set up the problem thusly,
2 |    3    0    0    5    2    0    1
  |         6   12   24   58  120  240
  |------------------------------------
       3    6   12   29   60  120  241
The 2 at the very left is the parameter, and the rest of the first row holds the coefficients of the polynomial. The first coefficient drops down directly. Then I multiply it by the parameter to get 6, which goes in the 2nd row, 2nd column. I add that column (0 + 6) and drop down the sum. Multiplying it by the parameter again yields 12, which goes in the 2nd row, 3rd column, and so on. Repeat until the last column; the last sum is f(2).
This algorithm needed 6 multiplies and 6 adds. Computing f(2) = 3(2)^6 + 5(2)^3 + 2(2)^2 + 1 directly, building each power by repeated multiplication, would need 11 multiplies and 3 adds.
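A minimal C sketch of the same left-to-right pass, in case it helps; the array layout (highest power first) and the hard-coded length are just for illustration:

#include <stdio.h>

int main(void) {
    double coeff[] = {3, 0, 0, 5, 2, 0, 1};  /* coefficients of 3x^6 + 5x^3 + 2x^2 + 1, highest power first */
    double x = 2.0;                          /* the "parameter" */
    double b = coeff[0];                     /* the first coefficient drops down */
    for (int i = 1; i < 7; ++i)
        b = b * x + coeff[i];                /* multiply by the parameter, add the next column */
    printf("f(2) = %g\n", b);                /* prints 241 */
    return 0;
}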
Is this a variant of synthetic division, and does it thus belong in the article? If not, where else would be more appropriate? IMHO, it's pretty neat and deserves mention somewhere.
madoka 00:31, Oct 12, 2004 (UTC)
This is precisely the Horner scheme. The long-division-like notation, as far as I know, is your prof's idea.
Loisel 13:42, 14 Oct 2004 (UTC)
- Actually, I was taught this table layout by a classical mathematician of the first order. He mentioned that people use this method in Europe. I assume from his background that he meant Eastern Europe. Tparameter 18:29, 9 November 2006 (UTC)
It would be nice to see synthetic division mentioned somewhere in this article; not everyone knows it as Ruffini's rule. Or maybe an example in the typical notation used with synthetic division. Otherwise, great article! :-)
Numerical efficiency
Horner's scheme is more efficient than naive evaluation since it requires fewer operations. I have always read that it is also more convenient from the floating-point arithmetic point of view, but I don't see why. Can someone provide an easy example of a polynomial and a number where Horner's scheme performs better (in the sense of the relative error) than naive evaluation? Just a little example where I can follow the calculations by hand.
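One way to hunt for such an example is to compare the two evaluations in single precision against a double-precision reference. Here is a rough C sketch; the choice of (x − 1)^7 expanded, evaluated near x = 1, is only a guess at where a difference might show up:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* (x - 1)^7 expanded; coefficients from x^7 down to x^0 */
    float c[] = {1, -7, 21, -35, 35, -21, 7, -1};
    float x = 1.001f;

    float horner = c[0];
    for (int i = 1; i < 8; ++i)
        horner = horner * x + c[i];              /* Horner scheme in single precision */

    float naive = 0.0f;
    for (int i = 0; i < 8; ++i)
        naive += c[i] * powf(x, (float)(7 - i)); /* term-by-term in single precision */

    double ref = pow((double)x - 1.0, 7.0);      /* higher-precision reference */
    printf("horner = %g, naive = %g, reference = %g\n", horner, naive, ref);
    return 0;
}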
Old stuff
I think the first description is gibberish. I'll delete it unless someone protests. Loisel 05:17, 6 May 2004 (UTC)
- Done. Loisel 04:52, 11 May 2004 (UTC)
error in formula
The formula
must be written
to make the number of ")" match the number of "(". Bo Jacoby 10:45, 7 March 2006 (UTC)
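For reference, the standard nested (Horner) form, written so that every "(" has a matching ")", is:

\[
a_0 + a_1 x + \cdots + a_n x^n \;=\; a_0 + x\Bigl(a_1 + x\bigl(a_2 + \cdots + x(a_{n-1} + a_n x)\cdots\bigr)\Bigr)
\]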
More numerical efficiency
It occurs to me, reading this article, that the claim about numerical efficiency is misleading. Naive evaluation of a polynomial, say p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0, does not necessarily require computing each power of x from scratch, with the roughly n^2/2 multiplications that entails, as claimed in the article, unless one is indeed very naive. A better way to do it, and the way people actually do it, is to compute the powers successively and save the results. This requires only n − 1 multiplications, since one is in effect computing only x^n. One then multiplies each power by its coefficient, another n multiplications, and adds the results: n additions. It follows that the naive algorithm requires only 2n − 1 multiplications and n additions, so Horner's method is, in terms of time complexity, not even a big-O improvement. It merely improves the "coefficient".
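A C sketch of the evaluation just described (the names and the test polynomial are only illustrative); it performs exactly 2n − 1 multiplications and n additions for a polynomial of degree n ≥ 1:

#include <stdio.h>

/* a[i] is the coefficient of x^i; assumes n >= 1 */
static double eval_by_powers(const double a[], int n, double x) {
    double poly = a[0] + a[1] * x;   /* one multiplication, one addition */
    double xpow = x;
    for (int i = 2; i <= n; ++i) {
        xpow *= x;                   /* n - 1 multiplications in all: x^i from x^(i-1) */
        poly += a[i] * xpow;         /* n - 1 multiplications and n - 1 additions in all */
    }
    return poly;                     /* total: 2n - 1 multiplications, n additions */
}

int main(void) {
    double a[] = {1, 0, 2, 5, 0, 0, 3};           /* 3x^6 + 5x^3 + 2x^2 + 1, lowest power first */
    printf("%g\n", eval_by_powers(a, 6, 2.0));    /* prints 241 */
    return 0;
}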
That's time complexity; what about space complexity? Horner's method requires only storing one number at a time, namely b_i when computing b_{i+1}, after which it is no longer needed. These numbers are all of magnitude about a_n x^n, or in terms of bits, about log a_n + n log x. So one requires O(n) storage space to execute Horner's algorithm. The naive algorithm again naively requires storing all the powers before multiplying by the coefficients and then adding, which would appear to be a space complexity requirement of O(n^2) bits (that's (1 + 2 + ... + n) log x), but that's just as silly as computing each power of x separately. One should instead perform "nested summation", computing successively the partial sums s_k = s_{k−1} + a_k x^k (updating the power x^k = x · x^{k−1} along the way), which requires only remembering the previous power of x and the previous iteration's partially computed sum: additional space complexity, again O(n), or more specifically, about 2n log x. So Horner is again just an improvement by a factor of 2.
One objection I foresee is that someone will claim that this is "changing" the naive computation by optimizing it. That's a reasonable claim for the time complexity computation, since a priori I do need all the powers of x and I'm just choosing to compute them cleverly. My point, though, is that if you wanted to do it on paper you would compute those powers by the method I describe; it's only when writing in some kind of programming language that you'd fall into the (easy) trap of writing something stupid like the following (a code snippet from a likely implementation in C):
double poly = 0.0;
for (int i = 0; i <= n; ++i) {
    poly += a[i] * pow(x, i);   /* pow() is declared in <math.h> */
}
which requires recomputing each one. And that's an issue with the representation of the algorithm rather than the algorithm itself; I maintain that the naive algorithm as it is conceived by anyone who sits down to do it is as I have described. Still, the point stands: the naive algorithm as written does call, apparently, for computing each power separately. It is less clear that the same objection applies to the computation of space complexity: consider that by definition of a sum, we have

∑_{i=0}^{n} a_i x^i = (∑_{i=0}^{n-1} a_i x^i) + a_n x^n,

where the big sigma is defined in general by

∑_{i=0}^{k} t_i = (∑_{i=0}^{k-1} t_i) + t_k, with ∑_{i=0}^{0} t_i = t_0.
In other words, the definition of repeated addition is as "nested addition"; this is not nearly in the same league of optimization as Horner's "nested multiplication", which requires actually factoring out distributed products in the polynomial. The former is just the associative law, and in fact it is not even an optimization, because it is required by the definition of addition as a binary operation. The biggest choice made here is to write the sum in order of increasing powers of x, which is already done in the statement of the problem.
This is not to diminish the value of this method. It's just to say that it isn't as much better as it would seem, unless the person doing the naive computation is really a total naif. I would correct the article, and I'm certain that I'm not the only person to have thought of this, so it's probably not original research; but this is totally not my field and I don't know of a reference that would have it. So I put it here instead, in case one of you knows. Ryan Reich 21:11, 13 July 2007 (UTC)
- Hell, as soon as I write this I check back at the section in the article and find that something else makes reference to Knuth Vol. 2. So I check Knuth (I always do intend to read it...) and find that, even before the very first algorithm in section 4.6.4, he makes precisely the observation I just made. He doesn't discuss the space complexity at all, though, except for dismissing it with a comment about accessing results in memory that is really a comment on the time complexity, so I still don't have a reference for that. I'm putting it in anyway and hoping someone else can fix this. So references are still welcome. Ryan Reich 21:15, 13 July 2007 (UTC)