Talk:Nth root algorithm
I've been playing around with finding (integer) nth roots for large n. Unfortunately, the following implementation of Newton's method (in Python) is ridiculously slow:
    def nthroot(y, n):
        x, xp = 1, -1
        while abs(x - xp) > 1:
            xp, x = x, x - x/n + y/(n * x**(n-1))
        while x**n > y:
            x -= 1
        return x
For example, nthroot(12345678901234567890123456789012345**100,100) takes several minutes to finish.
Using binary search, the solution can be found in seconds.
Am I just doing something wrong, or is Newton's method inherently inefficient for integers and such large n? - Fredrik | talk 23:22, 24 May 2005 (UTC)
- Brief answer (let me know if you need more information): Are you sure that the standard datatype of Python can handle 125-digit integers? I think you need a special library for this. Besides, it may be necessary to start the iteration with a more sensible initial guess than x = 1 (generally, Newton's method may not converge unless the initial guess is close to the answer, but extracting N-th roots may be a special case). -- Jitse Niesen 19:39, 6 Jun 2005 (UTC)
- I thought a bit more about it now. Basically, you are right: Newton's method is inefficient for large n. There are two reasons for this. Firstly, the computation of x**(n-1) in exact arithmetic is expensive if x is a large integer, so every iteration takes a long time. Secondly, the method needs a lot of iterations. For instance, nthroot(1234567890123456789012345**10, 10) requires 4725 iterations (note that I replaced 100 in your example with 10), while a binary search requires about 100 iterations. However, the method improves if you give it a good initial guess. After replacing the first line of the nthroot function with x, xp = int(y**(1.0/n)), -1, the above example gets the correct answer in only 3 iterations.
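A minimal sketch of nthroot with that starting value, purely for illustration (the name nthroot_v2 is mine, and integer floor division // stands in for the Python 2-style / of the original snippet; as noted further down, the float-based guess overflows for very large y):

    def nthroot_v2(y, n):
        # Same Newton iteration as above, but started from a float
        # estimate of the root instead of from 1.
        x, xp = int(y ** (1.0 / n)), -1
        while abs(x - xp) > 1:
            xp, x = x, x - x // n + y // (n * x ** (n - 1))
        while x ** n > y:
            x -= 1
        return x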
- The current article is rather bad: it only presents the algorithm, which is trivial if you know Newton's method. There is no analysis, no discussion of how to get the initial guess, and no references. I am not convinced that the algorithm is used that often; I thought the common approach is to use exponentials and logarithms. Bignum arithmetic is a separate issue. I hope you can find the time to improve the article. -- Jitse Niesen 23:37, 7 Jun 2005 (UTC)
- The catch with that solution is that y**(1.0/n) overflows for large y. Fortunately this can be avoided by using B**int(log(y, B)/n) with some integer B instead. However, this still leaves Newton (as implemented above) significantly slower than bsearch for very large n (a test with n=300 is nearly instantaneous with bsearch, but takes several seconds with Newton). I'll do some more detailed benchmarking when I get time, hopefully soon. For reference, I'll provide my bsearch implementation here: - Fredrik | talk 00:24, 8 Jun 2005 (UTC)
    # Returns a tuple of the (floor, ceil) values of the exact solution
    def nthroot_bsearch(x, n):
        guess = 1   # Initial guess must be less than the result
        step = 1
        while 1:
            w = (guess+step)**n
            if w == x:
                return (guess+step,) * 2
            elif w < x:
                step <<= 1
            elif step == 1:
                return guess, guess+1
            else:
                guess += step >> 1
                step = 1
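Regarding the overflow-safe starting value B**int(log(y, B)/n) mentioned just before the bsearch code: a small sketch of how it might look (the helper name and the choice B = 2 are only illustrative):

    from math import log

    def initial_guess(y, n, B=2):
        # Stays within float range because only log(y) is computed as a
        # float, never y ** (1.0 / n) itself.
        return B ** int(log(y, B) / n)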
More timing results
Root                   Newton steps and time    bsearch steps and time
2-root of 123^13           5 in 0.000124 s         45 in 0.000358 s
3-root of 123^13           4 in 0.000081 s         30 in 0.000198 s
15-root of 123^13          1 in 0.000053 s          6 in 0.000077 s
37-root of 123^13        185 in 0.003073 s          2 in 0.000072 s
68-root of 123^13          0 in 0.000040 s          1 in 0.000046 s
111-root of 123^13         0 in 0.000029 s          0 in 0.000030 s
150-root of 123^13         0 in 0.000028 s          0 in 0.000029 s
2-root of 123^15           5 in 0.000083 s         52 in 0.000356 s
3-root of 123^15           7 in 0.000203 s         34 in 0.000229 s
15-root of 123^15         97 in 0.000851 s          6 in 0.000076 s
37-root of 123^15        536 in 0.008276 s          2 in 0.000063 s
68-root of 123^15          0 in 0.000039 s          1 in 0.000046 s
111-root of 123^15         0 in 0.000029 s          0 in 0.000030 s
150-root of 123^15         0 in 0.000028 s          0 in 0.000029 s
2-root of 123^74           9 in 0.000191 s        256 in 0.002559 s
3-root of 123^74           8 in 0.000207 s        171 in 0.001480 s
15-root of 123^74         13 in 0.000239 s         34 in 0.001284 s
37-root of 123^74        679 in 0.015252 s         13 in 0.000179 s
68-root of 123^74       1471 in 0.057066 s          7 in 0.000138 s
111-root of 123^74      4564 in 0.710314 s          4 in 0.000125 s
150-root of 123^74      5358 in 1.115691 s          3 in 0.000112 s
2-root of 123^137          9 in 0.000343 s        475 in 0.006759 s
3-root of 123^137          7 in 0.000333 s        317 in 0.008288 s
15-root of 123^137        28 in 0.000767 s         63 in 0.000929 s
37-root of 123^137       517 in 0.015948 s         25 in 0.000565 s
68-root of 123^137      2814 in 0.278242 s         13 in 0.000247 s
111-root of 123^137     4291 in 0.634543 s          8 in 0.000209 s
150-root of 123^137     4360 in 0.722420 s          6 in 0.000179 s
2-root of 123^150          9 in 0.000388 s        520 in 0.007006 s
3-root of 123^150          8 in 0.000509 s        347 in 0.007097 s
15-root of 123^150        30 in 0.000885 s         69 in 0.001091 s
37-root of 123^150        29 in 0.000684 s         28 in 0.000420 s
68-root of 123^150       705 in 0.025949 s         15 in 0.000300 s
111-root of 123^150     2709 in 0.244051 s          9 in 0.000241 s
150-root of 123^150   13713 in 10.464777 s          6 in 0.000187 s
This clearly shows Newton being superior for small n, and bsearch superior for large n. Interesting, eh? Fredrik | talk 10:47, 25 August 2005 (UTC)
Efficiency depends on choosing a good initial guess
Today I needed to write a routine to take the integer nth root of a number (i.e. floor(y^(1/n))). I think the problem Fredrik is seeing above is due to choosing a bad initial guess. If you set

    x_0 = 2^ceil(log(y)/n)

(where log is the log base 2) then Newton is very efficient. And it has the advantage that you can use truncating integer arithmetic when calculating

    x_{k+1} = x_k - x_k/n + y/(n * x_k^(n-1))

since each x_k is larger than the required answer. For example, taking the 100th root of 12345678901234567890123456789012345^100 with this initial guess (Fredrik's first comment) only requires 59 iterations. dbenbenn | talk 10:34, 19 March 2006 (UTC)
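A minimal integer-only sketch along those lines (my own reconstruction, not dbenbenn's routine; the function name is made up), using bit_length() to form the power-of-two starting value:

    def integer_nthroot(y, n):
        # Assumes integer y >= 0 and n >= 1; returns floor(y ** (1/n)).
        if y == 0:
            return 0
        # 2**ceil(log2(y)/n) is always >= the true root, so truncating
        # (floor) division is safe throughout the iteration.
        x = 2 ** -(-y.bit_length() // n)
        while True:
            # Newton step for f(x) = x**n - y in integer arithmetic
            t = ((n - 1) * x + y // x ** (n - 1)) // n
            if t >= x:
                return x
            x = t

Because the iterates never drop below the true root and strictly decrease while above it, the loop stops exactly at the floor of the nth root.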
Other possible way to do the iteration
As far as I can see, it could be substantially faster to perform Newton iteration on f(x) = x^(-n) - y (whose root is y^(-1/n), the reciprocal of the desired result) rather than on f(x) = x^n - y, by using an iteration like:
- Make an initial guess x_0
- Set x_{k+1} = x_k*(n + 1 - y*x_k^n)/n
- Repeat step 2 until the desired precision is reached.
since that way you can get rid of the per-iteration division (which is much slower than multiplication on most modern computers). I'm a bit afraid that there could be issues with convergence radius or the WP:NOR guideline, which is why I am not just inserting it into the article itself. 80.202.214.26 16:44, 25 May 2006 (UTC)
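A rough floating-point sketch of that idea as I read it (the reconstruction of the update formula, the function name, and the starting value are mine, not the anonymous editor's code):

    def nthroot_inverse(y, n, iters=20):
        # Newton's method on f(x) = x**(-n) - y: the iterate converges to
        # y**(-1/n), and the only division per step is by the small integer n.
        x = y ** (-1.0 / n)           # crude start; needs y within float range
        for _ in range(iters):
            x = x * (n + 1 - y * x ** n) / n
        return 1.0 / x                # reciprocal recovers y**(1/n)

The payoff described above would presumably come in a fixed-point or bignum setting, where the division by x**(n-1) in the standard iteration is the dominant cost.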
Nth-root Extremely Simple Arithmetical Iterations
It is astonishing to realize that the arithmetical methods shown at:
Nth-Root simple arithmetical methods
do not appear in any text on root-solving from Babylonian times up to now, including, of course, modern scholarly journals. It is worrying, indeed, that "experts" on root-solving seem unaware of these trivial arithmetical high-order nth-root-solving methods.
Domingo Gomez Morin. Civil Engineer. Structural Engineer