Shifting nth root algorithm
The shifting nth root algorithm is an algorithm for extracting the nth root of a positive real number. It proceeds iteratively by shifting in n digits of the radicand, starting with the most significant, and produces one digit of the root on each iteration, in a manner similar to long division.
Algorithm
Notation
Let B be the base of the number system you are using, and n be the degree of the root to be extracted. Let x be the radicand processed thus far, y be the root extracted thus far, and r be the remainder. Let α be the next n digits of the radicand, and β be the next digit of the root. Let x' be the new value of x for the next iteration, y' be the new value of y for the next iteration, and r' be the new value of r for the next iteration. These are all integers.
Invariants
At each iteration, the invariant $y^n + r = x$ will hold, with $0 \le r$. The invariant $(y + 1)^n > x$ will hold as well. Thus y is the largest integer less than or equal to the nth root of x, and r is the remainder.
Initialization
The initial values of x, y, and r should be 0. The value of α for the first iteration should be the most significant aligned block of n digits of the radicand. An aligned block of n digits means a block of digits aligned so that the decimal point falls between blocks. For example, in 123.4 the most significant aligned block of 2 digits is 01, the next most significant is 23, and the third most significant is 40.
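To make the block alignment concrete, here is a small Python sketch (the helper name aligned_blocks and the string-based input are assumptions made for this illustration, not part of the algorithm as usually stated) that splits a decimal radicand into aligned blocks of n digits:

 def aligned_blocks(radicand, n, num_blocks):
     """Split a decimal radicand string into aligned blocks of n digits.
     The integer part is padded on the left and the fractional part on the
     right, so that the decimal point falls between blocks."""
     int_part, _, frac_part = radicand.partition(".")
     # Pad the integer part on the left to a whole number of n-digit blocks.
     if len(int_part) % n:
         int_part = "0" * (n - len(int_part) % n) + int_part
     blocks = [int_part[i:i + n] for i in range(0, len(int_part), n)]
     # Produce fractional blocks, padding with zeros, until enough are available.
     i = 0
     while len(blocks) < num_blocks:
         blocks.append(frac_part[i:i + n].ljust(n, "0"))
         i += n
     return blocks

 # aligned_blocks("123.4", 2, 3) returns ['01', '23', '40'], as in the example above.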
Main loop
On each iteration we shift in n digits of the radicand, so we have $x' = B^n x + \alpha$, and we produce one digit of the root, so we have $y' = B y + \beta$. We want to choose β and r' so that the invariants described above hold. It turns out that there is always exactly one such choice, as will be proved below.
The first invariant says that:
$r' = x' - y'^n$
or
$r' = B^n x + \alpha - (B y + \beta)^n.$
So, pick the largest integer β such that
$(B y + \beta)^n \le B^n x + \alpha,$
and let
$r' = B^n x + \alpha - (B y + \beta)^n.$
Such a β always exists, since if $\beta = 0$ then the condition is $(B y)^n \le B^n x + \alpha$, but $(B y)^n = B^n y^n \le B^n x$, so this is always true. Also, β must be less than B, since if the condition held for $\beta = B$ then we would have
$B^n x + \alpha \ge (B y + B)^n = B^n (y + 1)^n,$
but the second invariant implies that
$(y + 1)^n > x,$
and since $B^n (y + 1)^n$ and $B^n x$ are both multiples of $B^n$, the difference between them must be at least $B^n$, and then we have
$B^n x + \alpha \ge B^n (y + 1)^n \ge B^n x + B^n,$
so $\alpha \ge B^n$; but $\alpha < B^n$ by definition of α, so this can't be true, and $(B y + \beta)^n$ is a monotonically increasing function of β, so it can't be true for larger β either, so we conclude that there exists an integer γ with $0 \le \gamma < B$ such that an integer r' with $r' \ge 0$ exists such that the first invariant holds if and only if $\beta \le \gamma$.
Now consider the second invariant. It says:
$x' < (y' + 1)^n$
or
$B^n x + \alpha < (B y + \beta + 1)^n.$
Now, if β is not the largest admissible β for the first invariant as described above, then $\beta + 1$ is also admissible, and we have
$B^n x + \alpha \ge (B y + \beta + 1)^n.$
This violates the second invariant, so to satisfy both invariants we must pick the largest β allowed by the first invariant. Thus we have proven the existence and uniqueness of β and r'.
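Since β is simply the largest digit for which the first invariant can still be satisfied, it can be found by trying the candidate digits in order (or by binary search over them). The following Python sketch does this for the x-based formulation above; the function name and the linear search are choices made for this example only.

 def next_root_digit(x_new, y, n, B=10):
     """Return (beta, r_new) for one iteration, where x_new = B**n * x + alpha.
     beta is the largest digit with (B*y + beta)**n <= x_new, and r_new is the
     corresponding remainder x_new - (B*y + beta)**n."""
     beta = 0
     while beta + 1 < B and (B * y + beta + 1) ** n <= x_new:
         beta += 1
     return beta, x_new - (B * y + beta) ** n

 # For the cube root of 3 (first iteration: x = 0, y = 0, alpha = 3):
 # next_root_digit(3, 0, 3) returns (1, 2), matching the worked example below.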
To summarize, on each iteration:
- Let α be the next aligned block of digits from the radicand
- Let $x' = B^n x + \alpha$
- Let β be the largest β such that $(B y + \beta)^n \le x'$
- Let $y' = B y + \beta$
- Let $r' = x' - y'^n$
Now, note that $x = y^n + r$, so the condition
$(B y + \beta)^n \le B^n x + \alpha$
is equivalent to
$(B y + \beta)^n - (B y)^n \le B^n r + \alpha,$
and
$r' = B^n x + \alpha - (B y + \beta)^n$
is equivalent to
$r' = B^n r + \alpha - \left( (B y + \beta)^n - (B y)^n \right).$
Thus, we don't actually need x, and since $r = x - y^n$ and $x < (y + 1)^n$, we have $r < (y + 1)^n - y^n$, or $r = O(y^{n-1})$, while $x = O(y^n)$, so by using r instead of x we save time and space by a factor of about 1/n. Also, the $(B y)^n$ we subtract in the new test cancels the one inside $(B y + \beta)^n$, so now the highest power of y we have to evaluate is $y^{n-1}$ rather than $y^n$.
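The equivalence is easy to spot-check numerically; the following throwaway Python sketch (the values are chosen arbitrarily for the check) compares the x-based and r-based tests for every candidate digit:

 # One step of the decimal square root of 2 (B = 10, n = 2): x = 2, y = 1,
 # so r = x - y**n = 1, and the next aligned block is alpha = 0.
 B, n = 10, 2
 x, y, alpha = 2, 1, 0
 r = x - y ** n
 for beta in range(B):
     test_with_x = (B * y + beta) ** n <= B ** n * x + alpha
     test_with_r = (B * y + beta) ** n - (B * y) ** n <= B ** n * r + alpha
     assert test_with_x == test_with_r   # the two tests always agree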
Summary
- Initialize r and y to 0.
- Repeat until desired precision is obtained:
- Let α be the next aligned block of digits from the radicand.
- Let β be the largest β such that $(B y + \beta)^n - (B y)^n \le B^n r + \alpha$.
- Let $y' = B y + \beta$.
- Let $r' = B^n r + \alpha - \left( (B y + \beta)^n - (B y)^n \right)$.
- Assign $y \leftarrow y'$ and $r \leftarrow r'$.
- y is the largest integer such that $y^n \le x B^k$, and $r = x B^k - y^n$, where x here denotes the original radicand and k is the number of digits of the radicand after the decimal point that have been consumed (a negative number if the algorithm hasn't reached the decimal point yet).
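The summary translates almost directly into code. The following Python sketch (its function name, generator interface, and use of string blocks are choices made for this example) produces one root digit per aligned block, keeping only r and y as the summary prescribes:

 def shifting_nth_root(blocks, n, B=10):
     """Yield successive digits of the nth root, one per aligned block.
     Only the remainder r and the root y built so far are kept."""
     y, r = 0, 0
     for block in blocks:
         alpha = int(block, B)            # next aligned block of the radicand
         # beta = largest digit with (B*y + beta)^n - (B*y)^n <= B^n * r + alpha
         beta = 0
         while beta + 1 < B and (B * y + beta + 1) ** n - (B * y) ** n <= B ** n * r + alpha:
             beta += 1
         r = B ** n * r + alpha - ((B * y + beta) ** n - (B * y) ** n)
         y = B * y + beta
         yield beta

 # The first six digits of the cube root of 5 (compare the worked example below):
 # list(shifting_nth_root(["5", "000", "000", "000", "000", "000"], 3))
 # gives [1, 7, 0, 9, 9, 7], i.e. 1.70997...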
Paper-and-pencil nth roots
As noted above, this algorithm is similar to long division, and it lends itself to the same notation, as in this computation of the cube root of 3:
      1.   4      4      2      2      4
     -------------------------------------
  _
 3/  3.000 000 000 000 000
 \/  1                            =  3(10×0)²×1    + 3(10×0)×1²    + 1³
     -
     2 000
     1 744                        =  3(10×1)²×4    + 3(10×1)×4²    + 4³
     -----
       256 000
       241 984                    =  3(10×14)²×4   + 3(10×14)×4²   + 4³
       -------
        14 016 000
        12 458 888                =  3(10×144)²×2  + 3(10×144)×2²  + 2³
        ----------
         1 557 112 000
         1 247 791 448            =  3(10×1442)²×2 + 3(10×1442)×2² + 2³
         -------------
           309 320 552 000
           249 599 823 424        =  3(10×14422)²×4 + 3(10×14422)×4² + 4³
           ---------------
            59 720 728 576
Note that after the first iteration or two, the leading term $n (B y)^{n-1} \beta$ dominates $(B y + \beta)^n - (B y)^n$, so we can get an often correct first guess at β by dividing $B^n r + \alpha$ by $n (B y)^{n-1}$.
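A sketch of that shortcut in Python (again with names invented for this example): estimate β by integer division against the leading term, then verify it against the invariant, since the lower-order terms are ignored and the guess can overshoot, especially in the first iteration or two.

 def guess_beta(r, alpha, y, n, B=10):
     """First guess at the next root digit: divide the shifted remainder by the
     leading term n*(B*y)**(n-1) of (B*y + beta)**n - (B*y)**n."""
     if y == 0:
         return B - 1                     # no useful leading term yet; search from the top
     guess = (B ** n * r + alpha) // (n * (B * y) ** (n - 1))
     return min(guess, B - 1)

The guess still has to be checked against the condition $(B y + \beta)^n - (B y)^n \le B^n r + \alpha$ and decreased if it overshoots; it never undershoots, because the omitted lower-order terms are all non-negative.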
Performance
On each iteration, the most time-consuming task is to select β. We know that there are B possible values, so we can find β using $O(\log B)$ comparisons. Each comparison will require evaluating $(B y + \beta)^n - (B y)^n$. In the kth iteration, y has k digits, and the polynomial can be evaluated with $n - 1$ multiplications of numbers of up to $k(n-1)$ digits and $n - 1$ additions of numbers of up to $k(n-1)$ digits, once we know the powers of y and β up through $y^{n-1}$ for y and $\beta^n$ for β. β has a restricted range, so we can get the powers of β in constant time. We can get the powers of y with $n - 2$ multiplications of numbers of up to $k(n-1)$ digits. Assuming that n-digit multiplication takes time $O(n^2)$ and addition takes time $O(n)$, we take time $O(k^2 n^3)$ for each comparison, or time $O(k^2 n^3 \log B)$ to pick β. The remainder of the algorithm is addition and subtraction that takes time $O(k n)$, so each iteration takes $O(k^2 n^3 \log B)$. For all k digits, we need time $O(k^3 n^3 \log B)$.
The only internal storage needed is r, which is on the order of $k(n-1)$ digits on the kth iteration. That this algorithm doesn't have bounded memory usage puts an upper bound on the number of digits which can be computed mentally, unlike the more elementary algorithms of arithmetic. Unfortunately, any bounded-memory state machine with periodic inputs can only produce periodic outputs, so there are no such algorithms which can compute irrational numbers from rational ones, and thus no bounded-memory root extraction algorithms.
Note that increasing the base increases the time needed to pick β by a factor of $\log B$, but decreases the number of digits needed to achieve a given precision by the same factor, and since the algorithm is cubic time in the number of digits, increasing the base gives an overall speedup of roughly $(\log B)^2$. When the base is larger than the radicand, the algorithm degenerates to binary search, so it follows that this algorithm is not useful for computing roots with a computer, as it is always outperformed by much simpler binary search and has the same memory complexity.
Examples
Square root of 2 in binary
              1. 0  1  1  0  1
             ------------------
           _
          / 10.00 00 00 00 00            1
        \/   1                         + 1
            -----                       ----
             1 00                        100
                0                      +   0
            --------                    -----
             1 00 00                    1001
               10 01                   +    1
            -----------                 ------
               1 11 00                  10101
               1 01 01                 +     1
            ----------                  -------
                 1 11 00                101100
                       0               +      0
            ----------                  --------
               1 11 00 00               1011001
               1 01 10 01                     1
            ----------
                  1 01 11   remainder
Square root of 3
        1.  7  3  2  0  5
       ----------------------
     _
    /  3.00 00 00 00 00
  \/   1                     =  20×0×1 + 1^2
       -
       2 00
       1 89                  =  20×1×7 + 7^2
       ----
         11 00
         10 29               =  20×17×3 + 3^2
         -----
            71 00
            69 24            =  20×173×2 + 2^2
            -----
             1 76 00
                   0         =  20×1732×0 + 0^2
             -------
             1 76 00 00
             1 73 20 25      =  20×17320×5 + 5^2
             ----------
                2 79 75
Cube root of 5
        1.   7   0   9   9   7
       ------------------------
     _
   3/  5.000 000 000 000 000
  \/   1                          =  300×(0^2)×1 + 30×0×(1^2) + 1^3
       -
       4 000
       3 913                      =  300×(1^2)×7 + 30×1×(7^2) + 7^3
       -----
         87 000
              0                   =  300×(17^2)×0 + 30×17×(0^2) + 0^3
         -------
         87 000 000
         78 443 829               =  300×(170^2)×9 + 30×170×(9^2) + 9^3
         ----------
          8 556 171 000
          7 889 992 299           =  300×(1709^2)×9 + 30×1709×(9^2) + 9^3
          -------------
            666 178 701 000
            614 014 317 973       =  300×(17099^2)×7 + 30×17099×(7^2) + 7^3
            ---------------
             52 164 383 027
Fourth root of 7
        1.    6    2    6    5    7
       -----------------------------
     _
   4/  7.0000 0000 0000 0000 0000
  \/   1                               =  4000×(0^3)×1 + 600×(0^2)×(1^2) + 40×0×(1^3) + 1^4
       -
       6 0000
       5 5536                          =  4000×(1^3)×6 + 600×(1^2)×(6^2) + 40×1×(6^3) + 6^4
       ------
         4464 0000
         3338 7536                     =  4000×(16^3)×2 + 600×(16^2)×(2^2) + 40×16×(2^3) + 2^4
         ---------
         1125 2464 0000
         1026 0494 3376                =  4000×(162^3)×6 + 600×(162^2)×(6^2) + 40×162×(6^3) + 6^4
         --------------
           99 1969 6624 0000
           86 0185 1379 0625           =  4000×(1626^3)×5 + 600×(1626^2)×(5^2) + 40×1626×(5^3) + 5^4
           -----------------
           13 1784 5244 9375 0000
           12 0489 2414 6927 3201      =  4000×(16265^3)×7 + 600×(16265^2)×(7^2) + 40×16265×(7^3) + 7^4
           ----------------------
            1 1295 2830 2447 6799
External links
- Why the square root algorithm works, "Home School Math"; also related pages giving examples of the long-division-like pencil-and-paper method for square roots.