Multiplication algorithm
From Wikipedia, the free encyclopedia
A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are used. Efficient multiplication algorithms have existed since the advent of the decimal system.
Long multiplication
If a positional numeral system is used, a natural way of multiplying numbers is taught in grade schools as long multiplication: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. This has the advantage of being fairly accurate when performed by a skilled person, and the disadvantage that the method is relatively complex and requires memorization of a multiplication table (or, in earlier times, possession of a set of Napier's bones), neat penmanship, and mental focus, not all of which are necessarily present in young students.
Humans usually use this algorithm in base 10, computers in base 2 (where multiplying by a single digit of the multiplier reduces to a simple series of logical AND operations, since each digit is 0 or 1). Humans write down all the partial products and then add them together; computers (and abacus operators) sum each partial product as soon as it is computed. Some chips implement this algorithm for various integer and floating-point sizes in hardware or in microcode. To multiply two numbers with n digits using this method, one needs about n² operations. More formally: the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n²).
Example
This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product).
    23958233
        5830 ×
------------
    00000000 ( = 0 ×     23,958,233)
   71874699  ( = 3 ×    239,582,330)
 191665864   ( = 8 ×  2,395,823,300)
119791165    ( = 5 × 23,958,233,000)
------------
139676498390 ( = 139,676,498,390)
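The procedure above can be sketched in Python. This is an illustrative sketch of the grade-school method on decimal digit lists (the function name `long_multiply` is ours), not how production arbitrary-precision libraries are written:

```python
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication in base 10 (illustrative sketch)."""
    # Digit lists, least significant digit first.
    xs = [int(d) for d in str(a)][::-1]
    ys = [int(d) for d in str(b)][::-1]
    result = [0] * (len(xs) + len(ys))
    for j, y in enumerate(ys):           # one partial product per multiplier digit
        carry = 0
        for i, x in enumerate(xs):       # the row is shifted by j places
            total = result[i + j] + x * y + carry
            result[i + j] = total % 10
            carry = total // 10
        result[len(xs) + j] += carry
    # Convert the digit list back to an integer.
    return int("".join(map(str, result[::-1])))

print(long_multiply(23958233, 5830))  # 139676498390
```

The two nested loops over n-digit inputs make the Θ(n²) operation count visible directly.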
Lattice multiplication
Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Leonardo described the operation as mental, using his right and left hands to carry the intermediate calculations. Napier's bones, or Napier's rods, also used this method, as published by Napier in 1617, the year of his death.
As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or sieve. The method is also found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources, as noted by Sigler, author of "Fibonacci's Liber Abaci" (2002).
- During the multiplication phase, the lattice is filled in with two-digit products of the corresponding digits labeling each row and column: the tens digit goes in the top-left corner.
- During the addition phase, the lattice is summed on the diagonals.
- Finally, if a carry phase is necessary, the answer as shown along the left and bottom sides of the lattice is converted to normal form by carrying tens digits as in long addition or multiplication.
Example
This example uses lattice multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product). Notice 23,958,233 is along the top of the lattice and 5,830 is along the right side. The products fill the lattice and the sum of those products (on the diagonal) are along the left and bottom sides. Then those sums are totaled as shown.
     2   3   9   5   8   2   3   3
   +---+---+---+---+---+---+---+---+
   |1 /|1 /|4 /|2 /|4 /|1 /|1 /|1 /|
   | / | / | / | / | / | / | / | / | 5
 01|/ 0|/ 5|/ 5|/ 5|/ 0|/ 0|/ 5|/ 5|
   +---+---+---+---+---+---+---+---+
   |1 /|2 /|7 /|4 /|6 /|1 /|2 /|2 /|
   | / | / | / | / | / | / | / | / | 8
 02|/ 6|/ 4|/ 2|/ 0|/ 4|/ 6|/ 4|/ 4|
   +---+---+---+---+---+---+---+---+
   |0 /|0 /|2 /|1 /|2 /|0 /|0 /|0 /|
   | / | / | / | / | / | / | / | / | 3
 17|/ 6|/ 9|/ 7|/ 5|/ 4|/ 6|/ 9|/ 9|
   +---+---+---+---+---+---+---+---+
   |0 /|0 /|0 /|0 /|0 /|0 /|0 /|0 /|
   | / | / | / | / | / | / | / | / | 0
 24|/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|/ 0|
   +---+---+---+---+---+---+---+---+
     26  15  13  18  17  13  09  00
  01
   02
    17
     24
      26
       15
        13
         18
          17
           13
            09
             00
  =============
   139676498390

= 139,676,498,390
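The lattice procedure can be sketched in Python (the function name `lattice_multiply` is ours). Each cell holds a two-digit product, and the diagonal index mirrors reading the grid's diagonals from the bottom-right corner, just as the manual method does:

```python
def lattice_multiply(a: int, b: int) -> int:
    """Lattice (sieve) multiplication: fill a grid of digit products,
    sum its diagonals, then carry. Illustrative sketch of the manual method."""
    top = [int(d) for d in str(a)]     # multiplicand digits, across the top
    side = [int(d) for d in str(b)]    # multiplier digits, down the right side
    rows, cols = len(side), len(top)
    # Diagonal sums, least significant diagonal first.
    diag = [0] * (rows + cols)
    for r in range(rows):
        for c in range(cols):
            tens, units = divmod(side[r] * top[c], 10)
            k = (rows - 1 - r) + (cols - 1 - c)
            diag[k] += units           # units digit lies on diagonal k
            diag[k + 1] += tens        # tens digit lies on the next diagonal
    # Carry phase: convert diagonal sums to decimal digits.
    out, carry = [], 0
    for s in diag:
        carry, digit = divmod(s + carry, 10)
        out.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        out.append(digit)
    return int("".join(map(str, out[::-1])))

print(lattice_multiply(23958233, 5830))  # 139676498390
```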
Peasant or binary multiplication
In base 2, long multiplication reduces to a nearly trivial operation. For each '1' bit in the multiplier, shift the multiplicand an appropriate amount and then sum the shifted values. Depending on computer processor architecture and choice of multiplier, it may be faster to code this algorithm using hardware bit shifts and adds rather than depend on multiplication instructions, when the multiplier is fixed and the number of adds required is small.
This algorithm is also known as Peasant multiplication, because it has been widely used among those who are unschooled and thus have not memorized the multiplication tables required by long multiplication. The algorithm was also in use in ancient Egypt.
On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product.
The main advantages of this method are that it can be taught in the time it takes to read aloud the preceding paragraph, no memorization is required, and it can be performed using tokens such as poker chips if paper and pencil are not available. It does, however, take more steps than long multiplication, so it can be unwieldy when large numbers are involved.
Examples
This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33.
11    3
 5    6
 2   12
 1   24
    ---
     33
Describing the steps explicitly:
- 11 and 3 are written at the top
- 11 is halved (5.5) and 3 is doubled (6). The fractional portion is discarded (5.5 becomes 5).
- 5 is halved (2.5) and 6 is doubled (12). The fractional portion is discarded (2.5 becomes 2). The figure in the left column (2) is even, so the figure in the right column (12) is discarded.
- 2 is halved (1) and 12 is doubled (24). The figure in the left column (1) is odd, so the figure in the right column (24) is kept.
- All not-scratched-out values are summed: 3 + 6 + 24 = 33.
The method works because multiplication is distributive; the halving column picks out the binary decomposition of the multiplier, here 11 = 8 + 2 + 1, so:

3 × 11 = 3 × (8 + 2 + 1) = 3 × 8 + 3 × 2 + 3 × 1 = 24 + 6 + 3 = 33.
A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830):
5830    23958233
2915    47916466
1457    95832932
 728   191665864
 364   383331728
 182   766663456
  91  1533326912
  45  3066653824
  22  6133302648
  11  12266615296
   5  24533230592
   2  49066461184
   1  98132922368
      ------------
      139676498390
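The halve-and-double procedure is a few lines of Python (the function name `peasant_multiply` is ours); testing the last binary digit of the halved column replaces "cross out the even rows":

```python
def peasant_multiply(a: int, b: int) -> int:
    """Peasant (binary) multiplication: halve one column, double the other,
    and sum the doubled values on rows where the halved value is odd."""
    total = 0
    while a > 0:
        if a % 2 == 1:     # row kept: halved value is odd
            total += b
        a //= 2            # halve, discarding the remainder
        b *= 2             # double
    return total

print(peasant_multiply(11, 3))           # 33
print(peasant_multiply(5830, 23958233))  # 139676498390
```

This is exactly the shift-and-add loop a computer performs in base 2.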
Multiplication algorithms for computer algebra
Karatsuba multiplication
For systems that need to multiply numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, long multiplication is too slow. These systems may employ Karatsuba multiplication, which was discovered in 1962. The heart of Karatsuba's method lies in the observation that two-digit multiplication (multiplying x = x1·B + x2 by y = y1·B + y2, where B is the base) can be done with only three rather than the four multiplications classically required:
- compute x1y1, call the result A
- compute x2y2, call the result B
- compute (x1 + x2)(y1 + y2), call the result C
- compute C - A - B; this number is equal to x1y2 + x2y1.
To compute these three products of m-digit numbers, we can apply the same trick again, using recursion. Once the three products are computed, the appropriately shifted results are added together, which takes about n operations.
Karatsuba multiplication has a time complexity of Θ(n^(log₂ 3)). The exponent log₂ 3 is approximately 1.585, so this method is significantly faster than long multiplication for large n. Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of n; typical implementations therefore switch to long multiplication if n is below some threshold.
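The three-multiplication recurrence can be sketched in Python (the function name `karatsuba` is ours, and the single-digit cutoff is far smaller than a real library would use; it simply keeps the sketch short):

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiplication sketch: three recursive multiplications
    instead of four, splitting each operand at base**half."""
    if x < 10 or y < 10:
        return x * y                     # base case (real code uses a larger cutoff)
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half
    x1, x2 = divmod(x, base)             # x = x1*base + x2
    y1, y2 = divmod(y, base)             # y = y1*base + y2
    a = karatsuba(x1, y1)                # A = x1*y1
    b = karatsuba(x2, y2)                # B = x2*y2
    c = karatsuba(x1 + x2, y1 + y2)      # C = (x1+x2)*(y1+y2)
    # C - A - B equals the middle term x1*y2 + x2*y1
    return a * base * base + (c - a - b) * base + b

print(karatsuba(23958233, 5830))  # 139676498390
```

The return line is the identity x·y = A·B² + (C − A − B)·B + B written out, with A, B, C as in the list above.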
Toom-Cook
Another method of multiplication is called Toom-Cook or Toom-3. The Toom-Cook method splits each number to be multiplied into multiple parts. Karatsuba is the special case of Toom-Cook with two parts. A three-way Toom-Cook can do a size-3N multiplication for the cost of five size-N multiplications, an improvement by a factor of 9/5 compared to the Karatsuba method's improvement by a factor of 4/3.
Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers.
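A non-recursive Toom-3 step can be sketched in Python (the function name `toom3` and the evaluation points 0, 1, −1, 2, ∞ are one common choice; a real implementation would recurse on the five pointwise products instead of using the built-in `*`). Each operand is treated as a degree-2 polynomial in the split base, evaluated at five points, multiplied pointwise, and interpolated with exact integer arithmetic:

```python
def toom3(x: int, y: int) -> int:
    """One Toom-3 step: split into three parts, evaluate at five points,
    multiply pointwise, interpolate. Illustrative sketch only."""
    k = max(len(str(x)), len(str(y))) // 3 + 1
    base = 10 ** k
    x0, x1, x2 = x % base, (x // base) % base, x // base**2
    y0, y1, y2 = y % base, (y // base) % base, y // base**2
    # Evaluate both polynomials at t = 0, 1, -1, 2, infinity.
    px = [x0, x0 + x1 + x2, x0 - x1 + x2, x0 + 2*x1 + 4*x2, x2]
    py = [y0, y0 + y1 + y2, y0 - y1 + y2, y0 + 2*y1 + 4*y2, y2]
    r = [a * b for a, b in zip(px, py)]  # the five size-N multiplications
    # Interpolate the five product coefficients (all divisions are exact).
    r0, r4 = r[0], r[4]
    r2 = (r[1] + r[2]) // 2 - r0 - r4
    s = (r[1] - r[2]) // 2               # s = r1 + r3
    t = (r[3] - r0 - 4*r2 - 16*r4) // 2  # t = r1 + 4*r3
    r3 = (t - s) // 3
    r1 = s - r3
    return r0 + r1*base + r2*base**2 + r3*base**3 + r4*base**4

print(toom3(23958233, 5830))  # 139676498390
```

The interpolation additions and divisions are the "overhead from additions and digit management" mentioned above: cheap per step, but growing with the number of parts.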
Fourier transform methods
The idea, due to Strassen (1968), is the following: choose the largest integer w that will not cause overflow during the process outlined below, then split the two numbers a and b into m groups of w bits each:

    a = Σᵢ aᵢ 2^(wi)        b = Σⱼ bⱼ 2^(wj)

We can then say that

    ab = Σₖ cₖ 2^(wk)

by setting bⱼ = 0 for j > m, k = i + j, and {cₖ} as the convolution of {aᵢ} and {bⱼ}. Using the convolution theorem, ab can be computed by:
- Computing the fast Fourier transforms of {aᵢ} and {bⱼ},
- Multiplying the two results entry by entry,
- Computing the inverse Fourier transform, and
- Adding the part of each cₖ that is 2^w or greater to cₖ₊₁ (the carry phase).
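The four steps above can be sketched in Python with a textbook recursive FFT (the names `fft` and `fft_multiply` are ours, and the fixed w = 16 is an illustrative choice rather than the "largest safe w" of the description). Floating-point rounding limits the usable operand size; production code controls precision carefully or uses number-theoretic transforms:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x: int, y: int, w: int = 16) -> int:
    """Multiply by splitting into w-bit groups, convolving via FFT,
    then propagating carries. Illustrative sketch only."""
    mask = (1 << w) - 1
    def groups(v):                      # w-bit groups, low bits first
        out = []
        while v:
            out.append(v & mask)
            v >>= w
        return out or [0]
    a, b = groups(x), groups(y)
    n = 1
    while n < len(a) + len(b):          # pad to a power of two
        n *= 2
    fa = fft([complex(v) for v in a + [0] * (n - len(a))])
    fb = fft([complex(v) for v in b + [0] * (n - len(b))])
    fc = [u * v for u, v in zip(fa, fb)]          # entrywise product
    c = [round(v.real / n) for v in fft(fc, invert=True)]
    # Carry phase: each c[k] may exceed 2**w.
    result, carry = 0, 0
    for k, v in enumerate(c):
        total = v + carry
        result |= (total & mask) << (w * k)
        carry = total >> w
    return result + (carry << (w * len(c)))

print(fft_multiply(23958233, 5830))  # 139676498390
```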
The fastest known method based on this idea was described in 1971 by Schönhage and Strassen (Schönhage-Strassen algorithm) and has a time complexity of Θ(n ln(n) ln(ln(n))).
Applications of this algorithm include GIMPS, the Great Internet Mersenne Prime Search.
Using number-theoretic transforms instead of discrete Fourier transforms avoids rounding error problems by using modular arithmetic instead of complex numbers.
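A number-theoretic transform replaces the complex roots of unity with roots of unity modulo a suitable prime, so every intermediate value is an exact integer. The sketch below (names `ntt` and `ntt_multiply` are ours) uses the common NTT-friendly prime 998244353 with primitive root 3; it is exact as long as every convolution coefficient stays below the modulus, which holds here since base-10 digit convolutions are small:

```python
MOD = 998244353            # prime with 2**23 dividing MOD - 1
ROOT = 3                   # primitive root modulo MOD

def ntt(a, invert=False):
    """Iterative number-theoretic transform; len(a) must be a power of two."""
    n = len(a)
    j = 0                              # bit-reversal permutation
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                 # butterfly passes
        w_len = pow(ROOT, (MOD - 1) // length, MOD)
        if invert:
            w_len = pow(w_len, MOD - 2, MOD)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % MOD
                a[k] = (u + v) % MOD
                a[k + length // 2] = (u - v) % MOD
                w = w * w_len % MOD
        length <<= 1
    if invert:                         # divide by n modulo MOD
        n_inv = pow(n, MOD - 2, MOD)
        for i in range(n):
            a[i] = a[i] * n_inv % MOD
    return a

def ntt_multiply(x: int, y: int) -> int:
    """Exact multiplication via modular convolution of decimal digits."""
    a = [int(d) for d in str(x)][::-1]
    b = [int(d) for d in str(y)][::-1]
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = ntt(a + [0] * (n - len(a)))
    fb = ntt(b + [0] * (n - len(b)))
    c = ntt([u * v % MOD for u, v in zip(fa, fb)], invert=True)
    out, carry = [], 0                 # carry propagation in base 10
    for v in c:
        carry, digit = divmod(v + carry, 10)
        out.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        out.append(digit)
    return int("".join(map(str, out[::-1])))

print(ntt_multiply(23958233, 5830))  # 139676498390
```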
Polynomial multiplication
All the above multiplication algorithms can also be expanded to multiply polynomials.
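For instance, the long-multiplication inner loop applies directly to coefficient lists; the only difference is that polynomial coefficients do not carry (the function name `poly_multiply` is ours):

```python
def poly_multiply(p, q):
    """Schoolbook polynomial multiplication (coefficient convolution);
    the same inner loop as long multiplication, but with no carries.
    Coefficient lists are ordered from the constant term upward."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_multiply([1, 2], [3, 4]))  # [3, 10, 8]
```

Karatsuba, Toom-Cook, and FFT methods likewise carry over unchanged, since they never relied on carrying in the first place.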
External links
- Step By Step Long Polynomial Multiplication WebGraphing.com