Carry-save adder
A carry-save adder is a type of digital adder, used in computer microarchitecture to compute the sum of three or more n-bit numbers in binary. It differs from other digital adders in that it outputs two numbers of the same dimensions as the inputs, one which is a sequence of partial sum bits and another which is a sequence of carry bits.
Motivation
Consider the sum:
   12345678
+  87654322
= 100000000
Using the arithmetic we learned as children, we go from right to left, "8+2=0, carry 1", "7+2+1=0, carry 1", "6+3+1=0, carry 1", and so on to the end of the sum. Although we know the last digit of the result at once, we cannot know the first digit until we have gone through every digit in the calculation, passing the carry from each digit to the one on its left. Thus adding two n-digit numbers has to take a time proportional to n, even if the machinery we are using would otherwise be capable of performing many calculations simultaneously.
In electronic terms, using binary bits, this means that even if we have n one-bit adders at our disposal, we still have to allow a time proportional to n for a possible carry to propagate from one end of the number to the other. Until we have done this,
- We do not know the result of the addition.
- We do not know whether the result of the addition is larger or smaller than a given number (for instance, we do not know whether it is positive or negative).
A carry look-ahead adder can reduce the delay. In principle the delay can be reduced so that it is proportional to log n, but for large numbers this is no longer the case, because even when carry look-ahead is implemented, the distances that signals have to travel on the chip increase in proportion to n, and propagation delays increase at the same rate. Once we get to the 512-bit to 2048-bit number sizes that are required in public-key cryptography, carry look-ahead is not of much help.
The basic concept
Here is an example of a binary sum:
  10111010101011011111000000001101
+ 11011110101011011011111011101111
Carry-save arithmetic works by abandoning the binary notation while still working to base 2. It computes the sum digit by digit, as
  10111010101011011111000000001101
+ 11011110101011011011111011101111
= 21122120202022022122111011102212
The notation is unconventional but the result is still unambiguous. Moreover, given n adders (here, n=32), the result can be calculated in a single tick of the clock, since each digit result does not depend on any of the others.
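This digit-by-digit rule is easy to mimic in software. The following Python sketch (an illustration only, not part of any standard interface) adds the two strings above position by position; because no carry passes between positions, every digit could be computed simultaneously:

    a = "10111010101011011111000000001101"
    b = "11011110101011011011111011101111"

    # Add digit by digit; no carry moves between positions.
    digits = [int(x) + int(y) for x, y in zip(a, b)]
    print("".join(str(d) for d in digits))
    # prints 21122120202022022122111011102212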
If the adder is required to add two numbers and produce a result, carry-save addition is useless, since the result still has to be converted back into binary and this still means that carries have to propagate from right to left. But in large-integer arithmetic, addition is a very rare operation, and adders are mostly used to accumulate partial sums in a multiplication.
Carry-save accumulators
Supposing that we have two bits of storage per digit, we can store the values 0, 1, 2, or 3 in each digit position. It is therefore obvious that one more binary number can be added to our carry-save result without overflowing our storage capacity: but then what?
The key to success is that at the moment of each partial addition we add three bits:
- 0 or 1, from the number we are adding.
- 0 if the digit in our store is 0 or 2, or 1 if it is 1 or 3.
- 0 if the digit to its right is 0 or 1, or 1 if it is 2 or 3.
To put it another way, we are taking a carry digit from the position on our right, and passing a carry digit to the left, just as in conventional addition; but the carry digit we pass to the left is the result of the previous calculation and not the current one. In each clock cycle, carries only have to move one step along, and not n steps as in conventional addition.
Because signals have to move less far, the clock can tick much faster.
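This rule can be illustrated with a small Python sketch (the function name and data layout are chosen here purely for illustration). The accumulator is modelled as a list of digits, least significant first, each holding a value from 0 to 3:

    def csa_accumulate(acc, x):
        """Add the bits of x (0/1 values, least significant first) into acc.

        Each new digit is built from exactly three bits:
          - the incoming bit,
          - the low bit of the stored digit (0 for 0 or 2, 1 for 1 or 3),
          - the high bit of the digit to its right (0 for 0 or 1, 1 for 2 or 3).
        Carries therefore move only one position per call; a carry out of the
        top digit is dropped, so acc must be made wide enough in advance.
        """
        new = []
        for i in range(len(acc)):
            incoming = x[i] if i < len(x) else 0
            sum_bit = acc[i] & 1
            carry_in = (acc[i - 1] >> 1) if i > 0 else 0
            new.append(incoming + sum_bit + carry_in)
        return new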
There is still a need to convert the result to binary at the end of a calculation, which effectively just means letting the carries travel all the way through the number just as in a conventional adder. But if we have done 512 additions in the process of performing a 512-bit multiplication, the cost of that final conversion is effectively split across those 512 additions, so each addition bears 1/512 of the cost of that final "conventional" addition.
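Continuing the sketch above, the final conversion simply lets the carries ripple through from right to left, as a conventional adder would (again only an illustration):

    def csa_to_binary(acc):
        """Reduce the stored digits to ordinary bits by propagating carries."""
        bits, carry = [], 0
        for d in acc:
            carry, bit = divmod(d + carry, 2)
            bits.append(bit)
        while carry:
            carry, bit = divmod(carry, 2)
            bits.append(bit)
        return bits  # 0/1 values, least significant first

    # Accumulate two small numbers, then convert once at the end.
    acc = csa_accumulate([0] * 8, [1, 0, 1, 1])   # 13, least significant bit first
    acc = csa_accumulate(acc, [1, 1, 1, 0])       # 7
    assert sum(b << i for i, b in enumerate(csa_to_binary(acc))) == 20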
Drawbacks
At each stage of a carry-save addition,
- We know the result of the addition at once.
- We still do not know whether the result of the addition is larger or smaller than a given number (for instance, we do not know whether it is positive or negative).
This latter point is a drawback when using carry-save adders to implement modular multiplication (multiplication followed by division, keeping the remainder only). If we cannot know whether the intermediate result is greater or less than the modulus, how can we know whether to subtract the modulus or not?
Montgomery multiplication, which depends on the rightmost digit of the result, is one solution; though rather like carry-save addition itself, it carries a fixed overhead, so that a sequence of Montgomery multiplications saves time but a single one does not. Fortunately exponentiation, which is effectively a sequence of multiplications, is the most common operation in public-key cryptography.
Technical details
The carry-save unit consists of n full adders, each of which computes a single sum and carry bit based solely on the corresponding bits of the three input numbers. Given the three n-bit numbers a, b, and c, it produces a partial sum ps and a shift-carry sc, where for each bit position i:

ps_i = a_i ⊕ b_i ⊕ c_i
sc_i = (a_i ∧ b_i) ∨ (a_i ∧ c_i) ∨ (b_i ∧ c_i)
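In software these per-bit equations can be applied to whole machine words at once (a sketch only; Python integers stand in for the bit vectors):

    def carry_save(a, b, c):
        """Full-adder logic applied bitwise; no carry propagates between positions."""
        ps = a ^ b ^ c                    # partial sum bits
        sc = (a & b) | (a & c) | (b & c)  # carry bits
        return ps, sc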
The entire sum can then be computed by:
- Shifting the carry sequence sc left by one place.
- Appending a 0 to the front (most significant bit) of the partial sum sequence ps.
- Using a ripple carry adder to add these two together and produce the resulting (n + 1)-bit value.
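Continuing the sketch above, the complete sum is the partial sum plus the carry word shifted one place left; ordinary integer addition stands in for the ripple carry adder:

    a, b, c = 0b1011, 0b1101, 0b0111     # 11, 13 and 7
    ps, sc = carry_save(a, b, c)
    total = ps + (sc << 1)               # the final carry-propagating step
    assert total == a + b + c            # 31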
When adding together three or more numbers, using a carry-save adder followed by a ripple carry adder is faster than using two ripple carry adders. This is because a ripple carry adder cannot compute a sum bit without waiting for the previous carry bit to be produced, and thus has a delay equal to that of n full adders. A carry-save adder, however, produces all of its output values in parallel, and thus has the same delay as a single full-adder. Thus the total computation time (in units of full-adder delay time) for a carry-save adder plus a ripple carry adder is n + 1, whereas for two ripple carry adders it would be 2n.