Multiply-accumulate


In computing, especially digital signal processing, multiply-accumulate is a common operation that computes the product of two numbers and adds that product to an accumulator.

a \leftarrow a + (b \times c)
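
For example, the inner loop of a dot product is nothing but repeated multiply-accumulate steps. A minimal C sketch (the function name dot_product is only illustrative):

/* Each iteration performs one multiply-accumulate: acc <- acc + b[i]*c[i]. */
double dot_product(const double *b, const double *c, int n)
{
    double acc = 0.0;             /* the accumulator a */
    for (int i = 0; i < n; i++)
        acc += b[i] * c[i];       /* a <- a + b*c */
    return acc;
}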

When done with floating-point numbers, the operation may be performed with two roundings (typical in many DSPs) or with a single rounding. When performed with a single rounding, it is called a fused multiply-add (FMA) or fused multiply-accumulate (FMAC).

Modern computers may contain a dedicated multiply-accumulate unit, or "MAC unit", consisting of a multiplier implemented in combinational logic, followed by an adder and an accumulator register that stores the result when clocked. The output of the register is fed back to one input of the adder, so that on each clock cycle the output of the multiplier is added to the register. Combinational multipliers require a large amount of logic, but can compute a product much more quickly than the shift-and-add method typical of earlier computers. The first processors to be equipped with MAC units were digital signal processors, but the technique is now common in general-purpose processors as well.
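
As a rough behavioral sketch (the names mac_unit and mac_clock are made up for illustration), one clock cycle of an integer MAC unit with a double-width accumulator, as commonly found in DSPs, can be modelled in C as:

#include <stdint.h>

typedef struct {
    int64_t acc;                  /* accumulator register */
} mac_unit;

/* One clock cycle: the combinational multiplier forms b*c, the adder adds it
   to the register's current contents, and the register latches the new sum,
   which is fed back to the adder on the next cycle. */
void mac_clock(mac_unit *u, int32_t b, int32_t c)
{
    u->acc += (int64_t) b * c;
}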

In floating-point arithmetic

When done with integers, the operation is typically exact (computed modulo some power of 2). Floating-point numbers, however, carry only a limited amount of precision, so digital floating-point arithmetic is generally neither associative nor distributive. (See Floating point#Accuracy problems.)
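
For instance, the grouping of additions already changes the result in double precision; a small self-contained C check:

#include <stdio.h>

int main(void)
{
    /* Rounding after every operation makes the two groupings disagree. */
    double x = (0.1 + 0.2) + 0.3;     /* 0.6000000000000001 */
    double y = 0.1 + (0.2 + 0.3);     /* 0.6                */
    printf("%d\n", x == y);           /* prints 0: not equal */
    return 0;
}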

Therefore, it makes a difference to the result whether the multiply-add is performed with two roundings, or in one operation with a single rounding. When performed with a single rounding, the operation is termed a fused multiply-add.
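
The difference can be observed directly with the C99 fma() function (introduced below), assuming a conforming implementation; the values here are chosen so that the rounding error of the product is visible:

#include <math.h>
#include <stdio.h>

#pragma STDC FP_CONTRACT OFF          /* keep the compiler from fusing a + b*c on its own */

int main(void)
{
    double a = -1.0;
    double b = 1.0 + 0x1p-30;         /* 1 + 2^-30 */
    double c = b;

    double unfused = a + b * c;       /* b*c rounds to 1 + 2^-29, then add:  2^-29         */
    double fused   = fma(b, c, a);    /* a + b*c with a single rounding:     2^-29 + 2^-60 */

    printf("%a\n%a\n", unfused, fused);   /* the two results differ */
    return 0;
}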

Fused multiply-add

A fused multiply-add is a floating-point multiply-add operation performed in one step, with a single rounding. That is, where an unfused multiply-add would compute the product b \times c, round it to N significant bits, add the result to a, and round back to N significant bits, a fused multiply-add would compute the entire sum a + b \times c to its full precision before rounding the final result down to N significant bits.

When implemented inside a microprocessor, this is typically faster than a multiply operation followed by an add. It also allows the low-order half of the exact product to be recovered, i.e. the bits lost when the product is rounded to N significant bits. E.g.,

double H = fma(A, B, 0.0);   /* the N most significant bits of the product A*B (the rounded product) */
double L = fma(A, B, -H);    /* the N next most significant bits (the rounding error A*B - H)        */
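
As a concrete, illustrative instance: with A = B = 1 + 2^-30, the rounded product H drops the low-order term 2^-60 of the exact product, and the second FMA recovers it exactly, so that H + L equals A*B without error:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double A = 1.0 + 0x1p-30, B = A;  /* exact product is 1 + 2^-29 + 2^-60 */

    double H = fma(A, B, 0.0);        /* rounded product:          1 + 2^-29 */
    double L = fma(A, B, -H);         /* exact remainder A*B - H:  2^-60     */

    printf("%a %a\n", H, L);
    return 0;
}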

A fused multiply-add is implemented in the SPARC64, PowerPC, PA-RISC 2.0, and Itanium processor families, and will be implemented in AMD processors supporting the SSE5 instruction set. With this instruction there is no need for a dedicated hardware divide or square-root unit, since both can be implemented efficiently in software using the FMA.
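
For example, division can be carried out by Newton-Raphson refinement of the reciprocal, with each step built from FMAs. The sketch below is only illustrative: the function name fma_divide and the float-based seed are assumptions, special cases (zero, infinity, NaN, over/underflow) are ignored, and the result is not guaranteed to be correctly rounded in every case.

#include <math.h>

double fma_divide(double a, double b)
{
    double x = (double)(1.0f / (float)b);  /* crude reciprocal seed, about 24 correct bits */

    /* Newton-Raphson: x <- x + x*(1 - b*x).  Each step roughly doubles the
       number of correct bits, so two steps get close to double precision. */
    for (int i = 0; i < 2; i++) {
        double e = fma(-b, x, 1.0);        /* e = 1 - b*x, with a single rounding */
        x = fma(x, e, x);                  /* x = x + x*e                         */
    }

    double q = a * x;                      /* quotient estimate        */
    double r = fma(-b, q, a);              /* residual r = a - b*q     */
    return fma(r, x, q);                   /* one FMA correction of q  */
}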

A fast FMA can speed up and improve the accuracy of many computations that involve the accumulation of products, such as dot products, matrix multiplication, polynomial evaluation (for example, with Horner's rule), and Newton's method for evaluating functions.
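
Horner-rule evaluation, for instance, reduces to one FMA per coefficient. A brief C sketch (the function name horner is only illustrative):

#include <math.h>

/* Evaluate c[0] + c[1]*x + ... + c[n]*x^n by Horner's rule,
   using one fused multiply-add per coefficient. */
double horner(const double *c, int n, double x)
{
    double r = c[n];
    for (int i = n - 1; i >= 0; i--)
        r = fma(r, x, c[i]);      /* r = r*x + c[i] with a single rounding */
    return r;
}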

The FMA operation will likely be added to IEEE 754 in IEEE 754r.

The 1999 standard of the C programming language (C99) supports the FMA operation through the fma standard math library function.
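
C99's <math.h> also provides float and long double variants, fmaf and fmal. A minimal usage example (on many systems the math library must be linked explicitly, e.g. with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double      d  = fma(2.0, 3.0, 1.0);      /* 2.0*3.0 + 1.0 = 7.0 */
    float       f  = fmaf(2.0f, 3.0f, 1.0f);  /* float variant       */
    long double ld = fmal(2.0L, 3.0L, 1.0L);  /* long double variant */

    printf("%f %f %Lf\n", d, (double) f, ld);
    return 0;
}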