Fixed-point arithmetic

In computing, a fixed-point number representation is a real data type for a number that has a fixed number of digits before and after the radix point (e.g. "." in English decimal notation). Fixed-point number representation stands in contrast to the more flexible (and more computationally demanding) floating point number representation.

Fixed-point numbers are useful for representing fractional values in native two's complement format if the executing processor has no floating point unit (FPU) or if fixed-point provides improved performance or accuracy for the application at hand. Most low-cost embedded microprocessors and microcontrollers do not have an FPU.

Introduction and basic features

Mathematical definition, number range

Based on the verbal definition above, a fixed-point number may be written as M.F, where M represents the magnitude, i.e. the integer part, '.' is the radix point, and F represents the fractional part.

In terms of binary numbers, each magnitude bit represents a power of two, while each fractional bit represents an inverse power of two. Thus the first fractional bit is ½, the second is ¼, the third is ⅛ and so on. For signed fixed-point numbers in two's complement format, the upper bound is given by 2^(m−1) − 2^(−f) and the lower bound by −2^(m−1), where m and f are the number of bits in M and F respectively. For unsigned values, the range is 0 to 2^m − 2^(−f).
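
For illustration, a minimal C sketch (the helper name print_signed_range is hypothetical) that evaluates these bounds for given values of m and f:

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical helper: print the bounds of a signed two's-complement
     * fixed-point format, where m is the number of bits in M (including
     * the sign bit) and f is the number of bits in F. */
    static void print_signed_range(int m, int f)
    {
        double upper = ldexp(1.0, m - 1) - ldexp(1.0, -f);  /*  2^(m-1) - 2^(-f) */
        double lower = -ldexp(1.0, m - 1);                  /* -2^(m-1)          */
        printf("m=%d, f=%d: [%.10g, %.10g]\n", m, f, lower, upper);
    }

    int main(void)
    {
        print_signed_range(1, 15);   /* -1 to 0.999969...         */
        print_signed_range(16, 16);  /* -32768 to 32767.99998...  */
        return 0;
    }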

Comparison with floating point and BCD

Fixed point numbers can represent fractional powers of two exactly, but, like floating point numbers, cannot exactly represent fractional powers of ten. If exact fractional powers of ten are desired, then binary-coded decimal (BCD) format should be used. For example, one-tenth (.1) and one-hundredth (.01) can be represented only approximately by two's complement fixed point or floating point representations, while they can be represented exactly in BCD representations. However, BCD does not make as efficient use of bits as two's complement notation, nor is it as computationally fast.
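
As a sketch of the approximation involved, consider storing one-tenth with 16 binary fractional bits (the format choice is only for illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* One-tenth scaled by 2^16 is 6553.6, which is not an integer, so
         * the nearest representable value (6554/65536) is stored instead. */
        int32_t tenth = (int32_t)(0.1 * 65536.0 + 0.5);               /* 6554 */
        printf("stored: %d/65536 = %.12f\n", tenth, tenth / 65536.0); /* 0.100006103516 */
        return 0;
    }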

Integer fixed-point values are always exactly represented so long as they fit within the range determined by the magnitude bits. This is in contrast to floating-point representations, which have separate exponent and value fields which can allow specification of integer values larger than the number of bits in the value field, causing the represented value to lose precision.
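
A short C illustration of this effect (the constants are chosen only to show the threshold): a 32-bit float has a 24-bit significand, so the integer 2^24 + 1 cannot be stored exactly, whereas the same value fits exactly in a 32-bit integer or fixed-point word.

    #include <stdio.h>

    int main(void)
    {
        float f = 16777217.0f;   /* 2^24 + 1: rounded to 16777216.0f on assignment */
        int   i = 16777217;      /* fits exactly in a 32-bit integer               */
        printf("float: %.1f\n", f);   /* prints 16777216.0 */
        printf("int:   %d\n", i);     /* prints 16777217   */
        return 0;
    }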

Programming language implementations

A common use of fixed-point BCD numbers is for storing monetary values, where the inexact values of floating-point numbers are often a liability. Historically, fixed-point representations were the norm for decimal data types; for example, in PL/I or COBOL. The Ada programming language includes built-in support for both fixed-point (in two variants: ordinary and decimal) and floating-point. JOVIAL also provides both floating- and fixed-point types.

Very few computer languages include built-in support for fixed-point values, because for most applications floating-point representations are fast enough and accurate enough. Floating-point representations are easier to use than fixed-point representations, because they can handle a wider dynamic range and do not require programmers to specify the number of digits after the radix point. However, if they are needed, fixed-point numbers can be implemented even in programming languages like C and C++, which do not commonly include such support.
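
A minimal sketch of how such support might be written by hand in C, using a 32-bit word with 16 fractional bits (the type name, function names and the choice of 16 fractional bits are illustrative only, not part of any standard):

    #include <stdint.h>

    typedef int32_t fix16_16;        /* 16 integer bits, 16 fractional bits */
    #define FRAC_BITS 16

    static inline fix16_16 fix_from_int(int32_t n)   { return n * (1 << FRAC_BITS); }
    static inline double   fix_to_double(fix16_16 a) { return a / (double)(1 << FRAC_BITS); }

    /* Addition and subtraction are plain integer operations, because both
     * operands share the same scale factor. */
    static inline fix16_16 fix_add(fix16_16 a, fix16_16 b) { return a + b; }

    /* Multiplication needs a wider intermediate result, then a shift back down. */
    static inline fix16_16 fix_mul(fix16_16 a, fix16_16 b)
    {
        return (fix16_16)(((int64_t)a * b) >> FRAC_BITS);
    }

    /* Division pre-shifts the dividend so the quotient keeps its fractional bits. */
    static inline fix16_16 fix_div(fix16_16 a, fix16_16 b)
    {
        return (fix16_16)(((int64_t)a << FRAC_BITS) / b);
    }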

With the publication of ISO/IEC TR 18037:2004, fixed-point data types are specified for the C programming language; vendors are expected to implement these language extensions for fixed-point arithmetic in coming years.
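
Where a compiler implements these extensions (support is target-dependent and found mainly on embedded toolchains), declarations might look roughly as follows. The _Fract and _Accum types, the _Sat qualifier, the r/k literal suffixes and the <stdfix.h> header come from TR 18037; the function itself is only an illustration.

    #include <stdfix.h>

    _Fract      f = 0.5r;        /* pure fraction: fractional bits only        */
    _Accum      a = 3.25k;       /* integer bits as well as fractional bits    */
    _Sat _Fract s = 0.9r;        /* _Sat: results saturate instead of wrapping */

    _Accum scale(_Accum x)
    {
        return x * 1.5k + a;     /* the ordinary arithmetic operators apply    */
    }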

Nomenclature

There are various notations used to represent word length and radix point in a fixed-point number. One common notation is the "Q" prefix, where the number following the Q specifies the number of fractional bits to the right of the radix point. For example, Q15 represents a number with 15 fractional bits. This notation is ambiguous since it does not specify the word length; however, it is usually assumed that the word length is either 16 or 32 bits depending on the target processor in use. The unambiguous form of the "Q" notation is a Q followed by a dotted pair of numbers giving the number of integer bits to the left of the radix point followed by the number of fractional bits. Since the entire word is a two's complement integer, a sign bit is implied. For example, Q1.30 describes a number with 1 integer bit and 30 fractional bits stored as a 32-bit two's complement integer. The "Q" notation is documented by Texas Instruments[1] (Appendix A.2) and by MathWorks.[2] Another notation, the "fx" prefix, is similar but uses the word length as the second item in the dotted pair. For example, fx1.16 describes a number with 1 magnitude bit and 15 fractional bits in a 16-bit word.
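
For example, a pair of hypothetical C helpers for the common Q15 case (15 fractional bits in a 16-bit word, i.e. a scale factor of 2^15 = 32768):

    #include <stdint.h>

    /* Hypothetical Q15 conversion helpers. */
    static inline int16_t q15_from_double(double x) { return (int16_t)(x * 32768.0); }  /* 0.5   -> 16384 */
    static inline double  q15_to_double(int16_t q)  { return q / 32768.0; }             /* 16384 -> 0.5   */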

Precision loss and overflow

Because fixed-point operations can produce results that have more bits than the operands, there is the potential for information loss. For instance, the result of a fixed-point multiplication could have as many bits as the sum of the numbers of bits in the two operands. In order to fit the result into the same number of bits as the operands, the answer must be rounded or truncated, and the choice of which bits to keep is important. When multiplying two fixed-point numbers with the same format, for instance with I integer bits and Q fractional bits each, the result can have up to 2×I integer bits and 2×Q fractional bits.

For simplicity, many implementations of fixed-point multiplication use the same result format as the operands. This has the effect of keeping the middle bits: the I least significant integer bits and the Q most significant fractional bits. Fractional bits lost below this point represent a precision loss, which is common in fractional multiplication. If any non-zero integer bits are lost, however, the value will be radically inaccurate. This is considered an overflow, and it needs to be avoided in embedded calculations. A model-based simulation tool such as VisSim can be used to detect and avoid such overflows by choosing an appropriate result word size and radix point, proper scaling gains, and magnitude limiting of intermediate results.
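
A common concrete case is Q15 multiplication, sketched below in C (the function name and rounding constant are illustrative): the full product has 30 fractional bits, and shifting right by 15 keeps the most significant fractional bits.

    #include <stdint.h>

    /* Hypothetical Q15 multiply that returns a Q15 result in the same format. */
    static inline int16_t q15_mul(int16_t a, int16_t b)
    {
        int32_t wide = (int32_t)a * b;   /* exact product: 30 fractional bits     */
        wide += 1 << 14;                 /* round to nearest rather than truncate */
        return (int16_t)(wide >> 15);    /* keep the middle bits: back to Q15     */
    }

The final narrowing cast silently discards any excess high-order bits. Multiplying -1.0 by -1.0 (both 0x8000 in Q15), for example, has the mathematically correct result +1.0, which is not representable in Q15, so the returned value is wrong; this is exactly the overflow case described above.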

Some operations, such as division, often have built-in result limiting so that any positive overflow results in the largest positive number that can be represented by the current format, and any negative overflow results in the largest negative number representable. This built-in limiting is often referred to as saturation.
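
In software, saturation can be sketched in C as a clamp applied to a wider intermediate result before narrowing (the function name is illustrative):

    #include <stdint.h>

    /* Clamp a 32-bit intermediate result to the bounds of a 16-bit signed format. */
    static inline int16_t sat16(int32_t x)
    {
        if (x > INT16_MAX) return INT16_MAX;   /* positive overflow -> largest positive value */
        if (x < INT16_MIN) return INT16_MIN;   /* negative overflow -> largest negative value */
        return (int16_t)x;
    }

Replacing the plain cast in the earlier q15_mul sketch with such a clamp turns wrapping overflow into saturation.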

Some processors support a hardware overflow flag that can generate an interrupt on the occurrence of an overflow, but it is usually too late to salvage the proper result at this point.

Current common uses of fixed-point arithmetic

  • VisSim: a visually programmed block-diagram language that supports a fixed-point block set, allowing simulation and automatic code generation of fixed-point operations. Both word size and radix point can be specified on a per-operator basis.
  • GnuCash is an application for tracking money. It is written in C and switched from a floating-point representation of money to a fixed-point implementation as of version 1.6. This change was made to trade the less predictable rounding errors of floating-point representations for more control over rounding (for example, to the nearest cent).
  • Tremor and Toast are software libraries that decode the Ogg Vorbis and GSM Full Rate audio formats respectively. These codecs use fixed-point arithmetic because many audio decoding hardware devices do not have an FPU (to save money) and audio decoding requires enough performance that a software implementation of floating-point on low-speed devices would not produce output in real time.
  • The graphics engines on Sony's original PlayStation (3D), Nintendo's Game Boy Advance (2D only) and Nintendo DS (2D and 3D) video game systems use fixed-point arithmetic for the same reason as Tremor and Toast: to gain throughput on an architecture without an FPU.
  • Fractint represents numbers as Q3.29 fixed-point numbers,[3] again to speed up drawing on old PCs with 386 or 486SX processors, which lacked an FPU.
  • TeX font metric files make extensive use of 32-bit signed fixed-point numbers, with 12 bits to the left of the binary point.
  • PostgreSQL has a special numeric type for exact storage of numbers with up to 1000 digits.
