Denormal number

From Wikipedia, the free encyclopedia

In computer science, denormal numbers or denormalized numbers (now often called subnormal numbers) fill the underflow gap around zero in floating-point arithmetic: any non-zero number whose magnitude is smaller than that of the smallest normal number is 'subnormal'.

For example, if the smallest positive 'normal' number is 1×β^(−n) (where β is the base of the floating-point system, usually 2 or 10), then any smaller positive numbers that can be represented are denormal.
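This boundary can be observed directly in Python, whose floats are IEEE 754 binary64 on virtually all platforms (a minimal sketch; here β = 2 and the smallest positive normal number is 2^−1022):

```python
import sys

# For IEEE 754 binary64 (Python's float), the smallest positive
# normal number is 2**-1022, exposed as sys.float_info.min.
smallest_normal = sys.float_info.min
print(smallest_normal)              # 2.2250738585072014e-308

# Dividing further does not flush to zero: the results are denormal.
smallest_denormal = smallest_normal / 2**52
print(smallest_denormal)            # 5e-324, the smallest positive denormal
print(smallest_denormal > 0)        # True
```

Dividing the smallest denormal by 2 again finally rounds to zero, since no smaller positive value is representable.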

The production of a denormal is sometimes called gradual underflow because it allows a calculation to lose precision slowly when the result is small.

As implemented in the IEEE floating-point standard binary formats, denormal numbers are encoded with a biased exponent field of 0, but are interpreted with the value of the smallest allowed exponent, which is one greater (i.e., as if the field were encoded as 1), and with no implicit leading bit in the significand.
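This encoding can be verified by inspecting the raw bits of a binary64 value (a minimal sketch using Python's standard struct module; the field widths 1/11/52 are those of the binary64 format):

```python
import struct

def decode_binary64(x):
    """Split a float into its sign, biased-exponent, and fraction fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    biased_exp = (bits >> 52) & 0x7FF       # 11-bit exponent field
    fraction = bits & ((1 << 52) - 1)       # 52-bit fraction field
    return sign, biased_exp, fraction

# A denormal has a biased exponent field of 0 and a non-zero fraction;
# it is interpreted with exponent -1022 (as if the field held 1) and
# no implicit leading 1 bit.
sign, e, f = decode_binary64(5e-324)        # smallest positive denormal
print(sign, e, f)                           # 0 0 1
value = (-1)**sign * f * 2.0**(-1022 - 52)
print(value == 5e-324)                      # True
```

The reconstruction multiplies the integer fraction by 2^(−1022−52): the exponent −1022 of the smallest normal number, shifted by the 52 fraction bits.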

In the proposed IEEE 754 revision, denormal numbers are renamed subnormal numbers, and are supported in both binary and decimal formats. In the latter, they require no special encoding because the format supports unnormalized numbers directly.

Background

Denormal numbers were implemented in the Intel 8087 while the IEEE 754 standard was being written. This implementation demonstrated that denormals could be supported in a practical implementation. Some floating-point units do not directly support denormal numbers in hardware, but rather trap to some kind of software support. While this may be transparent to the user, it can result in calculations that produce or consume denormal numbers being much slower than similar calculations on normal numbers.

Further reading

See also various papers on William Kahan's web site for examples of where denormal numbers help improve the results of calculations.
