Normal number (computing)
In computing, a normal number is a non-zero number in a floating-point representation that is within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
The magnitude of the smallest normal number in a format is given by b^emin, where b is the base (radix) of the format (usually 2 or 10) and emin depends on the size and layout of the format.
Similarly, the magnitude of the largest normal number in a format is given by
- b^emax × (b − b^(1−p)),
where p is the precision of the format in digits and emax is (−emin)+1.
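To make the two formulas concrete, here is a minimal Python sketch that evaluates them for a toy decimal format; the parameters (b = 10, p = 3, emin = −2, emax = 3) are invented for illustration and belong to no standard format:

```python
# Toy format parameters, invented for illustration: base 10, 3-digit
# precision, exponent range -2 .. 3 (note that emax = (-emin) + 1 holds).
b, p, emin, emax = 10, 3, -2, 3

smallest_normal = b ** emin                      # 10^-2 = 0.01
largest_normal = b ** emax * (b - b ** (1 - p))  # 10^3 * (10 - 10^-2)

print(smallest_normal)  # 0.01
print(largest_normal)   # 9990.0, i.e. 9.99 * 10^3, the largest 3-digit value
```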
In the IEEE 754 binary and decimal formats, p, emin, and emax have the following values:
| Format | p | emin | emax |
|---|---|---|---|
| binary 32-bit | 24 | −126 | 127 |
| binary 64-bit | 53 | −1022 | 1023 |
| binary 128-bit | 113 | −16382 | 16383 |
| decimal 32-bit | 7 | −95 | 96 |
| decimal 64-bit | 16 | −383 | 384 |
| decimal 128-bit | 34 | −6143 | 6144 |
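The binary 64-bit row can be checked directly in Python, since CPython floats are IEEE 754 binary64 on common platforms; a sketch using `math.ldexp`, which scales by a power of two exactly, reproduces the limits that `sys.float_info` reports:

```python
import math
import sys

# Parameters from the binary 64-bit row of the table.
b, p, emin, emax = 2, 53, -1022, 1023

# Smallest and largest positive normal numbers, per the formulas above.
smallest_normal = math.ldexp(1.0, emin)                        # 2^-1022
largest_normal = math.ldexp(b - math.ldexp(1.0, 1 - p), emax)  # (2 - 2^-52) * 2^1023

assert smallest_normal == sys.float_info.min   # 2.2250738585072014e-308
assert largest_normal == sys.float_info.max    # 1.7976931348623157e+308
```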
For example, in the smallest decimal format, the range of positive normal numbers is 10^−95 through 9.999999 × 10^96.
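Python's `decimal` module can emulate this arithmetic: a minimal sketch, assuming a context configured with the decimal 32-bit parameters from the table (the module follows the same arithmetic model, though it does not use the 32-bit storage encoding):

```python
from decimal import Context

# A context with the decimal 32-bit parameters from the table above.
ctx = Context(prec=7, Emin=-95, Emax=96)

smallest = ctx.create_decimal("1E-95")        # smallest positive normal
largest = ctx.create_decimal("9.999999E+96")  # largest positive normal

print(smallest.is_normal(ctx))                     # True
print(largest.is_normal(ctx))                      # True
print(ctx.create_decimal("9E-96").is_normal(ctx))  # False: subnormal
```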
Non-zero numbers smaller in magnitude than the smallest normal number are called denormal (or subnormal) numbers. Zero is neither normal nor subnormal.
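In C99 and later, `fpclassify` (or `isnormal`) performs this classification; for binary64 it can be sketched in Python as a comparison against the smallest normal magnitude:

```python
import sys

def classify(x: float) -> str:
    """Classify a finite binary64 float as zero, subnormal, or normal."""
    if x == 0.0:
        return "zero"                  # zero is neither normal nor subnormal
    if abs(x) < sys.float_info.min:    # below the smallest normal magnitude
        return "subnormal"
    return "normal"

print(classify(0.0))     # zero
print(classify(1e-310))  # subnormal (below 2^-1022)
print(classify(1.0))     # normal
```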