Single-precision floating-point format is a computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of values by using a floating point.
In IEEE 754-2008 the 32-bit base 2 format is officially referred to as binary32. It was called single in IEEE 754-1985. In older computers, other floating-point formats of 4 bytes were used.
One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model.
Single-precision binary floating point is used because of its wider range compared with a fixed-point format of the same bit width, even though this comes at the cost of precision.
Single precision is known as float in C, C++, C#, Java[1], and Haskell, and as single in Pascal, Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml and single in versions of Octave prior to 3.2 refer to double-precision numbers.
The IEEE 754 standard specifies a binary32 as having:

* Sign bit: 1 bit
* Exponent width: 8 bits
* Significand precision: 24 bits (23 explicitly stored)
The sign bit determines the sign of the number, which is also the sign of the significand. The exponent field is an 8-bit unsigned integer from 0 to 255, in the biased form accepted by the IEEE 754 binary32 definition (interpreted as a signed integer it would range from −128 to 127). In the biased form, a stored exponent value of 127 represents an actual exponent of zero.
The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit (to the left of the binary point) with value 1, unless the exponent is stored with all zeros. Thus only 23 fraction bits of the significand appear in the memory format, but the total precision is 24 bits (equivalent to log₁₀(2²⁴) ≈ 7.225 decimal digits). The bits are laid out as follows:

sign (1 bit, bit 31) | exponent (8 bits, bits 30–23) | fraction (23 bits, bits 22–0)
The real value assumed by a given 32-bit binary32 datum with a given biased exponent $e$ and a 23-bit fraction is

$$\text{value} = (-1)^{b_{31}} \times 2^{(b_{30}b_{29}\dots b_{23})_2 - 127} \times (1.b_{22}b_{21}\dots b_{0})_2,$$

or, written out more precisely,

$$\text{value} = (-1)^{\text{sign}} \times 2^{e-127} \times \left(1 + \sum_{i=1}^{23} b_{23-i}\,2^{-i}\right).$$
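As an illustration, here is a minimal C sketch of this formula. The name `decode_binary32_normal` is ours, and the function deliberately handles only normal numbers; zeros, subnormals, infinities, and NaNs (covered by the table below) are omitted for brevity:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* A minimal sketch of the value formula above.  Handles normal
   numbers only; special exponent values are not treated here. */
static double decode_binary32_normal(uint32_t bits) {
    uint32_t sign     = bits >> 31;                  /* bit 31 */
    int      exponent = (int)((bits >> 23) & 0xFF);  /* bits 30..23, biased by 127 */
    uint32_t fraction = bits & 0x7FFFFF;             /* bits 22..0 */
    double significand = 1.0 + fraction / 8388608.0; /* 1.f, with 2^23 = 8388608 */
    return (sign ? -1.0 : 1.0) * ldexp(significand, exponent - 127);
}

int main(void) {
    printf("%g\n", decode_binary32_normal(0x3F800000)); /* prints 1 */
    printf("%g\n", decode_binary32_normal(0x41460000)); /* prints 12.375 */
    return 0;
}
```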
The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127, also known as the exponent bias in the IEEE 754 standard.
Thus, in order to get the true exponent as defined by the offset binary representation, the offset of 127 has to be subtracted from the stored exponent.
The stored exponents 00H and FFH are interpreted specially.
| Exponent | Significand zero | Significand non-zero | Equation |
|---|---|---|---|
| 00H | zero, −0 | subnormal numbers | (−1)^signbit × 2⁻¹²⁶ × 0.significandbits |
| 01H, …, FEH | normalized value | normalized value | (−1)^signbit × 2^(exponentbits − 127) × 1.significandbits |
| FFH | ±infinity | NaN (quiet, signalling) | |
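The cases in this table can be told apart purely from the stored exponent and significand fields. A small C sketch of this dispatch (the `classify` helper is our own illustration, not a library function):

```c
#include <stdint.h>
#include <stdio.h>

/* Classify a binary32 bit pattern according to the table above. */
static const char *classify(uint32_t bits) {
    uint32_t exponent = (bits >> 23) & 0xFF;
    uint32_t fraction = bits & 0x7FFFFF;
    if (exponent == 0x00) return fraction ? "subnormal" : "zero";
    if (exponent == 0xFF) return fraction ? "NaN" : "infinity";
    return "normal";
}

int main(void) {
    printf("%s\n", classify(0x00000001)); /* subnormal (smallest: 2^-149) */
    printf("%s\n", classify(0x80000000)); /* zero (here: -0) */
    printf("%s\n", classify(0x7F800000)); /* infinity */
    printf("%s\n", classify(0x7FC00001)); /* NaN */
    printf("%s\n", classify(0x3F800000)); /* normal (1.0) */
    return 0;
}
```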
The minimum positive (subnormal) value is 2⁻¹⁴⁹ ≈ 1.4 × 10⁻⁴⁵. The minimum positive normal value is 2⁻¹²⁶ ≈ 1.18 × 10⁻³⁸. The maximum representable value is (2 − 2⁻²³) × 2¹²⁷ ≈ 3.4 × 10³⁸.
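Assuming the platform's `float` is IEEE 754 binary32 (true on essentially all modern hardware), these limits correspond to constants in C's `<float.h>`; note that `FLT_TRUE_MIN` requires C11:

```c
#include <float.h>
#include <stdio.h>

int main(void) {
    printf("min subnormal: %g\n", FLT_TRUE_MIN); /* 2^-149  ~ 1.4e-45 (C11) */
    printf("min normal:    %g\n", FLT_MIN);      /* 2^-126  ~ 1.18e-38 */
    printf("max finite:    %g\n", FLT_MAX);      /* (2 - 2^-23) * 2^127 ~ 3.4e38 */
    return 0;
}
```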
In general, refer to the IEEE 754 standard itself for the strict conversion (including the rounding behaviour) of a real number into its equivalent binary32 format. Here we show how to convert a base-10 real number into an IEEE 754 binary32 format using the following outline:

* consider a real number with an integer part and a fraction part, such as 12.375
* convert and normalize the integer part into binary
* convert the fraction part using the technique shown below
* add the two results and adjust them to produce the final conversion
Conversion of the fractional part:
Consider 0.375, the fractional part of 12.375. To convert it into a binary fraction, multiply the fraction by 2, take the integer part, and repeat with the new fraction until a fraction of zero is reached or until the precision limit is reached, which is 23 fraction digits for the IEEE 754 binary32 format.
* 0.375 × 2 = 0.750 = 0 + 0.750 => b₋₁ = 0; the integer part represents the binary fraction digit. Re-multiply 0.750 by 2 to proceed.
* 0.750 × 2 = 1.500 = 1 + 0.500 => b₋₂ = 1
* 0.500 × 2 = 1.000 = 1 + 0.000 => b₋₃ = 1; fraction = 0.000, so terminate.
We see that (0.375)₁₀ can be exactly represented in binary as (0.011)₂. Not all decimal fractions can be represented by a finite-digit binary fraction: for example, decimal 0.1 cannot be represented in binary exactly, only approximated.
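The repeated multiply-by-2 procedure above is easy to mechanize. The following C sketch (the name `fraction_to_binary` is ours) prints the binary digits; note that the `double` argument already carries its own representation error, so for an input like 0.1 the digits shown are those of `double`'s nearest approximation:

```c
#include <stdio.h>

/* Convert a decimal fraction to binary digits by repeated
   multiplication by 2, as in the walkthrough above.  Stops after
   23 digits (the binary32 fraction width) or when the fraction
   reaches zero. */
static void fraction_to_binary(double frac) {
    printf("0.");
    for (int i = 0; i < 23 && frac != 0.0; i++) {
        frac *= 2.0;
        int digit = (int)frac; /* the integer part is the next binary digit */
        printf("%d", digit);
        frac -= digit;
    }
    printf("\n");
}

int main(void) {
    fraction_to_binary(0.375); /* terminates: 0.011 */
    fraction_to_binary(0.1);   /* does not terminate within 23 digits */
    return 0;
}
```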
Therefore (12.375)₁₀ = (12)₁₀ + (0.375)₁₀ = (1100)₂ + (0.011)₂ = (1100.011)₂
The IEEE 754 binary32 format also requires real values to be represented in normalized form, (1.x₁x₂…)₂ × 2ᵉ (see normalized number, denormalized number), so the binary point in 1100.011 is moved three places to the left to give

(1100.011)₂ = (1.100011)₂ × 2³

Finally we can see that:

(12.375)₁₀ = (1.100011)₂ × 2³

From which we deduce:

* the exponent is 3 (and in the biased form it is therefore 127 + 3 = 130 = (1000 0010)₂)
* the fraction is 100011 (looking to the right of the binary point), padded with zeros to 23 bits: (100 0110 0000 0000 0000 0000)₂
From these we can form the resulting 32 bit IEEE 754 binary32 format representation of 12.375 as: 0-10000010-10001100000000000000000 = 41460000H
Note: consider converting 68.123 into IEEE 754 binary32 format. Using the above procedure you would expect to get 42883EF9H, with the last 4 bits being 1001. However, due to the default rounding behaviour of the IEEE 754 format, what you get is 42883EFAH, whose last 4 bits are 1010.
Ex 1: Consider decimal 1. We can see that:

(1)₁₀ = (1.0)₂ × 2⁰

From which we deduce:

* the exponent is 0 (and in the biased form it is therefore 127 + 0 = 127 = (0111 1111)₂)
* the fraction is 0 (all zeros)
From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 1 as: 0-01111111-00000000000000000000000 = 3f800000H
Ex 2: Consider a value of 0.25. We can see that:

(0.25)₁₀ = (1.0)₂ × 2⁻²

From which we deduce:

* the exponent is −2 (and in the biased form it is therefore 127 + (−2) = 125 = (0111 1101)₂)
* the fraction is 0 (all zeros)
From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 0.25 as: 0-01111101-00000000000000000000000 = 3e800000H
Ex 3: Consider a value of 0.375. We saw that

(0.375)₁₀ = (0.011)₂ = (1.1)₂ × 2⁻²

Hence after determining a representation of 0.375 as (1.1)₂ × 2⁻² we can proceed as above:

* the exponent is −2 (and in the biased form it is therefore 127 + (−2) = 125 = (0111 1101)₂)
* the fraction is 1 (looking to the right of the binary point), padded with zeros to 23 bits: (100 0000 0000 0000 0000 0000)₂
From these we can form the resulting 32 bit IEEE 754 binary32 format representation of real number 0.375 as: 0-01111101-10000000000000000000000 = 3ec00000H
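Assuming the platform stores `float` as IEEE 754 binary32, the bit patterns derived by hand above, including the rounded-up 42883EFAH for 68.123, can be checked directly. The `float_bits` helper below is our own illustration; type-punning via `memcpy` is well-defined C:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret a float's storage as a 32-bit integer. */
static uint32_t float_bits(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}

int main(void) {
    printf("12.375 -> %08" PRIX32 "H\n", float_bits(12.375f)); /* 41460000H */
    printf("68.123 -> %08" PRIX32 "H\n", float_bits(68.123f)); /* 42883EFAH (rounded up) */
    printf("1      -> %08" PRIX32 "H\n", float_bits(1.0f));    /* 3F800000H */
    printf("0.25   -> %08" PRIX32 "H\n", float_bits(0.25f));   /* 3E800000H */
    printf("0.375  -> %08" PRIX32 "H\n", float_bits(0.375f));  /* 3EC00000H */
    return 0;
}
```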
These examples are given as the bit representation, in hexadecimal, of the floating-point value, including the sign, (biased) exponent, and significand:
* 3f80 0000 = 1
* c000 0000 = −2
* 7f7f ffff ≈ 3.4028234 × 10³⁸ (max single precision)
* 0000 0000 = 0
* 8000 0000 = −0
* 7f80 0000 = infinity
* ff80 0000 = −infinity
* 3eaa aaab ≈ 1/3
By default, 1/3 rounds up, instead of down as in double precision, because of the even number of bits in the significand: the bits beyond the rounding point are 1010..., which is more than 1/2 of a unit in the last place.
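This round-up is visible directly in the stored bits. A small C check, assuming IEEE 754 `float` and `double` on the platform:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float third = 1.0f / 3.0f;
    uint32_t bits;
    memcpy(&bits, &third, sizeof bits);
    /* Prints 3EAAAAAB: the trailing ...B is the round-up described
       above; truncation would have ended in ...A. */
    printf("1/3 as binary32: %08" PRIX32 "\n", bits);
    printf("float:  %.9g\n", third);      /* 0.333333343 (rounded up) */
    printf("double: %.17g\n", 1.0 / 3.0); /* 0.33333333333333331 (rounded down) */
    return 0;
}
```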
We start with the hexadecimal representation of the value, 41c8 0000 in this example, and convert it to binary:
41c8 0000₁₆ = 0100 0001 1100 1000 0000 0000 0000 0000₂
then we break it down into three parts: sign bit, exponent, and significand.

* Sign bit: 0
* Exponent: 1000 0011₂ = 83₁₆ = 131
* Significand: 100 1000 0000 0000 0000 0000₂ = 480000₁₆
We then add the implicit 24th bit to the significand
Significand: 1100 1000 0000 0000 0000 0000₂ = C80000₁₆
and decode the exponent value by subtracting 127
* Raw exponent: 83₁₆ = 131
* Decoded exponent: 131 − 127 = 4
Each of the 24 bits of the significand (including the implicit 24th bit), from bit 23 to bit 0, represents a value, starting at 1 and halving for each subsequent bit, as follows:
* bit 23 = 1
* bit 22 = 0.5
* bit 21 = 0.25
* bit 20 = 0.125
* bit 19 = 0.0625
* …
* bit 0 = 0.00000011920928955078125 (= 2⁻²³)
The significand in this example has three bits set, bit 23, bit 22 and bit 19. We can now decode the significand by adding the values represented by these bits.
Decoded significand: 1 + 0.5 + 0.0625 = 1.5625 = C80000₁₆/2²³
Then we multiply by the base, 2, raised to the power of the exponent, to get the final result:
1.5625 × 2⁴ = 25
Thus
41c8 0000₁₆ = 25
This is equivalent to:

$$x = (-1)^{s} \times \left(1 + \frac{m}{2^{23}}\right) \times 2^{e-127}$$

where $s$ is the sign bit, $e$ is the 8-bit biased exponent, and $m$ is the 23-bit significand (fraction) field.
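As a closing illustration, the whole decoding walkthrough condenses to a few lines of C. This sketch replicates the manual steps and then cross-checks the result by reinterpreting the same bits as a `float` (assuming the platform's `float` is IEEE 754 binary32):

```c
#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t bits = 0x41C80000;

    /* Replicate the manual decoding steps above. */
    uint32_t sign = bits >> 31;                       /* 0 */
    int exponent  = (int)((bits >> 23) & 0xFF) - 127; /* 131 - 127 = 4 */
    uint32_t sig  = (bits & 0x7FFFFF) | 0x800000;     /* restore implicit bit: C80000H */
    double significand = sig / 8388608.0;             /* C80000H / 2^23 = 1.5625 */
    double value = (sign ? -1.0 : 1.0) * ldexp(significand, exponent);
    printf("decoded by hand: %g\n", value);           /* 25 */

    /* Cross-check by reinterpreting the same bits as a float. */
    float f;
    memcpy(&f, &bits, sizeof f);
    printf("reinterpreted:   %g\n", f);               /* 25 */
    return 0;
}
```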