Arithmetic precision
The precision of a value describes the number of digits that are used to express that value. In a scientific setting this is usually the total number of digits (sometimes called the significant digits) or, less commonly, the number of fractional digits or decimal places (the number of digits following the decimal point). This second definition is useful in financial and engineering applications, where the number of digits in the fractional part has particular importance.
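The distinction can be made concrete in code. The following sketch uses Python's standard decimal module, which stores a number's digits exactly, to read off both counts for a single value (the value 12.345 matches the example used below; the variable names are illustrative):

```python
from decimal import Decimal

value = Decimal("12.345")
sign, digits, exponent = value.as_tuple()

print(len(digits))   # 5 -> total number of digits (significant digits)
print(-exponent)     # 3 -> number of digits after the decimal point
```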
In both cases, the term precision can be used to describe the position at which an inexact result will be rounded. For example, in floating-point arithmetic, a result is rounded to a given or fixed precision, which is the length of the resulting significand. In financial calculations, a number is often rounded to a given number of places (for example, to two places after the decimal point for many world currencies).
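Both conventions are directly expressible with Python's decimal module: the context precision fixes the length of the significand, while quantize() fixes the number of decimal places. A minimal sketch, with round half up chosen to match the table below:

```python
from decimal import Decimal, localcontext, ROUND_HALF_UP

value = Decimal("12.345")

# Floating-point style: round to a fixed precision, i.e. a fixed
# length of the significand (three significant digits here).
with localcontext() as ctx:
    ctx.prec = 3
    ctx.rounding = ROUND_HALF_UP
    print(+value)                      # 12.3 (unary plus applies the context)

# Financial style: round to a fixed number of decimal places
# (two places, as for many world currencies).
print(value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 12.35
```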
As an illustration, the decimal quantity 12.345 can be expressed with various numbers of significant digits or decimal places. If insufficient precision is available then the number is rounded in some manner to fit the available precision. The following table shows the results for various total precisions and decimal places, with the results rounded to nearest using round half up (ties round away from zero), one of the most common rounding methods.
Precision | Rounded to significant digits | Rounded to decimal places
--- | --- | ---
Five | 12.345 | 12.34500
Four | 12.35 | 12.3450
Three | 12.3 | 12.345
Two | 12 | 12.35
One | 1E+1 † | 12.3
Zero | n/a | 12
† The notation 1E+1 means 1 × 10¹.
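The table above can be reproduced programmatically. The following sketch, again assuming Python's decimal module and round half up, prints both columns for each precision from five down to zero:

```python
from decimal import Decimal, localcontext, ROUND_HALF_UP

value = Decimal("12.345")

print("precision | significant digits | decimal places")
for n in range(5, -1, -1):
    if n > 0:
        # Round to n significant digits by shrinking the context precision.
        with localcontext() as ctx:
            ctx.prec = n
            ctx.rounding = ROUND_HALF_UP
            sig = str(+value)          # unary plus applies the context
    else:
        sig = "n/a"                    # zero significant digits is undefined
    # Round to n decimal places by quantizing to the matching exponent.
    dp = value.quantize(Decimal(10) ** -n, rounding=ROUND_HALF_UP)
    print(f"{n:>9} | {sig:>18} | {dp}")
```

Note how the one-significant-digit row comes out as 1E+1 rather than 10: rounding to one digit leaves a single-digit significand, so the result must be written in exponential notation.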
See also
- Round-off error
- Precision (computer science)
- IEEE 754 (IEEE floating-point standard)