Unit in the last place
In computer science, the unit in the last place or unit of least precision (ULP) is the spacing between consecutive floating-point numbers, i.e. the value that the least significant digit of the significand represents if it is 1. More precisely, ulp(x) is the gap between the two floating-point numbers closest to the value x, even if x is itself one of them.
The amount of error in the evaluation of a floating-point operation is often expressed in ULPs. An average error of 1 ULP is often regarded as tolerable.
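For a concrete illustration of the definition, here is a minimal sketch using Python's standard-library math.ulp and math.nextafter functions (both available since Python 3.9); the values shown assume IEEE 754 double precision:

import math

# ulp(1.0) is the gap between 1.0 and the next representable double.
print(math.ulp(1.0))                        # 2.220446049250313e-16, i.e. 2**-52
print(math.nextafter(1.0, math.inf) - 1.0)  # the same gap, computed from the neighbouring float

# The ULP grows with the magnitude of the number.
print(math.ulp(1e16))                       # 2.0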
Example
If the ULP of a floating-point number is less than or equal to 1, then adding 1 to it yields a different, representable result. Once the ULP exceeds 1, adding 1 may have no effect, because the exact sum falls between two representable numbers and can round back to the original value. This is demonstrated in the following Haskell code typed at an interactive prompt:
> until (\x -> x == x+1) (+1) 0 :: Float
1.6777216e7
> it-1
1.6777215e7
> it+1
1.6777216e7
> it+1
1.6777216e7
Here we start with 0 as a 32-bit single-precision float and keep adding 1 until the value no longer changes. The loop stops at 2^24 = 16777216: the single-precision significand holds 24 bits, so 2^24 + 1 is the first integer that is not exactly representable, and it rounds back to 2^24 (round-to-nearest, ties-to-even).
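The stopping point can also be checked directly, without running the loop. This is a minimal sketch assuming NumPy is available to provide 32-bit floats (numpy.spacing returns the ULP of its argument):

import numpy as np  # assumption: NumPy is installed; it supplies 32-bit single-precision floats

x = np.float32(2**24)                         # 16777216.0, where the Haskell loop above stops
print(np.float32(2**24 - 1) + np.float32(1))  # 16777216.0 -- the last addition that still changes the value
print(x + np.float32(1) == x)                 # True -- 2**24 + 1 rounds back to 2**24
print(np.spacing(x))                          # 2.0 -- the ULP of 2**24 in single precision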
Another example, in Python (typed at an interactive prompt), uses double-precision floating point:

>>> x = 1.0
>>> while x != x + 1.0:
...     x = x * 10.0
...
>>> print(x)
1e+16

Here the loop stops at 10^16 because ulp(10^16) = 2, so 10^16 + 1.0 rounds back to 10^16.
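Python's math.ulp (available since Python 3.9) confirms why the loop stops at exactly this power of ten:

import math

print(math.ulp(1e15))      # 0.125 -- adding 1.0 to 1e15 still changes the value
print(math.ulp(1e16))      # 2.0   -- 1e16 + 1.0 is a tie and rounds back to 1e16
print(1e16 + 1.0 == 1e16)  # True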
Language support
Since Java 1.5, the Java standard library has included the Math.ulp(double) and Math.ulp(float) functions.
References
- On the definition of ulp(x) by Jean-Michel Muller, INRIA Technical Report 5504.