Numerical error
In software engineering and mathematics, numerical error refers to two kinds of error in a calculation. The first, round-off error, is caused by the finite precision of computations involving floating-point values. The second, truncation error (sometimes called the theoretical truncation error), is the difference between the exact mathematical solution and the approximate solution obtained when the mathematical equations are simplified to make them more amenable to calculation.
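Both kinds of error can be seen in a single computation. The following minimal sketch (not part of the original article) approximates the derivative of sin(x) at x = 1 with a forward difference: shrinking the step h reduces the truncation error of the difference formula, but below a certain point round-off error in the floating-point subtraction dominates and the total error grows again.

```python
import math

def forward_difference(f, x, h):
    """Approximate f'(x) by the forward-difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # true derivative of sin at x = 1

# Total error first shrinks with h (truncation error ~ h),
# then grows again as round-off error (~ machine epsilon / h) takes over.
for h in [10.0 ** (-k) for k in range(1, 13)]:
    approx = forward_difference(math.sin, 1.0, h)
    print(f"h = {h:.0e}   error = {abs(approx - exact):.3e}")
```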
Floating-point numerical error is often measured in ULP (unit in the last place).
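As an illustration (a sketch added here, not from the article), the error of the familiar sum 0.1 + 0.2 relative to the double nearest to 0.3 can be expressed in ULPs using Python's math.ulp, available since Python 3.9.

```python
import math

computed = 0.1 + 0.2   # mathematically 0.3, but each term is rounded, so the sum is slightly off
nearest = 0.3          # the double-precision value nearest to the real number 0.3

abs_error = abs(computed - nearest)
print(f"absolute error : {abs_error:.3e}")
print(f"one ULP at 0.3 : {math.ulp(nearest):.3e}")
print(f"error in ULPs  : {abs_error / math.ulp(nearest):.1f}")
```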
References
- Nicholas J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, ISBN 0-89871-355-2