Loss of significance

Loss of significance is an undesirable effect in calculations using floating-point arithmetic. It occurs when an operation on two numbers increases relative error substantially more than it increases absolute error, for example in subtracting two large and nearly equal numbers. The effect is that the number of accurate (significant) digits in the result is reduced unacceptably. Ways to avoid this effect are studied in numerical analysis.

In floating-point arithmetic, only a limited number of significant digits are maintained; floating-point numbers can only approximate most real numbers.

Consider the real number

0.1234567891234567890.

A floating-point representation of this number on a machine that keeps 10 floating-point digits would be

0.1234567891,

which is fairly close — the difference is very small in comparison with either of the two numbers.

Now perform the calculation

0.1234567891234567890 − 0.1234567890.

The real answer, accurate to 10 digits, is

0.0000000001234567890.

However, on the 10-digit floating-point machine, the calculation yields

0.1234567891 − 0.1234567890 = 0.0000000001.

Whereas the original numbers are accurate in all of the first (most significant) 10 digits, their floating-point difference is only accurate in its first digit. This amounts to loss of information.
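
This behaviour can be reproduced with, for example, Python's decimal module, which allows the working precision to be set to 10 significant digits (a minimal sketch; the variable names are illustrative):

    from decimal import Decimal, getcontext

    getcontext().prec = 10  # simulate a machine keeping 10 significant digits

    # Unary plus rounds each value to the 10-digit working precision.
    x = +Decimal("0.1234567891234567890")  # stored as 0.1234567891
    y = +Decimal("0.1234567890")           # stored as 0.1234567890

    print(x - y)  # 1E-10: only the first digit of the true difference survives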

It is possible to do computations using an exact representation of rational numbers and keep all significant digits, but this is often prohibitively slow compared to floating-point arithmetic. Furthermore, it usually only postpones the problem: What if the data are accurate to only 10 digits? The same effect will occur.
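
For instance, Python's fractions module carries out the subtraction above exactly (a sketch; the same point applies to any exact rational representation):

    from fractions import Fraction

    x = Fraction("0.1234567891234567890")
    y = Fraction("0.1234567890")

    print(x - y)          # exactly 123456789/1000000000000000000
    print(float(x - y))   # 1.23456789e-10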

One of the most important parts of numerical analysis is to avoid or minimize loss of significance in calculations. If the underlying problem is well-posed, there should be a stable algorithm for solving it. The art is in finding a stable algorithm.

Loss of significant bits

Let x and y be positive normalized floating-point numbers.

In the subtraction x − y, the number r of significant bits lost satisfies

q \le r \le p,

where

2^{-p} \le 1 - \frac{y}{x} \le 2^{-q}

for some positive integers p and q.
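
As a rough illustration, the bounds p and q can be computed directly from 1 − y/x; the helper below is a hypothetical sketch, assuming x > y > 0:

    import math

    def bits_lost_bounds(x, y):
        # Hypothetical helper: bound the significant bits lost in x - y,
        # assuming 0 < y < x, so that 0 < 1 - y/x < 1.
        t = 1.0 - y / x
        q = math.floor(-math.log2(t))  # largest q with 1 - y/x <= 2**-q
        p = math.ceil(-math.log2(t))   # smallest p with 2**-p <= 1 - y/x
        return q, p

    print(bits_lost_bounds(1.000001, 1.0))  # (19, 20): about 19 or 20 bits lost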

Instability of the quadratic equation

For example, consider the quadratic equation

ax^2 + bx + c = 0.

The quadratic formula gives the two solutions as

 x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.

The case a = 1, b = 200, c = −0.000015 will serve to illustrate the problem:

x^2 + 200x − 0.000015 = 0.

We have

\sqrt{b^2 - 4 a c} = \sqrt{200^2 + 4 \times 1 \times 0.000015} = 200.00000015...

In real arithmetic, the roots are

(−200 − 200.00000015) / 2 = −200.000000075,
(−200 + 200.00000015) / 2 = 0.000000075.

In 10-digit floating-point arithmetic,

(−200 − 200.0000001) / 2 = −200.00000005,
(−200 + 200.0000001) / 2 = 0.00000005.

Notice that the solution of greater magnitude is accurate to ten digits, but the first nonzero digit of the solution of lesser magnitude is wrong.

Because of the subtraction that occurs in the quadratic formula, the formula does not constitute a stable algorithm for calculating the two roots.
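
The same cancellation shows up in ordinary double-precision arithmetic. The sketch below contrasts the naive formula for the root of lesser magnitude with the algebraically equivalent form 2c / (−b − √(b² − 4ac)), which involves no cancellation for this example (b > 0) and is derived in the next section:

    import math

    a, b, c = 1.0, 200.0, -0.000015
    sqrt_d = math.sqrt(b * b - 4 * a * c)

    naive = (-b + sqrt_d) / (2 * a)   # -200 + 200.00000015: catastrophic cancellation
    stable = (2 * c) / (-b - sqrt_d)  # same root, computed without cancellation

    print(naive)   # loses roughly half of the significant digits
    print(stable)  # accurate to nearly full double precision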

A better algorithm

A better algorithm for solving quadratic equations is based on two observations: the quadratic formula always computes one of the two solutions accurately (the one for which no cancellation occurs), and given one solution of the quadratic, the other is easy to find.

If

 x_1 = \frac{-b - \sgn(b)\sqrt{b^2 - 4ac}}{2a}

and

 x_2 = \frac{2c}{-b - \sgn(b)\sqrt{b^2 - 4ac}},

then we have the identity

x_1 x_2 = c / a.

The algorithm is as follows. Use the formula for x_1 to find the solution of greater magnitude: because b and \sgn(b)\sqrt{b^2 - 4ac} have the same sign, the terms in the numerator add rather than cancel, so this root does not suffer from loss of precision. Then use the identity x_1 x_2 = c / a (equivalently, the formula for x_2) to calculate the other root. Since no subtraction of nearly equal numbers is involved, no loss of precision occurs.

Applying this algorithm to our problem, and using 10-digit floating-point arithmetic, the solution of greater magnitude, as before, is x_1 = −200.00000005. The other solution is then

x_2 = c / (−200.00000005) = 0.000000075,

which is accurate.
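
A minimal sketch of this algorithm in Python follows; the function name is illustrative, and the assumptions are noted in the comments:

    import math

    def stable_quadratic_roots(a, b, c):
        # Hypothetical helper implementing the algorithm above.
        # Assumes a != 0, real roots (b*b - 4*a*c >= 0), and that
        # b and c are not both zero (otherwise x1 below is zero).
        sqrt_d = math.sqrt(b * b - 4 * a * c)
        # copysign gives sqrt_d the sign of b, so the two terms in the
        # numerator add instead of cancelling:
        x1 = (-b - math.copysign(sqrt_d, b)) / (2 * a)  # greater-magnitude root
        x2 = c / (a * x1)                               # from x1 * x2 = c / a
        return x1, x2

    print(stable_quadratic_roots(1.0, 200.0, -0.000015))
    # roughly (-200.000000075, 7.5e-08); both roots are accurate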
