Measurement uncertainty
From Wikipedia, the free encyclopedia
Measurement uncertainty is the worst-case error associated with a physical measurement. Any measurement result comprises two parts: an estimate of the true value of the measurand and the uncertainty of that estimate.
For technical reasons, the estimate of the physical quantity to be measured is erroneous and does not coincide with the true value of the measurand. Depending on the accuracy of the measurement process, the estimate misses the true value by a greater or lesser amount, and the experimenter does not know whether the estimate is greater or smaller than the true value in question.
The measurement uncertainty is an integral part of any measurement result. The experimenter requires the interval

    estimate ± measurement uncertainty

to cover, or "localize", the true value of the measurand.
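As a minimal sketch of how such an interval is formed, consider a few repeated readings (the numbers below are assumed, purely illustrative values):

```python
import statistics

# Hypothetical example: five repeated readings of a length, in mm.
readings = [10.03, 9.98, 10.01, 10.05, 9.97]

# The estimate of the true value is the arithmetic mean.
estimate = statistics.mean(readings)

# Standard uncertainty of the mean: sample standard deviation
# divided by the square root of the number of readings.
u = statistics.stdev(readings) / len(readings) ** 0.5

# The reported result is the interval estimate +/- uncertainty,
# intended to localize the unknown true value.
lower, upper = estimate - u, estimate + u
print(f"{estimate:.3f} mm, interval [{lower:.3f}, {upper:.3f}] mm")
```

Whether this interval actually covers the true value cannot be verified, since the true value remains unknown; the interval merely expresses the experimenter's claim of localization.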
The true values of physical quantities are and remain unknown. As an example, consider the law of falling bodies:

    s = (1/2) g t²

This law holds only for true (error-free) values of the distance s, the gravitational acceleration g and the time t. With erroneous values inserted, the mathematical formula turns out to be inconsistent.
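This inconsistency can be illustrated numerically for the law of falling bodies. The values below are assumed for illustration; the 10 ms timing error stands in for an arbitrary measurement error:

```python
# Free fall: s = (1/2) * g * t**2, with illustrative values.
g_true = 9.80665   # m/s^2, standard gravity
t_true = 2.0       # s, hypothetical true fall time
s_true = 0.5 * g_true * t_true**2   # distance implied by true values

# A measured time that misses the true value by a small error:
t_measured = t_true + 0.01          # 10 ms timing error (assumed)
s_from_measurement = 0.5 * g_true * t_measured**2

# Fed with the erroneous time, the formula no longer reproduces
# the true distance: the relation becomes inconsistent.
print(s_true, s_from_measurement)
```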
Physical laws, should they be considered valid, need to be proven experimentally. Since true values cannot be measured but only confined, or localized, by means of intervals, metrology attaches the utmost importance to the analysis and accounting of measurement errors.
Measurement uncertainties are estimated by means of well defined procedures. These procedures depend on the particular error model used by the experimentalist to manage the physical perturbations disturbing the measurement process.
Unfortunately, the classical error calculus outlined by Carl Friedrich Gauss no longer satisfies the demands of contemporary metrology. Gauss considered only random errors and expected experimenters to eliminate the so-called unknown systematic errors which, like random errors, always influence the outcome. For purely physical reasons, however, unknown systematic errors proved to be unremovable. Consequently, the Gaussian error calculus had to be revised.
Within the scope of Legal Metrology and Calibration Services, measurement uncertainties are specified according to the ISO Guide to the Expression of Uncertainty in Measurement, or GUM for short [1]. The Guide carries the Gaussian formalism forward by means of an artifice concerning the treatment of unknown systematic errors.
To complement the GUM, another approach has been proposed which revises the Gaussian error calculus on a different basis [2]-[5].
The Guide's artifice randomizes unknown systematic errors by postulating a rectangular distribution density for them. Consequently, Gauss's original starting point of considering only random errors is effectively reinstated, at least in a formal sense.
To make the Guide's uncertainties sufficiently large, they are multiplied by an ad hoc coverage factor kP; the usual recommendation is to set kP = 2.
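A GUM-style uncertainty budget along these lines can be sketched as follows. The numbers are assumed for illustration; the rectangular contribution uses the standard deviation a/√3 of a rectangular distribution of half-width a:

```python
import math

# Illustrative GUM-style uncertainty budget (assumed numbers).
u_random = 0.015                  # Type A: standard deviation of the mean
a = 0.02                          # half-width of the postulated rectangle
u_systematic = a / math.sqrt(3)   # Type B: rectangular distribution

# Combined standard uncertainty: root sum of squares.
u_c = math.sqrt(u_random**2 + u_systematic**2)

# Expanded uncertainty with coverage factor k_P = 2.
k_P = 2
U = k_P * u_c
print(U)
```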
However, it is not straightforward to assign probabilities to the uncertainties so defined, since probabilities could only be taken from the convolution of the normal probability density with the postulated rectangular density, and the parameters of the normal density are not known.
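A Monte Carlo sketch can illustrate the point: the coverage probability of a kP = 2 interval depends on the mix of normal and rectangular contributions, i.e. on the unknown normal parameters. All numbers below are assumed, illustrative values:

```python
import math
import random

random.seed(0)

def coverage(sigma, a, k=2, n=100_000):
    """Estimate the probability that a total error, drawn as the sum of a
    normal error (std. dev. sigma) and a rectangular systematic error
    (half-width a), falls inside the expanded interval +/- k * u_c."""
    u_c = math.sqrt(sigma**2 + a**2 / 3)
    hits = sum(
        abs(random.gauss(0, sigma) + random.uniform(-a, a)) <= k * u_c
        for _ in range(n)
    )
    return hits / n

# Two budgets with the same k but different mixes (assumed values):
c_normal = coverage(sigma=0.02, a=0.005)   # normal-dominated
c_rect = coverage(sigma=0.005, a=0.02)     # rectangle-dominated
print(c_normal, c_rect)
```

The two coverage probabilities differ noticeably, even though the same coverage factor is applied in both cases, which is precisely why a fixed probability cannot be attached to the expanded uncertainty without knowing the normal parameters.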
The second approach to revising the Gaussian error calculus treats unknown systematic errors as stipulated by physics, i.e. as quantities which are constant in time. In particular, they are not formalized by means of probabilities.
Consequently, the alternative approach relies on true values and biased estimators, putting the basic relationships of error calculus on a new basis.
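One way such a treatment can be sketched (an assumed, illustrative reading of the approach, not a definitive account of the references): a time-constant systematic error bounded by ±f_s contributes its worst-case bound arithmetically to the random part, rather than being randomized and added in quadrature:

```python
import statistics

# Illustrative readings and an assumed bound on the constant
# unknown systematic error (all numbers hypothetical).
readings = [10.03, 9.98, 10.01, 10.05, 9.97]
f_s = 0.02   # assumed bound: systematic error lies in [-f_s, +f_s]

estimate = statistics.mean(readings)
u_random = statistics.stdev(readings) / len(readings) ** 0.5

# Worst-case combination: arithmetic addition of the systematic
# bound, not a root sum of squares.
u_total = u_random + f_s

print(f"{estimate:.3f} +/- {u_total:.3f}")
```

Because the systematic bound enters linearly, the resulting interval is a worst-case localization of the true value, consistent with treating the systematic error as an unknown constant rather than a random variable.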
Literature
[1] ISO, International Organization for Standardization, Guide to the Expression of Uncertainty in Measurement, 1 Rue Varambé, Case Postale 56, CH 1221, Geneva, Switzerland.
[2] Grabe, M., Measurement Uncertainties in Science and Technology, Springer, April 2005.
[3] Grabe, M., Principles of Metrological Statistics, Metrologia 23 (1986/87) 213-219.
[4] Grabe, M., Estimation of Measurement Uncertainties—an Alternative to the ISO Guide, Metrologia 38 (2001) 97-106.
[5] Grabe, M., The Alternative Error Model and its Impact on Traceability and Key Comparison, Joint BIPM-NPL Workshop on the Evaluation of Interlaboratory Comparison Data, NPL, Teddington, 19 September 2002.