Measurement uncertainty
Measurement uncertainty describes a region about an observed value of a physical quantity within which the true value of that quantity is expected to lie. It reflects the random and systematic errors associated with taking measurements, and a complete measurement result accordingly comprises two parts: the observed value of the measurand and the uncertainty of this value, e.g. x = (10.02 ± 0.05) mm.
[Figure: an interval about the observed value, labelled "observed value" and "measurement uncertainty"]
Measurement uncertainty refers to measured values and to the propagation of uncertainty to values which are a function of measured values. Its proper application is fundamental to good experimentation and observation in sciences and engineering.
Background
At least since the late 1970s, the classical Gaussian error calculus has been considered incomplete. As is well established, Gauss exclusively considered random errors. Though Gauss also discussed a second type of error, which today is called unknown systematic error, he eventually dismissed such perturbations, arguing that it was up to experimenters to get rid of them.
To recall: by its very nature, an unknown systematic error is a time-constant perturbation, unknown with respect to magnitude and sign. Any such measurement error can only be assessed by an interval whose limits have to be ascertained by the experimenter. As may be shown, it proves possible to keep the limits of such an interval symmetric about zero, e.g. −f_s ≤ f ≤ f_s for an unknown systematic error f.
Unfortunately, contrary to Gauss's assumption, unknown systematic errors proved to be non-eliminable. Consequently, the Gaussian error calculus had to be revised.
Measurement uncertainties have to be estimated by means of declared procedures. These procedures, however, are intrinsically tied to the underlying error model. Currently, error models, and consequently the procedures to assess measurement uncertainties, are considered highly controversial; indeed, the metrological community is deeply divided over the question of how to proceed. For the time being, all that can be done is to put the diverging positions side by side.
The GUM
Within the scope of legal metrology and calibration services, measurement uncertainties are specified according to the ISO Guide to the Expression of Uncertainty in Measurement, or GUM for short [1]. In essence, the GUM maintains the classical Gaussian formalism. Its idea is to transfer the time-constant unknown systematic errors formally into random errors: the GUM "randomizes" systematic errors by means of a postulated rectangular distribution density. Consequently, Gauss's original starting point, the consideration of random errors only, is formally reinstated. This procedure, however, has evoked some displeasure.
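To make the GUM's recipe concrete, the following is a minimal sketch for a single measurand; the readings and the bound f_s on the systematic error are invented for illustration:

```python
import math

# Hypothetical repeated readings of a measurand (invented data).
readings = [10.03, 9.98, 10.01, 9.97, 10.02]
n = len(readings)
x_bar = sum(readings) / n
s = math.sqrt(sum((x - x_bar) ** 2 for x in readings) / (n - 1))

u_random = s / math.sqrt(n)        # Type A: standard uncertainty of the mean

# The GUM "randomizes" an unknown systematic error bounded by +/- f_s:
# a rectangular density on [-f_s, f_s] has standard deviation f_s / sqrt(3).
f_s = 0.05
u_systematic = f_s / math.sqrt(3)  # Type B standard uncertainty

# The components are combined geometrically and expanded by k_P = 2.
u_combined = math.sqrt(u_random ** 2 + u_systematic ** 2)
U = 2 * u_combined                 # expanded uncertainty

print(f"result: {x_bar:.3f} +/- {U:.3f}")
```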
GUM's problems are threefold:
- firstly, the formalism imposes no requirement that a given uncertainty localize the true value of the measurand;
- secondly, the uncertainty is to be made "safe" by means of a so-called k_P factor. This factor should be obtained from a convolution of the distribution densities of the random errors with the postulated probability densities of the unknown systematic errors. However, as the theoretical parameters of the densities of the random errors are unknown, such convolutions are in fact impossible. It is also unclear how postulated densities could be appropriate for time-constant quantities;
- thirdly, the GUM leaves the effect of the k_P factor unresolved, as it does not require the uncertainty to cover or localize the true value of the measurand.
To summarize: the GUM claims to safeguard uncertainties by means of probabilities which are undefinable. Even if such probabilities were available, the GUM would fail to declare which purpose they serve: which kind of statement is to be made safe?
Notwithstanding these observations, it might appear of interest to explore the statements of the GUM a bit further:
To keep uncertainties "reliable", the GUM proposes to multiply uncertainties by an ad hoc factor k_P = 2. First, no scientific argument can be given for this choice; second, this directive produces a contradiction, as can be shown. Disregarding the presence of random errors for the moment, the expanded systematic component comes to roughly 1.15 f_s, a value which exceeds the boundaries ±f_s taken to limit the possible values of the unknown systematic error f.
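Written out, under the GUM's own rectangular-density assumption, the expanded systematic component is

$$u_\mathrm{sys} = \frac{f_s}{\sqrt{3}}, \qquad k_P\, u_\mathrm{sys} = \frac{2 f_s}{\sqrt{3}} \approx 1.155\, f_s > f_s,$$

i.e. the "safe" interval claims more for the systematic error than that error can, by construction, ever produce.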
At the same time, the ad hoc factor k_P = 2 is frequently too small to account for the influence of the random errors: in most cases, the Student factor exceeds 2.
As the uncertainty components due to random and systematic errors are combined geometrically, i.e. in quadrature, the position of the true value may get lost entirely.
Whether a given formalism localizes true values can only be decided by means of computer simulations. Under the conditions of a simulation, the true values of the "measurands" are naturally known a priori; hence "measurement uncertainties" obtained from simulated data make it possible to verify whether the resulting uncertainties do localize the a priori given true values.
The localization properties of the GUM turn out to be the more dubious, the more the unknown systematic errors exhaust the limits of their intervals. The experimenter, however, has no knowledge whatsoever of the actual numerical values of the systematic errors he faces. Consequently, he is left unsure whether or not the uncertainty actually obtained localizes the true value of his measurand.
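A simulation of the kind described might look as follows; it is a sketch only, and the sample size, error magnitudes, and the choice of a systematic error that nearly exhausts its interval are assumptions made for illustration:

```python
import math
import random

def gum_interval(readings, f_s, k=2):
    """Expanded GUM interval: Type A and rectangular Type B in quadrature."""
    n = len(readings)
    x_bar = sum(readings) / n
    s = math.sqrt(sum((x - x_bar) ** 2 for x in readings) / (n - 1))
    u = math.sqrt((s / math.sqrt(n)) ** 2 + (f_s / math.sqrt(3)) ** 2)
    return x_bar - k * u, x_bar + k * u

true_value = 10.0   # known a priori, as only a simulation permits
sigma = 0.02        # spread of the random errors
f_s = 0.05          # bound on the unknown systematic error
f = 0.95 * f_s      # a systematic error nearly exhausting its interval

hits, trials = 0, 10_000
for _ in range(trials):
    # f is constant within a series; only the random errors vary.
    readings = [true_value + f + random.gauss(0.0, sigma) for _ in range(5)]
    lo, hi = gum_interval(readings, f_s)
    hits += lo <= true_value <= hi

print(f"fraction of intervals localizing the true value: {hits / trials:.3f}")
```

With numbers like these, the observed fraction falls short of the roughly 95 % one might associate with k_P = 2, and it drops further as f approaches f_s.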
A point of particular concern is the setting of weights in least squares adjustments. As is known, weights cause two effects: firstly, they shift the numerical values of the estimators, and, secondly, they reduce the respective uncertainties. This may conjure up an objectionable scenario: the experimenter cannot know whether a given estimator has been shifted towards or away from its true value, yet, as the applied weights make the measurement uncertainties appear reduced, a weighting procedure may cancel the localization of true values, should it have existed prior to the setting of weights.
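The scenario can be illustrated with a weighted mean of two hypothetical instruments; the biases and uncertainties below are invented, and the experimenter is assumed to see only the means and the quoted u values, not the biases:

```python
import math

true_value = 10.0
x1, u1 = true_value + 0.04, 0.02   # precise but strongly biased instrument
x2, u2 = true_value - 0.01, 0.05   # less precise, mildly biased instrument

def weighted_mean(w1, w2):
    est = (w1 * x1 + w2 * x2) / (w1 + w2)
    u = math.sqrt((w1 * u1) ** 2 + (w2 * u2) ** 2) / (w1 + w2)
    return est, u

# Equal weights versus the customary inverse-variance weights.
for label, w1, w2 in [("equal", 1.0, 1.0),
                      ("1/u^2", 1.0 / u1 ** 2, 1.0 / u2 ** 2)]:
    est, u = weighted_mean(w1, w2)
    print(f"{label:6s}: estimate {est:.4f}, u {u:.4f}, "
          f"offset from true value {est - true_value:+.4f}")
```

The inverse-variance weights reduce the quoted uncertainty, yet they pull the estimate toward the more precise but more strongly biased instrument, i.e. away from the true value.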
An alternative approach
In contrast to the GUM's procedure, a diverging approach has been proposed [2]-[5]. This ansatz reformulates the Gaussian error calculus on a different basis, namely by admitting biases that express the influence of the time-constant unknown systematic errors. Biases call into question nearly all classical procedures of data evaluation, such as the analysis of variance, but in particular those in use to assess measurement uncertainties.
The alternative concept maps unknown systematic errors as stipulated by physics, namely as quantities constant in time. Unknown systematic errors are not treated by means of postulated probability densities.
Right from the outset, the flows of random and systematic errors are strictly separated. While the influence of random errors is brought to bear by a slight, but in fact rather useful, modification of the classical Gaussian error calculus, the influence of systematic errors is carried forward by uniquely designed, path-independent, worst-case estimations.
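A sketch of such a combination for a single measurand, assuming (as in [2]) that the random part carries a Student factor and the systematic part enters arithmetically with its full bound; the readings and the bound are invented:

```python
import math
from statistics import mean, stdev

# Invented repeated readings subject to an unknown systematic error
# bounded by +/- f_s.
readings = [10.03, 9.98, 10.01, 9.97, 10.02]
f_s = 0.05
n = len(readings)

x_bar = mean(readings)
t_P = 2.776                        # Student factor, n - 1 = 4 dof, 95 % level
u_random = t_P * stdev(readings) / math.sqrt(n)

# Worst-case treatment: the systematic component enters with its full
# bound f_s and is added arithmetically, not geometrically.
u_total = u_random + f_s

print(f"result: {x_bar:.3f} +/- {u_total:.3f}")
```

By construction, the interval is at least as wide as ±f_s, so a systematic error exhausting its bound cannot, by itself, delocalize the true value.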
Uncertainties of this type are reliable and robust and withstand computer simulations, even under unfavourable conditions [2].
With regard to the setting of weights in least squares adjustments, the alternative approach safeguards the localization of the true values of the measurands for any choice of weights.
The Gauss-Markov theorem breaks down in the presence of biases, and this breakdown automatically deprives experimenters of theoretically founded weights. In the alternative approach proposed in [2], however, the localization of true values holds for any choice of weights, so the experimenter can choose a set of weights by trial and error. By repeating the choice and comparing the resulting uncertainties, he can reduce the measurement uncertainties without having to be concerned about a possible delocalization of true values.
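Continuing the two-instrument example from above, a worst-case propagation of the systematic bounds keeps the true value inside the interval for any weights; the numbers are again invented:

```python
import math

true_value = 10.0
x1, u1, fs1 = true_value + 0.04, 0.02, 0.05   # bias +0.04 within +/- fs1
x2, u2, fs2 = true_value - 0.01, 0.05, 0.05   # bias -0.01 within +/- fs2

def alt_interval(w1, w2):
    est = (w1 * x1 + w2 * x2) / (w1 + w2)
    u_rand = math.sqrt((w1 * u1) ** 2 + (w2 * u2) ** 2) / (w1 + w2)
    u_sys = (w1 * fs1 + w2 * fs2) / (w1 + w2)   # worst-case component
    u = u_rand + u_sys
    return est - u, est + u

for w1, w2 in [(1, 1), (4, 1), (1, 4), (2500, 400)]:
    lo, hi = alt_interval(w1, w2)
    print(f"weights ({w1},{w2}): [{lo:.3f}, {hi:.3f}], "
          f"localizes true value: {lo <= true_value <= hi}")
```

Since each instrument's bias is bounded by its f_s, the weighted bias can never exceed the weighted worst-case component, whatever the weights; the quoted intervals shrink or widen, but localization is preserved.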
Literature
GUM
[1] ISO, International Organization for Standardization, Guide to the Expression of Uncertainty in Measurement (GUM), 1, rue de Varembé, Case postale 56, CH-1211 Geneva, Switzerland.
Alternative approach
[2] Grabe, M., Measurement Uncertainties in Science and Technology, Springer, April 2005.
[3] Grabe, M., Principles of Metrological Statistics, Metrologia 23 (1986/87) 213-219
[4] Grabe, M., Estimation of Measurement Uncertainties—an Alternative to the ISO Guide, Metrologia 38 (2001) 97-106
[5] Grabe, M., The Alternative Error Model and its Impact on Traceability and Key Comparison, Joint BIPM-NPL Workshop on the Evaluation of Interlaboratory Comparison Data, NPL, Teddington, 19 September 2002
External links
GUM and its application
- UKAS LAB12 - The Expression of Uncertainty in Testing
- UKAS M3003 - The Expression of Uncertainty and Confidence in Measurement
- The NIST Reference on Constants, Units, and Uncertainty, from the Physics Laboratory of the National Institute of Standards and Technology (U.S.), accessed March 30, 2007
- "Measurement Good Practice Guide", National Physical Laboratory, UK, accessed March 20, 2007] (full version available here)