Measurement uncertainty


The measurement uncertainty quantifies by how much the measured value of a physical quantity may deviate from the quantity's true value. The result of any physical measurement therefore comprises two parts: an estimate of the true value of the measurand and the uncertainty of this estimate.


For technical reasons, the measured value of a physical quantity is in general erroneous: it does not coincide with the true value of the measurand, and the estimate may be greater or smaller than that true value.

The measurement uncertainty is an integral part of any measurement result. The experimenter quotes the interval

                        estimate \pm measurement uncertainty 

to "localize" the true value of the measurand. The true values of physical quantities are and remain unknown.

To illustrate the meaning of true values, let us consider the law of falling bodies:

                                  s=\frac{1}{2}gt^2.

This law only holds for the true (error-free) values of the distance s, the gravitational acceleration g and the time t. Otherwise the mathematical formula would be inconsistent.

Physical laws need to be verified experimentally. As true values have to be localized by means of intervals, metrology considers the analysis and accounting of measurement errors to be of utmost importance.

Measurement uncertainties have to be estimated by means of declared procedures. These procedures, however, are intrinsically tied to the underlying error model. At present, the error models, and consequently the procedures to assess measurement uncertainties, are highly controversial; the metrological community is deeply divided over the question of how to proceed. For the time being, all that can be done is to put the diverging positions side by side.




Background

At least since the late 1970s, the classical Gaussian error calculus has been considered incomplete. As is well established, Gauss exclusively considered random errors. Though Gauss also discussed a second type of error, today called the unknown systematic error, he eventually dismissed such perturbations, arguing that it would be up to experimenters to get rid of them.

To recall: by its very nature, an unknown systematic error is a time-constant perturbation, unknown with respect to magnitude and sign. Such a measurement error can only be assessed by an interval whose limits have to be ascertained by the experimenter. As may be shown, the limits of such an interval can always be chosen symmetric to zero, e.g. -f_s \ldots +f_s.
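
Written as a formula, this error model may be sketched as follows (the notation x_l, x_0, \varepsilon_l is chosen here merely for illustration): each individual reading is the sum of the true value, a random error scattering from reading to reading, and the unknown systematic error, which stays constant throughout the measurement series,

                                  x_l = x_0 + \varepsilon_l + f, \qquad l=1,\ldots,n, \qquad -f_s \le f \le +f_s.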

Contrary to Gauss's assumption, however, unknown systematic errors turned out to be non-eliminable. Consequently, the Gaussian error calculus had to be revised.


The GUM

Within the scope of Legal Metrology and Calibration Services, measurement uncertainties are specified according to the ISO Guide to the Expression of Uncertainty in Measurement, or GUM for short [1]. In essence, the GUM maintains the classical Gaussian formalism. The GUM's idea is to transfer the time-constant unknown systematic errors formally into random errors: it "randomizes" systematic errors by means of a postulated rectangular distribution density. Consequently, Gauss's original starting point, i.e. considering only random errors, is formally reinstated. This procedure, however, has evoked some displeasure.
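
Concretely, for an unknown systematic error f confined to an interval -f_s \ldots +f_s, the postulated rectangular density reads

                                  p(f)=\frac{1}{2f_s}, \qquad -f_s \le f \le +f_s,

so that the "randomized" systematic error carries the variance f_s^2/3, i.e. the standard uncertainty f_s/\sqrt 3.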

GUM's problems are threefold:

- firstly, no requirement is imposed on the formalism that a given uncertainty should localize the true value of the measurand.

- secondly, the uncertainty is to be made "safe" by means of a so-called k_P factor. This factor should be obtained from a convolution of the distribution densities of the random errors with the postulated probability densities of the unknown systematic errors. However, as the theoretical parameters of the densities of the random errors are not known and never will be, such convolutions are, in fact, impossible. It is also unclear how postulated densities for time-constant quantities could be appropriate.

- thirdly, the GUM leaves the effect of the k_P factor unresolved, as it does not require the uncertainty to cover or to localize the true value of the measurand.

To summarize: the GUM claims to safeguard uncertainties by means of probabilities which are undefinable. Even if such probabilities were available, the GUM would fail to declare what purpose they serve: which kind of statement is to be made safe?

Notwithstanding these observations, it might appear of interest to explore the statements of the GUM a bit further:

To keep uncertainties "reliable", the GUM proposes to multiply uncertainties by an ad hoc factor k_P = 2. Firstly, no scientific argument can be given for this choice; secondly, this directive produces a contradiction, as can be shown. Disregarding the presence of random errors for the moment, the effect of the systematic error becomes 2f_s/\sqrt 3, a value which exceeds the boundaries

                                 - f_s \ldots + f_s 

taken to limit the possible values of the unknown systematic error f.
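
The excess follows from the rectangular density quoted above: the systematic error enters with the standard uncertainty f_s/\sqrt 3, and multiplication by k_P = 2 yields

                                  k_P\,\frac{f_s}{\sqrt 3}=\frac{2f_s}{\sqrt 3}\approx 1.15\,f_s > f_s,

an expanded uncertainty larger than the worst-case bound f_s itself.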

At the same time, the ad hoc factor k_P = 2 is frequently too small to account for the influence of random errors: in most cases, the Student factor exceeds 2 (e.g. for five repeat measurements, t_{0.95}(4) \approx 2.78).

As the uncertainty components due to random and systematic errors are combined geometrically, the position of the true value may get lost entirely.

Whether a given formalism localizes true values can only be decided by means of computer simulations. Under the conditions of a simulation, the true values of the "measurands" are naturally known a priori, so "measurement uncertainties" obtained from simulated data can be checked directly as to whether or not they localize the a priori given true values.
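
A minimal sketch of such a simulation, written here in Python, might look as follows. All parameter values (true value, scatter, systematic error bound, sample size, coverage factor) are arbitrary choices for illustration, and the uncertainty is combined in the GUM-like manner sketched above, i.e. random and "randomized" systematic contributions added geometrically:

    import numpy as np

    rng = np.random.default_rng(0)

    x_true = 10.0    # a priori known true value of the "measurand"
    sigma  = 0.1     # standard deviation of the random errors
    f_s    = 0.3     # bound of the unknown systematic error, -f_s ... +f_s
    n      = 10      # repeat readings per simulated experiment
    runs   = 50_000  # number of simulated experiments
    k_P    = 2.0     # ad hoc coverage factor

    hits = 0
    for _ in range(runs):
        f = rng.uniform(-f_s, f_s)                     # constant within one experiment
        data = x_true + f + rng.normal(0.0, sigma, n)  # n readings
        xbar = data.mean()                             # estimate of the true value
        s = data.std(ddof=1)                           # empirical standard deviation
        # geometric combination of the random part and the rectangular "systematic" part
        u = np.sqrt(s**2 / n + f_s**2 / 3.0)
        if abs(xbar - x_true) <= k_P * u:              # does the interval localize x_true?
            hits += 1

    print(f"true value localized in {hits / runs:.1%} of the runs")

Fixing f at +f_s instead of drawing it at random probes the case in which the systematic error exhausts the limit of its interval.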

The localization properties of the GUM become the more dubious, the more the unknown systematic errors exhaust the limits of their intervals. The experimenter, on the other hand, has no knowledge whatsoever of the actual numerical values of the systematic errors he is faced with. Consequently, he cannot tell whether or not the actually obtained uncertainty localizes the true value of his measurand.

A point of particular concern is the setting of weights in least squares adjustments. As is known, weights cause two effects: firstly, they shift the numerical values of the estimators, and, secondly, they reduce the respective uncertainties. This may conjure up an objectionable scenario: the experimenter cannot know whether a given estimator has been shifted towards or away from its true value, but, as the measurement uncertainties appear reduced due to the applied weights, a weighting procedure may cancel the localization of true values, should it have existed prior to the setting of weights.
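
The mechanism can be illustrated by a deliberately simple sketch, not taken from the literature: two instruments observe the same measurand, each burdened with its own unknown systematic error, and their readings are pooled by the classical inverse-variance weighted mean. The scatter, biases and sample size below are arbitrary illustrative choices; the instrument with the smaller scatter, and hence the larger weight, has deliberately been given the larger bias:

    import numpy as np

    rng = np.random.default_rng(1)

    x_true = 5.0                      # true value of the measurand
    sigma  = np.array([0.05, 0.20])   # random scatter of instruments 1 and 2
    bias   = np.array([0.10, -0.02])  # unknown systematic errors (hidden from the analyst)
    n      = 20                       # readings per instrument

    means, u_rand = [], []
    for s, b in zip(sigma, bias):
        data = x_true + b + rng.normal(0.0, s, n)
        means.append(data.mean())
        u_rand.append(data.std(ddof=1) / np.sqrt(n))
    means, u_rand = np.array(means), np.array(u_rand)

    # classical inverse-variance weights reflect only the random scatter
    w = 1.0 / u_rand**2
    x_weighted   = np.sum(w * means) / np.sum(w)
    u_weighted   = 1.0 / np.sqrt(np.sum(w))    # reported, purely random, uncertainty
    x_unweighted = means.mean()

    print(f"weighted:   {x_weighted:.3f} +/- {u_weighted:.3f}")
    print(f"unweighted: {x_unweighted:.3f}")
    print(f"true value: {x_true}")

With such numbers the weighted estimate is typically drawn towards the precise but strongly biased instrument while its reported uncertainty shrinks, which is exactly the delocalization scenario described above.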

An Alternative Approach

In contrast to the procedure of the GUM, a diverging approach has been proposed [2]-[5]. This ansatz reformulates the Gaussian error calculus on a different basis, namely by admitting biases that express the influence of the time-constant unknown systematic errors. Biases call into question nearly all classical procedures of data evaluation, such as the analysis of variance, but in particular those in use to assess measurement uncertainties.

The alternative concept treats unknown systematic errors as stipulated by physics, namely as quantities constant in time; they are not modelled by means of postulated probability densities.

Right from the outset, the flows of random and systematic errors are strictly separated. While the influence of random errors is brought to bear by a slight, but in fact rather useful, modification of the classical Gaussian error calculus, the influence of systematic errors is carried forward by uniquely designed, path-independent, worst-case estimations.
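
In outline, and for the simplest case of a single measurand estimated from n repeat readings (see [2] for the general treatment of functions of several measurands), an uncertainty of this kind adds the two contributions arithmetically rather than geometrically,

                                  u = \frac{t_P(n-1)}{\sqrt n}\,s + f_s,

where s denotes the empirical standard deviation of the readings, t_P(n-1) the Student factor and f_s the worst-case bound of the unknown systematic error.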

Uncertainties of this type are reliable and robust and withstand computer simulations, even under unfavourable conditions [2].

With regard to the setting of weights in least squares adjustments, the alternative approach safeguards the localization of the true values of the measurands for any choice of weights.

The Gauss-Markov theorem breaks down in the presence of biases, and this breakdown deprives experimenters of an objective rule for choosing weights. In the alternative approach proposed in [2], however, the localization of true values holds for any choice of weights, so the experimenter may pick weights by trial and error. By repeating the choice and comparing the resulting uncertainties, he can reduce the measurement uncertainties without having to be concerned about a possible delocalization of true values.


Literature - The GUM

[1] International Organization for Standardization (ISO), Guide to the Expression of Uncertainty in Measurement (GUM), 1 rue de Varembé, Case postale 56, CH-1211 Geneva, Switzerland.


Literature - An Alternative Approach

[2] Grabe, M., Measurement Uncertainties in Science and Technology, Springer, 2005.

[3] Grabe, M., Principles of Metrological Statistics, Metrologia 23 (1986/87) 213-219.

[4] Grabe, M., Estimation of Measurement Uncertainties - an Alternative to the ISO Guide, Metrologia 38 (2001) 97-106.

[5] Grabe, M., The Alternative Error Model and its Impact on Traceability and Key Comparison, Joint BIPM-NPL Workshop on the Evaluation of Interlaboratory Comparison Data, NPL, Teddington, 19 September 2002.

