Robust statistics

Robust statistics provides an alternative approach to classical statistical methods. The motivation is to produce estimators that are not unduly affected by small departures from model assumptions.

Introduction

In statistics, classical methods rely heavily on assumptions which are often not met in practice. In particular, it is often assumed that the data are normally distributed, at least approximately, or that the central limit theorem can be relied on to produce normally distributed estimates. Unfortunately, when there are outliers in the data, classical methods often have very poor performance. Robust statistics seeks to provide methods that emulate classical methods, but which are not unduly affected by outliers or other small departures from model assumptions.

In order to quantify the robustness of a method, it is necessary to define some measures of robustness. Perhaps the most common of these are the breakdown point and the influence function, described below.

Good books on robust statistics include those by Huber (1981), Hampel et al (1986) and Rousseeuw and Leroy (1987). A modern treatment is given by Maronna et al (2006). Huber's book is quite theoretical, whereas the book by Rousseeuw and Leroy is very practical (although the sections discussing software are rather out of date, the bulk of the book is still very relevant). Hampel et al (1986) and Maronna et al (2006) fall somewhere in the middle ground. All four of these are recommended reading, though Maronna et al (2006) is the most up to date.

Robust parametric statistics tends to rely on replacing the normal distribution in classical methods with the t-distribution with low degrees of freedom (high kurtosis; degrees of freedom between 4 and 6 have often been found to be useful in practice) or with a mixture of two or more distributions.

Example: speed of light data

Gelman et al in Bayesian Data Analysis (2004) consider a data set relating to speed of light measurements made by Simon Newcomb. The data sets for that book can be found via the Classic data sets page, and the book's website contains more information on the data.

Although the bulk of the data look to be more or less normally distributed, there are two obvious outliers. These outliers have a large effect on the mean, dragging it towards them, and away from the center of the bulk of the data. Thus, if the mean is intended as a measure of the location of the center of the data, it is, in a sense, biased when outliers are present.

Also, the distribution of the mean is known to be asymptotically normal due to the central limit theorem. However, outliers can make the distribution of the mean non-normal even for fairly large data sets. Besides this non-normality, the mean is also inefficient in the presence of outliers and less variable measures of location are available.

Estimation of location

The plot below shows a density plot of the speed of light data, together with a rug plot (panel (a)). Also shown is a normal QQ-plot (panel (b)). The outliers are clearly visible in these plots.

Panels (c) and (d) of the plot show the bootstrap distribution of the mean (c) and the 10% trimmed mean (d). The trimmed mean is a simple robust estimator of location that deletes a certain percentage of observations (10% here) from each end of the data, then computes the mean in the usual way. The analysis was performed in R and 10 000 bootstrap samples were used for each of the raw and trimmed means.

The distribution of the mean is clearly much wider than that of the 10% trimmed mean (the plots are on the same scale). Also note that whereas the distribution of the trimmed mean appears to be close to normal, the distribution of the raw mean is quite skewed to the left. So, in this sample of 66 observations, only 2 outliers cause the central limit theorem to be inapplicable.

Image:speedOfLight.png

Robust statistical methods, of which the trimmed mean is a simple example, seek to outperform classical statistical methods in the presence of outliers, or, more generally, when underlying parametric assumptions are not quite correct.

Whilst the trimmed mean performs well relative to the mean in this example, better robust estimates are available. In fact, the mean, median and trimmed mean are all special cases of M-estimators. Details appear in the sections below.
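
The trimmed-mean bootstrap described above is straightforward to reproduce. The sketch below is a minimal Python version of the idea (the original analysis was done in R); it uses scipy's trim_mean on a synthetic contaminated sample that stands in for the Newcomb data, which are not reproduced here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic stand-in for the speed of light data: a roughly normal bulk of 64
    # values plus two gross negative outliers (the Newcomb data are not reproduced here).
    data = np.concatenate([rng.normal(27.0, 5.0, 64), [-44.0, -2.0]])

    n_boot = 10_000
    boot_mean = np.empty(n_boot)
    boot_trimmed = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=data.size, replace=True)
        boot_mean[b] = resample.mean()
        # 10% trimmed mean: drop 10% of the observations from each end, then average.
        boot_trimmed[b] = stats.trim_mean(resample, proportiontocut=0.10)

    print("bootstrap SD of the mean:        ", boot_mean.std(ddof=1))
    print("bootstrap SD of the trimmed mean:", boot_trimmed.std(ddof=1))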

Estimation of scale

The outliers in the speed of light data do not just have an adverse effect on the mean. The usual estimate of scale is the standard deviation, and this quantity is even more badly affected by outliers because the squared deviations from the mean enter the calculation, so the outliers' effects are exacerbated.

The plots below show the bootstrap distributions of the standard deviation, median absolute deviation (MAD) and Qn estimator of scale (Rousseeuw and Croux, 1993). The plots are based on 10000 bootstrap samples for each estimator, and some normal random noise was added to the resampled data (smoothed bootstrap). Panel (a) shows the distribution of the standard deviation, (b) of the MAD and (c) of Qn.

Image:speedOfLightScale.png

The distribution of the standard deviation is erratic and wide, a result of the outliers. The MAD is better behaved, and Qn is a little more efficient than the MAD. This simple example demonstrates that when outliers are present, the standard deviation cannot be recommended as an estimate of scale.
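
For readers who want to experiment, the following Python sketch computes the three scale estimates on a synthetic contaminated sample. The Qn implementation is a naive O(n^2) transcription of the definition in Rousseeuw and Croux (1993), with the usual normal-consistency constant 2.2219 assumed.

    import numpy as np
    from itertools import combinations
    from scipy import stats

    def qn_scale(x, c=2.2219):
        """Naive O(n^2) Qn: the k-th order statistic of the pairwise distances
        |x_i - x_j| (i < j), with k = C(h, 2), h = floor(n/2) + 1, multiplied by
        a constant chosen for consistency at the normal distribution."""
        x = np.asarray(x, dtype=float)
        n = x.size
        h = n // 2 + 1
        k = h * (h - 1) // 2
        pairwise = np.sort([abs(a - b) for a, b in combinations(x, 2)])
        return c * pairwise[k - 1]

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(27.0, 5.0, 64), [-44.0, -2.0]])  # synthetic stand-in

    print("standard deviation:", data.std(ddof=1))
    print("MAD (normal-consistent):", stats.median_abs_deviation(data, scale="normal"))
    print("Qn:", qn_scale(data))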

Manual screening for outliers

Traditionally, statisticians would manually screen data for outliers, and remove them, usually checking the source of the data to see if the outliers were erroneously recorded. Indeed, in the speed of light example above, it is easy to see and remove the two outliers prior to proceeding with any further analysis. However, in modern times, data sets often consist of large numbers of variables being measured on large numbers of experimental units. As such, manual screening for outliers is impractical.

Outliers can often interact in such a way that they mask each other. As a simple example, consider a small univariate data set containing one modest and one large outlier. The estimated standard deviation will be grossly inflated by the large outlier. The result is that the modest outlier looks relatively normal. As soon as the large outlier is removed, the estimated standard deviation shrinks, and the modest outlier now looks unusual.
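
A minimal numerical illustration of masking, with made-up values:

    import numpy as np

    # Made-up sample: a roughly standard-normal bulk plus one modest (4.0)
    # and one large (15.0) outlier.
    x = np.array([-1.2, -0.6, -0.1, 0.3, 0.8, 1.1, 4.0, 15.0])

    def z_scores(v):
        return (v - v.mean()) / v.std(ddof=1)

    # With the large outlier present, the standard deviation is inflated and the
    # modest outlier does not look unusual; once 15.0 is removed, it does.
    print(z_scores(x).round(2))
    print(z_scores(x[x < 15]).round(2))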

This problem of masking gets worse as the complexity of the data increases. For example, in regression problems, diagnostic plots are used to identify outliers. However, it is common that once a few outliers have been removed, others become visible. The problem is even worse in higher dimensions.

Robust methods provide automatic ways of detecting, downweighting (or removing), and flagging outliers, largely removing the need for manual screening.

Variety of applications

Although this article deals with general principles for univariate statistical methods, robust methods also exist for regression problems, generalized linear models, and parameter estimation of various distributions.

Measures of robustness

The basic tools used to describe and measure robustness are the breakdown point, the influence function and the sensitivity curve.

Breakdown point

Intuitively, the breakdown point of an estimator is the proportion of incorrect observations (i.e. arbitrarily large observations) an estimator can handle before giving an arbitrarily large result. For example, given n independent random variables (X_1,\cdots,X_n)\sim\mathcal{N}(0,1) and the corresponding realizations x_1,\cdots,x_n, we can use \overline{X_n}:=\frac{X_1+\cdots+X_n}{n} to estimate the mean. Such an estimator has a breakdown point of 0 because we can make \overline{x_n} arbitrarily large just by changing any one of x_1,\cdots,x_n.

The higher the breakdown point of an estimator, the more robust it is. Intuitively, a breakdown point cannot exceed 50% because if more than half of the observations are contaminated, it is not possible to distinguish between the underlying distribution and the contaminating distribution. Therefore, the maximum breakdown point is 0.5, and there are estimators which achieve it. For example, the median has a breakdown point of 0.5, and the X% trimmed mean has a breakdown point of X% for the chosen level of X. Huber (1981) and Maronna et al (2006) contain more details.
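
The contrast between the breakdown points of the mean and the median is easy to demonstrate numerically; the following Python sketch corrupts a single observation in a simulated sample.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 100)

    corrupted = x.copy()
    corrupted[0] = 1e6  # replace a single observation by an arbitrarily large value

    # One bad point can drive the mean arbitrarily far (breakdown point 0),
    # whereas the median barely moves (breakdown point 0.5).
    print("mean:  ", x.mean(), "->", corrupted.mean())
    print("median:", np.median(x), "->", np.median(corrupted))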

Example: speed of light data

In the speed of light example, removing the two lowest observations causes the mean to change from 26.2 to 27.75, a change of 1.55. The estimate of scale produced by the Qn method is 6.3. Intuitively, we can divide this by the square root of the sample size to get a robust standard error, and we find this quantity to be 0.78. Thus, the change in the mean resulting from removing two outliers is approximately twice the robust standard error.
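
Written out, this calculation is

\frac{Q_n}{\sqrt{n}} = \frac{6.3}{\sqrt{66}} \approx 0.78, \qquad \frac{27.75 - 26.2}{0.78} \approx 2.0.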

The 10% trimmed mean for the speed of light data is 27.43. Removing the two lowest observations and recomputing gives 27.67. Clearly, the trimmed mean is less affected by the outliers and has a higher breakdown point.

Notice that if we replace the lowest observation, -44, by -1000, the mean becomes 11.73, whereas the 10% trimmed mean is still 27.43. In many areas of applied statistics, it is common for data to be log-transformed to make them nearly symmetric. Very small values become large negative values when log-transformed, and zeroes become negative infinity, so this example is of practical interest.

Empirical influence function

The empirical influence function gives us an idea of how an estimator behaves when we change one point in the sample, and it relies only on the data (i.e. no model assumptions). The following picture shows Tukey's biweight function, which, as we will see later, is an example of what a "good" (in a sense defined later on) empirical influence function should look like:

Image:Biweight.png

The context is the following:

  1. (\Omega,\mathcal{A},P) is a probability space,
  2. (\mathcal{X},\Sigma) is a measure space (state space),
  3. Θ is a parameter space of dimension p\in\mathbb{N}^*,
  4. (Γ,S) is a measure space,
  5. \gamma:\Theta\rightarrow\Gamma is a projection,
  6. \mathcal{F}(\Sigma) is the set of all possible distributions on Σ

For example,

  1. (\Omega,\mathcal{A},P) is any probability space,
  2. (\mathcal{X},\Sigma)=(\mathbb{R},\mathcal{B}),
  3. \Theta=\mathbb{R}\times\mathbb{R}^+
  4. (\Gamma,S)=(\mathbb{R},\mathcal{B}),
  5. \gamma:\mathbb{R}\times\mathbb{R}^+\rightarrow\mathbb{R} is defined by γ(x,y) = x.

The definition of an empirical influence function is: Let n\in\mathbb{N}^* and let X_1,\cdots,X_n:(\Omega,\mathcal{A})\rightarrow(\mathcal{X},\Sigma) be iid random variables, with (x_1,\cdots,x_n) a sample from these variables. Let T_n:(\mathcal{X}^n,\Sigma^n)\rightarrow(\Gamma,S) be an estimator and let i\in\{1,\cdots,n\}. The empirical influence function EIF_i at observation i is defined by:

EIF_i:x\in\mathcal{X}\mapsto T_n(x_1,\cdots,x_{i-1},x,x_{i+1},\cdots,x_n)\in\Gamma

What this actually means is that we are replacing the i-th value in the sample by an arbitrary value and looking at the output of the estimator.
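
In code, the empirical influence function amounts to a simple loop. The Python sketch below (illustrative only; the function and variable names are not taken from the literature) traces EIF_i for the mean and the median of a simulated sample.

    import numpy as np

    def empirical_influence(estimator, x, i, grid):
        """EIF_i: replace the i-th observation by each value in `grid` and
        record the resulting estimate."""
        x = np.asarray(x, dtype=float)
        values = np.empty(len(grid))
        for j, new_value in enumerate(grid):
            perturbed = x.copy()
            perturbed[i] = new_value
            values[j] = estimator(perturbed)
        return values

    rng = np.random.default_rng(0)
    sample = rng.normal(0.0, 1.0, 50)
    grid = np.linspace(-10.0, 10.0, 201)

    eif_mean = empirical_influence(np.mean, sample, 0, grid)      # unbounded in the replaced value
    eif_median = empirical_influence(np.median, sample, 0, grid)  # bounded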

Influence function and sensitivity curve

Instead of relying solely on the data, we could use the distribution of the random variables. The approach is quite different from that of the previous paragraph. What we are now trying to do is to see what happens to an estimator when we change the distribution of the data slightly.

Let A be a convex subset of the set of all finite signed measures on \mathcal{X}. We want to estimate the parameter \theta\in\Theta of a distribution F in A. Let the functional T:A\rightarrow\Gamma be the asymptotic value of some estimator sequence (T_n)_{n\in\mathbb{N}}. We will suppose that this functional is Fisher consistent, i.e. \forall \theta\in\Theta, T(F_\theta)=\theta. This means that at the model F, the estimator sequence asymptotically measures the right quantity.

Let G be some distribution in A. What happens when the data do not follow the model F exactly, but instead a slightly different model "going towards" G?

We're looking at: dT_{G-F}(F) = \lim_{t\rightarrow 0^+}\frac{T(tG+(1-t)F) - T(F)}{t},

which is the directional derivative of T at F, in the direction of G.

Let x\in\mathcal{X}. Let Δ_x be the probability measure which gives mass 1 to x. We choose G = Δ_x. The influence function is then defined by:

IF(x; T; F):=\lim_{t\rightarrow 0^+}\frac{T(t\Delta_x+(1-t)F) - T(F)}{t}

It describes the effect of an infinitesimal contamination at the point x on the estimate we are seeking, standardized by the mass t of the contamination (the asymptotic bias caused by contamination in the observations).
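
For example, at a model distribution F with density f, the influence function of the mean is the unbounded function

IF(x; \text{mean}; F) = x - T(F),

while the influence function of the median is the bounded function

IF(x; \text{median}; F) = \frac{\operatorname{sign}(x - T(F))}{2\,f(T(F))}.

These are standard textbook results; the unboundedness of the first reflects the non-robustness of the mean, and the boundedness of the second reflects the robustness of the median.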

Desirable properties

Properties of an influence function that give it desirable performance are:

  1. Finite rejection point ρ*,
  2. Small gross-error sensitivity γ*,
  3. Small local-shift sensitivity λ*.

Rejection point

\rho^*:=\inf_{r>0}\{r:IF(x;T;F)=0, |x|>r\}

Gross-error sensitivity

\gamma^*(T;F) := \sup_{x\in\mathcal{X}}|IF(x; T ; F)|

Local-shift sensitivity

\lambda^*(T;F) := \sup_{(x,y)\in\mathcal{X}^2, x\neq y}\left\|\frac{IF(y ; T; F) - IF(x; T ; F)}{y-x}\right\|

This value, which looks a lot like a Lipschitz constant, represents the effect of shifting an observation slightly from x to a neighbouring point y, i.e. adding an observation at y and removing one at x.

M-estimators

Main article: M-estimator

(The mathematical context of this paragraph is given in the section on empirical influence functions.)

Historically, several approaches to robust estimation were proposed, including R-estimators and L-estimators. However, M-estimators now appear to dominate the field as a result of their generality, high breakdown point, and efficiency. See Huber (1981).

M-estimators are a generalization of maximum likelihood estimators (MLEs). What we try to do with MLEs is to maximize \prod_{i=1}^n f(x_i) or, equivalently, minimize \sum_{i=1}^n-\log f(x_i). In 1964, Huber proposed to generalize this to the minimization of \sum_{i=1}^n \rho(x_i), where ρ is some function. MLEs are therefore a special case of M-estimators (hence the name: "maximum likelihood type" estimators).

Minimizing \sum_{i=1}^n \rho(x_i) can often be done by differentiating ρ and solving \sum_{i=1}^n \psi(x_i) = 0, where \psi(x) = \frac{d\rho(x)}{dx}, provided ρ is differentiable.
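
As a concrete, simplified example, the Python sketch below computes a location M-estimate with Huber's ψ applied to standardized residuals (x_i - μ)/s, solving \sum_{i=1}^n \psi((x_i-\mu)/s) = 0 by iteratively reweighted averaging with the scale s held fixed at the MAD. The tuning constant 1.345 and the algorithmic details are conventional choices rather than anything prescribed by the references above.

    import numpy as np
    from scipy import stats

    def huber_psi(u, c=1.345):
        """Huber's psi: the identity in the middle, clipped at +/- c."""
        return np.clip(u, -c, c)

    def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
        """Location M-estimate solving sum(psi((x_i - mu)/s)) = 0 by iteratively
        reweighted averaging, with the scale s held fixed at the MAD."""
        x = np.asarray(x, dtype=float)
        s = stats.median_abs_deviation(x, scale="normal")
        mu = np.median(x)  # robust starting value
        for _ in range(max_iter):
            u = (x - mu) / s
            safe_u = np.where(u == 0, 1.0, u)
            w = np.where(u == 0, 1.0, huber_psi(u, c) / safe_u)  # weights psi(u)/u
            mu_new = np.sum(w * x) / np.sum(w)
            if abs(mu_new - mu) < tol:
                break
            mu = mu_new
        return mu

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(27.0, 5.0, 64), [-44.0, -2.0]])  # synthetic stand-in
    print("mean:", data.mean())
    print("median:", np.median(data))
    print("Huber M-estimate:", huber_location(data))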

Several choices of ρ and ψ have been proposed. The two figures below show four ρ functions and their corresponding ψ functions.

Image:RhoFunctions.png

For squared errors, ρ(x) increases at an accelerating rate, whilst for absolute errors, it increases at a constant rate. When Winsorizing is used, a mixture of these two effects is introduced: for small values of x, ρ increases at the squared rate, but once the chosen threshold is reached (1.5 in this example), the rate of increase becomes constant.

Tukey's biweight (also known as bisquare) function behaves in a similar way to the squared error function at first, but for larger errors, the function tapers off.

Image:PsiFunctions.png
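
For reference, the four ψ functions discussed here can be written down directly. In the Python sketch below, the tuning constants (1.5 for the Winsorized/Huber case, matching the threshold mentioned above, and 4.685 for the biweight) are commonly used values, assumed rather than read off the figures.

    import numpy as np

    def psi_squared(u):
        return 2.0 * u                       # rho(u) = u^2

    def psi_absolute(u):
        return np.sign(u)                    # rho(u) = |u|

    def psi_huber(u, c=1.5):
        return np.clip(u, -c, c)             # quadratic centre, linear beyond the threshold c

    def psi_biweight(u, c=4.685):
        # Tukey's biweight redescends to exactly zero for |u| > c.
        return np.where(np.abs(u) <= c, u * (1.0 - (u / c) ** 2) ** 2, 0.0)

    u = np.linspace(-6.0, 6.0, 401)
    psi_values = {"squared": psi_squared(u), "absolute": psi_absolute(u),
                  "Huber": psi_huber(u), "biweight": psi_biweight(u)}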


Properties of M-estimators

Notice that M-estimators do not necessarily relate to a probability density function. As such, off-the-shelf approaches to inference that arise from likelihood theory cannot, in general, be used.

It can be shown that M-estimators are asymptotically normally distributed, so that as long as their standard errors can be computed, an approximate approach to inference is available.

Since M-estimators are normal only asymptotically, for small sample sizes it might be appropriate to use an alternative approach to inference, such as the bootstrap. However, M-estimates are not necessarily unique (i.e. there might be more than one solution that satisfies the equations). Also, it is possible that any particular bootstrap sample can contain more outliers than the estimator's breakdown point. Therefore, some care is needed when designing bootstrap schemes.

Of course, as we saw with the speed of light example, the mean is only normally distributed asymptotically and when outliers are present the approximation can be very poor even for quite large samples. However, classical statistical tests, including those based on the mean, are typically bounded above by the nominal size of the test. The same is not true of M-estimators and the type I error rate can be substantially above the nominal level.

These considerations do not "invalidate" M-estimation in any way. They merely make clear that some care is needed in their use, as is true of any other method of estimation.

Influence function of an M-estimator

It can be shown that the influence function of an M-estimator T is proportional to ψ (see Huber, 1981 (and 2004), page 45), which means we can derive the properties of such an estimator (such as its rejection point, gross-error sensitivity or local-shift sensitivity) when we know its ψ function.

IF(x; T, F) = M^{-1}\psi(x,T(F)), where the p\times p matrix M is given by: M = -\int_{\mathcal{X}}\left(\frac{\partial \psi(x,\theta)}{\partial \theta}\right)_{T(F)}dF(x).

Choice of ψ and ρ

In many practical situations, the choice of the ψ function is not critical to gaining a good robust estimate, and many choices will give similar results that offer great improvements, in terms of efficiency and bias, over classical estimates in the presence of outliers (Huber, 1981).

Theoretically, redescending ψ functions are to be preferred, and Tukey's biweight (also known as bisquare) function is a popular choice. Maronna et al (2006) recommend the biweight function with efficiency at the normal set to 85%.

Robust parametric approaches

M-estimators do not necessarily relate to a density function and so are not fully parametric. Fully parametric approaches to robust modelling and inference, both Bayesian and likelihood approaches, usually deal with heavy tailed distributions such as Student's t-distribution.

For the t-distribution with ν degrees of freedom, it can be shown that

\psi(x) = \frac{x}{x^2 + \nu}.

For ν = 1, the t-distribution is equivalent to the Cauchy distribution. Notice that the degrees of freedom is sometimes known as the kurtosis parameter; it is the parameter that controls how heavy the tails are. In principle, ν can be estimated from the data in the same way as any other parameter. In practice, it is common for there to be multiple local maxima when ν is allowed to vary, so it is common to fix ν at a value around 4 or 6. The figure below displays the ψ-function for 4 different values of ν.

Image:TDistPsi.png
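
A sketch of this approach in Python, using scipy's t distribution on a synthetic contaminated sample (not the Newcomb data); fixing ν relies on scipy's generic fit interface, in which a shape parameter named df is held fixed by passing fdf.

    import numpy as np
    from scipy import stats

    def t_psi(x, nu):
        """The psi function x / (x^2 + nu) given above (up to a constant factor)."""
        return x / (x**2 + nu)

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(27.0, 5.0, 64), [-44.0, -2.0]])  # synthetic stand-in

    # Maximum likelihood fit of a location-scale t model with nu fixed at 4
    # (fdf fixes the shape parameter in scipy's generic fit interface).
    df_fixed, loc_fixed, scale_fixed = stats.t.fit(data, fdf=4)
    print("nu fixed at 4: location", loc_fixed, "scale", scale_fixed)

    # Letting nu vary as well.
    df_hat, loc_hat, scale_hat = stats.t.fit(data)
    print("nu estimated:", df_hat, "location", loc_hat, "scale", scale_hat)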

Example: speed of light data

For the speed of light data, allowing the kurtosis parameter to vary and maximizing the likelihood, we get

\hat\mu = 27.40, \hat\sigma = 3.81, \hat\nu = 2.13.

Fixing ν = 4 and maximizing the likelihood gives

\hat\mu = 27.49, \hat\sigma = 4.51.

Key contributors

Key contributors to the field of robust statistics include Frank Hampel, Peter J. Huber, Peter J. Rousseeuw, and John Tukey.

References

Robust Statistics - The Approach Based on Influence Functions, Frank R. Hampel, Elvezio M. Ronchetti, Peter J. Rousseeuw and Werner A. Stahel, Wiley, 1986 (republished in paperback, 2005)

Robust Statistics, Peter. J. Huber, Wiley, 1981 (republished in paperback, 2004)

Robust Regression and Outlier Detection, Peter J. Rousseeuw and Annick M. Leroy, Wiley, 1987 (republished in paperback, 2003)

Robust Statistics - Theory and Methods, Ricardo Maronna, Doug Martin and Victor Yohai, Wiley, 2006

Bayesian Data Analysis, Andrew Gelman, John B. Carlin, Hal S. Stern and Donald B. Rubin, Chapman & Hall/CRC, 2004

Alternatives to the Median Absolute Deviation, P. J. Rousseeuw and C. Croux, Journal of the American Statistical Association, 88, 1993
