Mahalanobis distance

In statistics, Mahalanobis distance is a distance measure introduced by P. C. Mahalanobis in 1936. It is based on correlations between variables, by which different patterns can be identified and analysed. It is a useful way of determining the similarity of an unknown sample set to a known one. It differs from Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant, i.e. it does not depend on the scale of the measurements.

Formally, the Mahalanobis distance from a group of values with mean \mu = (\mu_1, \mu_2, \mu_3, \dots, \mu_p)^T and covariance matrix \Sigma for a multivariate vector x = (x_1, x_2, x_3, \dots, x_p)^T is defined as:

D_M(x) = \sqrt{(x - \mu)^T \Sigma^{-1} (x - \mu)}.

Mahalanobis distance can also be defined as a dissimilarity measure between two random vectors \vec{x} and \vec{y} of the same distribution with covariance matrix \Sigma:

d(\vec{x},\vec{y}) = \sqrt{(\vec{x}-\vec{y})^T \Sigma^{-1} (\vec{x}-\vec{y})}.
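
As a minimal sketch of both definitions in NumPy/SciPy (the sample data, variable names, and the helper function are illustrative; only scipy.spatial.distance.mahalanobis is a standard API):

    import numpy as np
    from scipy.spatial.distance import mahalanobis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))      # illustrative sample, rows = observations

    mu = X.mean(axis=0)                # sample mean vector
    Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance

    def mahalanobis_to_mean(x, mu, Sigma_inv):
        # D_M(x) = sqrt((x - mu)^T Sigma^{-1} (x - mu))
        d = x - mu
        return np.sqrt(d @ Sigma_inv @ d)

    x = np.array([1.0, 2.0, 0.5])
    y = np.array([0.0, 1.0, 1.5])

    print(mahalanobis_to_mean(x, mu, Sigma_inv))
    # The two-vector form uses the same quadratic form; note that SciPy's
    # mahalanobis() takes the *inverse* covariance matrix as its third argument.
    print(mahalanobis(x, y, Sigma_inv))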

If the covariance matrix is the identity matrix, the Mahalanobis distance reduces to the Euclidean distance. If the covariance matrix is diagonal, then the resulting distance measure is called the normalized Euclidean distance:

d(\vec{x},\vec{y}) = \sqrt{\sum_{i=1}^p \frac{(x_i - y_i)^2}{\sigma_i^2}},

where \sigma_i is the standard deviation of x_i over the sample set.
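
For the diagonal case, a small sketch (with made-up vectors and variances) showing that the direct formula agrees with SciPy's standardized Euclidean metric:

    import numpy as np
    from scipy.spatial.distance import seuclidean

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([2.0, 0.0, 3.5])
    var = np.array([0.5, 2.0, 1.0])    # per-variable variances sigma_i^2

    # Direct evaluation of the normalized Euclidean distance ...
    d_direct = np.sqrt(np.sum((x - y) ** 2 / var))
    # ... agrees with SciPy's standardized Euclidean metric, which takes
    # the variance vector as its third argument.
    print(d_direct, seuclidean(x, y, var))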

Intuitive explanation

Consider the problem of estimating the probability that a test point in N-dimensional Euclidean space belongs to a set, where we are given sample points that definitely belong to that set. Our first step would be to find the average or center of mass of the sample points. Intuitively, the closer the point in question is to this center of mass, the more likely it is to belong to the set.

However, we also need to know how large the set is. A simplistic approach is to estimate the standard deviation of the distances of the sample points from the center of mass. If the distance between the test point and the center of mass is less than one standard deviation, then we conclude that it is highly probable that the test point belongs to the set. The further away it is, the more likely it is that the test point should not be classified as belonging to the set.

This intuitive approach can be made quantitative by defining the normalized distance between the test point and the set to be \frac{x - \mu}{\sigma}. By plugging this into the normal distribution, we obtain the probability that the test point belongs to the set.
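
As a hedged illustration of this step, assuming normally distributed data: in one dimension the normalized distance is an ordinary z-score, and in p dimensions the squared Mahalanobis distance of a point drawn from N(\mu, \Sigma) follows a chi-squared distribution with p degrees of freedom. All numbers below are illustrative:

    import numpy as np
    from scipy.stats import norm, chi2

    # One-dimensional case: the normalized distance is a z-score, and the
    # normal distribution gives a tail probability.
    z = abs(2.5 - 1.0) / 0.8           # |x - mu| / sigma, illustrative numbers
    p_1d = 2 * norm.sf(z)              # two-sided tail probability

    # Multivariate generalization: for x drawn from N(mu, Sigma) in p
    # dimensions, D^2 follows a chi-squared law with p degrees of freedom,
    # so a Mahalanobis distance converts directly to a tail probability.
    D, p_dims = 2.7, 3                 # illustrative distance and dimension
    p_md = chi2.sf(D ** 2, df=p_dims)
    print(p_1d, p_md)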

The drawback of the above approach is that it assumes the sample points are distributed about the center of mass in a spherical manner. Were the distribution decidedly non-spherical, for instance ellipsoidal, then we would expect the probability of the test point belonging to the set to depend not only on its distance from the center of mass, but also on the direction. In those directions where the ellipsoid has a short axis the test point must be closer, while in those where the axis is long the test point can be further away from the center.

Putting this on a mathematical basis, the ellipsoid that best represents the set's probability distribution can be estimated by building the covariance matrix of the samples. The Mahalanobis distance is simply the distance of the test point from the center of mass divided by the width of the ellipsoid in the direction of the test point.
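
One way to see this concretely, sketched here on synthetic data, is that the Mahalanobis distance equals the ordinary Euclidean norm of the deviation after rescaling along the ellipsoid's principal axes, i.e. the eigenvectors and eigenvalues of the covariance matrix:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.2], [1.2, 1.0]], size=500)
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)

    # The eigenvectors of the covariance matrix give the ellipsoid's axes
    # and the eigenvalues its squared widths along those axes.
    evals, evecs = np.linalg.eigh(Sigma)

    x = np.array([3.0, 1.0])
    d = x - mu

    # Mahalanobis distance via the quadratic form ...
    D_quad = np.sqrt(d @ np.linalg.inv(Sigma) @ d)
    # ... equals the norm of the deviation re-expressed in units of
    # ellipsoid width along each principal axis ("whitening").
    D_white = np.linalg.norm((evecs.T @ d) / np.sqrt(evals))
    print(D_quad, D_white)             # the two agree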

Relationship to leverage

Mahalanobis distance is closely related to the leverage statistic h. The squared Mahalanobis distance of a data point from the centroid of a multivariate data set is related to that point's leverage by D^2 = (N - 1)(h - 1/N), where N is the number of data points in the set.
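
A quick numerical check of this relationship on synthetic data (NumPy only; the hat-matrix construction assumes a regression design with an intercept column and the sample covariance with denominator N - 1):

    import numpy as np

    rng = np.random.default_rng(2)
    N, p = 50, 2
    X = rng.normal(size=(N, p))

    # Leverage: diagonal of the hat matrix for a design with an intercept.
    Xd = np.hstack([np.ones((N, 1)), X])
    h = np.diag(Xd @ np.linalg.inv(Xd.T @ Xd) @ Xd.T)

    # Squared Mahalanobis distance of each row from the centroid.
    mu = X.mean(axis=0)
    Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
    D2 = np.einsum('ij,jk,ik->i', X - mu, Sigma_inv, X - mu)

    print(np.allclose(h, 1.0 / N + D2 / (N - 1)))   # True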

Applications

Mahalanobis distance is widely used in cluster analysis and other classification techniques. It is closely related to Hotelling's T-squared distribution, which is used for multivariate statistical testing.

To use the Mahalanobis distance to classify a test point as belonging to one of N classes, one first estimates the covariance matrix of each class, usually from samples known to belong to that class. Then, given a test sample, one computes the Mahalanobis distance to each class, and classifies the test point as belonging to the class for which the Mahalanobis distance is minimal. Using the probabilistic interpretation given above, this is equivalent to selecting the class with the highest probability when the classes share a common covariance matrix.
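
A minimal sketch of such a classifier on two synthetic Gaussian classes (the function names, labels, and data are illustrative):

    import numpy as np

    def fit_class(X):
        # Per-class mean and inverse covariance, estimated from known samples.
        return X.mean(axis=0), np.linalg.inv(np.cov(X, rowvar=False))

    def classify(x, fitted):
        # Assign x to the class whose Mahalanobis distance is minimal.
        def dist(mu, Sigma_inv):
            d = x - mu
            return np.sqrt(d @ Sigma_inv @ d)
        return min(fitted, key=lambda label: dist(*fitted[label]))

    rng = np.random.default_rng(3)
    fitted = {
        'A': fit_class(rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=100)),
        'B': fit_class(rng.multivariate_normal([3, 3], [[2.0, -0.5], [-0.5, 1.0]], size=100)),
    }
    print(classify(np.array([2.5, 2.0]), fitted))   # expected: 'B'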

Also, Mahalanobis distance and leverage are often used to detect outliers, especially in the development of linear regression models. A point that has a greater Mahalanobis distance from the rest of the sample points is said to have higher leverage, since it has a greater influence on the slope or coefficients of the regression equation.
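
A closing sketch of outlier detection with this distance, assuming approximately Gaussian data and using a chi-squared quantile as the cutoff (the planted outlier and the 97.5% threshold are illustrative choices):

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(4)
    X = rng.multivariate_normal([0, 0, 0], np.eye(3), size=200)
    X[0] = [6.0, -5.0, 7.0]            # plant one gross outlier

    mu = X.mean(axis=0)
    Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
    D2 = np.einsum('ij,jk,ik->i', X - mu, Sigma_inv, X - mu)

    # Flag points whose squared distance exceeds the chi-squared 97.5%
    # quantile for p = 3 dimensions (a common, if approximate, cutoff).
    print(np.where(D2 > chi2.ppf(0.975, df=3))[0])  # includes index 0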