Partial correlation
From Wikipedia, the free encyclopedia
In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed.
Formal definition
Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z1, Z2, …, Zn}, written ρXY·Z, is the correlation between the residuals RX and RY resulting from the linear regression of X on Z and of Y on Z, respectively. In fact, the first-order partial correlation (i.e., with a single controlling variable) is nothing other than the difference between the correlation and the product of the removable correlations, divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation to joint variance through correlation, are discussed in Guilford (1973, pp. 344–345).
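For a single controlling variable Z, this description corresponds to the familiar first-order formula, in which ρXZ and ρYZ are the removable correlations and the square roots in the denominator are their coefficients of alienation:

\rho_{XY\cdot Z} = \frac{\rho_{XY} - \rho_{XZ}\,\rho_{YZ}}{\sqrt{1 - \rho_{XZ}^{2}}\ \sqrt{1 - \rho_{YZ}^{2}}}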
Computation
Using linear regression
The obvious way to compute a (sample) partial correlation is to solve the two associated linear regression problems, get the residuals, and calculate the correlation between the residuals. If we write xi, yi and zi to denote i.i.d. samples of some joint probability distribution over X, Y and Z, solving the linear regression problems amounts to finding

\mathbf{w}_X^{*} = \arg\min_{\mathbf{w}} \sum_{i=1}^{N} \left( x_i - \langle \mathbf{w}, \mathbf{z}_i \rangle \right)^{2}
\mathbf{w}_Y^{*} = \arg\min_{\mathbf{w}} \sum_{i=1}^{N} \left( y_i - \langle \mathbf{w}, \mathbf{z}_i \rangle \right)^{2}

with N being the number of samples and \langle \mathbf{v}, \mathbf{w} \rangle the scalar product between the vectors v and w. The residuals are then

r_{X,i} = x_i - \langle \mathbf{w}_X^{*}, \mathbf{z}_i \rangle
r_{Y,i} = y_i - \langle \mathbf{w}_Y^{*}, \mathbf{z}_i \rangle

and the sample partial correlation is

\hat{\rho}_{XY\cdot Z} = \frac{N \sum_{i} r_{X,i}\, r_{Y,i} - \sum_{i} r_{X,i} \sum_{i} r_{Y,i}}{\sqrt{N \sum_{i} r_{X,i}^{2} - \left( \sum_{i} r_{X,i} \right)^{2}}\ \sqrt{N \sum_{i} r_{Y,i}^{2} - \left( \sum_{i} r_{Y,i} \right)^{2}}}
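This procedure can be written directly in Python with NumPy (a minimal sketch; the function name partial_corr and the use of numpy.linalg.lstsq with an appended intercept column are illustrative choices, not part of the original text):

import numpy as np

def partial_corr(x, y, z):
    # Sample partial correlation of x and y given the columns of z.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    z = np.atleast_2d(np.asarray(z, dtype=float))
    if z.shape[0] != x.shape[0]:
        z = z.T                          # accept (n, N) input as well as (N, n)
    # Append a constant column so each regression includes an intercept.
    zc = np.column_stack([z, np.ones(x.shape[0])])
    # Least-squares fits of x on z and y on z, then the residuals.
    rx = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]
    ry = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]
    # Pearson correlation of the two residual vectors.
    return np.corrcoef(rx, ry)[0, 1]

For example, partial_corr(x, y, np.column_stack([z1, z2])) controls for two variables at once.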
Using recursive formula
It can be computationally expensive to solve the linear regression problems. In fact, the nth-order partial correlation (i.e., with |Z| = n) can be easily computed from three (n − 1)th-order partial correlations. The zeroth-order partial correlation ρXY·Ø is defined to be the regular correlation coefficient ρXY.
It holds, for any Z0 ∈ Z:

\rho_{XY\cdot \mathbf{Z}} = \frac{\rho_{XY\cdot \mathbf{Z}\setminus\{Z_0\}} - \rho_{XZ_0\cdot \mathbf{Z}\setminus\{Z_0\}}\,\rho_{YZ_0\cdot \mathbf{Z}\setminus\{Z_0\}}}{\sqrt{1 - \rho_{XZ_0\cdot \mathbf{Z}\setminus\{Z_0\}}^{2}}\ \sqrt{1 - \rho_{YZ_0\cdot \mathbf{Z}\setminus\{Z_0\}}^{2}}}

Naïvely implementing this computation as a recursive algorithm yields an exponential time complexity. However, this computation has the overlapping-subproblems property, so that using dynamic programming or simply caching the results of the recursive calls yields a complexity of \mathcal{O}(n^3).
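A sketch of this recursion in Python, with functools.lru_cache playing the role of the dynamic-programming table (the interface, a plain correlation matrix indexed by variable numbers, is an illustrative choice):

from functools import lru_cache

def make_partial_corr(corr):
    # corr: symmetric matrix (nested lists or array) of plain correlations.
    # Returns rho(i, j, given): the partial correlation of variables i and j
    # given the tuple `given` of controlling variable indices.
    @lru_cache(maxsize=None)
    def rho(i, j, given):
        if not given:                    # zeroth order: the regular correlation
            return corr[i][j]
        k, rest = given[0], given[1:]    # peel off one controlling variable Z0
        num = rho(i, j, rest) - rho(i, k, rest) * rho(j, k, rest)
        den = ((1 - rho(i, k, rest) ** 2) * (1 - rho(j, k, rest) ** 2)) ** 0.5
        return num / den
    return rho

Usage is, e.g., make_partial_corr(corr)(0, 1, (2, 3)); the controlling set must be a tuple so the cache can hash it. Because every call peels off the controlling variables in the same order, only O(n^3) distinct subproblems are ever cached, matching the complexity stated above.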
Using matrix inversion
Another approach makes it possible to compute, in \mathcal{O}(n^3) time, all partial correlations between any two variables Xi and Xj of a set V of cardinality n given all others, i.e., given V ∖ {Xi, Xj}, provided the correlation matrix Ω = (ωij), where ωij = ρXiXj, is invertible. If we define P = Ω−1, we have:

\rho_{X_i X_j \cdot V \setminus \{X_i, X_j\}} = -\frac{p_{ij}}{\sqrt{p_{ii}\,p_{jj}}}
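In NumPy this identity reads as follows (a minimal sketch; all_partial_corrs is a hypothetical helper name):

import numpy as np

def all_partial_corrs(omega):
    # All pairwise partial correlations, each given every remaining variable,
    # from an invertible correlation matrix omega.
    p = np.linalg.inv(omega)           # P = Omega^{-1}
    d = np.sqrt(np.diag(p))
    rho = -p / np.outer(d, d)          # rho_ij = -p_ij / sqrt(p_ii * p_jj)
    np.fill_diagonal(rho, 1.0)         # the diagonal is 1 by convention
    return rho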
Interpretation
Geometrical
Let three variables X, Y, Z be chosen from a joint probability distribution over n variables V. Further let vi, 1 ≤ i ≤ N, be N n-dimensional i.i.d. samples taken from the joint probability distribution over V. We then consider the N-dimensional vectors x (formed by the successive values of X over the samples), y (formed by the values of Y) and z (formed by the values of Z).
It can be shown that the residuals RX coming from the linear regression of X using Z, if also considered as an N-dimensional vector rX, have a zero scalar product with the vector z generated by Z. This means that the residuals vector lives on a hyperplane Sz which is perpendicular to z.
The same also applies to the residuals RY generating a vector rY. The desired partial correlation is then the cosine of the angle φ between the projections rX and rY of x and y, respectively, onto the hyperplane perpendicular to z.[1]
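A small numeric check of this picture (a sketch with simulated data; the regression here includes an intercept, so the residual vectors are centered and their cosine coincides with the sample partial correlation):

import numpy as np

rng = np.random.default_rng(0)
N = 1000
z = rng.normal(size=N)
x = 2.0 * z + rng.normal(size=N)
y = -1.0 * z + rng.normal(size=N)

# Project x and y onto the hyperplane perpendicular to z (and to the
# all-ones vector, which plays the role of the intercept).
basis = np.column_stack([z, np.ones(N)])
coef = np.linalg.lstsq(basis, np.column_stack([x, y]), rcond=None)[0]
rx, ry = (np.column_stack([x, y]) - basis @ coef).T

# The cosine of the angle between the residual vectors ...
cos_phi = rx @ ry / (np.linalg.norm(rx) * np.linalg.norm(ry))
# ... equals the correlation of the residuals, i.e., the partial correlation.
print(cos_phi, np.corrcoef(rx, ry)[0, 1])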
As conditional independence test
With the assumption that all involved variables are multivariate Gaussian, the partial correlation ρXY·Z is zero if and only if X is conditionally independent of Y given Z.[2] This property does not hold in the general case.
In order to test whether a sample partial correlation vanishes, Fisher's z-transform of the partial correlation can be used:

z(\hat{\rho}_{XY\cdot \mathbf{Z}}) = \frac{1}{2} \ln\!\left( \frac{1 + \hat{\rho}_{XY\cdot \mathbf{Z}}}{1 - \hat{\rho}_{XY\cdot \mathbf{Z}}} \right)

The null hypothesis is H0: ρXY·Z = 0, to be tested against the two-tail alternative HA: ρXY·Z ≠ 0. We reject H0 with significance level α if

\sqrt{N - |\mathbf{Z}| - 3}\,\cdot\,\left| z(\hat{\rho}_{XY\cdot \mathbf{Z}}) \right| > \Phi^{-1}(1 - \alpha/2),

where Φ(·) is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, and N is the sample size. This z-transform is approximate; the actual distribution of the sample (partial) correlation coefficient is not straightforward.
The distribution of the sample partial correlation was described by Fisher.[3]
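A sketch of this test in Python (SciPy's norm supplies Φ and Φ⁻¹; the function name and argument layout are illustrative):

import numpy as np
from scipy.stats import norm

def test_partial_corr(r, N, k, alpha=0.05):
    # Test H0: rho_{XY.Z} = 0, given a sample partial correlation r
    # computed from N samples with k = |Z| controlling variables.
    z = 0.5 * np.log((1 + r) / (1 - r))     # Fisher z-transform
    stat = np.sqrt(N - k - 3) * abs(z)      # approximately N(0, 1) under H0
    p_value = 2 * (1 - norm.cdf(stat))      # two-tailed p-value
    return stat > norm.ppf(1 - alpha / 2), p_value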
Use in time series analysis
In time series analysis, the partial autocorrelation function (sometimes "partial correlation function") of a time series is defined, for lag h, as

\varphi(h) = \rho_{X_0 X_h \cdot \{X_1,\, \dots,\, X_{h-1}\}}
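In Python this function is available, for instance, as statsmodels.tsa.stattools.pacf (a sketch; the AR(1) example series is an illustrative choice):

import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):                  # an AR(1) series with coefficient 0.7
    x[t] = 0.7 * x[t - 1] + rng.normal()
# The partial autocorrelation should be clearly nonzero at lag 1
# and near zero for higher lags.
print(pacf(x, nlags=5))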
References
- ^ Rummel, R. J. (1976). Understanding Correlation.
- ^ Baba, Kunihiro; Ritei Shibata & Masaaki Sibuya (2004). "Partial correlation and conditional correlation as measures of conditional independence". Australian and New Zealand Journal of Statistics 46 (4): 657–664.
- ^ R. A. Fisher (1924). "The distribution of the partial correlation coefficient". Metron 3 (3–4): 329–332.
Other
- Guilford J. P., Fruchter B. (1973). Fundamental statistics in psychology and education. Tokyo: McGraw-Hill Kogakusha, Ltd.