Likelihood-ratio test

A likelihood-ratio test is a statistical test in which the ratio is computed between the maximum of the likelihood function under the null hypothesis and the maximum with that constraint relaxed. Many common test statistics, such as the Z-test, the F-test, Pearson's chi-square test and the G-test, can be phrased as log-likelihood ratios or approximations thereof. For example, if the likelihood ratio is Λ (lambda) and the null hypothesis holds, then for commonly occurring families of probability distributions −2 log Λ has a particularly convenient asymptotic distribution (a chi-squared distribution). Now that computation is cheap, however, other approaches may be more useful, especially in cases where the asymptotic approximation is suspect.

Details

A statistical model is often a parametrized family of probability density functions or probability mass functions fθ(x). A null hypothesis is often stated by saying that the parameter θ lies in a specified subset Θ0 of the parameter space Θ. The likelihood function L(θ) = L(θ | x) = p(x | θ) = fθ(x) is a function of the parameter θ with x held fixed at the value that was actually observed, i.e., the data. The likelihood ratio is

\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.

This is a function of the data x, and is therefore a statistic. The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small, and is justified by the Neyman-Pearson lemma. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable ("Type I" errors consist of the rejection of a null hypothesis that is true).
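A minimal sketch of this computation for a single coin tossed n times, with the simple null hypothesis that the probability of heads is 0.5 (so Θ0 contains only the point 0.5 while Θ is the whole interval [0, 1]); the counts below are hypothetical and SciPy is assumed to be available:

    from scipy.stats import binom

    n, heads = 100, 61           # hypothetical observation: 61 heads in 100 tosses

    # Numerator: supremum of the likelihood over Theta_0 = {0.5}.
    lik_null = binom.pmf(heads, n, 0.5)

    # Denominator: supremum over Theta = [0, 1], attained at the MLE p_hat = heads / n.
    p_hat = heads / n
    lik_alt = binom.pmf(heads, n, p_hat)

    lam = lik_null / lik_alt     # the likelihood ratio Lambda(x)
    print(lam)

Because the denominator maximizes over a larger set, Λ always lies between 0 and 1, and small values count as evidence against the null hypothesis.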

If the null hypothesis is true and the observation is a sequence of n independent identically distributed random variables, then as the sample size n approaches ∞, the test statistic −2 log Λ will be asymptotically χ2 distributed with degrees of freedom equal to the difference in dimensionality of Θ and Θ0.
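As a sketch of how this asymptotic result is used, consider testing whether a normal sample has mean zero when the variance is unknown: Θ is two-dimensional (mean and variance) and Θ0 is one-dimensional (variance only), so the difference in dimensionality is one. The data below are simulated, and the reduction of −2 log Λ to n log(σ̂0²/σ̂²) is specific to this model:

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.3, scale=1.0, size=50)   # simulated data, true mean 0.3
    n = len(x)

    # Full model: mean and variance both free (dim Theta = 2).
    sigma2_hat = np.mean((x - x.mean()) ** 2)
    # Null model: mean fixed at 0, variance free (dim Theta_0 = 1).
    sigma2_null = np.mean(x ** 2)

    # For this model, -2 log Lambda simplifies to n * log(sigma2_null / sigma2_hat).
    stat = n * np.log(sigma2_null / sigma2_hat)
    df = 2 - 1                                    # difference in dimensionality
    p_value = chi2.sf(stat, df)
    print(stat, p_value)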

For instance, in the case of Pearson's test, we might try to compare two coins to determine whether they have the same probability of coming up heads. Our observation can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times the coin for that row came up heads or tails. The contents of this table are our observation X.

           Heads   Tails
    Coin 1   k1H     k1T
    Coin 2   k2H     k2T

Here ω consists of the parameters p1H, p1T, p2H, and p2T, where piH and piT are the probabilities that coin i comes up heads and tails, respectively. The hypothesis space H is defined by the usual constraints on a distribution: pij ≥ 0, pij ≤ 1, and piH + piT = 1. The null hypothesis H0 is the sub-space where p1j = p2j. In all of these constraints, i = 1, 2 and j = H, T.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the regularity conditions needed for the log-likelihood ratio to have the desired asymptotic distribution. Since the constraint reduces the two-dimensional H to the one-dimensional H0, the asymptotic distribution for the test will be χ2(1), the χ2 distribution with one degree of freedom.
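A sketch of this two-coin test with hypothetical counts, comparing the likelihood maximized over H (a separate heads probability for each coin) with the likelihood maximized over H0 (a single pooled heads probability):

    import numpy as np
    from scipy.stats import chi2

    # Hypothetical counts for the contingency table above.
    k1H, k1T = 43, 57     # coin 1
    k2H, k2T = 62, 38     # coin 2
    n1, n2 = k1H + k1T, k2H + k2T

    def log_lik(kH, kT, pH):
        # Binomial log-likelihood for one coin, omitting the binomial coefficient,
        # which appears in both numerator and denominator of Lambda and cancels.
        return kH * np.log(pH) + kT * np.log(1 - pH)

    # Supremum over H: each coin gets its own maximum-likelihood estimate.
    ll_alt = log_lik(k1H, k1T, k1H / n1) + log_lik(k2H, k2T, k2H / n2)
    # Supremum over H0: a single pooled probability of heads.
    p_pooled = (k1H + k2H) / (n1 + n2)
    ll_null = log_lik(k1H, k1T, p_pooled) + log_lik(k2H, k2T, p_pooled)

    stat = -2 * (ll_null - ll_alt)
    print(stat, chi2.sf(stat, df=1))   # compare against chi-squared with 1 degree of freedom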

For the general contingency table, writing kij for the observed count in cell (i, j) and mij for the count expected in that cell under the null hypothesis, we can write the log-likelihood ratio statistic as

-2 \log \Lambda = 2 \sum_{i\,j} k_{ij} \log {k_{ij} \over m_{ij}}.
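A sketch of this general form, assuming the expected counts mij are obtained from the row and column totals (i.e., the null hypothesis that every row shares the same cell probabilities); applied to the hypothetical two-coin table above it reproduces the pooled-versus-separate computation:

    import numpy as np
    from scipy.stats import chi2

    def lr_statistic(table):
        # -2 log Lambda for an r-by-c contingency table, with expected counts m_ij
        # computed from the marginal totals under the null hypothesis.
        table = np.asarray(table, dtype=float)
        row_totals = table.sum(axis=1, keepdims=True)
        col_totals = table.sum(axis=0, keepdims=True)
        m = row_totals * col_totals / table.sum()      # expected counts under H0
        mask = table > 0                               # treat 0 * log(0) as 0
        stat = 2 * np.sum(table[mask] * np.log(table[mask] / m[mask]))
        df = (table.shape[0] - 1) * (table.shape[1] - 1)
        return stat, chi2.sf(stat, df)

    print(lr_statistic([[43, 57], [62, 38]]))   # the hypothetical two-coin table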

Bayesian criticisms of classical likelihood ratio tests focus on two issues:

  1. the supremum function in the calculation of the likelihood ratio, saying that this takes no account of the uncertainty about θ and that using maximum likelihood estimates in this way can promote complicated alternative hypotheses with an excessive number of free parameters;
  2. testing the probability that the sample would produce a result as extreme or more extreme under the null hypothesis, saying that this bases the test on the probability of extreme events that did not happen.

Instead, such critics put forward methods such as Bayes factors, which explicitly take uncertainty about the parameters into account, and which are based on the evidence that did occur.
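As a rough sketch of that alternative, the following computes a Bayes factor for the single-coin data used earlier, comparing the point null p = 0.5 against an alternative that places a uniform prior on p; the counts and the choice of prior are illustrative assumptions:

    from math import comb

    n, heads = 100, 61                      # hypothetical coin data

    # Marginal likelihood under H0: p = 0.5 exactly.
    m0 = comb(n, heads) * 0.5 ** n
    # Marginal likelihood under H1: p uniform on [0, 1]; integrating the binomial
    # likelihood against a uniform prior gives exactly 1 / (n + 1).
    m1 = 1 / (n + 1)

    bayes_factor = m0 / m1                  # evidence for H0 relative to H1
    print(bayes_factor)

Unlike the likelihood ratio, the alternative hypothesis's likelihood here is averaged over the prior rather than maximized, which is how the uncertainty about the parameter enters.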
