Likelihood-ratio test

Not to be confused with the use of likelihood ratios in diagnostic testing.

In statistics, a likelihood ratio test is a statistical test used to compare the goodness of fit of two models, one of which (the null model) is a special case of the other (the alternative model). The test is based on the likelihood ratio, which expresses how many times more likely the data are under one model than the other. This likelihood ratio, or equivalently its logarithm, can then be used to compute a p-value, or compared to a critical value to decide whether to reject the null model in favour of the alternative model. When the logarithm of the likelihood ratio is used, the statistic is known as a log-likelihood ratio statistic, and the probability distribution of this test statistic, assuming that the null model is true, can be approximated using Wilks's theorem.

In the case of distinguishing between two models, each of which has no unknown parameters, use of the likelihood ratio test can be justified by the Neyman–Pearson lemma, which demonstrates that such a test has the highest power among all competitors.[1]

Simple-vs-simple hypotheses

A statistical model is often a parametrized family of probability density functions or probability mass functions f(x|\theta). A simple-vs-simple hypothesis test has completely specified models under both the null and alternative hypotheses, which for convenience are written in terms of fixed values of a notional parameter \theta:


\begin{align}
H_0 &:& \theta=\theta_0 ,\\
H_1 &:& \theta=\theta_1 .
\end{align}

Note that under either hypothesis, the distribution of the data is fully specified; there are no unknown parameters to estimate. The likelihood ratio test is based on the likelihood ratio, which is often denoted by \Lambda (the capital Greek letter lambda). The likelihood ratio is defined as follows:[2][3]


\Lambda(x) = \frac{ L(\theta_0\mid x) }{ L(\theta_1\mid x) } = \frac{ f(x_1,\ldots,x_n\mid\theta_0) }{ f(x_1,\ldots,x_n\mid\theta_1) }

or

\Lambda(x)=\frac{L(\theta_0\mid x)}{\sup\{\,L(\theta\mid x):\theta\in\{\theta_0,\theta_1\}\}},

where L(\theta|x) is the likelihood function, and \sup is the supremum function. Note that some references may use the reciprocal as the definition.[4] In the form stated here, the likelihood ratio is small if the alternative model is better than the null model and the likelihood ratio test provides the decision rule as follows:

If \Lambda > c , do not reject H_0;
If \Lambda < c , reject H_0;
Reject with probability q if \Lambda = c .

The values c and q are usually chosen to obtain a specified significance level \alpha, through the relation q\cdot P(\Lambda=c \mid H_0) + P(\Lambda < c \mid H_0) = \alpha. The Neyman–Pearson lemma states that this likelihood ratio test is the most powerful among all level-\alpha tests for this problem.[1]
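For concreteness, here is a minimal numerical sketch of this decision rule in Python, assuming i.i.d. N(\theta, 1) data; the sample, the two hypothesized means, and the critical value c are all hypothetical choices for illustration:

```python
# A minimal sketch of a simple-vs-simple likelihood ratio test,
# assuming i.i.d. N(theta, 1) data with H0: theta = 0 vs H1: theta = 1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=0.2, scale=1.0, size=50)   # hypothetical sample

theta0, theta1 = 0.0, 1.0
# Joint log-likelihood of the sample under each fully specified model.
loglik0 = norm.logpdf(x, loc=theta0, scale=1.0).sum()
loglik1 = norm.logpdf(x, loc=theta1, scale=1.0).sum()

Lambda = np.exp(loglik0 - loglik1)   # likelihood ratio L(theta0|x) / L(theta1|x)
c = 1.0                              # hypothetical critical value
print("Lambda =", Lambda, "; reject H0" if Lambda < c else "; do not reject H0")
```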

Definition (likelihood ratio test for composite hypotheses)

A null hypothesis is often stated by saying the parameter \theta is in a specified subset \Theta_0 of the parameter space \Theta.


\begin{align}
H_0 &:& \theta \in \Theta_0\\
H_1 &:& \theta \in \Theta_0^{\complement}
\end{align}

The likelihood function is L(\theta|x) = f(x|\theta) (with f(x|\theta) being the pdf or pmf), which is a function of the parameter \theta with x held fixed at the value that was actually observed, i.e., the data. The likelihood ratio test statistic is [5]

\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.

Here, the \sup notation refers to the supremum function.

A likelihood ratio test is any test with critical region (or rejection region) of the form \{x|\Lambda \le c\} where c is any number satisfying 0\le c\le 1. Many common test statistics such as the Z-test, the F-test, Pearson's chi-squared test and the G-test are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof.
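As an illustrative sketch (not taken from the cited references), both suprema are available in closed form for i.i.d. normal data with known unit variance: over \Theta_0 = \{\mu_0\} the likelihood is simply evaluated at \mu_0, while over the full parameter space the maximum is attained at the sample mean:

```python
# A sketch of the composite likelihood ratio statistic for i.i.d. N(mu, 1)
# data, testing H0: mu = mu0 against an unrestricted alternative.
import numpy as np
from scipy.stats import norm

def likelihood_ratio(x, mu0):
    sup_null = norm.logpdf(x, loc=mu0, scale=1.0).sum()       # sup over Theta_0 = {mu0}
    sup_full = norm.logpdf(x, loc=x.mean(), scale=1.0).sum()  # sup over Theta (MLE = sample mean)
    return np.exp(sup_null - sup_full)                        # Lambda lies in [0, 1]

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=40)  # hypothetical data
print("Lambda =", likelihood_ratio(x, mu0=0.0))
```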

Interpretation

As a function of the data x, the likelihood ratio is itself a statistic. The likelihood ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable (a Type I error is the rejection of a null hypothesis that is true).

The numerator corresponds to the maximum likelihood of the observed outcome under the null hypothesis. The denominator corresponds to the maximum likelihood of the observed outcome with the parameters varying over the whole parameter space. Because the null parameter space is contained in the full parameter space, the numerator can never exceed the denominator, so the likelihood ratio lies between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative. High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as under the alternative, so the null hypothesis cannot be rejected.

Distribution: Wilks's theorem

If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined then it can directly be used to form decision regions (to accept/reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine. A convenient result, attributed to Samuel S. Wilks, says that as the sample size n approaches \infty, the test statistic -2 \log(\Lambda) for a nested model will be asymptotically \chi^2-distributed with degrees of freedom equal to the difference in dimensionality of \Theta and \Theta_0.[6] This means that for a great variety of hypotheses, a practitioner can compute the likelihood ratio \Lambda for the data and compare -2\log(\Lambda) to the \chi^2 value corresponding to a desired statistical significance as an approximate statistical test.
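Continuing the hypothetical normal-mean sketch above, Wilks's approximation can be applied as follows; there \Theta is one-dimensional and \Theta_0 is a single point, so the statistic is referred to a \chi^2 distribution with one degree of freedom (the value of \Lambda is assumed for illustration):

```python
# Sketch: turning a likelihood ratio into an approximate test via Wilks's theorem.
import numpy as np
from scipy.stats import chi2

Lambda = 0.15                    # hypothetical likelihood ratio from a fitted model
D = -2 * np.log(Lambda)          # Wilks's test statistic
df = 1                           # dim(Theta) - dim(Theta_0)
p_value = chi2.sf(D, df)         # upper-tail probability (survival function)
critical = chi2.ppf(0.95, df)    # reject at the 5% level if D exceeds this
print(f"D = {D:.3f}, 5% critical value = {critical:.3f}, p = {p_value:.4f}")
```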

Wilks's theorem assumes that the true but unknown values of the estimated parameters lie in the interior of the parameter space. This assumption is commonly violated in, for example, random-effects or mixed-effects models when one of the variance components is negligible relative to the others. In some such cases, where one variance component is essentially zero relative to the others or the models are not properly nested, Pinheiro and Bates showed that the true distribution of this likelihood ratio chi-square statistic could differ substantially from the naive \chi^2, often dramatically so.[7] The naive assumptions could give significance probabilities (p-values) that are far too large on average in some cases and far too small in others.

In general, to test random effects, they recommend using restricted maximum likelihood (REML). For testing fixed effects, they say, "a likelihood ratio test for REML fits is not feasible, because" changing the fixed-effects specification changes the meaning of the mixed effects, and the restricted model is therefore not nested within the larger model.[8]

They simulated tests setting one and two random effects variances to zero. In those particular examples, the simulated p-values with k restrictions most closely matched a 50-50 mixture of \chi^2(k) and \chi^2(k-1). (With k = 1, \chi^2(0) is 0 with probability 1. This means that a good approximation was 0.5 \chi^2(1).)
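Under the assumption that this 50-50 mixture is the appropriate reference distribution, a p-value could be computed as in the following sketch:

```python
# Sketch of a p-value under the 0.5*chi2(k) + 0.5*chi2(k-1) mixture that
# Pinheiro and Bates found to match their simulations for k variance
# components tested against zero.  chi2(0) is a point mass at 0.
from scipy.stats import chi2

def mixture_p_value(D, k):
    """Upper-tail probability of observed D > 0 under the 50-50 mixture."""
    upper = chi2.sf(D, k)
    lower = chi2.sf(D, k - 1) if k > 1 else 0.0  # chi2(0) contributes nothing for D > 0
    return 0.5 * (upper + lower)

print(mixture_p_value(3.84, k=1))  # half of chi2.sf(3.84, 1), i.e. about 0.025
```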

They also simulated tests of different fixed effects. In one test of a factor with 4 levels (degrees of freedom = 3), they found that a 50-50 mixture of \chi^2(3) and \chi^2(4) was a good match for actual p-values obtained by simulation, and the error in using the naive \chi^2(3) "may not be too alarming".[9] However, in another test of a factor with 15 levels, they found a reasonable match to \chi^2(18), 4 more degrees of freedom than the 14 that one would get from a naive (inappropriate) application of Wilks's theorem, and the simulated p-value was several times the naive \chi^2(14) value. They conclude that for testing fixed effects, it is wise to use simulation. (They provided a "simulate.lme" function in their "nlme" package for S-PLUS and R to support doing that.)

To be clear, these limitations on Wilks's theorem do not negate any power properties of a particular likelihood ratio test; they affect only the use of a \chi^2 distribution to evaluate its statistical significance.

Use

Each of the two competing models, the null model and the alternative model, is separately fitted to the data and the log-likelihood recorded. The test statistic (often denoted by D) is minus twice the logarithm of the likelihood ratio defined above, i.e., twice the difference in the log-likelihoods:


\begin{align}
D & = -2\ln\left( \frac{\text{likelihood for null model}}{\text{likelihood for alternative model}} \right) = 2\ln\left( \frac{\text{likelihood for alternative model}}{\text{likelihood for null model}} \right) \\
&= 2 \times [ \ln(\text{likelihood for alternative model}) - \ln(\text{likelihood for null model}) ]
\end{align}

The model with more parameters (here, the alternative) will always fit at least as well, i.e., have a log-likelihood greater than or equal to that of the model with fewer parameters (here, the null). Whether it fits significantly better and should thus be preferred is determined by deriving the probability or p-value of the difference D. Where the null hypothesis represents a special case of the alternative hypothesis, the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom equal to df_{alt} - df_{null}.[10] The symbols df_{alt} and df_{null} represent the number of free parameters of the alternative and null models, respectively.

Here is an example of use. If the null model has 1 parameter and a log-likelihood of −8024 and the alternative model has 3 parameters and a log-likelihood of −8012, then the probability of this difference is that of a chi-squared value of 2 \times (-8012 - (-8024)) = 24 with 3 - 1 = 2 degrees of freedom, and is equal to 6 \times 10^{-6}. Certain assumptions[6] must be met for the statistic to follow a chi-squared distribution, and often empirical p-values are computed.
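The arithmetic of this example can be verified directly, for instance with scipy's chi-squared survival function:

```python
# Reproducing the worked example: D = 2 * (-8012 - (-8024)) = 24
# on 3 - 1 = 2 degrees of freedom.
from scipy.stats import chi2

loglik_null, loglik_alt = -8024.0, -8012.0
D = 2 * (loglik_alt - loglik_null)   # 24
df = 3 - 1                           # difference in free parameters
print(D, chi2.sf(D, df))             # p is approximately 6e-6
```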

The likelihood-ratio test requires nested models, i.e. models in which the more complex one can be transformed into the simpler model by imposing a set of constraints on the parameters. If the models are not nested, then a generalization of the likelihood-ratio test can usually be used instead: the relative likelihood.

Examples

Coin tossing

As an example, in the case of Pearson's test, we might try to compare two coins to determine whether they have the same probability of coming up heads. Our observations can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table are the number of times each coin came up heads or tails. The contents of this table are our observation X.

          Heads     Tails
Coin 1    k_{1H}    k_{1T}
Coin 2    k_{2H}    k_{2T}

Here \Theta consists of the possible combinations of values of the parameters p_{1H}, p_{1T}, p_{2H}, and p_{2T}, the probabilities that coins 1 and 2 come up heads or tails. In what follows, i = 1,2 and j = H,T. The hypothesis space H is constrained by the usual constraints on a probability distribution, 0 \le p_{ij} \le 1 and p_{iH} + p_{iT} = 1. The space of the null hypothesis H_0 is the subspace where p_{1j} = p_{2j}. Writing n_{ij} for the best values of p_{ij} under the hypothesis H, the maximum likelihood estimates are given by

n_{ij} = \frac{k_{ij}}{k_{iH}+k_{iT}}.

Similarly, the maximum likelihood estimates of p_{ij} under the null hypothesis H_0 are given by

m_{ij} = \frac{k_{1j}+k_{2j}}{k_{1H}+k_{2H}+k_{1T}+k_{2T}},

which does not depend on the coin i.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired nice distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H_0, the asymptotic distribution for the test will be \chi^2(1), the \chi^2 distribution with one degree of freedom.

For the general contingency table, we can write the log-likelihood ratio statistic as

-2 \log \Lambda = 2\sum_{i, j} k_{ij} \log \frac{n_{ij}}{m_{ij}}.
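As a numerical sketch, the estimates n_{ij} and m_{ij} and the statistic above can be computed from hypothetical counts and referred to \chi^2(1):

```python
# The coin example with hypothetical counts: per-coin MLEs n_ij, the pooled
# MLE m_ij under H0, and the log-likelihood ratio statistic -2 log(Lambda).
import numpy as np
from scipy.stats import chi2

k = np.array([[43, 57],    # coin 1: heads, tails (hypothetical counts)
              [62, 38]])   # coin 2: heads, tails

n = k / k.sum(axis=1, keepdims=True)          # n_ij = k_ij / (k_iH + k_iT)
m = np.tile(k.sum(axis=0) / k.sum(), (2, 1))  # m_ij, identical for both coins

D = 2 * np.sum(k * np.log(n / m))             # -2 log(Lambda)
print(f"D = {D:.3f}, p = {chi2.sf(D, 1):.4f}")
```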

Notes

  1. Neyman & Pearson 1933.
  2. Mood & Graybill 1963, p. 286.
  3. Stuart, Ord & Arnold 1999, Chapter 22.
  4. Cox & Hinkley 1974, p. 92.
  5. Casella & Berger 2001, p. 375.
  6. Wilks 1938.
  7. Pinheiro & Bates 2000.
  8. Pinheiro & Bates 2000, p. 87.
  9. Pinheiro & Bates 2000, p. 88.
  10. Huelsenbeck & Crandall 1997.
