Score test
From Wikipedia, the free encyclopedia
A score test (also known as Rao's score test or the Lagrange multiplier test) is a statistical test of a simple null hypothesis that a parameter of interest θ equals a particular value θ0. It is the locally most powerful test, i.e. the most powerful test when the true value of θ is close to θ0.
Single parameter test
The statistic
Let L be the likelihood function, which depends on a univariate parameter θ, and let x be the data. The score is U(θ), where

    U(\theta) = \frac{\partial \log L(\theta \mid x)}{\partial \theta}.
The Fisher information is

    I(\theta) = -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}} \log L(\theta \mid x)\right].
The statistic to test H0: θ = θ0 is

    S(\theta_0) = \frac{U(\theta_0)^{2}}{I(\theta_0)},

which has an asymptotic χ²(1) (chi-squared with one degree of freedom) distribution when H0 is true.
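As a concrete illustration of the statistic above, the following sketch computes the single-parameter score test for i.i.d. Poisson counts, using the standard facts that for this model U(λ) = Σxᵢ/λ − n and I(λ) = n/λ. The function name, data, and null value are hypothetical, chosen only for illustration.

```python
def score_statistic_poisson(data, lam0):
    """Score test of H0: lambda = lam0 for i.i.d. Poisson counts.

    Score at lam0:              U(lam0) = sum(x)/lam0 - n
    Fisher information (sample): I(lam0) = n/lam0
    Statistic: S = U(lam0)^2 / I(lam0), asymptotically chi-squared(1) under H0.
    """
    n = len(data)
    score = sum(data) / lam0 - n
    info = n / lam0
    return score ** 2 / info

# Hypothetical sample of 10 observed counts; test H0: lambda = 2.0
data = [3, 1, 4, 1, 5, 2, 2, 3, 0, 4]
S = score_statistic_poisson(data, lam0=2.0)
# Compare S to the chi-squared(1) critical value 3.84 for a 5% test
print(round(S, 4))  # -> 1.25, well below 3.84, so H0 is not rejected
```

Note that only θ0 (here, λ0 = 2.0) enters the computation; no maximum likelihood fitting is required.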
Justification
Under H0, the score U(θ0) is a sum of terms, one per independent observation, each with mean zero, and its variance equals the Fisher information I(θ0). By the central limit theorem, U(θ0)/√I(θ0) is therefore asymptotically standard normal, and squaring it gives the asymptotic χ²(1) distribution of the statistic.
The case of a likelihood with nuisance parameters
When the parameter vector contains nuisance parameters in addition to the parameter of interest, the statistic is formed from the score for the parameter of interest evaluated at the restricted maximum likelihood estimate (the estimate obtained under H0, with the nuisance parameters fitted freely), and the weighting term is the corresponding block of the inverse Fisher information matrix.
As most powerful test for small deviations
The (one-sided) score test rejects H0 when

    \frac{\partial \log L(\theta \mid x)}{\partial\theta}\bigg|_{\theta=\theta_0} \ge C,

where L is the likelihood function, θ0 is the value of the parameter of interest under the null hypothesis, and C is a constant chosen according to the desired size of the test (i.e. the probability of rejecting H0 when H0 is true; see Type I error).
The score test is the most powerful test for small deviations from H0. To see this, consider testing θ = θ0 versus θ = θ0 + h. By the Neyman–Pearson lemma, the most powerful test rejects when

    \frac{L(\theta_0 + h \mid x)}{L(\theta_0 \mid x)} \ge K

for some constant K. Taking the log of both sides yields

    \log L(\theta_0 + h \mid x) - \log L(\theta_0 \mid x) \ge \log K.

The score test follows on making the first-order Taylor substitution (valid for small h)

    \log L(\theta_0 + h \mid x) \approx \log L(\theta_0 \mid x) + h \, \frac{\partial \log L(\theta \mid x)}{\partial\theta}\bigg|_{\theta=\theta_0}

and identifying the C above with log(K)/h (for h > 0).
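The Taylor substitution in this argument can be checked numerically. A minimal sketch, assuming an i.i.d. Poisson model with hypothetical data (constant terms in the log-likelihood are dropped, since they cancel in the difference):

```python
import math

# Check that log L(lam0 + h) - log L(lam0) is approximately h * U(lam0)
# for small h, in a Poisson model.
def loglik(data, lam):
    # Poisson log-likelihood up to an additive constant (log x! terms dropped)
    return sum(x * math.log(lam) - lam for x in data)

data = [3, 1, 4, 1, 5, 2, 2, 3, 0, 4]   # hypothetical counts
lam0, h = 2.0, 1e-4

lhs = loglik(data, lam0 + h) - loglik(data, lam0)
rhs = h * (sum(data) / lam0 - len(data))  # h * U(lam0)
print(abs(lhs - rhs) < 1e-6)  # the two sides agree to first order in h
```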
Relationship with Wald test
The Wald statistic (θ̂ − θ0)² I(θ̂) is evaluated at the unrestricted maximum likelihood estimate θ̂, whereas the score statistic is evaluated entirely at θ0, so the score test does not require fitting the unrestricted model. Under H0 the two statistics are asymptotically equivalent, though they can differ in finite samples.
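The contrast between the two tests can be seen side by side. A minimal sketch, assuming an i.i.d. Poisson model with hypothetical data; for this model I(λ) = n/λ:

```python
# Score vs. Wald statistics for H0: lambda = 2.0 under a Poisson model.
data = [3, 1, 4, 1, 5, 2, 2, 3, 0, 4]   # hypothetical counts
n, lam0 = len(data), 2.0
lam_hat = sum(data) / n                  # unrestricted MLE (sample mean)

# Score test: everything is evaluated at lam0 -- no fitting needed.
score = sum(data) / lam0 - n             # U(lam0)
S = score ** 2 / (n / lam0)              # U(lam0)^2 / I(lam0)

# Wald test: information is evaluated at the unrestricted MLE lam_hat.
W = (lam_hat - lam0) ** 2 * (n / lam_hat)

print(round(S, 4), round(W, 4))  # -> 1.25 1.0
```

Both statistics are asymptotically χ²(1) under H0, but they give different finite-sample values on the same data.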
Multiple parameters
A more general score test can be derived when there is more than one parameter. Suppose that \hat\theta_0 is the maximum likelihood estimate of θ under the null hypothesis H0, and write U for the score vector and I for the Fisher information matrix. Then

    U^{\mathsf T}(\hat\theta_0) \, I^{-1}(\hat\theta_0) \, U(\hat\theta_0) \sim \chi^{2}_{k}

asymptotically under H0, where k is the number of constraints imposed by the null hypothesis and

    U(\hat\theta_0) = \frac{\partial \log L(\hat\theta_0 \mid x)}{\partial \theta}

and

    I(\hat\theta_0) = -\operatorname{E}\!\left[\frac{\partial^{2} \log L(\hat\theta_0 \mid x)}{\partial\theta \, \partial\theta'}\right].

This can be used to test H0.
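As an illustration of the multi-parameter form, the following sketch tests a fully specified normal null H0: μ = μ0 and σ² = σ0² (so k = 2 and the restricted estimate is just (μ0, σ0²)). It uses the standard normal-model scores and the fact that the Fisher information for (μ, σ²) is diagonal, which makes Uᵀ I⁻¹ U a sum of two terms; the data are hypothetical.

```python
def score_statistic_normal(data, mu0, var0):
    """Score test of H0: mu = mu0 and sigma^2 = var0 for i.i.d. normal data.

    The null fixes both parameters, so k = 2 constraints and the statistic
    U^T I^{-1} U is asymptotically chi-squared(2) under H0.
    """
    n = len(data)
    # Score vector at (mu0, var0)
    u_mu = sum(x - mu0 for x in data) / var0
    u_var = -n / (2 * var0) + sum((x - mu0) ** 2 for x in data) / (2 * var0 ** 2)
    # Fisher information for (mu, sigma^2) is diagonal: diag(n/var, n/(2 var^2))
    i_mu = n / var0
    i_var = n / (2 * var0 ** 2)
    # U^T I^{-1} U reduces to a sum of two quadratic terms
    return u_mu ** 2 / i_mu + u_var ** 2 / i_var

# Hypothetical sample; test H0: mu = 0, sigma^2 = 1
data = [0.5, -1.2, 0.3, 1.1, -0.4, 0.9, -0.7, 0.2]
S = score_statistic_normal(data, mu0=0.0, var0=1.0)
# Compare S to the chi-squared(2) critical value 5.99 for a 5% test
```

As in the single-parameter case, only the null values enter the computation; no unrestricted fitting is needed.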