Normality test
From Wikipedia, the free encyclopedia
In statistics, normality tests are used to determine whether a data set is well modeled by a normal distribution, i.e. whether the underlying random variable is normally distributed.
One application of normality tests is to the residuals from a linear regression model. If the residuals are not normally distributed, then Z tests and other tests derived from the normal distribution, such as t tests, F tests, and chi-square tests, are not strictly valid when applied to them. Non-normal residuals may indicate that the regression is misspecified: the dependent variable or at least one explanatory variable may have the wrong functional form, or important variables may be missing. Correcting one or more of these systematic errors may produce residuals that are normally distributed.
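As an illustration of the residual check described above (not part of the original article), the following sketch fits a line to synthetic data with NumPy and applies the Shapiro–Wilk test from SciPy; the data, seed, and significance level are all illustrative choices:

```python
import numpy as np
from scipy import stats

# Synthetic data: a linear relationship with normal noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Ordinary least-squares fit, then compute the residuals.
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Shapiro-Wilk test of the null hypothesis that the residuals
# are drawn from a normal distribution.
statistic, p_value = stats.shapiro(residuals)
print(statistic, p_value)
```

A small p-value (say, below 0.05) would suggest rejecting the hypothesis that the residuals are normally distributed; here the noise is normal by construction, so rejection would be a false positive.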
Normality tests include the Jarque–Bera test, the Anderson–Darling test, the Cramér–von Mises criterion, the Lilliefors test for normality (itself an adaptation of the Kolmogorov–Smirnov test), Pearson's chi-square test, and the Shapiro–Francia test for normality.[1]
Instead of using formal normality tests, another option is to compare a histogram of the residuals to a normal probability curve. The actual distribution of the residuals (the histogram) should be bell-shaped and resemble the normal distribution. This might be difficult to see if the sample is small. In this case one might instead plot the ordered residuals against the corresponding quantiles of a normal distribution with the same mean and variance as the sample. If the points fall on an approximately straight line, then the residuals can reasonably be assumed to be normally distributed. A closely related graphical tool is the quantile-quantile (Q-Q) plot.
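The quantile-comparison idea above can be sketched with SciPy's `probplot`, which computes the ordered sample values against theoretical normal quantiles and fits a least-squares line through them (the sample here is synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(size=200)

# probplot returns the theoretical quantiles (osm), the ordered
# sample values (osr), and a least-squares line fit through them.
(osm, osr), (slope, intercept, r) = stats.probplot(sample, dist='norm')

# A correlation r close to 1 means the points lie near a straight
# line, consistent with the sample being normally distributed.
print(r)
```

Passing `plot=matplotlib.pyplot` to `probplot` would draw the plot directly; the numeric output alone is shown here.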
Notes
- ^ Judge et al. (1988) and Gujarati (2003) recommend the Jarque–Bera test.
References
- Judge et al., Introduction to the Theory and Practice of Econometrics, Second Edition, 1988, pp. 890–892.
- Gujarati, Damodar N., Basic Econometrics, Fourth Edition, 2003, pp. 147–148.