Q test
From Wikipedia, the free encyclopedia
In statistics, Dixon's Q test, or simply the Q test, is used for identification and rejection of outliers. The test should be applied sparingly and never more than once to a given data set. To apply a Q test to a suspect value, arrange the data in order of increasing values and calculate Q as defined below:
Q = Qgap / Qrange
where Qgap is the absolute difference between the outlier in question and the closest number to it, and Qrange is the range of the entire data set (largest value minus smallest). If Qcalculated > Qtable, the critical value for the given number of observations and confidence level, then reject the questionable point.
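A minimal Python sketch of this calculation, assuming the suspect value is the smallest or largest observation in the data (the usual situation for this test); the names dixon_q and test_low are only labels for illustration, not part of the test's definition.

```python
def dixon_q(values, test_low=True):
    """Q statistic for the smallest (test_low=True) or largest value in the data."""
    data = sorted(values)            # arrange the data in increasing order
    q_range = data[-1] - data[0]     # range of the whole data set
    if test_low:
        gap = data[1] - data[0]      # gap between the suspect value and its nearest neighbour
    else:
        gap = data[-1] - data[-2]
    return gap / q_range
```

The result is then compared against the tabulated critical value for the number of observations and the chosen confidence level, as given in the table below.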
Table
Number of values: | 3     | 4     | 5     | 6     | 7     | 8     | 9     | 10    |
Q90%:             | 0.941 | 0.765 | 0.642 | 0.560 | 0.507 | 0.468 | 0.437 | 0.412 |
Q95%:             | 0.970 | 0.829 | 0.710 | 0.625 | 0.568 | 0.526 | 0.493 | 0.466 |
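For use in code, the critical values can be kept in a small lookup keyed by the number of observations. The dictionaries below simply restate the table above; reject_outlier is an illustrative helper, not a standard function.

```python
# Critical Q values from the table above, keyed by number of observations
Q90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560,
       7: 0.507, 8: 0.468, 9: 0.437, 10: 0.412}
Q95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625,
       7: 0.568, 8: 0.526, 9: 0.493, 10: 0.466}

def reject_outlier(q_calculated, n, table=Q90):
    """Reject the suspect point only if Qcalculated exceeds the tabulated critical value."""
    return q_calculated > table[n]
```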
Example
For the data:
- 0.189, 0.169, 0.187, 0.183, 0.186, 0.182, 0.181, 0.184, 0.181, 0.177
Arranged in increasing order:
- 0.169, 0.177, 0.181, 0.181, 0.182, 0.183, 0.184, 0.186, 0.187, 0.189
Outlier is 0.169. Calculate Q:
Q = Qgap / Qrange = (0.177 - 0.169) / (0.189 - 0.169) = 0.008 / 0.020 = 0.400
With 10 observations at 90% confidence, Qcalculated (0.400) < Qtable (0.412). Therefore keep 0.169 at the 90% confidence level.
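Running the calculation on the example data reproduces this decision. This is only a verification sketch: the critical value 0.412 is taken directly from the 90% row of the table for ten observations.

```python
data = [0.189, 0.169, 0.187, 0.183, 0.186, 0.182,
        0.181, 0.184, 0.181, 0.177]
data.sort()                          # 0.169 is now the first element

gap = data[1] - data[0]              # 0.177 - 0.169 = 0.008
q_range = data[-1] - data[0]         # 0.189 - 0.169 = 0.020
q_calculated = gap / q_range         # 0.400

q_table_90 = 0.412                   # critical value for n = 10 at 90% confidence
print(round(q_calculated, 3), q_calculated > q_table_90)   # 0.4 False -> keep 0.169
```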