Specificity (tests)
Specificity is a statistical measure of how well a binary classification test correctly identifies cases that do not belong to the class of interest. For a medical test used to determine whether a person has a certain disease, the specificity is the probability that the test result will be negative given that the person does not have the disease.
Condition (e.g. disease), as determined by the "gold" standard:

|                       | Condition true | Condition false |                             |
| Test outcome positive | True positive  | False positive  | → Positive predictive value |
| Test outcome negative | False negative | True negative   | → Negative predictive value |
|                       | ↓ Sensitivity  | ↓ Specificity   |                             |
That is, the specificity is the proportion of true negatives out of all negative cases in the population. It is a parameter of the test itself.
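In terms of the counts in the table above, with TN the number of true negatives and FP the number of false positives (so that TN + FP is the total number of people without the disease), this is

\[ \text{specificity} = \frac{TN}{TN + FP} \]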
A specificity of 100% means that the test identifies every person who does not have the disease as negative.
Specificity alone does not tell us how well the test recognizes positive cases. To assess that, we also need to know the sensitivity of the test to the class, or equivalently, the specificities to the other classes.
A test with a high specificity has a low Type I error rate.
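Equivalently, the false positive rate, which is the type I error rate of the test, is the complement of the specificity:

\[ \text{false positive rate} = \frac{FP}{FP + TN} = 1 - \text{specificity} \]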
Specificity is sometimes confused with the precision or the positive predictive value, both of which refer to the fraction of returned positives that are true positives. The distinction is critical when the classes are different sizes. A test with very high specificity can have very low precision if there are far more true negatives than true positives, and vice versa (see worked example in Sensitivity).
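As a rough illustration of this distinction, the following sketch (with invented counts, not taken from any real study) computes specificity, sensitivity, and precision for a test applied to a heavily imbalanced population:

```python
# Hypothetical screening test on an imbalanced population (illustrative numbers only):
# 10,000 people without the disease, 100 people with it.
tp = 90      # true positives:  diseased people the test flags
fn = 10      # false negatives: diseased people the test misses
tn = 9_500   # true negatives:  healthy people the test clears
fp = 500     # false positives: healthy people the test flags

specificity = tn / (tn + fp)   # 9500 / 10000 = 0.95
sensitivity = tp / (tp + fn)   # 90 / 100     = 0.90
precision   = tp / (tp + fp)   # 90 / 590     ~ 0.15 (positive predictive value)

print(f"specificity = {specificity:.2f}")  # 0.95: the test rarely flags healthy people
print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"precision   = {precision:.2f}")    # 0.15: yet most positive results are wrong
```

Even with 95% specificity, the 500 false positives outnumber the 90 true positives because healthy people dominate the population, so the positive predictive value stays low.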
See also
- Binary classification
- Receiver operating characteristic
- Sensitivity (tests)
- Statistical significance
- Type I and type II errors
- Selectivity
External links
- Sensitivity and Specificity, Medical University of South Carolina
- Tutorial in British Medical Journal