Cohen's kappa
Cohen's kappa coefficient is a statistical measure of inter-rater reliability. It is generally thought to be a more robust measure than simple percent agreement calculation, since kappa takes into account the possibility of agreement occurring by chance. Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories.
The equation for kappa is:

\kappa = \frac{\Pr(a) - \Pr(e)}{1 - \Pr(e)},

where Pr(a) is the relative observed agreement among raters, and Pr(e) is the hypothetical probability of agreement due to chance. If the raters are in complete agreement, then kappa = 1. If there is no agreement among the raters other than what would be expected by chance, then kappa ≤ 0.
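As an illustration of the formula above, the following Python sketch computes kappa for two raters who label the same items; the cohens_kappa function and the example data are illustrative and not part of the original article.

    import numpy as np

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters' labels on the same items (illustrative sketch)."""
        rater_a = np.asarray(rater_a)
        rater_b = np.asarray(rater_b)
        categories = np.union1d(rater_a, rater_b)

        # Relative observed agreement Pr(a): fraction of items labelled identically.
        pr_a = np.mean(rater_a == rater_b)

        # Chance agreement Pr(e): sum over categories of the product of each
        # rater's marginal probability of using that category.
        pr_e = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)

        return (pr_a - pr_e) / (1 - pr_e)

    # Example: two raters classify 10 items as "yes"/"no".
    a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
    b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
    print(round(cohens_kappa(a, b), 3))

For this example the raters agree on 8 of 10 items, so Pr(a) = 0.8; each rater says "yes" 60% of the time, so Pr(e) = 0.6 × 0.6 + 0.4 × 0.4 = 0.52, giving kappa = (0.8 − 0.52) / (1 − 0.52) ≈ 0.583.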
The seminal paper introducing kappa as a new technique was published by Jacob Cohen in the journal Educational and Psychological Measurement in 1960.
Note that Cohen's kappa measures agreement between two raters only. For a similar measure of agreement (Fleiss' kappa) used when there are more than two raters, see Fleiss (1981).
References
- Cohen, Jacob (1960). "A coefficient of agreement for nominal scales". Educational and Psychological Measurement 20: 37–46.
- Fleiss, Joseph L. (1981). Statistical Methods for Rates and Proportions, 2nd ed. New York: John Wiley & Sons. pp. 212–236 (Chapter 13: "The measurement of interrater agreement").
External links
- Computing Cohen's Kappa Value - a web application for calculating Cohen's kappa
- Cohen's Kappa Example
- Vassar - a kappa worksheet with explanation, provided by Dr Lowry of Vassar College