Confusion matrix
In the field of artificial intelligence, a confusion matrix is a visualization tool typically used in supervised learning (in unsupervised learning it is usually called a matching matrix). Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. One benefit of a confusion matrix is that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabelling one as another).
When a data set is unbalanced (when the number of samples in different classes varies greatly), the error rate of a classifier is not representative of its true performance. For example, if there are 990 samples from class 1 and only 10 samples from class 2, the classifier can easily be biased towards class 1. If the classifier labels all samples as class 1, its accuracy will be 99%, yet it has a 100% recognition rate for class 1 and a 0% recognition rate for class 2. Accuracy alone is therefore not a good indication of the classifier's true performance.
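A minimal Python sketch of this arithmetic, assuming a hypothetical 990/10 data set and a degenerate classifier that labels every sample as class 1:

```python
# Hypothetical imbalanced data set: 990 samples of class 1, 10 samples of class 2,
# and a degenerate classifier that labels every sample as class 1.
actual = [1] * 990 + [2] * 10
predicted = [1] * 1000

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
print(f"accuracy: {accuracy:.0%}")  # 99%

# Recognition rate per class: fraction of that class's samples labelled correctly.
for cls in (1, 2):
    class_pairs = [(a, p) for a, p in zip(actual, predicted) if a == cls]
    rate = sum(a == p for a, p in class_pairs) / len(class_pairs)
    print(f"recognition rate for class {cls}: {rate:.0%}")  # 100%, then 0%
```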
In the example confusion matrix below, of the eight actual cats, the system predicted that three were dogs, and of the six actual dogs, it predicted that one was a rabbit and two were cats. We can see from the matrix that the system in question has trouble distinguishing between cats and dogs, but can distinguish rabbits from other types of animals fairly well.
| Actual class \ Predicted class | Cat | Dog | Rabbit |
|---|---|---|---|
| Cat | 5 | 3 | 0 |
| Dog | 2 | 3 | 1 |
| Rabbit | 0 | 2 | 11 |
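As an illustrative sketch (not part of the original example), the same matrix can be tallied from per-sample labels; the (actual, predicted) pairs below are synthetic and constructed only to reproduce the counts above, with actual classes as rows and predicted classes as columns.

```python
from collections import defaultdict

# Synthetic (actual, predicted) label pairs, constructed only to reproduce
# the cat/dog/rabbit counts above.
labels = ["Cat", "Dog", "Rabbit"]
pairs = (
    [("Cat", "Cat")] * 5 + [("Cat", "Dog")] * 3 +
    [("Dog", "Cat")] * 2 + [("Dog", "Dog")] * 3 + [("Dog", "Rabbit")] * 1 +
    [("Rabbit", "Dog")] * 2 + [("Rabbit", "Rabbit")] * 11
)

# Tally each (actual, predicted) combination.
counts = defaultdict(int)
for actual, predicted in pairs:
    counts[(actual, predicted)] += 1

# Print the matrix: actual classes as rows, predicted classes as columns.
print("actual \\ predicted", *labels)
for actual in labels:
    print(f"{actual:>18}", *(counts[(actual, predicted)] for predicted in labels))
```

Libraries such as scikit-learn provide an equivalent helper, sklearn.metrics.confusion_matrix, which uses the same convention of actual classes as rows and predicted classes as columns.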
Table of confusion
In predictive analytics, a table of confusion, also known as a confusion matrix, is a table with two rows and two columns that reports the number of true negatives, false positives, false negatives, and true positives.
| | actual value: p | actual value: n | total |
|---|---|---|---|
| prediction outcome: p' | True Positive | False Positive | P' |
| prediction outcome: n' | False Negative | True Negative | N' |
| total | P | N | |

Table 1: Table of confusion.
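As an illustrative sketch (not part of the original definition), the four counts can be tallied from parallel sequences of actual and predicted labels; Boolean values are assumed here, with True standing for the positive class p.

```python
def table_of_confusion(actual, predicted):
    """Count true positives, false positives, false negatives and true negatives
    from parallel sequences of Boolean labels (True = positive class p)."""
    tp = sum(a and p for a, p in zip(actual, predicted))          # true positives
    fp = sum(p and not a for a, p in zip(actual, predicted))      # false positives
    fn = sum(a and not p for a, p in zip(actual, predicted))      # false negatives
    tn = sum(not a and not p for a, p in zip(actual, predicted))  # true negatives
    return tp, fp, fn, tn

# Tiny usage example: 2 true positives, 1 false positive, 1 false negative, 1 true negative.
print(table_of_confusion(actual=[True, True, False, True, False],
                         predicted=[True, True, True, False, False]))  # (2, 1, 1, 1)
```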
For example, consider a model that predicts, for 10,000 insurance claims, whether each claim is fraudulent. This model correctly predicts 9,700 non-fraudulent cases and 100 fraudulent cases, but it also incorrectly predicts 150 non-fraudulent cases as fraudulent and 50 fraudulent cases as non-fraudulent. The resulting table of confusion is shown below.
| | actual value: p | actual value: n | total |
|---|---|---|---|
| prediction outcome: p' | 100 | 150 | P' |
| prediction outcome: n' | 50 | 9700 | N' |
| total | P | N | |

Table 2: Example table of confusion.
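The symbolic marginal totals in Table 2 follow directly from the four counts; the sketch below simply restates the arithmetic of the example, with the positive class p taken to be "fraudulent".

```python
# Counts from the insurance-claim example above; the positive class p is "fraudulent".
tp, fp = 100, 150    # predicted fraudulent:     actually fraudulent / actually not
fn, tn = 50, 9700    # predicted non-fraudulent: actually fraudulent / actually not

P = tp + fn          # total actually fraudulent           -> 150
N = fp + tn          # total actually non-fraudulent       -> 9850
P_prime = tp + fp    # total predicted fraudulent (P')     -> 250
N_prime = fn + tn    # total predicted non-fraudulent (N') -> 9750

assert P + N == P_prime + N_prime == 10000   # all 10,000 claims accounted for
```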
External links
http://www2.cs.uregina.ca/~dbd/cs831/notes/confusion_matrix/confusion_matrix.html