C4.5 algorithm
C4.5 is an algorithm developed by Ross Quinlan that is used to generate a decision tree. C4.5 is an extension of Quinlan's earlier ID3 algorithm. The decision trees generated by C4.5 can be used for classification, and for this reason, C4.5 is often referred to as a statistical classifier.
The Algorithm
C4.5 builds decision trees from a set of training data in the same way as ID3, using the concept of information entropy. The training data is a set S = s1, s2, ... of already-classified samples. Each sample si is a vector (x1, x2, ...) whose components represent attributes or features of the sample. The training data is augmented with a vector C = c1, c2, ... where ci represents the class to which sample si belongs.
C4.5 uses the fact that each attribute of the data can be used to make a decision that splits the data into smaller subsets. C4.5 examines the normalized information gain (reduction in entropy) that results from choosing an attribute to split the data. The attribute with the highest normalized information gain is the one used to make the decision. The algorithm then recurses on the smaller sublists.
This algorithm has a few base cases. The most common base case is that all the samples in the list belong to the same class; once this happens, the algorithm simply creates a leaf node for the decision tree that selects that class. It may also happen that none of the features gives any information gain; in this case, C4.5 creates a decision node higher up the tree using the expected value of the class. Finally, an instance of a previously unseen class may be encountered; again, C4.5 creates a decision node higher up the tree using the expected value.
In pseudocode the algorithm looks like this:
1. Check for base cases.
2. For each attribute a, find the normalized information gain from splitting on a.
3. Let a_best be the attribute with the highest normalized information gain.
4. Create a decision node node that splits on a_best.
5. Recur on the sublists obtained by splitting on a_best, and add those nodes as children of node.
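To make the recursion concrete, here is a minimal Python sketch of the tree-building loop for discrete attributes only. The dict-per-node representation and the helper gain_ratio (sketched in the next section) are choices made for this illustration, not part of Quinlan's implementation.

from collections import Counter

def build_tree(samples, labels, attributes):
    """Recursively build a decision node, mirroring the pseudocode above.

    samples    -- list of dicts mapping attribute name -> discrete value
    labels     -- list of class labels, parallel to samples
    attributes -- set of attribute names still available for splitting
    """
    # Base case: all samples belong to the same class -> leaf for that class.
    if len(set(labels)) == 1:
        return labels[0]
    # Base case: no attributes left -> leaf with the most common class
    # (the "expected value" mentioned above).
    if not attributes:
        return Counter(labels).most_common(1)[0][0]

    # Let a_best be the attribute with the highest normalized information gain.
    a_best = max(attributes, key=lambda a: gain_ratio(samples, labels, a))
    if gain_ratio(samples, labels, a_best) == 0:
        # No attribute gives any information gain -> fall back to a leaf.
        return Counter(labels).most_common(1)[0][0]

    # Create a decision node that splits on a_best, then recur on the
    # sublists obtained by splitting on a_best.
    node = {"attribute": a_best, "children": {}}
    for value in set(s[a_best] for s in samples):
        sub_samples = [s for s in samples if s[a_best] == value]
        sub_labels = [c for s, c in zip(samples, labels) if s[a_best] == value]
        node["children"][value] = build_tree(
            sub_samples, sub_labels, attributes - {a_best}
        )
    return node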
Information Gain and Information Entropy
Although explained in more detail in their own articles, Entropy(S) can be thought of as a measure of how random the class distribution in S is, and information gain is a measure assigned to an attribute a. If attribute a separates S into subsets Sa1, Sa2, ..., San, the information gain of a is Entropy(S) minus the weighted sum of the subset entropies, where the entropy of each subset Sai is weighted by the proportion of samples of S that fall into Sai. C4.5 then normalizes this gain by dividing it by the split information of the partition, the entropy of those same proportions; the result is known as the gain ratio.
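As an illustration of these definitions, here is a small Python sketch of entropy, information gain, and the gain-ratio normalization. The function names and the dict-per-sample data layout are conventions of this example, not of Quinlan's code.

import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum(
        (n / total) * math.log2(n / total)
        for n in Counter(labels).values()
    )

def information_gain(samples, labels, attribute):
    """Entropy(S) minus the weighted entropies of the subsets Sa1..San."""
    total = len(labels)
    gain = entropy(labels)
    for value in set(s[attribute] for s in samples):
        subset = [c for s, c in zip(samples, labels) if s[attribute] == value]
        gain -= (len(subset) / total) * entropy(subset)
    return gain

def gain_ratio(samples, labels, attribute):
    """Information gain divided by the split information of the partition."""
    total = len(labels)
    counts = Counter(s[attribute] for s in samples)
    split_info = -sum(
        (n / total) * math.log2(n / total) for n in counts.values()
    )
    if split_info == 0:  # the attribute takes a single value; no real split
        return 0.0
    return information_gain(samples, labels, attribute) / split_info

For example, entropy(["yes", "yes", "no"]) returns about 0.918 bits, while entropy(["yes", "yes"]) returns 0.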
C4.5 and ID3
C4.5 made a number of improvements to ID3. Some of these are:
- Handling both continuous and discrete attributes - In order to handle continuous attributes, C4.5 creates a threshold and then splits the list into those samples whose attribute value is above the threshold and those whose value is less than or equal to it [Quinlan, 96] (see the sketch after this list).
- Handling training data with missing attribute values - C4.5 allows attribute values to be marked as ? for missing. Missing attribute values are simply not used in gain and entropy calculations (also covered by the sketch after this list).
- Handling attributes with differing costs.
- Pruning trees after creation - C4.5 goes back through the tree once it has been created and attempts to remove branches that do not help, replacing them with leaf nodes.
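The first two items above can be sketched in the same style as the earlier examples. For a continuous attribute, candidate thresholds are scored by the entropy gain of the resulting binary split, and samples whose value is missing (None here, standing in for C4.5's "?") are simply left out of the calculation. Searching the midpoints between consecutive distinct values is one common choice, not necessarily Quinlan's exact procedure; entropy is the helper defined in the previous section.

def best_threshold(samples, labels, attribute):
    """Find the binary threshold on a continuous attribute that maximizes
    information gain, skipping samples whose value is missing (None)."""
    known = [(s[attribute], c) for s, c in zip(samples, labels)
             if s[attribute] is not None]
    all_labels = [c for _, c in known]
    values = sorted(set(v for v, _ in known))

    best_gain, best_t = float("-inf"), None
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2  # midpoint between consecutive distinct values
        below = [c for v, c in known if v <= t]
        above = [c for v, c in known if v > t]
        # Weighted-entropy information gain of the two-way split.
        gain = entropy(all_labels) - (
            len(below) / len(known) * entropy(below)
            + len(above) / len(known) * entropy(above)
        )
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_gain, best_t  # best_t is None if no valid split exists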
C4.5 and C5.0/See5
Quinlan went on to create C5.0 and See5 (C5.0 for Unix/Linux, See5 for Windows), which he markets commercially. C5.0 offers a number of improvements over C4.5. Some of these are:
- Speed - C5.0 is significantly faster than C4.5 (several orders of magnitude)
- Memory Usage - C5.0 is more memory efficient than C4.5
- Smaller Decision Trees - C5.0 gets similar results to C4.5 with considerably smaller decision trees.
- Support for boosting - Boosting improves the accuracy of the trees.
- Weighting - C5.0 allows different attributes and misclassification types to be weighted.
- Winnowing - C5.0 automatically winnows the data to help reduce noise.
C5.0/See5 is a commercial and closed-source product, although free source code is available for interpreting and using the decision trees and rule sets it outputs.
References
- Quinlan, J. R. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, 1993.
- Quinlan, J. R. Improved Use of Continuous Attributes in C4.5. Journal of Artificial Intelligence Research, 4:77-90, 1996.
External links
- Original implementation on Ross Quinlan's homepage: http://www.rulequest.com/Personal/