Random subspace method
The random subspace method (or attribute bagging[1]) is an ensemble classifier that consists of several individual classifiers and predicts a class by combining the outputs of those classifiers. The random subspace method is a generalization of the random forest algorithm:[2] whereas random forests are composed of decision trees, a random subspace classifier can be built from any type of underlying classifier. The method has been used with linear classifiers,[3] support vector machines,[4] and other types of classifiers, and it is also applicable to one-class classifiers.[5]
Algorithm
The ensemble classifier is constructed using the following algorithm (a minimal code sketch follows the list):
- Let the number of training objects be N and the number of features in the training data be D.
- Choose d, the number of input features to be used in each individual classifier, with d < D. The value of d may also differ between the individual classifiers.
- Choose L to be the number of individual classifiers in the ensemble.
- For each individual classifier, create a training set by choosing d out of D features without replacement and train the classifier.
- To classify a new object, combine the outputs of the L individual classifiers by majority voting or by combining their posterior probabilities.
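The following Python sketch illustrates these steps. It is a minimal illustration under stated assumptions: the class name RandomSubspaceEnsemble and its parameter names are invented for this example, and scikit-learn decision trees are used as an arbitrary choice of base classifier; any classifier could be substituted.

```python
# Minimal sketch of the random subspace method (illustrative, not a reference
# implementation). Assumes NumPy and scikit-learn are available.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier


class RandomSubspaceEnsemble:
    def __init__(self, base_estimator=None, L=10, d=5, random_state=None):
        self.base_estimator = base_estimator or DecisionTreeClassifier()
        self.L = L                    # number of individual classifiers
        self.d = d                    # number of features per classifier, d < D
        self.rng = np.random.default_rng(random_state)
        self.models_ = []             # (feature indices, fitted classifier) pairs

    def fit(self, X, y):
        D = X.shape[1]
        self.models_ = []
        for _ in range(self.L):
            # choose d of the D features without replacement
            idx = self.rng.choice(D, size=self.d, replace=False)
            clf = clone(self.base_estimator).fit(X[:, idx], y)
            self.models_.append((idx, clf))
        return self

    def predict(self, X):
        # combine the L individual outputs by majority voting
        votes = np.stack([clf.predict(X[:, idx]) for idx, clf in self.models_])
        predictions = []
        for col in votes.T:           # one column of votes per object
            labels, counts = np.unique(col, return_counts=True)
            predictions.append(labels[np.argmax(counts)])
        return np.array(predictions)


# Example usage on a synthetic dataset (illustrative only):
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
model = RandomSubspaceEnsemble(L=25, d=8, random_state=0).fit(X, y)
print(model.predict(X[:5]))
```

Combining posterior probabilities instead of hard votes would amount to averaging each classifier's predicted class probabilities (for example, via predict_proba in scikit-learn) over its own feature subset and choosing the class with the highest average.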
References
- ^ Bryll, R. (2003). "Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets". Pattern Recognition 36 (6): 1291–1302.
- ^ Ho, Tin Kam (1998). "The Random Subspace Method for Constructing Decision Forests". IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (8): 832–844. doi:10.1109/34.709601. http://cm.bell-labs.com/cm/cs/who/tkh/papers/df.pdf.
- ^ Skurichina, Marina (2002). "Bagging, boosting and the random subspace method for linear classifiers". Pattern Analysis and Applications 5 (2): 121–135.
- ^ Tao, D. (2006). "Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval". IEEE Transactions on Pattern Analysis and Machine Intelligence.
- ^ Nanni, L. (2006). "Experimental comparison of one-class classifiers for online signature verification". Neurocomputing 69 (7).