Minimum redundancy feature selection
From Wikipedia, the free encyclopedia
Feature selection is one of the basic problems in pattern recognition and machine learning. It has a variety of applications in many areas, such as cancer diagnosis and speaker recognition.
Features can be selected in many different ways. One scheme is to select the features that correlate most strongly with the classification variable. This has been called maximum-relevance selection. Many heuristic algorithms can be used, such as sequential forward, backward, or floating selection.
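As a minimal sketch of the maximum-relevance scheme (illustrative only, not from the cited papers), the features can be ranked by the absolute Pearson correlation between each feature and the class variable, keeping the top k. This assumes numeric class labels and feature values; the function and variable names are hypothetical.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def max_relevance(features, labels, k):
    """features: dict mapping feature name -> list of values.
    Returns the k feature names most correlated (in absolute value)
    with the class labels -- the 'maximum relevance' criterion."""
    ranked = sorted(features,
                    key=lambda f: abs(pearson(features[f], labels)),
                    reverse=True)
    return ranked[:k]
```

For example, with `features = {"f1": [1, 2, 3, 4], "f2": [1, 1, 2, 2], "f3": [4, 3, 2, 1]}` and `labels = [1, 2, 3, 4]`, `max_relevance(features, labels, 2)` returns `["f1", "f3"]`: both are perfectly (anti-)correlated with the labels, while `f2` correlates less strongly.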
On the other hand, features can be selected so that they are mutually far away from each other while still having high correlation with the classification variable. This scheme, termed minimum-redundancy-maximum-relevance (mRMR) selection, has been found to be more powerful than maximum-relevance selection.
As a special case, the "correlation" can be replaced by the statistical dependency between variables, with mutual information used to quantify the dependency. In this case, mRMR can be shown to be an approximation to maximizing the dependency between the joint distribution of the selected features and the classification variable.
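The mutual-information form of mRMR can be sketched as a greedy forward selection (an illustrative simplification, not the authors' released program): at each step, pick the feature f maximizing I(f; c) − (1/|S|) Σ_{s∈S} I(f; s), i.e. high relevance to the class c minus average redundancy with the already-selected set S. This assumes discrete-valued features; all names below are hypothetical.

```python
from collections import Counter
from math import log2

def mutual_information(x, y):
    """I(X; Y) in bits for two equal-length discrete sequences,
    estimated from empirical joint and marginal frequencies."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mrmr(features, labels, k):
    """Greedy mRMR: features is a dict name -> list of discrete values.
    Returns k feature names, selected by relevance minus redundancy."""
    relevance = {f: mutual_information(v, labels) for f, v in features.items()}
    # Seed with the single most relevant feature.
    selected = [max(relevance, key=relevance.get)]
    while len(selected) < k:
        def score(f):
            # Average redundancy with the features chosen so far.
            redundancy = sum(mutual_information(features[f], features[s])
                             for s in selected) / len(selected)
            return relevance[f] - redundancy
        remaining = [f for f in features if f not in selected]
        selected.append(max(remaining, key=score))
    return selected
```

Note the behavior on redundant inputs: if one candidate is an exact copy of an already-selected feature, its redundancy term cancels its relevance, so the greedy step prefers a less relevant but independent feature, which is the point of the minimum-redundancy term.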
External links
- Peng, H. C., Long, F., and Ding, C., "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 8, pp. 1226-1238, 2005. Program
- Ding, C., and Peng, H., "Minimum redundancy feature selection from microarray gene expression data," 2nd IEEE Computer Society Bioinformatics Conference (CSB 2003), Stanford, CA, USA, August 11-14, 2003, pp. 523-529.