Statistical classification


Statistical classification is a statistical procedure in which individual items are placed into groups based on quantitative information on one or more characteristics inherent in the items (referred to as traits, variables, characters, etc.) and based on a training set of previously labeled items.

Formally, the problem can be stated as follows: given training data \{(\mathbf{x_1},y_1),\dots,(\mathbf{x_n}, y_n)\}, produce a classifier h:\mathcal{X}\rightarrow\mathcal{Y} which maps an object \mathbf{x} \in \mathcal{X} to its classification label y \in \mathcal{Y}. For example, if the problem is filtering spam, then \mathbf{x_i} is some representation of an email and y_i is either "Spam" or "Non-Spam".
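As a concrete illustration of this setup, the Python sketch below represents the training data as a list of (feature vector, label) pairs and returns a classifier h, so that h is literally a map from feature vectors to labels. The word-count features and the nearest-centroid decision rule are illustrative assumptions, not part of the definition above.

from collections import defaultdict

def learn_nearest_centroid(training_data):
    # training_data: list of (feature_vector, label) pairs, i.e. the (x_i, y_i) above.
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for x, y in training_data:
        if sums[y] is None:
            sums[y] = [0.0] * len(x)
        sums[y] = [s + xi for s, xi in zip(sums[y], x)]
        counts[y] += 1
    centroids = {y: [s / counts[y] for s in sums[y]] for y in sums}
    def h(x):
        # Assign x to the class whose mean feature vector is closest (squared Euclidean distance).
        return min(centroids, key=lambda y: sum((xi - ci) ** 2 for xi, ci in zip(x, centroids[y])))
    return h

# Toy spam example: each x counts occurrences of the words "free" and "meeting" in an email.
train = [([3, 0], "Spam"), ([2, 1], "Spam"), ([0, 2], "Non-Spam"), ([1, 3], "Non-Spam")]
h = learn_nearest_centroid(train)
print(h([4, 0]))  # -> "Spam"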

Statistical classification algorithms are typically used in pattern recognition systems.

Note: in community ecology, the term "classification" is synonymous with what is commonly known (in machine learning) as clustering. See that article for more information about purely unsupervised techniques.


Statistical classification techniques

While there are many methods for classification, they all solve one of three related mathematical problems.

The first is to find a map from a feature space (which is typically a multi-dimensional vector space) to a set of labels. This is equivalent to partitioning the feature space into regions and then assigning a label to each region. Such algorithms (e.g., the nearest neighbour algorithm) typically do not yield confidence estimates or class probabilities unless post-processing is applied. Another family of algorithms for this problem first applies unsupervised clustering to the feature space and then attempts to label each of the resulting clusters or regions.
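A minimal Python sketch of this first formulation, assuming a 1-nearest-neighbour rule: the training points implicitly partition the feature space into regions (their Voronoi cells), each region inherits the label of its training point, and no class probabilities are produced.

import math

def nearest_neighbour(training_data):
    def h(x):
        # Return the label of the training point closest to x in Euclidean distance.
        _, label = min(training_data, key=lambda pair: math.dist(x, pair[0]))
        return label
    return h

train = [((0.0, 0.0), "A"), ((1.0, 1.0), "B"), ((0.2, 0.9), "B")]
h = nearest_neighbour(train)
print(h((0.1, 0.1)))  # -> "A"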

The second problem is to consider classification as an estimation problem, where the goal is to estimate a function of the form

P({\rm class}|{\vec x}) = f\left(\vec x;\vec \theta\right)

where the feature vector input is \vec x, and the function f is typically parameterized by some parameters \vec \theta. In the Bayesian approach to this problem, instead of choosing a single parameter vector \vec \theta, the result is integrated over all possible values of \vec \theta, with each value weighted by how likely it is given the training data D:

P({\rm class}|{\vec x}) = \int f\left(\vec x;\vec \theta\right)P(\vec \theta|D) d\vec \theta
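A small Python sketch of this estimation view, assuming a logistic-regression form f(\vec x;\vec \theta) = sigmoid(\vec \theta \cdot \vec x) fit by gradient ascent on the log-likelihood; the learning rate and number of steps are illustrative choices, not prescribed by the article. A Bayesian treatment would instead average f(\vec x;\vec \theta) over samples of \vec \theta drawn from P(\vec \theta|D), approximating the integral above.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, steps=1000, lr=0.1):
    # xs: list of feature vectors; ys: list of 0/1 labels. Returns the fitted theta.
    theta = [0.0] * len(xs[0])
    for _ in range(steps):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(t * xi for t, xi in zip(theta, x)))
            # Per-example gradient of the log-likelihood is (y - p) * x.
            theta = [t + lr * (y - p) * xi for t, xi in zip(theta, x)]
    return theta

# Toy data: the first feature is a constant bias term, the second is the measured feature.
xs = [[1.0, 0.2], [1.0, 0.9], [1.0, 1.8], [1.0, 2.5]]
ys = [0, 0, 1, 1]
theta = fit_logistic(xs, ys)
x_new = [1.0, 2.0]
print(sigmoid(sum(t * xi for t, xi in zip(theta, x_new))))  # estimated P(class = 1 | x_new)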

The third problem is related to the second, but here the approach is to estimate the class-conditional probabilities P(\vec x|{\rm class}) and then use Bayes' rule to produce the class probability, as in the second problem.
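A Python sketch of this generative approach, assuming each class-conditional density P(x|{\rm class}) is modelled as a one-dimensional Gaussian estimated from the training data; Bayes' rule then normalises P(x|{\rm class})P({\rm class}) across classes to give P({\rm class}|x). The Gaussian model is an illustrative assumption, not something the article prescribes.

import math
from collections import defaultdict

def fit_gaussian_bayes(xs, ys):
    # xs: list of scalar features; ys: class labels. Returns a function predict_proba(x).
    by_class = defaultdict(list)
    for x, y in zip(xs, ys):
        by_class[y].append(x)
    params = {}
    for y, vals in by_class.items():
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals) or 1e-9
        prior = len(vals) / len(xs)
        params[y] = (mean, var, prior)
    def predict_proba(x):
        # Numerator of Bayes' rule for each class: P(x | class) * P(class), with a Gaussian density.
        joint = {y: prior * math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
                 for y, (mean, var, prior) in params.items()}
        total = sum(joint.values())
        return {y: j / total for y, j in joint.items()}  # normalise to get P(class | x)
    return predict_proba

predict_proba = fit_gaussian_bayes([1.0, 1.2, 0.8, 3.0, 3.2, 2.9], ["a", "a", "a", "b", "b", "b"])
print(predict_proba(1.1))  # high probability for class "a"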

Examples of classification algorithms include linear classifiers (such as Fisher's linear discriminant, logistic regression, the naive Bayes classifier, and the perceptron), support vector machines, quadratic classifiers, the k-nearest neighbour algorithm, boosting, decision trees, neural networks, Bayesian networks, and hidden Markov models.

Application domains

External links

See also