Machine learning

As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn". At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods extract rules and patterns out of massive data sets.

The major focus of machine learning research is to extract information from data automatically, by computational and statistical methods. Hence, machine learning is closely related not only to data mining and statistics, but also to theoretical computer science.

Applications

Machine learning has a wide spectrum of applications including natural language processing, syntactic pattern recognition, search engines, medical diagnosis, bioinformatics, brain-machine interfaces and cheminformatics, detecting credit card fraud, stock market analysis, classifying DNA sequences, speech and handwriting recognition, object recognition in computer vision, game playing and robot locomotion.

Human interaction

Some machine learning systems attempt to eliminate the need for human intuition in the analysis of the data, while others adopt a collaborative approach between human and machine. Human intuition cannot be entirely eliminated since the designer of the system must specify how the data is to be represented and what mechanisms will be used to search for a characterization of the data. Machine learning can be viewed as an attempt to automate parts of the scientific method[citation needed].

Some statistical machine learning researchers create methods within the framework of Bayesian statistics.
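
For example, one of the simplest Bayesian treatments is conjugate updating: a prior distribution over a parameter is combined with observed data to give a posterior distribution. The Python sketch below is a minimal illustration, not taken from the article; the prior parameters and observed counts are illustrative assumptions. It updates a Beta prior with binomial coin-flip observations.

    # Minimal sketch of Bayesian updating: a Beta prior combined with a
    # binomial likelihood gives a Beta posterior (conjugacy).
    # The prior parameters and observed counts below are illustrative assumptions.

    def update_beta_prior(alpha, beta, heads, tails):
        """Return posterior Beta parameters after observing coin flips."""
        return alpha + heads, beta + tails

    if __name__ == "__main__":
        alpha, beta = 1.0, 1.0      # uniform prior over the probability of heads
        heads, tails = 7, 3         # hypothetical observations
        post_a, post_b = update_beta_prior(alpha, beta, heads, tails)
        print("Posterior: Beta(%.1f, %.1f), mean = %.3f"
              % (post_a, post_b, post_a / (post_a + post_b)))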

Algorithm types

Machine learning algorithms are organized into a taxonomy, based on the desired outcome of the algorithm. Common algorithm types include:

  • Supervised learning — in which the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate) the behavior of a function which maps a vector [X_1, X_2, \ldots, X_N] into one of several classes by looking at several input-output examples of the function (a minimal classification sketch follows this list).
  • Unsupervised learning — in which an agent models a set of inputs; labeled examples are not available (a clustering sketch also follows this list).
  • Semi-supervised learning — which combines both labeled and unlabeled examples to generate an appropriate function or classifier.
  • Reinforcement learning — in which the algorithm learns a policy of how to act given an observation of the world. Every action has some impact on the environment, and the environment provides feedback that guides the learning algorithm.
  • Transduction — similar to supervised learning, but does not explicitly construct a function: instead, it tries to predict new outputs based on the training inputs, training outputs, and test inputs that are available while training.
  • Learning to learn — in which the algorithm learns its own inductive bias based on previous experience.
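
As an illustration of the supervised classification setting, the sketch below is a minimal, self-contained example, not part of the article; the training vectors, labels, and function names are assumptions. It learns from labeled input-output examples with a 1-nearest-neighbor rule and predicts the class of a new input vector.

    import math

    # Minimal sketch of supervised classification: a 1-nearest-neighbor learner.
    # The training vectors and labels below are illustrative assumptions.

    def predict(train_inputs, train_labels, query):
        """Return the label of the training example closest to `query`."""
        best = min(range(len(train_inputs)),
                   key=lambda i: math.dist(train_inputs[i], query))
        return train_labels[best]

    if __name__ == "__main__":
        # Labeled examples of an unknown function mapping [X_1, ..., X_N] to a class.
        train_inputs = [[1.0, 1.0], [1.2, 0.8], [8.0, 9.0], [7.5, 8.5]]
        train_labels = ["class_a", "class_a", "class_b", "class_b"]
        print(predict(train_inputs, train_labels, [7.8, 9.1]))  # -> class_b

For the unsupervised setting, a standard example is clustering: grouping unlabeled inputs by similarity. The sketch below is a minimal k-means-style procedure; the data set, number of clusters, and iteration count are illustrative assumptions.

    import math
    import random

    # Minimal sketch of unsupervised learning: k-means clustering of unlabeled points.
    # The data set, k, and iteration count are illustrative assumptions.

    def kmeans(points, k=2, iterations=10, seed=0):
        """Group `points` into k clusters and return the cluster centroids."""
        random.seed(seed)
        centroids = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:  # assign each point to its nearest centroid
                distances = [math.dist(p, c) for c in centroids]
                clusters[distances.index(min(distances))].append(p)
            for i, cluster in enumerate(clusters):  # recompute centroids as cluster means
                if cluster:
                    centroids[i] = [sum(dim) / len(cluster) for dim in zip(*cluster)]
        return centroids

    if __name__ == "__main__":
        data = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [8.1, 7.9]]
        print(kmeans(data, k=2))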

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.

Machine learning topics

This list represents the topics covered in a typical machine learning course.

  • Approximate inference techniques
  • Optimization
      • Most of the methods listed above either use optimization or are instances of optimization algorithms (a gradient-descent sketch follows this list).
  • Meta-learning (ensemble methods)
  • Inductive transfer and learning to learn
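
As noted under Optimization above, many learning methods reduce to minimizing a loss function. The sketch below is a minimal gradient-descent example on a simple quadratic objective; the objective, learning rate, and step count are illustrative assumptions, not drawn from the article.

    # Minimal sketch of gradient descent, minimizing the quadratic loss f(w) = (w - 3)^2.
    # The objective, learning rate, and number of steps are illustrative assumptions.

    def gradient(w):
        """Derivative of f(w) = (w - 3)^2 with respect to w."""
        return 2.0 * (w - 3.0)

    def gradient_descent(w0=0.0, learning_rate=0.1, steps=100):
        """Step repeatedly against the gradient; converges toward the minimizer w = 3."""
        w = w0
        for _ in range(steps):
            w -= learning_rate * gradient(w)
        return w

    if __name__ == "__main__":
        print(gradient_descent())  # approximately 3.0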
