Probably approximately correct learning

From Wikipedia, the free encyclopedia

In computational learning theory, probably approximately correct learning (PAC learning) is a framework for the mathematical analysis of machine learning, proposed by Leslie Valiant in his 1984 paper "A theory of the learnable".

In this framework, the learner receives samples that are classified according to a function from a certain class. The aim of the learner is to find, with high probability (the "probably" part), a hypothesis that is a close approximation of the target function (the "approximately correct" part). The learner must be able to learn the concept for any choice of approximation parameter, probability of success, and distribution of the samples.
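As an illustration (not from the original article), the following sketch shows a PAC learner for the class of threshold concepts c(x) = 1 iff x ≥ t on [0, 1], under the uniform distribution. The learner simply returns the smallest positive example seen; the function names and parameter values here are hypothetical choices for the demonstration.

```python
import random

def pac_learn_threshold(target, n_samples, rng):
    """Learn the threshold concept c(x) = 1 iff x >= target on [0, 1]
    by returning the smallest positively labeled sample (a consistent learner)."""
    xs = [rng.random() for _ in range(n_samples)]
    positives = [x for x in xs if x >= target]
    return min(positives) if positives else 1.0

def generalization_error(target, hypothesis):
    """Under the uniform distribution on [0, 1], the two thresholds
    disagree exactly on the interval between them."""
    return abs(hypothesis - target)

rng = random.Random(0)
epsilon, delta = 0.05, 0.05   # approximation and confidence parameters
n_samples = 100               # about (1/epsilon) * ln(1/delta) samples suffice here
trials = 200
failures = sum(
    generalization_error(0.3, pac_learn_threshold(0.3, n_samples, rng)) > epsilon
    for _ in range(trials)
)
# "Probably approximately correct": in all but a delta fraction of runs,
# the learned threshold is within epsilon of the target.
print(failures / trials)
```

The failure rate printed should be well below delta, since a run fails only when no sample lands in the interval [target, target + epsilon], which happens with probability at most (1 - epsilon)^n.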

The model was later extended to handle noise (misclassified samples).

Definitions of efficiency are central to the framework. The learner is required to output efficient classifiers (with time and space requirements bounded by a polynomial in the example size), and the learning procedure itself must be efficient (requiring a number of examples bounded by a polynomial in the concept size, modified by the approximation and confidence parameters).
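One concrete form of such an efficiency requirement is the standard sample-complexity bound for a consistent learner over a finite hypothesis class in the noise-free setting: m ≥ (1/ε)(ln|H| + ln(1/δ)) examples suffice. A minimal sketch (the function name is illustrative):

```python
import math

def sample_complexity(h_size, epsilon, delta):
    """Number of examples sufficient for a consistent learner over a finite
    hypothesis class H to be PAC in the noise-free (realizable) setting:
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / epsilon)

# Polynomial in 1/epsilon and only logarithmic in 1/delta and |H|:
print(sample_complexity(2**10, 0.1, 0.05))
```

Note that the bound grows only logarithmically with the size of the hypothesis class and with 1/δ, which is what makes learning over exponentially large classes feasible with polynomially many examples.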


References

  1. L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984. The paper that proposed the PAC learning framework.
  2. M. Kearns, U. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994. A textbook.
