Regularization (mathematics)
From Wikipedia, the free encyclopedia
- For other uses in related fields, see Regularization
In mathematics, inverse problems are often ill-posed. To solve these problems numerically one must introduce some additional information about the solution, such as an assumption on the smoothness or a bound on the norm. This process is known as regularization.
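A common way to formalize this trade-off (the notation here is illustrative, not taken from the article) is to replace the ill-posed problem of solving $Ax = b$ with a penalized minimization:

```latex
\min_{x} \; \|Ax - b\|^2 + \lambda \, R(x)
```

where $R$ encodes the additional information about the solution: for example $R(x) = \|x\|^2$ enforces a bound on the norm, while $R(x) = \|\nabla x\|^2$ favors smooth solutions. The parameter $\lambda > 0$ controls how strongly the prior information is weighted against fidelity to the data.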
The same idea arose in many fields of science. For example, the least-squares method can be viewed as a very simple form of regularization. A simple form of regularization applied to integral equations, generally termed Tikhonov regularization after Andrey Nikolayevich Tikhonov, is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular.
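The trade-off can be seen in a small numerical sketch (the matrix and data below are made up for illustration). Tikhonov regularization replaces the normal equations $A^TA x = A^Tb$ with $(A^TA + \lambda I)x = A^Tb$; on a nearly singular system, a tiny data perturbation throws the unregularized solution far off, while a small penalty keeps it stable:

```python
def tikhonov_solve(A, b, lam):
    """Minimize ||Ax - b||^2 + lam * ||x||^2 for a 2x2 system by
    solving the regularized normal equations (A^T A + lam*I) x = A^T b."""
    # Build M = A^T A + lam*I and rhs = A^T b.
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    M = [[AtA[i][j] + (lam if i == j else 0.0) for j in range(2)]
         for i in range(2)]
    rhs = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    # Solve the 2x2 system by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    x0 = (rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det
    x1 = (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det
    return [x0, x1]

# Ill-conditioned system: the two rows are nearly parallel.
A = [[1.0, 1.0], [1.0, 1.0001]]
b_noisy = [2.0, 2.0002]   # tiny perturbation of data consistent with x = [1, 1]

x_plain = tikhonov_solve(A, b_noisy, 0.0)    # unregularized: swings to about [0, 2]
x_reg = tikhonov_solve(A, b_noisy, 1e-4)     # regularized: stays near [1, 1]
print(x_plain, x_reg)
```

Larger values of `lam` pull the solution further toward zero, so in practice the penalty weight is chosen to balance data fit against the size of the solution.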
In statistics, a similar concept was introduced at about the same time for finite-dimensional problems, where it is known as ridge regression.
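In the simplest one-dimensional case (data below are invented for illustration), the ridge estimate has the closed form $\hat\beta = \sum x_i y_i / (\sum x_i^2 + \lambda)$, which makes the shrinkage effect of the penalty explicit:

```python
def ridge_1d(xs, ys, lam):
    """Ridge estimate for y ~ beta * x (no intercept):
    beta = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

# Invented data, roughly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]

# As lam grows, the estimate is shrunk toward zero -- the statistical
# counterpart of Tikhonov's norm penalty.
for lam in (0.0, 1.0, 10.0, 100.0):
    print(lam, ridge_1d(xs, ys, lam))
```

At `lam = 0` this reduces to ordinary least squares; increasing `lam` trades a little bias for lower variance of the estimate.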