Regularization (mathematics)

For other uses in related fields, see Regularization (disambiguation).

Regularization, in mathematics and statistics and particularly in the fields of machine learning and inverse problems, refers to a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.

Introduction

In general, a regularization term R(f) is added to a loss function:

\min_f \sum_{i=1}^{n} V(f(\hat x_i), \hat y_i) + \lambda R(f)

for a loss function V that describes the cost of predicting f(x) when the label is y, such as the square loss or hinge loss, and a parameter \lambda which controls the importance of the regularization term. R(f) is typically a penalty on the complexity of f, such as a restriction to smooth functions or a bound on the vector space norm.[1]

A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.

Regularization can be used to learn simpler models, induce models to be sparse, introduce group structure into the learning problem, and more.

The same idea arose in many fields of science. For example, the least-squares method can be viewed as a very simple form of regularization. A simple form of regularization applied to integral equations, generally termed Tikhonov regularization after Andrey Nikolayevich Tikhonov, is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular.

Generalization

Main article: Generalization error

Regularization can be motivated as a technique to improve the generalization of a learned model.

The goal of a learning problem is to find a function that fits or predicts the outcome (label) while minimizing the expected error over all possible inputs and labels. The expected error of a function f_n is:

 I[f_n] = \int_{X \times Y} V(f_n(x),y) \rho(x,y) dx dy

Typically in learning problems, only a subset of input data and labels are available, measured with some noise. Therefore the expected error is unmeasurable, and the best surrogate available is the empirical error over the n available samples:

 I_S[f_n] = \frac{1}{n} \sum_{i=1}^n V(f_n(\hat x_i), \hat y_i)

Without bounds on the complexity of the function space (formally, the reproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. If measurements (e.g. of x_i) were made with noise, this model may suffer from overfitting and display poor expected error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization.

Tikhonov Regularization

When learning a linear function f(x) = w \cdot x, penalizing the squared L_2 norm of the weight vector w corresponds to Tikhonov regularization. This is one of the most common forms of regularization, is also known as ridge regression, and is expressed as:

\min_w \sum_{i=1}^{n} V(\hat x_i \cdot w, \hat y_i) + \lambda \|w\|_{2}^{2}

In the case of a general function, we take the norm of the function in its reproducing kernel Hilbert space:

\min_f \sum_{i=1}^{n} V(f(\hat x_i), \hat y_i) + \lambda \|f\|_{\mathcal{H}}^{2}

As the squared L_2 norm is differentiable, learning problems using Tikhonov regularization can be solved by gradient descent.

Tikhonov Regularized Least Squares

The learning problem with the least squares loss function and Tikhonov regularization can be solved analytically. Written in matrix form, the optimal w is the one for which the gradient of the objective with respect to w is 0 (the first-order condition for this optimization problem):

\min_w \frac{1}{n} \|\hat X w - \hat Y\|_{2}^{2} + \lambda \|w\|_{2}^{2}
\nabla_w = \frac{2}{n} \hat X^T (\hat X w - \hat Y) + 2 \lambda w
0 = \hat X^T (\hat X w - \hat Y) + n \lambda w
w = (\hat X^T \hat X + \lambda n I)^{-1} \hat X^T \hat Y

During training, this algorithm takes O(d^3 + nd^2) time; the terms correspond to the matrix inversion and to calculating \hat X^T \hat X, respectively. Testing on n points takes O(nd) time.
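
As a concrete illustration, the closed-form solution can be computed directly with NumPy; this is a minimal sketch with made-up data (not part of the original article), and it solves the linear system rather than explicitly inverting the matrix, which is cheaper and numerically more stable.

import numpy as np

def ridge_fit(X, Y, lam):
    """Closed-form Tikhonov-regularized least squares:
    w = (X^T X + lambda * n * I)^{-1} X^T Y."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ Y)

# Toy usage: 100 noisy samples of a 5-dimensional linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
Y = X @ w_true + 0.1 * rng.normal(size=100)
w_hat = ridge_fit(X, Y, lam=0.1)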

Early Stopping

Main article: Early stopping

Early stopping can be viewed as regularization in time. Intuitively, a training procedure like gradient descent will tend to learn more and more complex functions as the number of iterations increases. By regularizing in time, the complexity of the model can be controlled, improving generalization.

In practice, early stopping is implemented by training on a training set and measuring accuracy on a statistically independent validation set. The model is trained until performance on the validation set no longer improves. The model is then tested on a testing set.
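
A minimal sketch of this procedure for the least-squares risk, assuming gradient descent as the training procedure; the hyperparameters gamma (step size) and patience (number of non-improving iterations tolerated) are illustrative, not from the original article.

import numpy as np

def train_with_early_stopping(X_tr, Y_tr, X_val, Y_val,
                              gamma=0.1, patience=10, max_iters=10000):
    """Gradient descent on the least-squares risk, stopped when the
    validation error has not improved for `patience` iterations."""
    n, d = X_tr.shape
    w = np.zeros(d)
    best_w, best_val, since_best = w.copy(), np.inf, 0
    for _ in range(max_iters):
        w -= (gamma / n) * X_tr.T @ (X_tr @ w - Y_tr)  # gradient step
        val = np.mean((X_val @ w - Y_val) ** 2)        # validation error
        if val < best_val:
            best_w, best_val, since_best = w.copy(), val, 0
        else:
            since_best += 1
            if since_best >= patience:
                break  # validation error no longer improving
    return best_w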

Theoretical Motivation in Least Squares

Consider the finite approximation of the Neumann series for an invertible matrix A with \| I - A \| < 1:

\sum_{i=0}^{T-1}(I-A)^i \approx A^{-1}

This can be used to approximate the analytical solution of unregularized least squares if the step size \gamma is chosen so that \| I - \frac{\gamma}{n} \hat X^T \hat X \| < 1.

w_T = \frac{\gamma}{n} \sum_{i=0}^{T-1} ( I - \frac{\gamma}{n} \hat X^T \hat X )^i \hat X^T \hat Y

The exact solution to the unregularized least squares learning problem minimizes the empirical error, but may fail to generalize and minimize the expected error. By limiting T, the only free parameter in the algorithm above, the problem is regularized in time, which may improve its generalization.

The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical risk

I_S[w] = \frac{1}{2n} \| \hat X w - \hat Y \|^{2}_{\mathbb{R}^n}

with the gradient descent update:

w_0 = 0
w_{t+1} = (I - \frac{\gamma}{n} \hat X^T \hat X)w_t + \frac{\gamma}{n}\hat X^T \hat Y

The base case is trivial: both expressions give w_1 = \frac{\gamma}{n} \hat X^T \hat Y. The inductive case is proved by substituting the expression for w_{T-1} into the update:

w_{T} = (I - \frac{\gamma}{n} \hat X^T \hat X)\frac{\gamma}{n} \sum_{i=0}^{T-2}(I - \frac{\gamma}{n} \hat X^T \hat X )^i \hat X^T \hat Y  + \frac{\gamma}{n}\hat X^T \hat Y
w_{T} = \frac{\gamma}{n} \sum_{i=1}^{T-1}(I - \frac{\gamma}{n} \hat X^T \hat X )^i \hat X^T \hat Y  + \frac{\gamma}{n}\hat X^T \hat Y
w_{T} = \frac{\gamma}{n} \sum_{i=0}^{T-1}(I - \frac{\gamma}{n} \hat X^T \hat X )^i \hat X^T \hat Y
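
The equivalence can also be checked numerically; the following sketch (synthetic data and illustrative values of T and \gamma, not from the original article) verifies that T gradient descent steps coincide with the truncated Neumann sum.

import numpy as np

rng = np.random.default_rng(0)
n, d, T, gamma = 50, 3, 200, 0.1
X = rng.normal(size=(n, d))
Y = rng.normal(size=n)
A = np.eye(d) - (gamma / n) * X.T @ X

# Truncated Neumann series: w_T = (gamma/n) * sum_{i=0}^{T-1} A^i X^T Y
w_neumann = np.zeros(d)
term = (gamma / n) * X.T @ Y
for _ in range(T):
    w_neumann += term
    term = A @ term

# T steps of gradient descent from w_0 = 0 on the empirical risk
w_gd = np.zeros(d)
for _ in range(T):
    w_gd = A @ w_gd + (gamma / n) * X.T @ Y

assert np.allclose(w_neumann, w_gd)  # the two iterates coincide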

Regularizers for Sparsity

Assume that a dictionary of p basis functions \phi_j is given, such that a function in the function space can be expressed as:

f(x) = \sum_{j=1}^{p} \phi_j(x) w_j

Enforcing a sparsity constraint on w can lead to simpler and more interpretable models; intuitively, comparing the L_1 ball with the L_2 ball in two dimensions shows how the corners of the L_1 ball lead L_1 regularization to achieve sparsity. This is useful in many real-life applications such as computational biology. An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power.

A sensible sparsity constraint is the L_0 norm \|w\|_0, defined as the number of non-zero elements in w. Solving an L_0-regularized learning problem, however, has been demonstrated to be NP-hard.

The L_1 norm can be used to approximate the optimal L_0 norm via convex relaxation. It can be shown that the L_1 norm induces sparsity. In the case of least squares, this problem is known as LASSO in statistics and basis pursuit in signal processing.

\min_{w \in \mathbb{R}^p} \frac{1}{n} \|\hat X w - \hat Y \|^2 + \lambda \|w\|_{1}

Elastic Net regularization

L_1 regularization can occasionally produce non-unique solutions; for example, when two features are perfectly correlated, any split of the weight between them that keeps the same sum gives the same fit, so the solution set lies on a 45-degree line. This can be problematic for certain applications, and is overcome by combining L_1 with L_2 regularization in Elastic Net regularization, which takes the following form:

\min_{w \in \mathbb{R}^p} \frac{1}{n} \|\hat X w - \hat Y \|^2 + \lambda (\alpha \|w\|_{1} + (1 - \alpha)\|w\|_{2}^{2}), \quad \alpha \in [0, 1]

Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights.

Elastic net regularization is commonly used in practice and is implemented in many machine learning libraries.
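
For example, scikit-learn exposes both penalties; the sketch below uses synthetic data (all values illustrative). Note that scikit-learn scales the data-fit and penalty terms somewhat differently from the formulas above, so its alpha and l1_ratio parameters correspond to \lambda and \alpha only up to constant factors.

import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:5] = [2.0, -3.0, 1.5, 4.0, -1.0]  # only 5 active features
y = X @ w_true + 0.1 * rng.normal(size=200)

lasso = Lasso(alpha=0.1).fit(X, y)                    # pure L1 penalty
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)  # mix of L1 and L2

print(np.count_nonzero(lasso.coef_))  # sparse: most coefficients are exactly 0
print(np.count_nonzero(enet.coef_))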

Proximal methods

While the L_1 norm does not result in an NP-hard problem, it is convex but not differentiable, due to the kink at x = 0. Subgradient methods, which rely on the subderivative, can be used to solve L_1-regularized learning problems; however, faster convergence can be achieved through proximal methods.

For a problem \min_{w \in H} F(w) + R(w) such that F is convex, continuous, and differentiable with Lipschitz continuous gradient (such as the least squares loss function), and R is convex, continuous, and proper, the proximal method first defines the proximal operator of R and then iterates:

\operatorname{prox}_{R}(v) = \operatorname{argmin}_{w \in \mathbb{R}^D} \{R(w) + \frac{1}{2}\|w-v\|^2\}
w_{k+1} = \operatorname{prox}_{\gamma R}(w_k - \gamma \nabla F(w_k))

The proximal method iteratively performs a gradient step on F and then projects the result back into the space permitted by R.

When R is the L_1 regularizer, the proximal operator is equivalent to the soft-thresholding operator,

(S_\lambda(v))_i = \begin{cases} v_i - \lambda, & \text{if }v_i > \lambda \\ 0, & \text{if }v_i \in [-\lambda, \lambda] \\ v_i + \lambda, & \text{if }v_i < - \lambda \end{cases}

This allows for efficient computation.
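
A minimal NumPy sketch of the resulting proximal gradient method, often called ISTA (iterative soft-thresholding algorithm), for the L_1-regularized least-squares problem above; the step size is derived from the Lipschitz constant of the least-squares gradient, and all names are illustrative.

import numpy as np

def soft_threshold(v, lam):
    """Soft-thresholding: the proximal operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(X, Y, lam, n_iters=500):
    """Proximal gradient for (1/n)||Xw - Y||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    # Step size 1/L, where L = (2/n)||X||_2^2 is the gradient's Lipschitz constant.
    gamma = n / (2 * np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iters):
        grad = (2.0 / n) * X.T @ (X @ w - Y)               # gradient step on F
        w = soft_threshold(w - gamma * grad, gamma * lam)  # prox step on R
    return w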

Group Sparsity without Overlaps

Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge into an optimization problem.

In the case of a linear model with non-overlapping known groups, a regularizer can be defined:

R(w) = \sum_{g=1}^{G} \|w_g\|_g, where \|w_g\|_g = \sqrt{\sum_{j=1}^{|G_g|}(w_g^j)^2}

This can be viewed as an L_2 norm over the members of each group followed by an L_1 norm over the groups.

This can be solved by the proximal method, where the proximal operator is a block-wise soft-thresholding function (since \|w_g\|_g \geq 0, only two cases can occur):

(\operatorname{prox}_{\lambda, R, g}(w_g))^j = \begin{cases} w_g^j - \lambda \frac{w_g^j}{\|w_g\|_g}, & \text{if } \|w_g\|_g > \lambda \\ 0, & \text{if } \|w_g\|_g \leq \lambda \end{cases}
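
A minimal NumPy sketch of this block-wise operator, assuming the groups are given as index arrays that partition the coordinates of w (the function name and data are illustrative).

import numpy as np

def block_soft_threshold(w, groups, lam):
    """Proximal operator of lam * sum_g ||w_g||_2 for non-overlapping groups."""
    out = np.zeros_like(w)
    for g in groups:
        norm_g = np.linalg.norm(w[g])
        if norm_g > lam:
            out[g] = (1.0 - lam / norm_g) * w[g]  # shrink the whole group
        # otherwise the entire group is set to zero
    return out

# Example: the first group is shrunk, the second is zeroed as a block.
w = np.array([3.0, 4.0, 0.1, -0.2])
print(block_soft_threshold(w, [np.array([0, 1]), np.array([2, 3])], lam=1.0))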

Group Sparsity with Overlaps

The algorithm described for group sparsity without overlaps can be applied, in certain situations, to the case where groups do overlap. Doing so will likely result in some groups whose elements are all zero, and other groups in which some elements are non-zero and some are zero.

If it is desired to preserve the group structure, a new regularizer can be defined:

R(w) = \inf \left\{ \sum_{g=1}^G \|w_g\|_g : w = \sum_{g=1}^G \bar w_g \right\}

For each w_g, \bar w_g is defined as the vector whose restriction to the group g equals w_g and whose other entries are all zero. The regularizer finds the optimal decomposition of w into parts; it can be viewed as duplicating all elements that exist in multiple groups. Learning problems with this regularizer can also be solved with the proximal method, with a complication: the proximal operator cannot be computed in closed form, but it can be solved iteratively, inducing an inner iteration within the proximal method iteration.

Regularizers for Semi-Supervised Learning

When labels are more expensive to gather than input examples, semi-supervised learning can be useful. Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. If a symmetric weight matrix W is given, a regularizer can be defined:

R(f) = \sum_{i,j} W_{ij}(f(x_i) - f(x_j))^2

If W_{ij} encodes the similarity between x_i and x_j, it is desirable that f(x_i) \approx f(x_j) whenever W_{ij} is large. This regularizer captures this intuition, and is equivalent to:

R(f) = \bar f^T L \bar f, where L = D - W is the Laplacian matrix of the graph induced by W and D is the diagonal degree matrix with D_{ii} = \sum_j W_{ij}.

The optimization problem \min_{f \in \mathbb{R}^m} R(f), m = u + l (for u unlabeled and l labeled samples), can be solved analytically if the constraint f(x_i) = y_i is applied for all supervised samples. The labeled part of the vector f is then fixed by the constraint, and the unlabeled part is solved for by:

\min_{f_u \in \mathbb{R}^u} f^T L f = \min_{f_u \in \mathbb{R}^u} \{ f^T_u L_{uu} f_u + f^T_l L_{lu} f_u + f^T_u L_{ul} f_l \}
\nabla_{f_u} = 2L_{uu}f_u + 2L_{ul}Y
f_u = -L_{uu}^\dagger (L_{ul} Y)

Note that the pseudo-inverse can be taken because L_{ul} has the same range as L_{uu}.
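
A minimal NumPy sketch of this computation; the function name and the dense-matrix representation are illustrative assumptions.

import numpy as np

def harmonic_labels(W, y_labeled, labeled_idx):
    """Minimize f^T L f with f fixed to the labels on labeled points."""
    m = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W  # graph Laplacian L = D - W
    unlabeled_idx = np.setdiff1d(np.arange(m), labeled_idx)
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    L_ul = L[np.ix_(unlabeled_idx, labeled_idx)]
    f = np.zeros(m)
    f[labeled_idx] = y_labeled
    # f_u = -L_uu^+ (L_ul Y); the pseudo-inverse handles singular L_uu.
    f[unlabeled_idx] = -np.linalg.pinv(L_uu) @ (L_ul @ y_labeled)
    return f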

Regularizers for Multitask Learning

Main article: Multi-task learning

In the case of multitask learning, T problems are considered simultaneously, each related in some way. The goal is to learn T functions with predictive power, ideally borrowing strength from the relatedness of the tasks. This is equivalent to learning a matrix W \in \mathbb{R}^{T \times D} whose rows are the individual task weight vectors.

Sparse Regularizer on Columns

R(W) = \|W\|_{2,1} = \sum_{i=1}^D \sqrt{\sum_{t=1}^T W_{ti}^2}

This regularizer takes an L_2 norm over each column and then an L_1 norm over the columns, encouraging entire columns (features shared across all tasks) to be zero. It can be solved by proximal methods.

Nuclear Norm Regularization

R(W) = \|\sigma(W)\|_1, where \sigma(W) is the vector of singular values of W; this is known as the nuclear norm and encourages W to have low rank.
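
The penalty is straightforward to evaluate from the singular value decomposition; a minimal sketch:

import numpy as np

def nuclear_norm(W):
    """Sum of the singular values of W."""
    return np.linalg.svd(W, compute_uv=False).sum()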

Mean-constrained Regularization

R(f_1 ... f_T) = \sum_{t=1}^T \|f_t - \frac{1}{T} \sum_{s=1}^{T} f_s \|_{H_k}^2

This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. This is useful for expressing prior information that each task is expected to share similarities with each other task. An example is predicting blood iron levels measured at different times of the day, where each task represents a different person.
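
Specialized to linear tasks, where each f_t(x) = w_t \cdot x and the RKHS norm reduces to the Euclidean norm of the weight vector, the penalty can be evaluated as follows (a minimal sketch; names are illustrative).

import numpy as np

def mean_constrained_penalty(W):
    """Rows of W are the per-task weight vectors w_t.
    Penalizes each task's deviation from the average task."""
    w_bar = W.mean(axis=0)           # average task weights
    return np.sum((W - w_bar) ** 2)  # sum_t ||w_t - w_bar||^2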

Clustered mean-constrained regularization

R(f_1 ... f_T) = \sum_{r=1}^C \sum_{t \in I(r)} \|f_t - \frac{1}{|I(r)|} \sum_{s \in I(r)} f_s\|_{H_k}^2, where I(r) is the set of tasks in cluster r.

This regularizer is similar to the mean-constrained regularizer, but instead enforces similarity between tasks within the same cluster. This can capture more complex prior information. This technique has been used to predict Netflix recommendations. A cluster would correspond to a group of people who share similar preferences in movies.

Graph-based Similarity

More generally than above, similarity between tasks can be defined by a function. The regularizer encourages the model to learn similar functions for similar tasks.

R(f_1 ... f_T) = \sum_{t,s=1, t \neq s}^T \| f_t - f_s \|^2 M_{ts} for a given symmetric similarity matrix M.

Other Uses of Regularization in Statistics and Machine Learning

Bayesian learning methods make use of a prior probability that (usually) gives lower probability to more complex models. Well-known model selection techniques include the Akaike information criterion (AIC), minimum description length (MDL), and the Bayesian information criterion (BIC). Alternative methods of controlling overfitting not involving regularization include cross-validation.

Examples of applications of different methods of regularization to the linear model are:

Model                          Fit measure                     Entropy measure[1][2]
AIC/BIC                        \|Y-X\beta\|_2                  \|\beta\|_0
Ridge regression[3]            \|Y-X\beta\|_2                  \|\beta\|_2
Lasso[4]                       \|Y-X\beta\|_2                  \|\beta\|_1
Basis pursuit denoising        \|Y-X\beta\|_2                  \lambda\|\beta\|_1
Rudin-Osher-Fatemi model (TV)  \|Y-X\beta\|_2                  \lambda\|\nabla\beta\|_1
Potts model                    \|Y-X\beta\|_2                  \lambda\|\nabla\beta\|_0
RLAD[5]                        \|Y-X\beta\|_1                  \|\beta\|_1
Dantzig Selector[6]            \|X^\top (Y-X\beta)\|_\infty    \|\beta\|_1
SLOPE[7]                       \|Y-X\beta\|_2                  \sum_{i=1}^p \lambda_i|\beta|_{(i)}

Ensemble-based regularization

In inverse problem theory, an optimization problem is usually solved to generate a model that provides a good match to observed data. In this context, a regularization term is used to preserve prior information about the model and prevent over-fitting and convergence to a model that matches the data but does not predict well. Ensemble-based regularization is based on utilizing an ensemble (i.e., a set) of realizations from the prior probability distribution function (pdf) to construct a regularization term.[8] This regularization is flexible, as it represents the prior pdf using a set of realizations instead of, say, a mean and covariance matrix for a Gaussian distribution.

Notes

  1. Bishop, Christopher M. (2007). Pattern Recognition and Machine Learning (corr. printing ed.). New York: Springer. ISBN 978-0387310732.
  2. Duda, Richard O. (2004). Pattern Classification + Computer Manual: Hardcover Set (2nd ed.). New York: Wiley. ISBN 978-0471703501.
  3. Arthur E. Hoerl; Robert W. Kennard (1970). "Ridge regression: Biased estimation for nonorthogonal problems". Technometrics 12 (1): 55–67. doi:10.2307/1267351.
  4. Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society, Series B 58 (1): 267–288. MR 1379242.
  5. Li Wang; Michael D. Gordon; Ji Zhu (2006). "Regularized Least Absolute Deviations Regression and an Efficient Algorithm for Parameter Tuning". Sixth International Conference on Data Mining. pp. 690–700. doi:10.1109/ICDM.2006.134.
  6. Candes, Emmanuel; Tao, Terence (2007). "The Dantzig selector: Statistical estimation when p is much larger than n". Annals of Statistics 35 (6): 2313–2351. arXiv:math/0506081. doi:10.1214/009053606000001523. MR 2382644.
  7. Małgorzata Bogdan; Ewout van den Berg; Weijie Su; Emmanuel J. Candès (2013). "Statistical estimation and testing via the ordered L1 norm". arXiv:1310.1969.
  8. "History matching production data and uncertainty assessment with an efficient TSVD parameterization algorithm". Journal of Petroleum Science and Engineering 113: 54–71. doi:10.1016/j.petrol.2013.11.025.
