Generalized linear model
From Wikipedia, the free encyclopedia
In statistics, the generalized linear model (GLM) is a useful generalization of ordinary least squares regression. A GLM stipulates that the random component of the model (the distribution function) and the systematic component (the linear predictor) are related through what is called the link function.
Overview
In a GLM, the data (Y) are assumed to be generated from a distribution function in the exponential family (a very large range of distributions; also see below). The data's expectation μ is predicted by
- E(Y) = μ = g⁻¹(Xβ),
where Xβ is the linear predictor, a linear combination (X, known from the experiment) of unknown parameters β, and g is called the link function.
In this framework, typically the variance is also a function V of the mean:
- Var(Y) = V(μ).
It is convenient if the variance follows from the exponential family distribution, but it may simply be that the variance is a function of the predicted value.
The unknown parameters β are typically estimated with maximum likelihood or quasi-likelihood techniques.
Components of the model
The GLM consists of three elements.
- 1. A distribution function f, from the exponential family.
- 2. A linear predictor η = Xβ.
- 3. A link function g such that E(Y) = μ = g⁻¹(η).
Exponential Family of Distributions
The exponential family of distributions comprises those probability distributions, parameterized by θ and τ, whose density functions can be expressed in the form
- f(y; θ, τ) = h(y, τ) exp[(b(θ) a(y) + c(θ)) / d(τ)].
τ, called the dispersion parameter, is typically known. The functions a, b, c, d, and h are known. Many, although not all, common distributions are in this family.
If a is the identity function, then the distribution is said to be in canonical form. If in addition b is the identity, then θ is called the canonical parameter.
Linear Predictors
The linear predictor is a quantity that relates to the expectation of the data (hence "predictor") through the link function. The symbol η ("eta") is typically used to denote it.
η is expressed as a linear combination (hence "linear") of the unknown parameters β. The coefficients of the linear combination are represented as the matrix X; its elements are either fully known from the experiment or stipulated by the experimenters in the modeling process.
Thus η can be expressed as
- η = Xβ.
Link functions
The link function provides the relationship between the linear predictor and the distribution function (through its mean). There are many commonly used link functions, and their choice can be somewhat arbitrary. However, it is important to match the domain of the link function to the range of the distribution function's mean.
Following is a table of some common link functions and their inverses (sometimes referred to as the mean function) used for several distributions in the exponential family.
Distribution | Name | Link Function | Mean Function |
---|---|---|---|
Normal | Identity | η = μ | μ = η |
Exponential | Inverse | η = μ⁻¹ | μ = η⁻¹ |
Gamma | Inverse | η = μ⁻¹ | μ = η⁻¹ |
Poisson | Log | η = ln(μ) | μ = exp(η) |
Binomial | Logit | η = ln(μ / (1 − μ)) | μ = exp(η) / (1 + exp(η)) |
Multinomial | Logit | η = ln(μ / (1 − μ)) | μ = exp(η) / (1 + exp(η)) |
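The link / mean-function pairs tabulated above can be checked numerically: each mean function is the inverse of its link, so g⁻¹(g(μ)) = μ. A minimal sketch in Python (the dictionary and the test value are illustrative, not part of the article):

```python
import math

# Link function g and mean function g^-1 for several rows of the table.
links = {
    "identity": (lambda mu: mu,                      lambda eta: eta),
    "inverse":  (lambda mu: 1.0 / mu,                lambda eta: 1.0 / eta),
    "log":      (lambda mu: math.log(mu),            lambda eta: math.exp(eta)),
    "logit":    (lambda mu: math.log(mu / (1 - mu)),
                 lambda eta: math.exp(eta) / (1.0 + math.exp(eta))),
}

# Round trip: applying the mean function to the link recovers the mean.
for name, (g, g_inv) in links.items():
    mu = 0.3  # a value in the range of each distribution's mean
    assert abs(g_inv(g(mu)) - mu) < 1e-12, name
```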
Examples
Linear regression
The simplest example of a GLM is linear regression. Here the distribution function is the normal distribution with constant variance and the link function is the identity.
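With the identity link, the maximum-likelihood fit of a normal-distribution GLM is exactly ordinary least squares. A minimal sketch (the data are invented for illustration):

```python
import numpy as np

# Design matrix X (intercept column plus one covariate) and responses y,
# constructed so that y = 1 + 2x exactly.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Maximum likelihood under the normal distribution = ordinary least squares.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Identity link: the predicted mean is the linear predictor itself.
mu = X @ beta
```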
Binomial data
When the response data (Y) are binary (taking on only the values 0 and 1), the distribution function is generally chosen to be the binomial distribution, and the interpretation of μi is then the probability of Yi taking on the value one. There are several popular link functions for binomial data; the most typical is the logit function, whose inverse is the logistic function:
- g(μ) = ln(μ / (1 − μ)).
GLMs with this setup are logistic regression models.
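A logistic regression model can be fit by iteratively reweighted least squares (IRLS), one standard way of computing the maximum-likelihood estimate. A minimal sketch with made-up data (not from the article); at the optimum the score equations Xᵀ(y − μ) = 0 hold for this canonical link:

```python
import numpy as np

# Made-up binary responses with an intercept and one covariate.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 0.0, 1.0])

beta = np.zeros(X.shape[1])
for _ in range(25):
    eta = X @ beta                    # linear predictor
    mu = 1.0 / (1.0 + np.exp(-eta))   # logistic mean function g^-1(eta)
    W = mu * (1.0 - mu)               # binomial variance function V(mu)
    z = eta + (y - mu) / W            # working response
    # Weighted least-squares update: solve (X^T W X) beta = X^T W z.
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

mu = 1.0 / (1.0 + np.exp(-(X @ beta)))  # fitted probabilities
```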
In addition, the inverse of any continuous cumulative distribution function (CDF) can be used for the link, since the CDF's range, [0, 1], matches the range of the binomial mean. The normal CDF Φ is a popular choice and yields the probit model. Its link is
- g(μ) = Φ⁻¹(μ).
The identity link is also sometimes used for binomial data (this is equivalent to using the CDF of the uniform distribution instead of the normal), but it can be problematic, as the predicted probabilities can be greater than one or less than zero. In implementation it is possible to truncate such nonsensical probabilities to [0, 1], but interpreting the coefficients then becomes difficult. The model's primary merit is that near p = 0.5 it is approximately a linear transformation of the probit and logit; econometricians sometimes call this the Harvard model.
The variance function for binomial data is given by
- Var(Y) = τ μ(1 − μ),
where the dispersion parameter τ is typically exactly one. When it is not, the model is often described as binomial with overdispersion or quasibinomial.
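One common way to estimate the dispersion τ from grouped binomial data is the Pearson statistic divided by the residual degrees of freedom. A sketch; the counts, fitted probabilities, and parameter count k below are all hypothetical, not from the article:

```python
import numpy as np

# Hypothetical grouped data: y successes out of n trials per group, with
# fitted probabilities p assumed to come from some previously fit model.
n = np.array([20.0, 20.0, 20.0, 20.0])
y = np.array([3.0, 8.0, 12.0, 18.0])
p = np.array([0.2, 0.4, 0.6, 0.85])

# Pearson statistic: squared residuals scaled by the binomial variance n*p*(1-p).
pearson = np.sum((y - n * p) ** 2 / (n * p * (1.0 - p)))

k = 2                               # number of fitted parameters (assumed)
tau_hat = pearson / (len(y) - k)    # ~1 for binomial; >1 suggests overdispersion
```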
Count data
Another example of a generalized linear model is Poisson regression, which models count data using the Poisson distribution. The link is typically the logarithm.
The variance function is proportional to the mean:
- Var(Y) = τμ,
where the dispersion parameter τ is typically exactly one. When it is not, the model is often described as Poisson with overdispersion or quasi-Poisson.
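Poisson regression with the log link can likewise be fit by IRLS. A minimal sketch; the counts are contrived so that log(y) is exactly linear in x, which makes the maximum-likelihood slope exactly ln 2:

```python
import numpy as np

# Contrived counts: y doubles with each unit of x, so log(y) = x * ln(2).
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 4.0, 8.0])

beta = np.array([np.log(y.mean()), 0.0])  # start from the intercept-only fit
for _ in range(50):
    eta = X @ beta            # linear predictor
    mu = np.exp(eta)          # log link: mu = exp(eta)
    W = mu                    # Poisson variance function: V(mu) = mu
    z = eta + (y - mu) / mu   # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
```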
References
- P. McCullagh and J.A. Nelder. Generalized Linear Models. London: Chapman and Hall, 1989.
- A.J. Dobson. Introduction to Generalized Linear Models, Second Edition. London: Chapman and Hall/CRC, 2001.