Logit


The logit function plays an important role in logistic regression; for more information, see that article.

The logit function is the inverse of the "sigmoid", or "logistic", function used in mathematics, especially in statistics. The logit of a number p between 0 and 1 is given by the formula:

\operatorname{logit}(p) = \log\left(\frac{p}{1-p}\right) = \log(p) - \log(1-p).
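As a minimal sketch of this relationship (in Python, using the natural logarithm; the function names logit and logistic are chosen here only for illustration), the function and its inverse can be written and checked as follows:

    import math

    def logit(p):
        """Log-odds of a probability p in (0, 1), natural logarithm."""
        return math.log(p / (1 - p))

    def logistic(x):
        """Logistic (sigmoid) function, the inverse of logit."""
        return 1 / (1 + math.exp(-x))

    # Round trip: logistic(logit(p)) recovers p.
    p = 0.9
    x = logit(p)        # log(0.9 / 0.1) = log(9), about 2.1972
    print(x)
    print(logistic(x))  # about 0.9

The round trip logistic(logit(p)) = p holds for any p strictly between 0 and 1, which is the sense in which the two functions are inverses.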

Logit is pronounced /ˈloʊdʒɪt/, with a long "o" and a soft "g".

The base of the logarithm used is of little importance in the present article, as long as it is greater than 1, since changing the base merely rescales the logit by a constant factor; the natural logarithm with base e is the one most often used.

If p is a probability, then p/(1 − p) is the corresponding odds, and the logit of the probability is the logarithm of the odds. Similarly, the difference between the logits of two probabilities is the logarithm of the odds ratio (R), providing a shorthand for combining odds ratios by simple addition and subtraction:

\log(R) = \log\left(\frac{p_1/(1-p_1)}{p_2/(1-p_2)}\right) = \log\left(\frac{p_1}{1-p_1}\right) - \log\left(\frac{p_2}{1-p_2}\right) = \operatorname{logit}(p_1) - \operatorname{logit}(p_2).
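For instance, a small numerical check of this identity (a Python sketch under the same natural-log convention as above, with illustrative probabilities p1 = 0.8 and p2 = 0.5):

    import math

    def logit(p):
        return math.log(p / (1 - p))

    p1, p2 = 0.8, 0.5
    # Odds ratio R = (0.8/0.2) / (0.5/0.5) = 4.0
    odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))
    print(math.log(odds_ratio))   # about 1.3863
    print(logit(p1) - logit(p2))  # same value: the logit difference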
[Figure: Plot of the logit function on the interval 0 to 1, base e.]


History

Joseph Berkson introduced the logit model in 1944 and coined the term, borrowing it by analogy from the very similar probit model developed by Chester Ittner Bliss in 1934. In 1949, G. A. Barnard coined the commonly used term log-odds; the log-odds of an event is the logit of the probability of the event.

