Bayes' theorem


Bayes' theorem (also known as Bayes' rule or Bayes' law) is a result in probability theory, which relates the conditional and marginal probability distributions of random variables. In some interpretations of probability, Bayes' theorem tells how to update or revise beliefs in light of new evidence (a posteriori).

The probability of an event A conditional on another event B is generally different from the probability of B conditional on A. However, there is a definite relationship between the two, and Bayes' theorem is the statement of that relationship.

As a formal theorem, Bayes' theorem is valid in all interpretations of probability. However, frequentist and Bayesian interpretations disagree about the kinds of things to which probabilities should be assigned in applications: frequentists assign probabilities to random events according to their frequencies of occurrence or to subsets of populations as proportions of the whole; Bayesians assign probabilities to propositions that are uncertain. A consequence is that Bayesians have more frequent occasion to use Bayes' theorem. The articles on Bayesian probability and frequentist probability discuss these debates at greater length.


Statement of Bayes' theorem

Bayes' theorem relates the conditional and marginal probabilities of stochastic events A and B:

\begin{align}   \Pr(A|B)  &  =       \frac{\Pr(B | A)\, \Pr(A)}{\Pr(B)}  \\             &  \propto L(A | B)\, \Pr(A)   \end{align}

where L(A|B) is the likelihood of A given fixed B. Notice the relationship \Pr(B | A) = L(A | B).

Each term in Bayes' theorem has a conventional name:

  • Pr(A) is the prior probability (or marginal probability) of A: it is "prior" in the sense that it does not take into account any information about B.
  • Pr(A|B) is the conditional probability of A given B, also called the posterior probability, because it is derived from (or depends upon) the specified value of B.
  • Pr(B|A) is the conditional probability of B given A; viewed as a function of A it is the likelihood.
  • Pr(B) is the prior (or marginal) probability of B, and acts as the normalizing constant.

With this terminology, the theorem may be paraphrased as

\mbox{posterior} = \frac{\mbox{likelihood} \times \mbox{prior}} {\mbox{normalizing constant}}

In words: the posterior probability is proportional to the prior probability times the likelihood.

In addition, the ratio Pr(B|A)/Pr(B) is sometimes called the standardised likelihood, so the theorem may also be paraphrased as

\mbox{posterior} = {\mbox{standardised likelihood} \times \mbox{prior} }.\,

Derivation from conditional probabilities

To derive the theorem, we start from the definition of conditional probability. The probability of event A given event B is

\Pr(A|B)=\frac{\Pr(A \cap B)}{\Pr(B)}.

Likewise, the probability of event B given event A is

\Pr(B|A) = \frac{\Pr(A \cap B)}{\Pr(A)}. \!

Rearranging and combining these two equations, we find

\Pr(A|B)\, \Pr(B) = \Pr(A \cap B) = \Pr(B|A)\, \Pr(A). \!

This lemma is sometimes called the product rule for probabilities. Dividing both sides by Pr(B), provided that it is non-zero, we obtain Bayes' theorem:

\Pr(A|B) = \frac{\Pr(B|A)\,\Pr(A)}{\Pr(B)}. \!
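
A quick numeric check of this derivation, as a minimal Python sketch; the joint distribution over two binary events is an invented example, and Pr(A|B) is recovered both from the definition and via Bayes' theorem:

    # Numeric check of the product rule and Bayes' theorem.
    # The joint probabilities are invented for illustration.
    joint = {  # Pr(A and B) for each truth assignment to (A, B)
        (True, True): 0.12, (True, False): 0.18,
        (False, True): 0.28, (False, False): 0.42,
    }

    pr_A = sum(p for (a, b), p in joint.items() if a)    # Pr(A) = 0.30
    pr_B = sum(p for (a, b), p in joint.items() if b)    # Pr(B) = 0.40
    pr_AB = joint[(True, True)]                          # Pr(A and B)

    pr_A_given_B = pr_AB / pr_B    # definition of conditional probability
    pr_B_given_A = pr_AB / pr_A

    # Bayes' theorem: Pr(A|B) = Pr(B|A) Pr(A) / Pr(B)
    assert abs(pr_A_given_B - pr_B_given_A * pr_A / pr_B) < 1e-12
    print(pr_A_given_B)            # 0.3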

Alternative forms of Bayes' theorem

Bayes' theorem is often embellished by noting that

\Pr(B) = \Pr(A\cap B) + \Pr(A^C\cap B) = \Pr(B|A) \Pr(A) + \Pr(B|A^C) \Pr(A^C)\,

where A^C is the complementary event of A (often called "not A"). So the theorem can be restated as

\Pr(A|B) = \frac{\Pr(B | A)\, \Pr(A)}{\Pr(B|A)\Pr(A) + \Pr(B|A^C)\Pr(A^C)}.  \!

More generally, where {Ai} forms a partition of the event space,

\Pr(A_i|B) = \frac{\Pr(B | A_i)\, \Pr(A_i)}{\sum_j \Pr(B|A_j)\,\Pr(A_j)} , \!

for any Ai in the partition.

See also the law of total probability.
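
As a concrete illustration of the partition form, the following Python sketch (the priors and likelihoods are invented numbers) computes the posterior over a three-event partition; the denominator is exactly the law of total probability:

    # Posterior over a partition {A_1, A_2, A_3} given evidence B.
    # The priors and likelihoods are invented for illustration.
    priors = [0.5, 0.3, 0.2]            # Pr(A_j); they sum to 1
    likelihoods = [0.10, 0.60, 0.90]    # Pr(B | A_j)

    # Law of total probability: Pr(B) = sum_j Pr(B|A_j) Pr(A_j)
    pr_B = sum(l * p for l, p in zip(likelihoods, priors))

    posteriors = [l * p / pr_B for l, p in zip(likelihoods, priors)]
    print(posteriors)       # Pr(A_i|B) for each i
    print(sum(posteriors))  # 1.0, since the A_i partition the event space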

Bayes' theorem in terms of odds and likelihood ratio

Bayes' theorem can also be written neatly in terms of a likelihood ratio Λ and odds O as

O(A|B)=O(A) \cdot \Lambda (A|B)

where O(A|B)=\frac{\Pr(A|B)}{\Pr(A^C|B)} \! are the odds of A given B,

and O(A)=\frac{\Pr(A)}{\Pr(A^C)} \! are the odds of A by itself,

while \Lambda (A|B) = \frac{L(A|B)}{L(A^C|B)} = \frac{\Pr(B|A)}{\Pr(B|A^C)} \! is the likelihood ratio.
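
In code, the odds form is a short computation once the likelihood ratio is known. This sketch reuses the numbers from the drug-testing example (#2) below, so the result can be cross-checked there:

    # Bayes' theorem in odds form: O(A|B) = O(A) * Lambda(A|B).
    # Numbers match Example #2 below (0.5% base rate, 99% accurate test).
    pr_A = 0.005                  # Pr(A), the prior
    pr_B_given_A = 0.99           # Pr(B | A)
    pr_B_given_notA = 0.01        # Pr(B | A^C)

    prior_odds = pr_A / (1 - pr_A)                       # O(A)
    likelihood_ratio = pr_B_given_A / pr_B_given_notA    # Lambda(A|B) = 99
    posterior_odds = prior_odds * likelihood_ratio       # O(A|B)

    # Convert odds back to a probability: Pr = O / (1 + O).
    print(posterior_odds / (1 + posterior_odds))         # ~0.332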

Bayes' theorem for probability densities

There is also a version of Bayes' theorem for continuous distributions. It is somewhat harder to derive, since probability densities, strictly speaking, are not probabilities, so Bayes' theorem has to be established by a limit process; see Papoulis (citation below), Section 7.3, for an elementary derivation. Bayes' theorem for probability densities is formally similar to the theorem for probabilities:

f(x|y) = \frac{f(x,y)}{f(y)} = \frac{f(y|x)\,f(x)}{f(y)} \!

and there is an analogous statement of the law of total probability:

f(x|y) = \frac{f(y|x)\,f(x)}{\int_{-\infty}^{\infty} f(y|x)\,f(x)\,dx}. \!

As in the discrete case, the terms have standard names. f(x, y) is the joint distribution of X and Y, f(x|y) is the posterior distribution of X given Y=y, f(y|x) = L(x|y) is (as a function of x) the likelihood function of X given Y=y, and f(x) and f(y) are the marginal distributions of X and Y respectively, with f(x) being the prior distribution of X.

Here we have indulged in a conventional abuse of notation, using f for each one of these terms, although each one is really a different function; the functions are distinguished by the names of their arguments.
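
A grid-based numerical sketch of the continuous version. The particular densities (a standard normal prior on X, and Y given X normal with mean x) are invented for illustration, and the denominator integral is approximated by quadrature:

    import numpy as np

    # Bayes' theorem for densities, evaluated on a grid.
    x = np.linspace(-5.0, 5.0, 2001)
    dx = x[1] - x[0]

    def prior(x):                  # f(x): standard normal prior on X
        return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

    def likelihood(y, x):          # f(y|x): Y ~ Normal(x, 1)
        return np.exp(-(y - x)**2 / 2) / np.sqrt(2 * np.pi)

    y_obs = 1.2
    numerator = likelihood(y_obs, x) * prior(x)   # f(y|x) f(x)
    f_y = numerator.sum() * dx                    # f(y), by quadrature

    posterior = numerator / f_y                   # f(x|y)
    print(posterior.sum() * dx)     # ~1.0: the posterior is a proper density
    print(x[np.argmax(posterior)])  # posterior mode; analytically y_obs/2 = 0.6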

Abstract Bayes' theorem

Given two mutually absolutely continuous probability measures P \sim Q on the probability space (\Omega, \mathcal{F}) and a sigma-algebra \mathcal{G} \subset \mathcal{F}, the abstract Bayes theorem for an \mathcal{F}-measurable random variable X becomes

E_P[X|\mathcal{G}] = \frac{E_Q[\frac{dP}{dQ} X |\mathcal{G}]}{E_Q[\frac{dP}{dQ}|\mathcal{G}]}.

This formulation is used in Kalman filtering to find Zakai equations. It is also used in financial mathematics for change of numeraire techniques.

Extensions of Bayes' theorem

Theorems analogous to Bayes' theorem hold in problems with more than two variables. For example:

\Pr(A|B,C) = \frac{\Pr(A) \, \Pr(B|A) \, \Pr(C|A,B)}{\Pr(B) \, \Pr(C|B)} \!

This can be derived in several steps from Bayes' theorem and the definition of conditional probability:

\Pr(A|B,C) = \frac{\Pr(A,B,C)}{\Pr(B,C)} = \frac{\Pr(A,B,C)}{\Pr(B) \, \Pr(C|B)} = \frac{\Pr(C|A,B) \, \Pr(A,B)}{\Pr(B) \, \Pr(C|B)} = \frac{\Pr(A) \, \Pr(B|A) \, \Pr(C|A,B)}{\Pr(B) \, \Pr(C|B)} .

A general strategy is to work with a decomposition of the joint probability, and to marginalize (integrate) over the variables that are not of interest. Depending on the form of the decomposition, it may be possible to prove that some integrals must be 1, and thus they fall out of the decomposition; exploiting this property can reduce the computations very substantially. A Bayesian network, for example, specifies a factorization of a joint distribution of several variables in which the conditional probability of any one variable given the remaining ones takes a particularly simple form (see Markov blanket).
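
The three-variable identity can be verified numerically; in this Python sketch the joint distribution over three binary variables is randomly generated (an invented example), and both sides of the identity are computed from it:

    import itertools, random

    # Check Pr(A|B,C) = Pr(A) Pr(B|A) Pr(C|A,B) / (Pr(B) Pr(C|B))
    # on a randomly generated joint distribution over three binary variables.
    random.seed(0)
    weights = {abc: random.random() for abc in itertools.product([0, 1], repeat=3)}
    total = sum(weights.values())
    joint = {abc: w / total for abc, w in weights.items()}   # Pr(A,B,C)

    def pr(pred):   # probability of the event {(a,b,c) : pred(a,b,c)}
        return sum(p for abc, p in joint.items() if pred(*abc))

    lhs = pr(lambda a, b, c: a and b and c) / pr(lambda a, b, c: b and c)

    pA    = pr(lambda a, b, c: a)
    pB_A  = pr(lambda a, b, c: a and b) / pA
    pC_AB = pr(lambda a, b, c: a and b and c) / pr(lambda a, b, c: a and b)
    pB    = pr(lambda a, b, c: b)
    pC_B  = pr(lambda a, b, c: b and c) / pB

    rhs = pA * pB_A * pC_AB / (pB * pC_B)
    assert abs(lhs - rhs) < 1e-12
    print(lhs, rhs)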

Examples

Example #1: Conditional probabilities

Suppose there are two bowls full of cookies. Bowl #1 has 10 chocolate chip cookies and 30 plain cookies, while bowl #2 has 20 of each. Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?

Intuitively, it seems clear that the answer should be more than half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. But first, we can clarify the situation by rephrasing the question as "what is the probability that Fred picked bowl #1, given that he has a plain cookie?" Thus, to relate to our previous explanation, the event A is that Fred picked bowl #1, and the event B is that Fred picked a plain cookie. To compute Pr(A|B), we first need to know:

  • Pr(A), or the probability that Fred picked bowl #1 regardless of any other information. Since Fred is treating both bowls equally, it is 0.5.
  • Pr(B), or the probability of getting a plain cookie regardless of which bowl was chosen. By the law of total probability, it is the sum, over the bowls, of the probability of getting a plain cookie from that bowl multiplied by the probability of selecting that bowl. From the problem statement, the probability of getting a plain cookie from bowl #1 is 0.75 and from bowl #2 is 0.5, and since Fred treats both bowls equally the probability of selecting either is 0.5. Thus, the probability of getting a plain cookie overall is 0.75×0.5 + 0.5×0.5 = 0.625. More simply, since the bowls hold equally many cookies, this is just the proportion of all cookies that are plain: 50/80 = 0.625.
  • Pr(B|A), or the probability of getting a plain cookie given that Fred has selected bowl #1. From the problem statement, we know this is 0.75, since 30 out of 40 cookies in bowl #1 are plain.

Given all this information, we can compute the probability of Fred having selected bowl #1 given that he got a plain cookie, as such:

\Pr(A|B) = \frac{\Pr(B | A) \Pr(A)}{\Pr(B)} = \frac{0.75 \times 0.5}{0.625} = 0.6

As we expected, it is more than half.
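
The same computation as a short Python sketch, working directly from the cookie counts given in the problem:

    # Example #1, computed from the raw cookie counts.
    bowls = {
        "bowl1": {"chocolate": 10, "plain": 30},
        "bowl2": {"chocolate": 20, "plain": 20},
    }
    pr_bowl = {"bowl1": 0.5, "bowl2": 0.5}   # Fred picks a bowl uniformly

    def pr_plain_given(bowl):                # Pr(plain | bowl)
        counts = bowls[bowl]
        return counts["plain"] / sum(counts.values())

    # Pr(plain), by the law of total probability
    pr_plain = sum(pr_plain_given(b) * pr_bowl[b] for b in bowls)

    # Bayes' theorem: Pr(bowl #1 | plain)
    print(pr_plain_given("bowl1") * pr_bowl["bowl1"] / pr_plain)   # 0.6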

Tables of occurrences and relative frequencies

It is often helpful when calculating conditional probabilities to create a simple table containing the number of occurrences of each outcome, or the relative frequencies of each outcome, for each of the variables involved. The tables below illustrate the use of this method for the cookies.

Number of cookies in each bowl, by type of cookie

                      Bowl #1   Bowl #2   Totals
    Chocolate Chip         10        20       30
    Plain                  30        20       50
    Total                  40        40       80

Relative frequency of cookies in each bowl, by type of cookie

                      Bowl #1   Bowl #2   Totals
    Chocolate Chip      0.125     0.250    0.375
    Plain               0.375     0.250    0.625
    Total               0.500     0.500    1.000

The table on the right is derived from the table on the left by dividing each entry by the total number of cookies under consideration, i.e. dividing each number by 80.

Example #2: Drug testing

Bayes' theorem is useful in evaluating the result of drug tests. Suppose a certain drug test is 99% accurate, that is, the test will correctly identify a drug user as testing positive 99% of the time, and will correctly identify a non-user as testing negative 99% of the time. This would seem to be a relatively accurate test, but Bayes' theorem will reveal a potential flaw. Let's assume a corporation decides to test its employees for opium use, and 0.5% of the employees use the drug. We want to know the probability that, given a positive drug test, an employee is actually a drug user. Let "D" be the event of being a drug user and "N" indicate being a non-user. Let "+" be the event of a positive drug test. We need to know the following:

  • Pr(D), or the probability that the employee is a drug user, regardless of any other information. This is 0.005, since 0.5% of the employees are drug users.
  • Pr(N), or the probability that the employee is not a drug user. This is 1-Pr(D), or 0.995.
  • Pr(+|D), or the probability that the test is positive, given that the employee is a drug user. This is 0.99, since the test is 99% accurate.
  • Pr(+|N), or the probability that the test is positive, given that the employee is not a drug user. This is 0.01, since the test will produce a false positive for 1% of non-users.
  • Pr(+), or the probability of a positive test result regardless of other information. This is 0.0149 or 1.49%, which is found by adding the probability that the test will produce a true positive in the event of drug use (99% × 0.5% = 0.495%) to the probability that the test will produce a false positive in the event of non-use (1% × 99.5% = 0.995%).

Given this information, we can compute the probability that an employee who tested positive is actually a drug user:

\begin{align}\Pr(D|+) & = \frac{\Pr(+ | D) \Pr(D)}{\Pr(+)} \\ & = \frac{\Pr(+ | D) \Pr(D)}{\Pr(+ | D) \Pr(D) + \Pr(+ | N) \Pr(N)} \\ & = \frac{0.99 \times 0.005}{0.99 \times 0.005 + 0.01 \times 0.995} \\ & = 0.3322\end{align}

Despite the high accuracy of the test, the probability that an employee who tests positive is actually a drug user is only about 33%. The rarer the condition for which we are testing, the greater the percentage of positive tests that will be false positives. This illustrates why it is important to do follow-up tests.
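
The calculation packaged as a small Python function, with the base rate as a parameter; varying it shows how quickly the positive predictive value collapses for rare conditions:

    def pr_user_given_positive(base_rate, sensitivity, specificity):
        """Pr(D|+) by Bayes' theorem for a two-outcome test."""
        pr_pos_user = sensitivity              # Pr(+|D)
        pr_pos_nonuser = 1 - specificity       # Pr(+|N), false-positive rate
        pr_pos = (pr_pos_user * base_rate
                  + pr_pos_nonuser * (1 - base_rate))   # Pr(+)
        return pr_pos_user * base_rate / pr_pos

    # The numbers from the example: 0.5% prevalence, 99% accuracy both ways.
    print(pr_user_given_positive(0.005, 0.99, 0.99))    # ~0.332

    # The rarer the condition, the more positives are false positives:
    for rate in (0.05, 0.005, 0.0005):
        print(rate, round(pr_user_given_positive(rate, 0.99, 0.99), 4))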

Example #3: Bayesian inference

Applications of Bayes' theorem often assume the philosophy underlying Bayesian probability that uncertainty and degrees of belief can be measured as probabilities. One such example follows. For additional worked out examples, including simpler examples, please see the article on the examples of Bayesian inference.

We describe the marginal probability distribution of a variable A as the prior probability distribution or simply the prior. The conditional distribution of A given the "data" B is the posterior probability distribution or just the posterior.

Suppose we wish to know about the proportion r of voters in a large population who will vote "yes" in a referendum. Let n be the number of voters in a random sample (chosen with replacement, so that we have statistical independence) and let m be the number of voters in that random sample who will vote "yes". Suppose that we observe n = 10 voters and m = 7 say they will vote yes. From Bayes' theorem we can calculate the probability density function for r using

f(r | n=10, m=7) =    \frac {f(m=7 | r, n=10) \, f(r)} {\int_0^1 f(m=7|r, n=10) \, f(r) \, dr}. \!

From this we see that from the prior probability density function f(r) and the likelihood function L(r) = f(m = 7|r, n = 10), we can compute the posterior probability density function f(r|n = 10, m = 7).

The prior probability density function f(r) summarizes what we know about the distribution of r in the absence of any observation. We provisionally assume in this case that the prior distribution of r is uniform over the interval [0, 1]; that is, f(r) = 1. If some additional background information is found, we should modify the prior accordingly; however, before we have any observations, all values of r are taken to be equally likely.

Under the assumption of random sampling, choosing voters is just like choosing balls from an urn. The likelihood function L(r) = Pr(m = 7 | r, n = 10) for such a problem is just the probability of 7 successes in 10 trials for a binomial distribution with success probability r.

\Pr( m=7 | r, n=10) = {10 \choose 7} \, r^7 \, (1-r)^3.

As with the prior, the likelihood is open to revision; more complex assumptions will yield more complex likelihood functions. Maintaining the current assumptions, we compute the normalizing factor,

\int_0^1 \Pr( m=7|r, n=10) \, f(r) \, dr = \int_0^1 {10 \choose 7} \, r^7 \, (1-r)^3 \, 1 \, dr = {10 \choose 7} \, \frac{1}{1320} \!

and the posterior distribution for r is then

f(r | n=10, m=7) =   \frac{{10 \choose 7} \, r^7 \, (1-r)^3 \, 1} {{10 \choose 7} \, \frac{1}{1320}} = 1320 \, r^7 \, (1-r)^3

for r between 0 and 1, inclusive.

One may be interested in the probability that more than half the voters will vote "yes". The prior probability that more than half the voters will vote "yes" is 1/2, by the symmetry of the uniform distribution. In comparison, the posterior probability that more than half the voters will vote "yes", i.e., the conditional probability given the outcome of the opinion poll – that seven of the 10 voters questioned will vote "yes" – is

1320\int_{1/2}^1 r^7(1-r)^3\,dr \approx 0.887, \!

which is about an 89% chance.
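
A numerical sketch of this example, assuming scipy is available. A uniform prior on r updated with 7 successes in 10 trials yields the Beta(8, 4) distribution, which is exactly the posterior 1320 r^7 (1-r)^3 above, and its upper tail gives the probability just computed:

    from scipy.stats import beta
    from scipy.special import beta as beta_fn

    # Uniform prior + binomial likelihood (m=7 of n=10) => Beta(8, 4) posterior.
    posterior = beta(8, 4)

    # Posterior probability that more than half the voters vote "yes":
    print(posterior.sf(0.5))   # ~0.887  (sf(t) = 1 - cdf(t))

    # Sanity check of the normalizing constant: 1 / B(8, 4) = 1320.
    print(1 / beta_fn(8, 4))   # 1320.0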

Example #4: The Monty Hall problem

We are presented with three doors (red, green, and blue), one of which has a prize behind it. We choose the red door. A presenter, who knows which door the prize is behind and who must open a door, but who is not permitted to open the door we have picked or the door with the prize, opens the green door and reveals that there is no prize behind it. What is the probability that the prize is behind the blue door?

Let A_r, A_g, and A_b denote the events that the prize is behind the red, green, and blue door, respectively.

To start with, \Pr(A_r) = \Pr(A_g) = \Pr(A_b) = \frac 1 3, and to make things simpler we shall assume that we have already picked the red door.

Let B be the event "the presenter opens the green door". Absent any information about the host's preferences, we treat him as unbiased and assign Pr(B) = 1/3 × 1/2 + 1/3 × 0 + 1/3 × 1 = 1/2, where the conditional probabilities are as follows:

  • In the situation where the prize is behind the red door, the host is free to pick the green or blue door at random. Thus, \Pr(B|A_r) = 1/2
  • In the situation where the prize is behind the green door, the host must pick the blue door. Thus, \Pr(B|A_g) = 0
  • In the situation where the prize is behind the blue door, the host must pick the green door. Thus, \Pr(B|A_b) = 1

Thus,

\begin{matrix}   \Pr(A_r|B) & =  \frac{\Pr(B | A_r) \Pr(A_r)}{\Pr(B)} & =   \frac{\frac 1 2 \frac 1 3}{\frac 1 2} & = \frac 1 3 \\   \Pr(A_g|B) & =  \frac{\Pr(B | A_g) \Pr(A_g)}{\Pr(B)} & =   \frac{0 \frac 1 3}{\frac 1 2} & = 0 \\   \Pr(A_b|B) & =  \frac{\Pr(B | A_b) \Pr(A_b)}{\Pr(B)} & =   \frac{1 \frac 1 3}{\frac 1 2} & = \frac 2 3 \end{matrix}

Note how the posteriors depend on the value of Pr(B). Let us suppose that if the prize is behind the red door, then the probability that the host will pick the green door is very high: 90%, for instance. That is, the host will pick the green door unless he is forced not to.

Pr(B) then becomes 1/3 × 1 + 1/3 × 0 + 1/3 × 9/10 = 19/30.

\begin{matrix}   \Pr(A_r|B) & =  \frac{\Pr(B | A_r) \Pr(A_r)}{\Pr(B)} & =   \frac{\frac 9 {10} \frac 1 3}{\frac {19} {30}} & = \frac 9 {19} \\   \Pr(A_g|B) & =  \frac{\Pr(B | A_g) \Pr(A_g)}{\Pr(B)} & =   \frac{0 \frac 1 3}{\frac {19} {30}} & = 0 \\   \Pr(A_b|B) & =  \frac{\Pr(B | A_b) \Pr(A_b)}{\Pr(B)} & =   \frac{1 \frac 1 3}{\frac {19} {30}} & = \frac {10} {19} \end{matrix}

So in this situation, the host picking the green door tells us very little; he would probably have picked it anyway. Pr(A_b|B) = 10/19 is only slightly better than 1/2.

Let us, by contrast, suppose that if the prize is behind the red door, then the probability that the host will pick the green door is very low: 10%, for instance. That is, the host will almost never pick the green door unless he is forced to.

Pr(B) then becomes 1/3 × 1 + 1/3 × 0 + 1/3 × 1/10 = 11/30.

\begin{matrix}   \Pr(A_r|B) & =  \frac{\Pr(B | A_r) \Pr(A_r)}{\Pr(B)} & =   \frac{\frac 1 {10} \frac 1 3}{\frac {11} {30}} & = \frac 1 {11} \\   \Pr(A_g|B) & =  \frac{\Pr(B | A_g) \Pr(A_g)}{\Pr(B)} & =   \frac{0 \frac 1 3}{\frac {11} {30}} & = 0 \\   \Pr(A_b|B) & =  \frac{\Pr(B | A_b) \Pr(A_b)}{\Pr(B)} & =   \frac{1 \frac 1 3}{\frac {11} {30}} & = \frac {10} {11} \end{matrix}

In this situation, the fact that the host has chosen the green door tells us a great deal. The prize is almost certainly behind the blue door; if it were not (that is, if it were behind the red door), the host would very probably have opened the blue door instead.
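
All three host behaviours fit in one Python sketch, in which the probability that the host opens the green door when free to choose (that is, when the prize is behind the red door) is a parameter:

    def monty_posteriors(p_green_when_free):
        """Posterior over the prize door, given the host opened the green one.

        p_green_when_free is Pr(B | A_r): the host's probability of opening
        the green door when the prize is behind our (red) door.
        """
        prior = 1 / 3                       # Pr(A_r) = Pr(A_g) = Pr(A_b)
        lik = {"red": p_green_when_free,    # Pr(B | A_r)
               "green": 0.0,                # Pr(B | A_g): never opens the prize door
               "blue": 1.0}                 # Pr(B | A_b): forced to open green
        pr_B = sum(lik[d] * prior for d in lik)
        return {d: lik[d] * prior / pr_B for d in lik}

    print(monty_posteriors(0.5))   # unbiased host: blue -> 2/3
    print(monty_posteriors(0.9))   # green-loving host: blue -> 10/19
    print(monty_posteriors(0.1))   # green-avoiding host: blue -> 10/11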

Historical remarks

Bayes' theorem is named after the Reverend Thomas Bayes (1702–1761), who studied how to compute a distribution for the parameter of a binomial distribution (to use modern terminology). His friend, Richard Price, edited and presented the work in 1763, after Bayes' death, as An Essay towards solving a Problem in the Doctrine of Chances. Pierre-Simon Laplace replicated and extended these results in an essay of 1774, apparently unaware of Bayes' work.

One of Bayes' results (Proposition 5) gives a simple description of conditional probability, and shows that it can be expressed independently of the order in which things occur:

If there be two subsequent events, the probability of the second b/N and the probability of both together P/N, and it being first discovered that the second event has also happened, from hence I guess that the first event has also happened, the probability I am right [i.e., the conditional probability of the first event being true given that the second has also happened] is P/b.

Note that the expression says nothing about the order in which the events occurred; it measures correlation, not causation. His preliminary results, in particular Propositions 3, 4, and 5, imply the result now called Bayes' Theorem (as described above), but it does not appear that Bayes himself emphasized or focused on that result.

Bayes' main result (Proposition 9 in the essay) is the following: assuming a uniform distribution for the prior distribution of the binomial parameter p, the probability that p is between two values a and b is

\frac {\int_a^b {n+m \choose m} p^m (1-p)^n\,dp}  {\int_0^1 {n+m \choose m} p^m (1-p)^n\,dp} \!

where m is the number of observed successes and n the number of observed failures.
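
In modern terms the two integrals are incomplete beta functions (the binomial coefficients cancel in the ratio), so Proposition 9 can be evaluated with the regularized incomplete beta function; a sketch, assuming scipy is available:

    from scipy.special import betainc

    def bayes_prop9(m, n, a, b):
        """Pr(a < p < b) under a uniform prior, after m successes, n failures.

        betainc(x, y, t) is the regularized incomplete beta function I_t(x, y),
        which equals the ratio in Proposition 9 with upper limit t.
        """
        return betainc(m + 1, n + 1, b) - betainc(m + 1, n + 1, a)

    # With 7 successes and 3 failures, Pr(1/2 < p < 1) matches Example #3:
    print(bayes_prop9(7, 3, 0.5, 1.0))   # ~0.887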

What is "Bayesian" about Proposition 9 is that Bayes presented it as a probability for the parameter p. So, one can compute probability for an experimental outcome, but also for the parameter which governs it, and the same algebra is used to make inferences of either kind.

Bayes states his question in a way that might make the idea of assigning a probability distribution to a parameter palatable to a frequentist. He supposes that a billiard ball is thrown at random onto a billiard table, and that the probabilities p and q are the probabilities that subsequent billiard balls will fall above or below the first ball.


References

    Versions of the essay

    • Thomas Bayes (1763), "An Essay towards solving a Problem in the Doctrine of Chances. By the late Rev. Mr. Bayes, F. R. S. communicated by Mr. Price, in a letter to John Canton, A. M. F. R. S.", Philosophical Transactions, Giving Some Account of the Present Undertakings, Studies and Labours of the Ingenious in Many Considerable Parts of the World 53:370–418.
    • Thomas Bayes (1763/1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes' Essay Towards Solving a Problem in the Doctrine of Chances", Biometrika 45:296–315. (Bayes' essay in modernized notation)
    • Thomas Bayes "An essay towards solving a Problem in the Doctrine of Chances". (Bayes' essay in the original notation)

    Commentaries

    • G. A. Barnard (1958) "Studies in the History of Probability and Statistics: IX. Thomas Bayes' Essay Towards Solving a Problem in the Doctrine of Chances", Biometrika 45:293–295. (biographical remarks)
    • Daniel Covarrubias. "An Essay Towards Solving a Problem in the Doctrine of Chances". (an outline and exposition of Bayes' essay)
    • Stephen M. Stigler (1982). "Thomas Bayes' Bayesian Inference," Journal of the Royal Statistical Society, Series A, 145:250–258. (Stigler argues for a revised interpretation of the essay; recommended)
    • Isaac Todhunter (1865). A History of the Mathematical Theory of Probability from the time of Pascal to that of Laplace, Macmillan. Reprinted 1949, 1956 by Chelsea and 2001 by Thoemmes.

    Additional material