Checking if a coin is fair

In statistics, a fair coin is an idealized randomizing device with two states (usually named "heads" and "tails") which are equally likely to occur. It is based on the ubiquitous coin flip used in sports and other situations where it is necessary to give two parties the same chance of winning. Depending on the occasion, a specially designed chip or a simple currency coin is used, which due to unequal weight distribution might be "unfair": one state might occur more frequently than the other, giving one party an unfair advantage. So it might be necessary to experimentally determine whether the coin is in fact "fair" – that is, whether the probability of the coin falling on either side in a toss is approximately 50%. It is of course impossible to ever definitively rule out arbitrarily small deviations from fairness, such as might be expected to affect only one flip in a lifetime of flipping, and it is always possible for an unfair (or "biased") coin to happen to turn up exactly 10 heads in 20 flips. As such, any fairness test can only establish a certain degree of confidence in a certain degree of fairness (a certain maximum bias). In more rigorous terminology, the problem is that of determining the parameters of a Bernoulli process, given only a limited sample of Bernoulli trials.

Preamble

This article describes experimental procedures for determining if a coin is fair. There are many statistical methods for analyzing such an experimental procedure. This article illustrates two of them.

Both methods prescribe an experiment (or trial) in which the coin is tossed many times and the result of each toss is recorded. A statistical analysis of the results can then be performed to decide whether the coin is "fair" or "probably not fair".

  • Posterior probability density function. This method assumes that the number of tosses is fixed and not under the experimenter's direct control, and that a prior distribution for the coin's true probability of landing heads is given. The probability that this particular coin is a "fair coin" can then be obtained by integrating the posterior probability density function over the relevant interval.
  • Estimator of true probability. This method assumes that the experimenter can decide and implement any number of coin tosses for this particular coin. The experimenter decides on the level of confidence required and the tolerable margin of error. These considerations determine the minimum number of tosses that must be performed to complete the experiment.

Posterior probability density function

One method is to calculate the posterior probability density function of Bayesian probability theory.

A test is performed by tossing the coin n times and noting the number of heads h and tails t:

H = h (Total number of heads is h)
T = t (Total number of tails is t)
N = n = h + t (Total number of tosses is n)

Next, let r be the actual probability of obtaining heads in a single toss of the coin. This is the quantity we wish to estimate. Using Bayes' theorem, the posterior probability density of r conditional on H and T is expressed as follows:

 f(r \mid H=h, T=t) = \frac{\Pr(H=h \mid r, N=h+t)\, f(r)}{\int_0^1 \Pr(H=h \mid r, N=h+t)\, f(r)\, dr}.

The prior summarizes what is known about the distribution of r in the absence of any observation. We will assume that the prior distribution of r is uniform over the interval [0, 1]. That is, f(r) = 1. (In fact, we could use a prior distribution that reflects our experience with real coins.)

The probability of obtaining h heads in n tosses of a coin with a probability of heads equal to r is given by a binomial distribution:

 \Pr(H=h \mid r, N=h+t) = {h+t \choose h}\, r^h\, (1-r)^t.

Putting it all together:

 f(r \mid H=h, T=t) = \frac{{h+t \choose h}\, r^h\, (1-r)^t}{\int_0^1 {h+t \choose h}\, r^h\, (1-r)^t\, dr} = \frac{r^h\, (1-r)^t}{\int_0^1 r^h\, (1-r)^t\, dr}.

This is in fact a beta distribution (the conjugate prior for the binomial distribution), whose denominator can be expressed in terms of the beta function:

 f(r \mid H=h, T=t) = \frac{1}{\mathrm{B}(h+1,\,t+1)}\; r^h\, (1-r)^t.

Since the prior is uniform and h and t are integers, this can also be written in terms of factorials:

 f(r \mid H=h, T=t) = \frac{(h+t+1)!}{h!\,t!}\; r^h\, (1-r)^t.
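Numerically, this means the posterior under a uniform prior can be handled with any library that implements the beta distribution; a minimal sketch, assuming SciPy is available:

```python
# Posterior of r under a uniform prior: a Beta(h + 1, t + 1) distribution.
# A sketch assuming SciPy; h and t are the observed counts of heads and tails.
from scipy.stats import beta

h, t = 7, 3                      # e.g. 7 heads and 3 tails (see the example below)
posterior = beta(h + 1, t + 1)   # f(r | H=h, T=t)
print(posterior.pdf(0.7))        # density at r = 0.7, the posterior mode h/(h+t)
```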

Example

For example, let n=10, h=7, i.e. the coin is tossed 10 times and 7 heads are obtained:

 f(r \mid H=7, T=3) = \frac{(7+3+1)!}{7!\,3!}\; r^7\, (1-r)^3 = 1320\, r^7\, (1-r)^3.

The graph below shows the probability density function of r given that 7 heads were obtained in 10 tosses. (Note: r is the probability of obtaining heads when tossing the same coin once.)

Plot of y = 1320 x^7 (1 − x)^3 with x ranging from 0 to 1

The probability that the coin is close to fair, in the sense that 0.45 < r < 0.55, is

 \Pr(0.45 < r < 0.55) = \int_{0.45}^{0.55} f(r \mid H=7, T=3)\, dr \approx 13\%.

This is small compared with the probability of the alternative hypothesis (a biased coin), but not small enough to make us actually believe that the coin has a significant bias. Notice that this probability is slightly higher than the prior probability of fairness in this sense, which under the uniform prior distribution was 10%. Using a prior distribution that reflects our experience with real coins, the posterior distribution would not favor the hypothesis of bias. (Note, however, that the number of trials in this example is relatively small, and with more trials the choice of prior distribution would be less relevant.)
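This figure is easy to check numerically; a quick sketch, again assuming SciPy:

```python
# Pr(0.45 < r < 0.55) under the Beta(8, 4) posterior from the example above.
from scipy.stats import beta

posterior = beta(7 + 1, 3 + 1)
prob_fair = posterior.cdf(0.55) - posterior.cdf(0.45)
print(f"Pr(0.45 < r < 0.55) = {prob_fair:.3f}")   # ~0.130
```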

Estimator of true probability

The best estimator of the actual value r is the estimator p = \frac{h}{h+t}.

This estimator has an error E, where |p - r| < E at a particular confidence level.

To determine the number of times a coin should be tossed, two vital pieces of information are required:

  1. The desired confidence level, expressed as a Z-value (standard score)
  2. The maximum acceptable error (E)
  • The confidence level is given by the Z-value of a standard normal distribution. This value can be read from a table of standard scores for the normal distribution, such as the one below.
Z value   Confidence level   Comment
0.6745    50.000%            Half
1.0000    68.269%            One standard deviation
1.6449    90.000%            "One nine"
1.9599    95.000%            95 percent
2.0000    95.450%            Two standard deviations
2.5759    99.000%            "Two nines"
3.0000    99.730%            Three standard deviations
3.2905    99.900%            "Three nines"
3.8906    99.990%            "Four nines"
4.0000    99.993%            Four standard deviations
4.4172    99.999%            "Five nines"
  • The maximum error (E) is defined by |p - p_{\mathrm{actual}}| < E, where p is the estimated probability of obtaining heads. Note: p_{\mathrm{actual}} is the same actual probability (of obtaining heads) as r in the previous section of this article.
  • In statistics, the estimate of a proportion of a sample (denoted by p) has a standard error (the standard deviation of the estimate) given by

 s_p = \sqrt{\frac{p\,(1-p)}{n}}

where n is the number of trials (here, coin tosses).

This standard error s_p has its maximum theoretical value when p = (1 - p) = 0.5. Hence, assuming the worst case, p is set to 0.5 to obtain the maximum possible value of s_p:

 s_p = \sqrt{\frac{p\,(1-p)}{n}} = \sqrt{\frac{0.5 \times 0.5}{n}} = \sqrt{\frac{1}{4\,n}} = \frac{1}{2\sqrt{n}}

Hence the maximum error E is given by

 E = Z\, s_p = \frac{Z}{2\sqrt{n}}

Therefore, the final formula for the number of coin tosses for the estimator p is

 E = \frac{Z}{2\sqrt{n}} \quad \quad \mbox{or} \quad \quad n = \frac{Z^2}{4\,E^2},

provided that n \cdot p \ge 5 and n \cdot q \ge 5, where q = 1 - p, so that the normal approximation to the binomial distribution is adequate (a standard rule of thumb for invoking the central limit theorem).
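As an illustration, this formula is easy to wrap in a helper; a sketch assuming SciPy's norm.ppf for the Z-value (the function name tosses_needed is ours):

```python
# n = Z^2 / (4 E^2): tosses needed for a given confidence level and
# maximum error E. A sketch; norm.ppf returns the two-sided Z-value.
from scipy.stats import norm

def tosses_needed(confidence: float, max_error: float) -> float:
    z = norm.ppf(0.5 + confidence / 2)   # e.g. ~1.96 for confidence = 0.95
    return z**2 / (4 * max_error**2)     # round up in practice

print(tosses_needed(0.9545, 0.01))       # ~10000 tosses (Z ~ 2)
```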

Example

1. If a maximum error of 0.01 is desired, how many times should the coin be tossed?

 n = \frac{Z^2}{4\,E^2} = \frac{Z^2}{4 \times 0.01^2} = 2500\, Z^2

 n = 2500 at 68.27% level of confidence (Z = 1)
 n = 10000 at 95.45% level of confidence (Z = 2)
 n = 27225 at 99.90% level of confidence (Z = 3.3)

2. If the coin is tossed 10000 times, what is the maximum error of the estimator p on the value of r (the actual probability of obtaining heads in a coin toss)?

 E = \frac{Z}{2\sqrt{n}} = \frac{Z}{2\sqrt{10000}} = \frac{Z}{200}

 E = 0.0050 at 68.27% level of confidence (Z = 1)
 E = 0.0100 at 95.45% level of confidence (Z = 2)
 E = 0.0165 at 99.90% level of confidence (Z = 3.3)
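Both worked examples can be reproduced in a few lines; a sketch using the Z-values from the table above:

```python
# Example 1: tosses needed for E = 0.01; example 2: E after 10000 tosses.
for z in (1.0, 2.0, 3.3):
    n = z**2 / (4 * 0.01**2)    # required tosses for a maximum error of 0.01
    e = z / (2 * 10000**0.5)    # maximum error after 10000 tosses
    print(f"Z = {z}: n = {n:.0f}, E = {e:.4f}")
```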

3. The coin is tossed 12000 times with a result of 5961 heads (and 6039 tails). What interval does the value of r (the true probability of obtaining heads) lie within if a confidence level of 99.999% is desired?

 p = \frac{h}{h+t} = \frac{5961}{12000} = 0.4968

Now find the value of Z corresponding to the 99.999% level of confidence:

 Z = 4.4172

Now calculate E:

 E = \frac{Z}{2\sqrt{n}} = \frac{4.4172}{2\sqrt{12000}} = 0.0202

The interval which contains r is thus:

 p - E < r < p + E
 0.4766 < r < 0.5170

Hence, 99.999% of the time, an interval constructed in this manner would contain r, the true probability of obtaining heads in a single toss.
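The same computation as a short sketch, with SciPy assumed for the Z-value:

```python
# Example 3: 99.999% confidence interval for r, given 5961 heads in 12000 tosses.
from scipy.stats import norm

h, n = 5961, 12000
p = h / n                                 # point estimate, ~0.4968
z = norm.ppf(0.5 + 0.99999 / 2)           # ~4.4172
e = z / (2 * n**0.5)                      # ~0.0202
print(f"{p - e:.4f} < r < {p + e:.4f}")   # 0.4766 < r < 0.5169, matching up to rounding
```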

Other applications

The above mathematical analysis for determining if a coin is fair can also be applied to other problems. For example:

  • Determining the defect rate of a product when subjected to a particular (but well defined) condition. Sometimes a product is very difficult or expensive to produce, and testing destroys the items tested, so the number of products tested should be kept to a minimum. The same analysis yields the probability density function of the product defect rate.
  • Two-party polling. If a small random sample poll is taken where there are only two mutually exclusive choices, this is equivalent to tossing a single biased coin multiple times. The same analysis can therefore be applied to determine the actual voting ratio (see the sketch after this list).
  • Finding the proportion of females in an animal group, i.e. determining the sex ratio in a large group of an animal species. Provided that the random sample is very small relative to the population, so that the draws are effectively independent, the analysis is the same as determining the probability of obtaining heads in a coin toss.
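For instance, the polling case reduces to the same sample-size formula; a sketch with hypothetical numbers (a ±3% margin of error at 95% confidence):

```python
# Sample size for a two-choice poll with a +/-3% margin of error at
# 95% confidence, using n = Z^2 / (4 E^2). The numbers are hypothetical.
z = 1.9599                  # Z for a 95% confidence level (from the table above)
e = 0.03                    # desired margin of error
print(z**2 / (4 * e**2))    # ~1067 respondents
```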
