Talk:Bias (statistics)


In the discussion of estimation of sample variance I don't believe we need the assumption of normality. Just i.i.d. will do, will it not?

--Richard Clegg 17:48, 10 Feb 2005 (UTC)

As stated in my recent edit summary: We don't need any assumption of normality in order to conclude that n − 1 is the denominator that makes the estimator of σ² unbiased, but when we get to the discussion of mean squared error, saying that the biased estimator with n in the denominator is better by that criterion, then assumptions about i.i.d. and finite variance are not enough. Accordingly, I have moved the part about the normality assumption to a later point in the article. Michael Hardy 02:01, 11 Feb 2005 (UTC)
Ah... I understand. Thanks for making the alteration. I think it is clearer now.
--Richard Clegg 10:59, 11 Feb 2005 (UTC)
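
A minimal simulation sketch may make both points concrete: the n − 1 denominator gives an unbiased estimate of σ², while the n denominator has lower mean squared error for normal data. The normal population, sample size, and seed below are arbitrary assumptions chosen for illustration.

```python
# Compare the two variance estimators by simulation: bias and MSE.
import numpy as np

rng = np.random.default_rng(0)
n, trials, sigma2 = 5, 200_000, 4.0

x = rng.normal(loc=10.0, scale=np.sqrt(sigma2), size=(trials, n))
dev2 = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

s2_unbiased = dev2 / (n - 1)   # denominator n - 1: unbiased for sigma^2
s2_biased = dev2 / n           # denominator n: biased, but smaller MSE here

for name, est in [("n - 1", s2_unbiased), ("n", s2_biased)]:
    bias = est.mean() - sigma2
    mse = ((est - sigma2) ** 2).mean()
    print(f"denominator {name}: bias ~ {bias:+.3f}, MSE ~ {mse:.3f}")
```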

Bias vs power

In my understanding, the mean square measure of "bias" of a given statistic is in fact a measure of its expected power. It represents the variability of the statistic's sampling distribution. While this is an important factor to consider when choosing your statistic, it is not a measure of bias per se.

Thoughts?

Informavore aka Mike Lawrence 23:52, 24 March 2006 (UTC)
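
For reference, the usual decomposition MSE(T) = E[(T − θ)²] = Var(T) + (E[T] − θ)² may help here: the mean squared error combines the variance of the sampling distribution with the squared bias, so it is not a measure of bias alone, although bias is one of its two components.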

Another kind of bias?

Trying to find the average weekly alcohol consumption of the students at a high school, you interview all the students. Results may be skewed by some untrue answers - depending on the context, one could have deliberate exaggerations of the alcohol consumption, or the opposite, or respondents might give incorrect answers due to incomplete memories. Suppose, for example, that most students tend to exaggerate, leading to too high an average.

Would it be correct to describe this error as a bias? I'd say yes, the students are biased towards exaggeration, but this is neither due to a biased sample (we are asking the whole population), nor to a biased estimator (we are looking at simple averages).

Is this, then, a third type of bias that ought to be included in the article? Unlike the other two types, it is due to an error in each individual observation, not in the way they are bunched together. But all the same, it is a systematic error in a statistical investigation.--Niels Ø 11:10, 27 October 2006 (UTC)
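
A small sketch of this point, under assumed numbers: even a full census (no sampling at all) yields a biased average if each individual answer is systematically exaggerated. The population size, the true consumption distribution, and the exaggeration factor below are invented for illustration, not data.

```python
# A census with systematically exaggerated answers still gives a biased mean.
import numpy as np

rng = np.random.default_rng(1)
true_weekly = rng.gamma(shape=2.0, scale=1.5, size=1000)          # hypothetical true values
reported = 1.3 * true_weekly + rng.normal(0.0, 0.5, size=1000)    # exaggeration plus noise

print(f"true mean:     {true_weekly.mean():.2f}")
print(f"reported mean: {reported.mean():.2f}  (biased upward despite asking everyone)")
```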

A second thought: The kind of bias I talk about here is an important type of bias influencing statistical investigations, but it is not in itself of a statistical nature. In fact it is the kind of bias covered in Systematic bias. I think it should be mentioned briefly in the article Bias (statistics), perhaps: Statistical investigations may also be influenced by a systematic bias acting on each individual observation, as e.g. in a survey with leading questions.
Any opinions on this?--Niels Ø 18:54, 27 October 2006 (UTC)
Go for it! :-) JXM 15:33, 28 October 2006 (UTC)
Done (slightly different wording).--Niels Ø 15:54, 28 October 2006 (UTC)

Unbiased estimator in ticket example

I am not on top of this theory; I haven't even read and understood everything in the article. However, I have doubts about the following example:

The bias of maximum-likelihood estimators can be substantial. Consider a case where n tickets numbered from 1 through to n are placed in a box and one is selected at random, giving a value X. If n is unknown, then the maximum-likelihood estimator of n is X, even though the expectation of X is only (n+1)/2; we can only be certain that n is at least X and is probably more. In this case, the natural unbiased estimator is 2X − 1.

(My bolding.) Unless we have an a priori assumption about the distribution of n, is it possible to have an unbiased estimator for n? Suppose we do not know n, but we know that it is either 100 or 1000, with equal probabilities. We observe X=17, and so conclude that we are not much wiser, but I guess it strengthens the possibility 100 slightly. Certainly 2X-1 = 33 is not an unbiased estimator. Or suppose we observe X=117; then we know n=1000, and 2X-1 is even worse. Is this a silly objection? I think not; we will always have a priori knowledge (such as: "n is not likely to be more than a million"), and I don't think you can show that an estimator is unbiased without quantifying a priori probabilities. So 2X-1 may possibly be called a natural estimator (whatever that is), but I don't think you can call it an unbiased estimator.--Niels Ø 08:59, 5 November 2006 (UTC)

I believe you are confusing two different types of estimation problems. The problem you described is called Bayesian estimation. It occurs when we know the prior distribution of the parameter to be estimated. The second type, sometimes called deterministic estimation, is used when we have no a priori information about the parameter (or wish to avoid utilizing any a priori "feeling" as to what the outcome should be). An estimator's bias can be defined only with respect to deterministic estimation. Of course, these are all just definitions, but I assure you the definition given here is widely accepted. --Zvika 09:36, 7 November 2006 (UTC)
In what you call deterministic estimation, it is possible to have an unbiased estimator of mean or variance, but here we are estimating a model parameter. Of course, with the given model, the mean is (n+1)/2, so as X is a central estimator for the mean, 2X-1 may seem to be a central estimator for n. But suppose we observe three numbers, 1, 4, and 1000. The mean is 335, so the central estimator for n would be 669, though we are sure n is at least 1000. That's clearly nonsense!--Niels Ø 14:25, 7 November 2006 (UTC)
Maximum likelihood sometimes results in unintuitive estimators, but never results in estimators that contradict observations. In the present example, you incorrectly assume that the maximum likelihood estimator when several measurements are available is twice the mean of the measurements. In fact, if you work out the maximum likelihood in that case, it will come out max(xᵢ), so in your present example the estimate would be 1000. An unbiased estimator based on this estimate would be ((n+1)/n)·max(xᵢ) or, in your case, about 1333. Take a look at maximum likelihood if you want to see some more examples. --Zvika 17:58, 7 November 2006 (UTC)
The point is, we were not discussing max likelihood estimators, we were discussing unbiased estimators. (Sorry if I confused matters by calling them "central" instead of "unbiased"; that's the terminology used in Danish literature.)--Niels Ø 18:18, 7 November 2006 (UTC)
There can be many unbiased estimators for the same estimation problem. Not all of them are "good". In the example we are discussing, both 2x̄ − 1 and ((n+1)/n)·max(xᵢ) are unbiased, but the second estimator dominates (i.e. is always better than) the first. Another unbiased estimator would be 2x₁ − 1, i.e., use only the first measurement; it is still unbiased, but clearly it is even worse than the estimator you proposed. --Zvika 06:59, 8 November 2006 (UTC)
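
A rough simulation sketch of this comparison, under assumptions of my own: the tickets are drawn without replacement, and a "− 1" term is added to the max-based estimator so that it is exactly unbiased in that setting (as in the German tank problem); the particular n, number of draws k, and trial count are arbitrary.

```python
# Compare two unbiased estimators of n from k numbered tickets:
# 2*xbar - 1 versus the max-based estimator (k+1)/k * max - 1.
import numpy as np

rng = np.random.default_rng(2)
true_n, k, trials = 1000, 3, 20_000
tickets = np.arange(1, true_n + 1)

est_mean, est_max = [], []
for _ in range(trials):
    x = rng.choice(tickets, size=k, replace=False)
    est_mean.append(2 * x.mean() - 1)
    est_max.append((k + 1) / k * x.max() - 1)

for name, est in [("2*xbar - 1", np.array(est_mean)), ("(k+1)/k*max - 1", np.array(est_max))]:
    print(f"{name:>16}: mean ~ {est.mean():.1f}, MSE ~ {((est - true_n) ** 2).mean():.0f}")
```

Both come out approximately unbiased, but the max-based estimator has a much smaller mean squared error.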

I suddenly get it; of course you are right. The way to see it is, n actually has some specific value (not a distribution), and the estimator 2X-1 has the expected value n, and hence it is unbiased. I was confused among other things because I had the following confusing (and only remotely connected) problem in mind: In a quiz program or something, you are presented with two envelopes. One contains a sum of money; the other twice that sum. You are allowed to open one envelope (containing X), and then to choose which of the envelopes to keep. Without an a priori distribution, it would be tempting to say "the other" envelope has 50% chance of containing 2*X, and 50% for 0.5*X, giving an expected value of 1.25*X, so switching seems like a good idea. But the quiz program must have a limited budget, so if X = 10 000 000 dollars, perhaps you should stay with the envelope you opened. Or maybe not... Anyway, you should include an a priori distribution in your considerations. As far as I recall, it is discussed in a Martin Gardner book.--Niels Ø 08:30, 8 November 2006 (UTC)
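
The single-draw version of this is easy to check by simulation; a minimal sketch, with the fixed value of n and the trial count chosen arbitrarily:

```python
# For a fixed n, one ticket X drawn uniformly from {1, ..., n}:
# E[X] = (n+1)/2 and E[2X - 1] = n, so 2X - 1 is unbiased.
import numpy as np

rng = np.random.default_rng(3)
n, trials = 37, 500_000

x = rng.integers(1, n + 1, size=trials)   # one ticket per trial
print(f"E[X]      ~ {x.mean():.2f}  (theory: (n+1)/2 = {(n + 1) / 2})")
print(f"E[2X - 1] ~ {(2 * x - 1).mean():.2f}  (theory: n = {n})")
```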

Yes, this is more or less what I was trying to say in my distinction between deterministic and Bayesian estimation. When you have a prior distribution, the resulting (Bayesian) estimation problem is different (and considerably easier).
Incidentally, the envelope problem (which I like very much) is discussed at length in Two envelopes problem. You can't wriggle out of it by saying that you "should include an a priori distribution"; if I know the a priori distribution, that's fine, but what if I don't? In that case, I should be able to use deterministic estimation methods, and this is where the "paradox" comes in. Of course, it isn't really a paradox, only a demonstration of the inaccuracy of the principle of indifference. --Zvika 09:58, 8 November 2006 (UTC)