Talk:Estimator
From Wikipedia, the free encyclopedia
- θ̂ is an unbiased estimator of θ iff B(θ̂) = 0 for all θ
I'm not sure why we need "for all θ". I thought it was implied that there was only one parameter θ. Perhaps the discussion should be framed in terms of multiple parameters θ1, θ2, etc., or in terms of a θ vector. But it seems that, as it is currently framed, θ is just one parameter, say μ, the population mean. So why do we need to "for all" over a set of one? --Ryguasu 14:33 Dec 10, 2002 (UTC)
- I mean for all values of theta. B depends on the estimator (function of data) but also on the theta we estimate. Patrick 14:39 Dec 10, 2002 (UTC)
- Oh. I thought the θ in the expression for B was the true value of the population parameter, not an estimate thereof. Is this incorrect? It seems like it could be usefully defined this way, if you happened to know what the population was. --Ryguasu 15:05 Dec 10, 2002 (UTC)
- Yes, θ is the true value of the population parameter, but you don't know it, otherwise you don't have to estimate it. Without knowing it you design a procedure (the estimator) to compute an estimate from the data; for a fixed θ the data depend on chance, hence also the resulting estimate. If the expected value of this estimate is the actual value, and this holds for all θ, the estimator is unbiased. Patrick 20:31 Dec 10, 2002 (UTC)
"For all θ" is absolutely necessary. The point is that you must be able to know that the expected value of the estimator is θ without knowing the value of θ. Michael Hardy 19:48 Feb 12, 2003 (UTC)
Is "The standard deviation of θ is also called the standard error of θ" true? I would have guessed that the first was the square root of V(θ) and the second the square root of MSE(θ), which would be different if θ is biased, but I am happy to be enlightened. --Henrygb 17:18, 5 Aug 2004 (UTC)
Nope, the standard error is the SD divided by the square root of N.
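For the sample mean, the two notions the reply is invoking can be checked numerically: the standard deviation of the sampling distribution of θ̂ = x̄ agrees with σ/√N (a sketch, with arbitrary illustrative values for σ and N).

```python
import numpy as np

rng = np.random.default_rng(1)

# The SD of the sampling distribution of the sample mean should match
# the classical standard-error formula sigma / sqrt(N).
sigma, N, trials = 3.0, 25, 100_000
means = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)

print(f"SD of sampling distribution of the mean ~ {means.std():.3f}")
print(f"sigma / sqrt(N) = {sigma / np.sqrt(N):.3f}")
```

Both quantities land near 0.6 here, which is consistent with the reply; Henrygb's question about biased estimators is a separate issue, since x̄ is unbiased.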
The definition of the MSE (MSE(θ̂) = E[(θ̂ − θ)²]) seems quite unclear: what does the second θ stand for?
I just wanted to mention that for an unbiased estimator, the MSE IS the variance. This is important and the article neglects this (though it is obvious from property 5) and indeed seems to imply the opposite in the section titled "Efficiency". Also, I don't know the protocol on one discussion referring to another, but regarding the post above this, the two thetas are the estimator and the value of the parameter. One is a statistic, the other is just a real number.
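The decomposition behind this comment, MSE(θ̂) = Var(θ̂) + B(θ̂)², can be checked numerically; with zero bias the MSE and the variance coincide. A sketch using two hypothetical estimators of a normal mean (the unbiased sample mean and an artificially shrunk, hence biased, version):

```python
import numpy as np

rng = np.random.default_rng(2)

# Verify MSE(θ̂) = Var(θ̂) + B(θ̂)² for an unbiased and a biased estimator
# of a normal mean θ. For the unbiased one, MSE equals the variance.
theta, n, trials = 5.0, 20, 200_000
x = rng.normal(theta, 2.0, size=(trials, n))

for name, est in [("sample mean", x.mean(axis=1)),
                  ("shrunk mean", 0.9 * x.mean(axis=1))]:
    mse = ((est - theta) ** 2).mean()
    var = est.var()
    bias = est.mean() - theta
    print(f"{name}: MSE ~ {mse:.4f}, Var + bias^2 ~ {var + bias ** 2:.4f}, "
          f"bias ~ {bias:.4f}")
```

For the sample mean the bias term is negligible, so MSE ≈ Var; for the shrunk mean the bias² term makes the MSE strictly larger than the variance.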
Cleanup is badly needed
I find it difficult to be patient with the person who thinks that being pointlessly abstract in such a way that one can understand the article only by paying close attention to details not relevant to the topic constitutes "rigor". Michael Hardy 00:26, 19 November 2005 (UTC)
Politeness & Rigor
Hi there,
One can give an intuitive definition of what an estimator is and one can work with estimators in most cases without knowing precisely what one is talking about. But I don't think that this practical approach should exclude a more complete one. It's not because you don't use the words "probability space" or "measure space" that you don't refer to them implicitly.
Statistics is both very practical and very abstract: as it deals with all sorts of real-life situations, it has to have the mathematical tools to do it well and, like it or not, these tools involve a lot of probability and measure theory. I'm not saying that one should include a whole course on measure theory in each statistics article (and I have a tendency to do that, I must admit). What I'm saying is that at least somewhere in the article (arguably not at the beginning), one should give a very clear and precise definition of the mathematical beings that we're dealing with. Your presentation is sufficient for most applications, but for someone needing "something more" (and I was such a person), it's not: I think it almost treacherous to give the illusion of simplicity: there's a reason we don't learn about estimators in high school...
Besides, you took out the sections on Bayes estimators and minimax estimators (admittedly, not written yet - but at least the name was there somewhere). You actually deleted the paragraph on the asymptotic value of an estimator, to which I refer in the article on robust statistics. Just because you don't like/understand something doesn't mean (a) that it's wrong and (b) that it doesn't exist. I totally agree that my presentation is probably not the best possible one, but simply annihilating my work is definitely not improving things and is closer to an act of vandalism than to a scientific approach. You say my way of writing is "absurd", which means it doesn't make any sense. A more constructive approach than simply hitting the delete key would be to point out the things that don't make any sense (to you): if I made a mistake, I'll be very thankful if someone (e.g. you) tells me. I don't consider saying that an estimator is a function and specifying the sets on which it operates instead of a hand-wavy explanation to be a mistake, by the way.
I understand that an encyclopedia is not the place for a full treatment of a subject, but I think it should be used as a reference and therefore have the exact definitions somewhere in its articles. My presentation was probably clumsy, but I'm confident the maths in it wasn't: why remove all my additions? I didn't remove anything you wrote... Tell me what you think.
Regards,
Deimos.
- One can give an intuitive definition of what an estimator is and one can work with estimators in most cases without knowing precisely what one is talking about.
You miss the point. It is nonsense to think that if someone does not know one particular way of formalizing something, then they don't know what they're talking about. Set theorists encode all of mathematics within set theory, but that doesn't mean a mathematician who does not know how an operator on a Hilbert space is encoded within ZFC "doesn't know precisely what he's talking about" or is not rigorous.
- one should give a very clear and precise definition of the mathematical beings that we're dealing with
And that's exactly what you are not doing. Michael Hardy 21:44, 20 November 2005 (UTC)
PS: You are seriously deluded if you think "rigor" is what this is about. I'm changing the section heading.
Thank you for changing the title of my section: you could've also changed the message itself to set it more to the liking of Michael Hardy - oh sorry, you already did it... Politeness is also the theme so I changed the title of *my* message to what it is now... If you don't like what somebody is telling you, then don't read it: I find it highly dishonest to change the content of my message (even if it's only the title). If you feel like replying, create another message. My title was *not* "Deimos' editing style": had I wanted it to be, I think I could've found the right keys to press myself. If you think my title isn't correct, say so. If I agree, I'll change it.
You might be able to "encode" the whole of mathematics using set theory (although I don't quite see what that would look like nor how one would go about it), but usually, when dealing with an arithmetic concept, you use the formalism of arithmetic, when dealing with an algebraic one, that of algebra and so forth. What you might be trying to say is that sometimes, when you use a geometrical notion (for example) in, say, statistics, you might adapt the notations slightly to be coherent with the rest of the statistics world. But in statistics itself, we deal with measure spaces, samples, etc. and all the lecturers I have encountered use the same definitions.
I know that stuff about estimators already: you're not doing me a favour by accepting the changes, I'm doing *you* a favour in giving you the commonly accepted definition. Delete my post if you like - it's no big deal for me. Besides, it'll still be in the database somewhere anyway. I first thought you might have a point, but as I still don't see it, I'm starting to lose hope.
Deimos.
Merge with Estimation theory
See Talk:Estimation theory. Cburnett 18:28, 9 February 2006 (UTC)
Statistics versus signal processing
The big problem with Estimation theory is that it is very much focussed on Estimation Theory as it is understood in engineering, esp. Signal Processing. There is also a mathematical science called Statistics which treats Estimation (and hence Estimators), Testing (and hence Statistical Tests), and so on. In principle Statistics is applicable in medicine, biology, physics, social science, economics, .... engineering ... law, sport, consumer studies ... . The page on Estimator about which there is discussion above is an example of the topic seen from Statistics. Obviously people from engineering will hardly recognise that it's all, in principle, about the same thing, and vice versa.
The subject of Estimation Theory is: construction, design, evaluation of Estimators! So one hardly needs two different pages with those two titles. I suppose that Interval Estimation is also part of estimation theory, while presently it is only treated under Estimators and not under Estimation!
I think there should be a general page on Estimation Theory with subtopics on Estimation theory in engineering etc., as far as these subfields cannot identify themselves with the broad topic. So I agree there should be a merge, but then there must be a subtopic on Estimation in Engineering, esp. Signal Processing. Gill110951 08:13, 10 December 2006 (UTC)