Talk:M-estimator
Some rewrites
I commented out the proof that the influence function is proportional to the psi function. I thought it was an unnecessary level of detail for an encyclopedia entry, and the mathematics would likely be off-putting for many readers (it was for me). Tolstoy the Cat 16:46, 2 September 2006 (UTC)
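For context, a sketch of the commented-out result in its usual textbook form, with assumed notation rather than the removed proof's: for an M-estimator T defined by \int \psi(x, T(F)) \, dF(x) = 0, under regularity conditions

IF(x; T, F) = \frac{\psi(x, T(F))}{-\int \frac{\partial}{\partial\theta} \psi(y, \theta) \big|_{\theta = T(F)} \, dF(y)},

so the influence function is \psi evaluated at (x, T(F)) times a constant that does not depend on x.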
In the section "M-estimators of ρ-type", shouldn't the measurable function ρ be a map to Rp instead of R?
- Probably. That section, and other parts of this article, seem overly complicated to me. I have 4 textbooks on robust methods. All deal with M-estimators, and none make the distinction between ρ and ψ type. I don't see why it is useful to do so. The article on robust statistics has an embedded image of some choices of these functions, which seems a bit more practical than all the Greek stuff.
- Tolstoy the Cat 17:30, 5 October 2006 (UTC)
- I am having a go-through to clarify some points and perhaps simplify the technical discussion a little. There will always need to be a minimum level of technical detail which may still overwhelm some, but that is not to say it cannot be improved now. My background and sources are more on the asymptotic side and less on the robust side, so my edits will reflect that. But I am sure we can get this to read in a way that does both perspectives, and others, justice. Baccyak4H 03:35, 8 November 2006 (UTC)
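Regarding the question above, the usual formulation, sketched here with assumed notation that may differ from the article's: the ρ-type estimator is defined by

T(F) = \arg\min_{\theta \in \Theta} \int \rho(x, \theta) \, dF(x), \qquad \rho : \mathcal{X} \times \Theta \to \mathbb{R},

so ρ stays real-valued even when \Theta \subset \mathbb{R}^p, since it is the quantity being minimized. The ψ-type estimator instead solves

\int \psi(x, T(F)) \, dF(x) = 0, \qquad \psi : \mathcal{X} \times \Theta \to \mathbb{R}^p,

with \psi = \nabla_\theta \rho when ρ is differentiable; it is ψ, not ρ, that takes values in \mathbb{R}^p.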
[edit] "Types" section mostly about parameters
After looking at the Types section, I find that, but for the very top and the brief sentence at the end mentioning finite-sample extensions, this whole section is not about estimators at all. It is about functionals of distribution functions, which are more rightly called parameters. I am going to proceed to comment these parts out completely; I am not removing them, so as to make it easier to bring back some of the technical typesetting if it ever becomes convenient to do so. Baccyak4H 16:08, 8 November 2006 (UTC)
- I don't really understand why this was commented out. It seems to me that it directly explains what ρ- and ψ-type estimators are, including examples of each. I have uncommented it for now. --Zvika (talk) 07:37, 20 April 2008 (UTC)
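For the record, the distinction being drawn, sketched with assumed notation: T(F) above is a functional of the population distribution F, i.e. a parameter; the corresponding M-estimator is its plug-in version at the empirical distribution,

T(\hat{F}_n), \qquad \hat{F}_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{ X_i \le x \},

which in the ρ-type case reduces to the familiar \arg\min_{\theta} \sum_{i=1}^{n} \rho(X_i, \theta).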
M for what?
The article has been changed to read that "M" is for "Minimization". According to Huber's book, it is for "Maximum likelihood type". (I don't have the book to hand, so can't give the page.) I'm changing it back. Tolstoy the Cat 21:15, 13 November 2006 (UTC)
- "Minimization" was the meaning given in Hoaglin, Mosteller & Tukey (Robust and Exploratory Data Analysis) (sp?). Either probably captures much of the essence of these estimators. So it doesn't matter much to me (I made the change you reverted). Baccyak4H 04:09, 14 November 2006 (UTC)
- I don't have a copy of that book. I had thought that Huber coined the term in 1964, but could be wrong. Perhaps both explanations should be in here. Tolstoy the Cat 18:15, 14 November 2006 (UTC)
- Huber predates HMT, as well as Serfling's Approximation Theorems in Mathematical Statistics, which has the same usage as HMT. But if Huber coined the term, his rationale should definitely be in the article. Couldn't hurt to have both, as you suggest. Baccyak4H 19:16, 14 November 2006 (UTC)