Talk:Statistical power
From Wikipedia, the free encyclopedia
The article links to a page that purports to be about using statistical controls to improve power, but the linked page is actually about process control, which is a completely different subject.
The article says, "The power of the test is the probability that when the test concludes that there is a statistically significant difference between test scores for men and women, the difference found reflects a true difference between the populations of men and women." That seems backwards to me. Rephrasing, it says, "The power of the test is the probability that when the test rejects the null hypothesis, the null hypothesis is false." Isn't that backwards?
I think the sentence should read, "The power of the test is the probability that when there is a true difference between the test scores of men and women, the test concludes that there is a statistically significant difference between the populations of men and women." --Kent37 00:24, 12 October 2006 (UTC)
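The corrected definition above can be checked by simulation: draw many samples under a scenario where a true difference exists, and count how often the test rejects. A minimal sketch, with an arbitrary assumed effect size, sample size, and normal-approximation cutoff (none of these numbers come from the article):

```python
# Illustration of the definition: power = P(test rejects H0 | a true
# difference exists). Effect size, n, and the 1.96 cutoff are assumptions.
import random
import statistics

random.seed(0)

def two_sample_reject(a, b, crit=1.96):
    # Welch-style t statistic with a normal-approximation critical value
    na, nb = len(a), len(b)
    se = (statistics.variance(a) / na + statistics.variance(b) / nb) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > crit

def simulated_power(effect=0.5, n=50, sims=2000):
    rejections = 0
    for _ in range(sims):
        men = [random.gauss(effect, 1) for _ in range(n)]    # H1 is true here
        women = [random.gauss(0.0, 1) for _ in range(n)]
        if two_sample_reject(men, women):
            rejections += 1
    return rejections / sims

print(simulated_power())  # fraction of under-H1 samples that reject H0
```

The key point is that the conditioning runs in the direction Kent37 states: the samples are generated with a real difference present, and power is the proportion of those samples in which the test detects it.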
Power = probability of rejecting a valid null hypothesis??? Wrong! That is the exact opposite of the truth. Power is the probability of rejection, usually as a function of a parameter of interest, and one is interested in having a powerful test in order to be assured of rejection of a false null hypothesis. Michael Hardy 19:52 2 Jun 2003 (UTC)
Increasing the power of a test does not increase the probability of type I error if the increase in power results from an increase in sample size. I have deleted that statement. Michael Hardy 19:58 2 Jun 2003 (UTC)
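The deleted claim can be seen to be false with the same kind of simulation: growing the sample size raises the rejection rate under the alternative (power) while the rejection rate under the null (the Type I error rate) stays near the nominal level. A sketch under assumed numbers, not anything stated in the article:

```python
# Larger n raises power under H1 but leaves the Type I error rate under H0
# near alpha. Effect size, sample sizes, and cutoff are illustrative.
import random
import statistics

random.seed(1)

def reject(a, b, crit=1.96):
    na, nb = len(a), len(b)
    se = (statistics.variance(a) / na + statistics.variance(b) / nb) ** 0.5
    return abs((statistics.mean(a) - statistics.mean(b)) / se) > crit

def rejection_rate(effect, n, sims=2000):
    # effect = 0 simulates H0 (gives the Type I error rate);
    # effect > 0 simulates H1 (gives the power).
    hits = 0
    for _ in range(sims):
        x = [random.gauss(effect, 1) for _ in range(n)]
        y = [random.gauss(0.0, 1) for _ in range(n)]
        hits += reject(x, y)
    return hits / sims

for n in (20, 80):
    print(n, rejection_rate(0.0, n), rejection_rate(0.5, n))
# The effect=0 column stays near 0.05 at both sample sizes,
# while the effect=0.5 column (power) climbs as n grows.
```

This is exactly why sample size is the usual lever for gaining power: unlike loosening the significance threshold, it does not trade away Type I error control.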
- Thanks for the corrections. "Valid" was a slip, as the next paragraph shows. A bad place to make a slip, though. I was afraid I had left the impression that increasing power increases the chance of Type I error, and had already made a change to avoid leaving that impression, but apparently it wasn't good enough. Jfitzg
As for the misleading link, I added a Reliability (psychometric) page for the link to reliability. I don't know that you always have to use psychometrics to increase reliability, although maybe that's just a quibble. Jfitzg
Thank you for fixing the link, but I still think it's not quite right. This is only my opinion, but I think that a user clicking on 'reliability' does not expect to go to an article on reliability in psychometrics. What about in other branches of statistics? We should use the principle of least surprise, and make the link explicit, e.g. "by increasing the reliability of measures, as in the case of psychometric reliability". -- Heron
- Good idea. Jfitzg
beta
It might be worth mentioning that some texts define beta = power, not 1 - power. See for example Bickel & Doksum 2nd edition page 217. Btyner 19:47, 7 November 2006 (UTC)
Anyway, can a section be added for the stats newbie, using a more intuitive/conceptual approach? I found the posting difficult to follow because I didn't know what half the terms meant. 204.141.184.245 16:14, 20 July 2007 (UTC)
Removed post-hoc power
I've removed the mention of post-hoc power calculations from the 2nd para, as they are generally agreed to be a bad idea, certainly in the form that was stated (power computed for the sample size you used and the effect you estimated), where the power is a function of the p-value alone. For more details, Google "post-hoc power" or see this thread on the Medstats discussion list. --Qwfp (talk) 19:03, 23 January 2008 (UTC)