Publication bias

Publication bias arises from the tendency of researchers and editors to handle experimental results that are positive (an effect was found) differently from results that are negative (no effect was found) or inconclusive.

Definition

"Publication bias occurs when the publication of research results depends on their nature and direction."[1]

Positive results bias, a type of publication bias, occurs when authors are more likely to submit, or editors to accept, positive results than null (negative or inconclusive) results.[2] A related term, "the file drawer problem", refers to the tendency for those negative or inconclusive results to remain hidden and unpublished.[3] Even a small number of studies lost in the file drawer can result in significant bias.[1]
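The mechanism can be illustrated with a small simulation (a hypothetical sketch, not taken from the cited sources): when only studies reporting a positive, statistically significant effect are "published", the pooled estimate drawn from the published literature is inflated even though the true effect is zero.

```python
# Hypothetical illustration (not from the cited sources): simulate many small
# two-arm studies of a true effect of zero, "publish" only the positive and
# statistically significant ones, and compare the pooled estimates.
import math
import random
import statistics

random.seed(1)

def simulate_study(n=30, true_effect=0.0):
    """Return (mean difference, standard error) for one two-arm study."""
    treatment = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treatment) / n +
                   statistics.variance(control) / n)
    return diff, se

studies = [simulate_study() for _ in range(200)]

# "Published" studies: only those reporting a positive effect that is
# significant at the 5% level (z > 1.96), mimicking positive results bias.
published = [(d, se) for d, se in studies if d / se > 1.96]

def pooled_effect(results):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    weights = [1.0 / se ** 2 for _, se in results]
    return sum(w * d for (d, _), w in zip(results, weights)) / sum(weights)

print(f"pooled effect, all {len(studies)} studies:      {pooled_effect(studies):+.3f}")
print(f"pooled effect, {len(published)} 'published' studies: {pooled_effect(published):+.3f}")
```

Because the non-significant studies never enter the pooled calculation, the "published" estimate sits well above zero, while the estimate from all studies stays close to the true value.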

Outcome reporting bias occurs when several outcomes are measured within a trial but are reported selectively, depending on the strength and direction of the results.[4] A related term that has been coined is HARKing (Hypothesizing After the Results are Known), in which a hypothesis formed after examining the data is presented as though it had been specified in advance.[5]
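A similar sketch illustrates outcome reporting bias (again a hypothetical simulation, not drawn from the cited references): if a trial with no real effects measures many outcomes and reports only the most favourable one, the chance of reporting at least one "significant" finding greatly exceeds the nominal 5% false-positive rate.

```python
# Hypothetical illustration of outcome reporting bias: a null trial measures
# k independent outcomes and reports only the smallest p-value.
import random

random.seed(2)

def trial_reports_positive(k_outcomes, alpha=0.05):
    """Simulate a null trial measuring k outcomes; report the best p-value."""
    # Under the null hypothesis each outcome's p-value is uniform on (0, 1).
    best_p = min(random.random() for _ in range(k_outcomes))
    return best_p < alpha

n_trials = 10_000
for k in (1, 5, 10, 20):
    hits = sum(trial_reports_positive(k) for _ in range(n_trials))
    print(f"{k:2d} outcomes measured -> "
          f"{hits / n_trials:.1%} of null trials can report a 'significant' result")
```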

Effect on meta-analysis

As a result, published studies may not be truly representative of all valid studies undertaken, and this bias may distort meta-analyses and systematic reviews of large numbers of studies, on which evidence-based medicine, for example, increasingly relies. The problem may be particularly significant when the research is sponsored by entities that have a financial interest in obtaining favourable results.

Those undertaking meta-analyses and systematic reviews need to take account of publication bias in the methods they use to identify the studies included in the review. Among other techniques to minimise its effects, they may need to perform a thorough search for unpublished studies and to use analytical tools such as a funnel plot to detect and quantify the bias.
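A funnel plot charts each study's effect estimate against a measure of its precision, typically the standard error; in the absence of publication bias the points form a roughly symmetric, inverted funnel, whereas a missing corner of small, unfavourable studies suggests bias. A minimal sketch, using matplotlib and made-up data, might look like this:

```python
# Minimal funnel-plot sketch with made-up data: effect estimates on the x-axis,
# standard error on the y-axis (inverted, so precise studies sit at the top).
import matplotlib.pyplot as plt

# Hypothetical per-study effect estimates and standard errors; here the
# smaller (high standard error) studies report larger effects, the kind of
# asymmetry a funnel plot is meant to reveal.
effects         = [0.10, 0.22, 0.35, 0.48, 0.55, 0.63, 0.78, 0.90]
standard_errors = [0.05, 0.09, 0.13, 0.17, 0.21, 0.26, 0.31, 0.36]

fig, ax = plt.subplots()
ax.scatter(effects, standard_errors)
ax.axvline(x=0.30, linestyle="--", label="illustrative pooled estimate")
ax.invert_yaxis()                      # most precise studies at the top
ax.set_xlabel("Effect estimate")
ax.set_ylabel("Standard error")
ax.set_title("Funnel plot: asymmetry may indicate publication bias")
ax.legend()
plt.show()
```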

Possible example

One study[2] compared Chinese and non-Chinese studies of gene-disease associations and found that "Chinese studies in general reported a stronger gene-disease association and more frequently a statistically significant result".[3] One possible interpretation of this result is selective publication (publication bias).

Ioannidis has inventoried factors that should alert readers to the risk of publication bias.[4]

Study registration

In September 2004, the editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish the results of drug research sponsored by pharmaceutical companies unless the research had been registered in a public database from the start.[6] In this way, negative results should no longer be able to disappear.

References

  1. ^ Jeffrey D. Scargle, "Publication bias: the 'file-drawer' problem in scientific inference", Journal of Scientific Exploration, 14 (2): 94–106, 2000.
  2. ^ Zhenglun Pan, Thomas A. Trikalinos, Fotini K. Kavvoura, Joseph Lau, John P.A. Ioannidis, "Local literature bias in genetic epidemiology: An empirical evaluation of the Chinese literature", PLoS Medicine, 2 (12): e334, December 2005.
  3. ^ Jin Ling Tang, "Selection Bias in Meta-Analyses of Gene-Disease Associations", PLoS Medicine, 2 (12): e409, December 2005.
  4. ^ Ioannidis J (2005). "Why most published research findings are false". PLoS Medicine 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMID 16060722.