Hierarchy of evidence
Evidence hierarchies reflect the relative authority of various types of biomedical research, creating levels of evidence. There is broad agreement on the relative strength of the principal types of epidemiological studies, but no single, universally accepted hierarchy of evidence. Randomized controlled trials (RCTs) rank above observational studies, while expert opinion and anecdotal experience rank at the bottom. Some evidence hierarchies place systematic reviews and meta-analyses above RCTs. Evidence hierarchies are integral to evidence-based medicine (EBM).
Definition
In 2014, Stegenga defined a hierarchy of evidence as a "rank-ordering of kinds of methods according to the potential for that method to suffer from systematic bias". At the top of the hierarchy is the method with the greatest freedom from systematic bias, or the best internal validity, relative to the tested medical intervention's hypothesized efficacy.[1]:313 In 1997, Greenhalgh suggested it was "the relative weight carried by the different types of primary study when making decisions about clinical interventions".[2]
Examples
In 1995, Guyatt and Sackett published the first such hierarchy.[3]
Greenhalgh put the different types of primary study in the following order:[2]
- Systematic reviews and meta-analyses of "RCTs with definitive results"
- RCTs with definitive results (confidence intervals that do not overlap the threshold for a clinically significant effect)
- RCTs with non-definitive results (a point estimate that suggests a clinically significant effect but with confidence intervals overlapping the threshold for this effect)
- Cohort studies
- Case-control studies
- Cross-sectional surveys
- Case reports
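The definitive/non-definitive distinction in the list above turns on whether a trial's confidence interval overlaps the threshold for a clinically significant effect. A minimal sketch of that decision rule follows; the threshold (a risk ratio of 0.9), the example numbers, and the names `classify_rct` and `THRESHOLD` are hypothetical illustrations, not anything specified by Greenhalgh:

```python
# Illustrative sketch of the definitive/non-definitive distinction.
# Effects are expressed as risk ratios (RR), where values below 1.0 favor
# the treatment; the clinical-significance threshold here is hypothetical.

THRESHOLD = 0.9  # hypothetical threshold for a clinically significant RR

def classify_rct(point_estimate: float, ci_low: float, ci_high: float) -> str:
    """Label an RCT result using the definitive/non-definitive distinction."""
    if ci_high < THRESHOLD:
        # The whole confidence interval lies beyond the threshold:
        # a clinically significant effect, CI does not overlap the threshold.
        return "definitive"
    if point_estimate < THRESHOLD:
        # The point estimate suggests a clinically significant effect,
        # but the confidence interval overlaps the threshold.
        return "non-definitive"
    return "no clinically significant effect suggested"

print(classify_rct(0.75, 0.65, 0.85))  # definitive
print(classify_rct(0.75, 0.60, 1.05))  # non-definitive
print(classify_rct(0.98, 0.90, 1.10))  # no clinically significant effect suggested
```

Note that a "non-definitive" result is not a null result: the best estimate still suggests benefit, but the data are compatible with effects smaller than the threshold.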
Criticism
In the 21st century, more than a decade after they were introduced, evidence hierarchies came under increasing criticism. In 2011, a systematic review of the critical literature found three kinds of criticism: procedural aspects of EBM (especially from Cartwright, Worrall and Howick), greater than expected fallibility of EBM (Ioannidis and others), and EBM being incomplete as a philosophy of science (Ashcroft and others).[4]
Many critics have published in journals of philosophy, which are ignored by the clinician proponents of EBM. Rawlins[5] and Bluhm note that EBM limits the ability of research results to inform the care of individual patients, and that to understand the causes of diseases both population-level and laboratory research are necessary. The EBM hierarchy of evidence does not take into account research on the safety and efficacy of medical interventions. RCTs should be designed "to elucidate within-group variability, which can only be done if the hierarchy of evidence is replaced by a network that takes into account the relationship between epidemiological and laboratory research".[6]
A clinician and teacher noted that EBM, while claiming to be a normative guide to being a better physician, is not a philosophical doctrine. He pointed out that EBM supporters displayed a "near-evangelical fervor", convinced of its superiority and ignoring critics who seek to expand the borders of EBM from a philosophical point of view.[7]
Borgerson writes that the justifications for the hierarchy levels are not absolute and do not epistemically justify them, but that "medical researchers should pay closer attention to social mechanisms for managing pervasive biases".[8] La Caze noted that basic science resides on the lower tiers of EBM, though it "plays a role in specifying experiments, but also analysing and interpreting the data".[9]
Concato argued that the hierarchy gives RCTs too much authority, and that not all research questions can be answered through RCTs, whether for practical or for ethical reasons. Even when evidence is available from high-quality RCTs, evidence from other study types may still be relevant.[10] Stegenga opined that evidence assessment schemes are unreasonably constraining and less informative than other schemes now available.[1]
References
1. Stegenga J (2014). "Down with the hierarchies". Topoi. 33 (2): 313–322. doi:10.1007/s11245-013-9189-4.
2. Greenhalgh T (1997). "How to read a paper. Getting your bearings (deciding what the paper is about)". BMJ. 315 (7102): 243–246. PMC 2127173. PMID 9253275. doi:10.1136/bmj.315.7102.243.
3. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ (1995). "Users' guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group". JAMA. 274 (22): 1800–1804. PMID 7500513. doi:10.1001/jama.1995.03530220066035.
4. Solomon M (2011). "Just a paradigm: evidence-based medicine in epistemological context". European Journal for Philosophy of Science. 1 (3): 451–466. doi:10.1007/s13194-011-0034-6.
5. Rawlins M (2008). De Testimonio: On the Evidence for Decisions about the Use of Therapeutic Interventions. London: Royal College of Physicians.
6. Bluhm R (2005). "From hierarchy to network: a richer view of evidence for evidence-based medicine". Perspectives in Biology and Medicine. 48 (4): 535–547. doi:10.1353/pbm.2005.0082.
7. Upshur RE (2005). "Looking for rules in a world of exceptions: reflections on evidence-based practice". Perspectives in Biology and Medicine. 48 (4): 477–489. doi:10.1353/pbm.2005.0098.
8. Borgerson K (2009). "Valuing evidence: bias and the evidence hierarchy of evidence-based medicine". Perspectives in Biology and Medicine. 52 (2): 218–233. doi:10.1353/pbm.0.0086.
9. La Caze A (2011). "The role of basic science in evidence-based medicine". Biology & Philosophy. 26 (1): 81–98. doi:10.1007/s10539-010-9231-5.
10. Concato J (2004). "Observational versus experimental studies: what's the evidence for a hierarchy?". NeuroRx. 1 (3): 341–347. PMC 534936. PMID 15717036. doi:10.1602/neurorx.1.3.341.