Hierarchy of evidence

A hierarchy of evidence (or levels of evidence) is a heuristic used to rank the relative strength of results obtained from scientific research. There is broad agreement on the relative strength of large-scale epidemiological studies. More than 80 different hierarchies have been proposed for assessing medical evidence.[1][2] The design of the study (such as a case report for an individual patient or a blinded randomized controlled trial) and the endpoints measured (such as survival or quality of life) affect the strength of the evidence. In clinical research, the best evidence for treatment efficacy comes mainly from meta-analyses of randomized controlled trials (RCTs).[1][3] Typically, systematic reviews of completed, high-quality randomized controlled trials, such as those published by the Cochrane Collaboration, rank as the highest quality of evidence, above observational studies, while expert opinion and anecdotal experience sit at the bottom level of evidence quality.[1][4] Evidence hierarchies are often applied in evidence-based practices and are integral to evidence-based medicine (EBM).

Definition

In 2014, Stegenga defined a hierarchy of evidence as a "rank-ordering of kinds of methods according to the potential for that method to suffer from systematic bias". At the top of the hierarchy is the method with the greatest freedom from systematic bias, or the best internal validity, relative to the tested medical intervention's hypothesized efficacy.[5]:313 In 1997, Greenhalgh suggested it was "the relative weight carried by the different types of primary study when making decisions about clinical interventions".[6]

The National Cancer Institute defines levels of evidence as "a ranking system used to describe the strength of the results measured in a clinical trial or research study. The design of the study [...] and the endpoints measured [...] affect the strength of the evidence."[7]

History

Canada

The term was first used in a 1979 report by the "Canadian Task Force on the Periodic Health Examination" (CTF) to "grade the effectiveness of an intervention according to the quality of evidence obtained".[8]:1195 The task force used three levels, subdividing level II:

  • Level I: Evidence from at least one randomized controlled trial
  • Level II-1: Evidence from at least one well-designed cohort study or case-control study, i.e. a controlled trial that is not randomized
  • Level II-2: Comparisons between times and places with or without the intervention
  • Level III: Opinions of respected authorities, based on clinical experience, descriptive studies or reports of expert committees.

The CTF graded their recommendations on a 5-point A–E scale:[8]:1195

  • A: Good level of evidence for the recommendation to consider a condition
  • B: Fair level of evidence for the recommendation to consider a condition
  • C: Poor level of evidence for the recommendation to consider a condition
  • D: Fair level of evidence for the recommendation to exclude the condition
  • E: Good level of evidence for the recommendation to exclude the condition from consideration

The CTF updated their report in 1984,[9] in 1986[10] and in 1987.[11]

USA

In 1988, the United States Preventive Services Task Force (USPSTF) published its own guidelines, modeled on the CTF's: it used the same three levels, further subdividing level II.[12][13]

  • Level I: Evidence obtained from at least one properly designed randomized controlled trial.
  • Level II-1: Evidence obtained from well-designed controlled trials without randomization.
  • Level II-2: Evidence obtained from well-designed cohort or case-control analytic studies, preferably from more than one center or research group.
  • Level II-3: Evidence obtained from multiple time series designs with or without the intervention. Dramatic results in uncontrolled trials might also be regarded as this type of evidence.
  • Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.

Over the years many more grading systems have been described.[14]

UK

In September 2000, the Oxford (UK) Centre for Evidence-Based Medicine (CEBM) published its guidelines for levels of evidence regarding claims about prognosis, diagnosis, treatment benefits, treatment harms, and screening. It addressed not only therapy and prevention, but also diagnostic tests, prognostic markers, and harm. The original CEBM Levels were first released for Evidence-Based On Call, to make the process of finding evidence feasible and its results explicit. As published in 2009,[15][16] they are:

  • 1a: Systematic reviews (with homogeneity) of randomized controlled trials
  • 1b: Individual randomized controlled trials (with narrow confidence interval)
  • 1c: All or none randomized controlled trials
  • 2a: Systematic reviews (with homogeneity) of cohort studies
  • 2b: Individual cohort study or low quality randomized controlled trials (e.g. <80% follow-up)
  • 2c: "Outcomes" Research; ecological studies
  • 3a: Systematic review (with homogeneity) of case-control studies
  • 3b: Individual case-control study
  • 4: Case series (and poor quality cohort and case-control studies)
  • 5: Expert opinion without explicit critical appraisal, or based on physiology, bench research or "first principles"
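Since the CEBM levels form a strict ordering, the scheme above can be encoded directly as a ranked scale. The sketch below is purely illustrative (it is not part of any official CEBM tooling, and the function and variable names are this example's own):

```python
# Illustrative encoding of the 2009 Oxford CEBM levels as an ordered scale,
# from strongest (index 0) to weakest. Not an official CEBM artifact.
CEBM_LEVELS = [
    ("1a", "Systematic review (with homogeneity) of RCTs"),
    ("1b", "Individual RCT (with narrow confidence interval)"),
    ("1c", "All-or-none study"),
    ("2a", "Systematic review (with homogeneity) of cohort studies"),
    ("2b", "Individual cohort study or low-quality RCT"),
    ("2c", "Outcomes research; ecological studies"),
    ("3a", "Systematic review (with homogeneity) of case-control studies"),
    ("3b", "Individual case-control study"),
    ("4",  "Case series (and poor-quality cohort/case-control studies)"),
    ("5",  "Expert opinion without explicit critical appraisal"),
]

# Map each level label to its rank; smaller rank means stronger evidence.
RANK = {level: i for i, (level, _) in enumerate(CEBM_LEVELS)}

def stronger(level_a: str, level_b: str) -> str:
    """Return whichever of two CEBM level labels ranks higher."""
    return level_a if RANK[level_a] <= RANK[level_b] else level_b
```

For example, `stronger("2b", "1b")` returns `"1b"`, reflecting that an individual RCT with a narrow confidence interval outranks an individual cohort study.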

In 2011, an international team redesigned the Oxford CEBM Levels to make them more understandable and to take into account recent developments in evidence-ranking schemes. The Levels have been used by patients and clinicians, as well as to develop clinical guidelines, including recommendations for the optimal use of phototherapy and topical therapy in psoriasis[17] and guidelines for the use of the BCLC staging system for diagnosing and monitoring hepatocellular carcinoma in Canada.[18]

Global

In 2007, the World Cancer Research Fund grading system described four levels: convincing, probable, possible, and insufficient evidence.[19] All Global Burden of Disease Studies have used it to evaluate the epidemiologic evidence supporting causal relationships.[20]

Examples

A large number of hierarchies of evidence have been proposed, and similar protocols for evaluating research quality are still in development. So far, the available protocols pay relatively little attention to whether outcome research concerns efficacy (the outcome of a treatment performed under ideal conditions) or effectiveness (the outcome of the treatment performed under ordinary, expectable conditions).

Guyatt and Sackett

In 1995, Guyatt and Sackett published the first such hierarchy.[21]

Greenhalgh put the different types of primary study in the following order:[6]

  1. Systematic reviews and meta-analyses of "RCTs with definitive results"
  2. RCTs with definitive results (confidence intervals that do not overlap the threshold for a clinically significant effect)
  3. RCTs with non-definitive results (a point estimate that suggests a clinically significant effect but with confidence intervals overlapping the threshold for this effect)
  4. Cohort studies
  5. Case-control studies
  6. Cross sectional surveys
  7. Case reports

Saunders et al.

A protocol suggested by Saunders et al. assigns research reports to six categories, on the basis of research design, theoretical background, evidence of possible harm, and general acceptance. To be classified under this protocol, there must be descriptive publications, including a manual or similar description of the intervention. The protocol does not consider the nature of any comparison group, the effect of confounding variables, the nature of the statistical analysis, or a number of other criteria.

  • Category 1, well-supported, efficacious treatments: two or more randomized controlled outcome studies compare the target treatment to an appropriate alternative treatment and show a significant advantage to the target treatment.
  • Category 2, supported and probably efficacious treatments: positive outcomes from nonrandomized designs with some form of control, which may involve a non-treatment group.
  • Category 3, supported and acceptable treatments: interventions supported by one controlled or uncontrolled study, by a series of single-subject studies, or by work with a population different from the one of interest.
  • Category 4, promising and acceptable treatments: interventions with no support except general acceptance and clinical anecdotal literature; any evidence of possible harm excludes a treatment from this category.
  • Category 5, innovative and novel treatments: interventions not thought to be harmful, but not widely used or discussed in the literature.
  • Category 6, concerning treatments: treatments that have the possibility of doing harm, as well as unknown or inappropriate theoretical foundations.[22]

Khan et al.

A protocol for evaluation of research quality was suggested by a report from the Centre for Reviews and Dissemination, prepared by Khan et al. and intended as a general method for assessing both medical and psychosocial interventions. While strongly encouraging the use of randomized designs, this protocol noted that such designs were useful only if they met demanding criteria, such as true randomization and concealment of the assigned treatment group from the client and from others, including the individuals assessing the outcome. The Khan et al. protocol emphasized the need to make comparisons on the basis of "intention to treat" in order to avoid problems related to greater attrition in one group. The Khan et al. protocol also presented demanding criteria for nonrandomized studies, including matching of groups on potential confounding variables and adequate descriptions of groups and treatments at every stage, and concealment of treatment choice from persons assessing the outcomes. This protocol did not provide a classification of levels of evidence, but included or excluded treatments from classification as evidence-based depending on whether the research met the stated standards.[23]

U.S. National Registry of Evidence-Based Practices and Programs

An assessment protocol has been developed by the U.S. National Registry of Evidence-Based Practices and Programs (NREPP). Evaluation under this protocol occurs only if an intervention has already had one or more positive outcomes, with a probability of less than .05, reported, if these have been published in a peer-reviewed journal or an evaluation report, and if documentation such as training materials has been made available. The NREPP evaluation, which assigns quality ratings from 0 to 4 to certain criteria, examines reliability and validity of outcome measures used in the research, evidence for intervention fidelity (predictable use of the treatment in the same way every time), levels of missing data and attrition, potential confounding variables, and the appropriateness of statistical handling, including sample size.[24]
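The NREPP process described above, a significance screen followed by 0–4 quality ratings on several criteria, can be sketched as follows. This is a hypothetical illustration only: the function names, criterion labels, and the simple averaging rule are this example's assumptions, not the registry's actual procedure.

```python
# Hypothetical sketch of an NREPP-style two-stage assessment.
# Names and the aggregation rule are illustrative assumptions.

def eligible_for_review(p_value: float, peer_reviewed: bool,
                        docs_available: bool) -> bool:
    """Entry criteria: at least one positive outcome with p < .05,
    published in a peer-reviewed journal or evaluation report,
    with documentation (e.g. training materials) available."""
    return p_value < 0.05 and peer_reviewed and docs_available

def quality_score(ratings: dict) -> float:
    """Combine 0-4 ratings across criteria such as outcome-measure
    reliability/validity, intervention fidelity, missing data and
    attrition, confounds, and statistical handling. Averaging is an
    assumed aggregation, not NREPP's documented rule."""
    if any(not 0 <= r <= 4 for r in ratings.values()):
        raise ValueError("ratings must be on the 0-4 scale")
    return sum(ratings.values()) / len(ratings)
```

For instance, a study with `p_value=0.03` that was peer-reviewed and documented would pass the screen, after which its per-criterion ratings would be combined into an overall score.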

Mercer and Pignotti

A protocol suggested by Mercer and Pignotti uses a taxonomy intended to classify interventions on both research quality and other criteria.

  • Evidence-based interventions: supported by work with randomized designs employing comparisons to established treatments, independent replications of results, blind evaluation of outcomes, and the existence of a manual.
  • Evidence-supported interventions: supported by nonrandomized designs, including within-subjects designs, and otherwise meeting the criteria for the previous category.
  • Evidence-informed treatments: case studies, or interventions tested on populations other than the targeted group, without independent replications; a manual exists, and there is no evidence of harm or potential for harm.
  • Belief-based interventions: no published research reports, or reports based on composite cases; they may be based on religious or ideological principles, or may claim a basis in accepted theory without an acceptable rationale; there may or may not be a manual, and there is no evidence of harm or potential for harm.
  • Potentially harmful treatments: interventions for which harmful mental or physical effects have been documented, or for which a manual or other source shows the potential for harm.[25]

Proponents

Various types of grading systems have been described and defended by Wilson et al. in 1995,[26] Hadorn et al. in 1996[27] and Atkins et al. in 2004.[28]

Criticism

More than a decade after its introduction, the use of evidence hierarchies came under increasing criticism in the 21st century. In 2011, a systematic review of the critical literature found three kinds of criticism: procedural aspects of EBM (especially from Cartwright, Worrall and Howick), greater-than-expected fallibility of EBM (Ioannidis and others), and EBM being incomplete as a philosophy of science (Ashcroft and others).[29] Many critics have published in journals of philosophy, which are ignored by the clinician proponents of EBM. Rawlins[30] and Bluhm note that EBM limits the ability of research results to inform the care of individual patients, and that understanding the causes of diseases requires both population-level and laboratory research. The EBM hierarchy of evidence does not take into account research on the safety and efficacy of medical interventions. RCTs should be designed "to elucidate within-group variability, which can only be done if the hierarchy of evidence is replaced by a network that takes into account the relationship between epidemiological and laboratory research".[31]

The hierarchy of evidence produced by a study design has been questioned, because guidelines have "failed to properly define key terms, weight the merits of certain non-randomized controlled trials, and employ a comprehensive list of study design limitations".[32]

Stegenga has specifically criticized the placement of meta-analyses at the top of such hierarchies.[33] The assumption that RCTs ought necessarily to sit near the top of such hierarchies has been criticized by Worrall[34] and Cartwright.[35]

In 2005, Ross Upshur noted that EBM claims to be a normative guide to being a better physician, but is not a philosophical doctrine. He pointed out that EBM supporters displayed a "near-evangelical fervor", convinced of its superiority and ignoring critics who seek to expand the borders of EBM from a philosophical point of view.[36]

Borgerson in 2009 wrote that the justifications for the hierarchy levels are not absolute and do not epistemically justify them, but that "medical researchers should pay closer attention to social mechanisms for managing pervasive biases".[37] La Caze noted that basic science resides on the lower tiers of EBM though it "plays a role in specifying experiments, but also analysing and interpreting the data."[38]

Concato argued in 2004 that the hierarchy granted RCTs too much authority, and that not all research questions can be answered through RCTs, whether for practical or for ethical reasons. Even when evidence is available from high-quality RCTs, evidence from other study types may still be relevant.[39] Stegenga opined that evidence-assessment schemes are unreasonably constraining and less informative than other schemes now available.[5]

References

  1. Shafee, Thomas; Masukume, Gwinyai; Kipersztok, Lisa; Das, Diptanshu; Häggström, Mikael; Heilman, James (28 August 2017). "Evolution of Wikipedia's medical content: past, present and future". Journal of Epidemiology and Community Health. 71 (11): jech–2016–208601. doi:10.1136/jech-2016-208601. ISSN 0143-005X. PMC 5847101. PMID 28847845.
  2. Siegfried T (2017-11-13). "Philosophical critique exposes flaws in medical evidence hierarchies". Science News. Retrieved 2018-05-16.
  3. Straus SE, Richardson WS, Glasziou P, Haynes RB (2005). Evidence-based Medicine: How to Practice and Teach EBM (3rd ed.). Edinburgh: Churchill Livingstone. pp. 102–05. ISBN 978-0443074448.
  4. Kim Hugel (16 May 2013). "The Journey of Research - Levels of Evidence". Canadian Association of Pharmacy in Oncology. Retrieved 8 December 2019.
  5. Stegenga J (October 2014). "Down with the hierarchies". Topoi. 33 (2): 313–22. doi:10.1007/s11245-013-9189-4.
  6. Greenhalgh T (July 1997). "How to read a paper. Getting your bearings (deciding what the paper is about)". BMJ. 315 (7102): 243–6. doi:10.1136/bmj.315.7102.243. PMC 2127173. PMID 9253275.
  7. National Cancer Institute (n.d.). "NCI Dictionary of Cancer Terms: Levels of evidence". US DHHS-National Institutes of Health. Retrieved 8 December 2014.
  8. Canadian Task Force on the Periodic Health Examination (3 November 1979). "Task Force Report: The periodic health examination". Can Med Assoc J. 121 (9): 1193–1254. PMC 1704686. PMID 115569.
  9. Canadian Task Force on the Periodic Health Examination (15 May 1984). "Task Force Report: The periodic health examination. 2. 1984 update". Can Med Assoc J. 130 (10): 1278–1285. PMC 1483525. PMID 6722691.
  10. Canadian Task Force on the Periodic Health Examination (15 May 1986). "Task Force Report: The periodic health examination. 3. 1986 update". Can Med Assoc J. 134 (10): 721–729.
  11. Canadian Task Force on the Periodic Health Examination (1 April 1988). "Task Force Report: The periodic health examination. 2. 1987 update". Can Med Assoc J. 138 (7): 618–26. PMC 1267740. PMID 3355931.
  12. Lawrence, Robert; U. S. Preventive Services Task Force Edition (1989). Guide to Clinical Preventive Services. DIANE Publishing. ISBN 978-1568062976. Retrieved 9 December 2014.
  13. U.S. Preventive Services Task Force (August 1989). Guide to clinical preventive services: report of the U.S. Preventive Services Task Force. DIANE Publishing. pp. 24–. ISBN 978-1-56806-297-6.Appendix A
  14. Welsh, Judith (January 2010). "Levels of evidence and analyzing the literature". National Institutes of Health Library. Retrieved 9 September 2015.
  15. "Oxford Centre for Evidence-based Medicine – Levels of Evidence (March 2009)". Centre for Evidence-Based Medicine. 2009-06-11. Retrieved 25 March 2015.
  16. Burns et al. 2011.
  17. OCEBM Levels of Evidence Working Group (May 2016). "The Oxford Levels of Evidence 2".
  18. Paul, C.; Gallini, A.; Archier, E.; et al. (2012). "Evidence-Based Recommendations on Topical Treatment and Phototherapy of Psoriasis: Systematic Review and Expert Opinion of a Panel of Dermatologists". Journal of the European Academy of Dermatology and Venereology. 26 (Suppl 3): 1–10. doi:10.1111/j.1468-3083.2012.04518.x. PMID 22512675.
  19. World Cancer Research Fund AICR. Food, Nutrition, and Physical Activity, and the Prevention of Cancer: A Global Perspective. American Institute for Cancer Research, Washington, DC; 2007
  20. Lim, Stephen S; Vos, Theo; Flaxman, Abraham D; Danaei, Goodarz; Shibuya, Kenji; Adair-Rohani, Heather; Almazroa, Mohammad A; Amann, Markus; Anderson, H Ross; Andrews, Kathryn G; Aryee, Martin; Atkinson, Charles; Bacchus, Loraine J; Bahalim, Adil N; Balakrishnan, Kalpana; Balmes, John; Barker-Collo, Suzanne; Baxter, Amanda; Bell, Michelle L; Blore, Jed D; Blyth, Fiona; Bonner, Carissa; Borges, Guilherme; Bourne, Rupert; Boussinesq, Michel; Brauer, Michael; Brooks, Peter; Bruce, Nigel G; Brunekreef, Bert; et al. (2012). "A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: A systematic analysis for the Global Burden of Disease Study 2010". The Lancet. 380 (9859): 2224–2260. doi:10.1016/S0140-6736(12)61766-8. PMC 4156511. PMID 23245609.
  21. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ (December 1995). "Users' guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group". JAMA. 274 (22): 1800–4. doi:10.1001/jama.1995.03530220066035. PMID 7500513.
  22. Saunders, B., Berliner, L., & Hanson, R. (2004). Child physical and sexual abuse: Guidelines for treatments. Retrieved September 15, 2006, from http://www.musc.edu/cvc.guidel.htm
  23. Khan, K.S., et al. (2001). CRD Report 4. Stage II. Conducting the review. phase 5. Study quality assessment. York, UK: Centre for Reviews and Dissemination, University of York. Retrieved July 20, 2007 from http://www.york.ac.uk/inst/crd/pdf/crd_4ph5.pdf
  24. National Registry of Evidence-Based Practices and Programs (2007). NREPP Review Criteria. Retrieved March 10, 2008 from http://www.nrepp.samsha.gov/review-criteria.htm
  25. Mercer, J.; Pignotti, M. (2007). "Shortcuts cause errors in systematic research syntheses: Rethinking evaluation of mental health interventions". Scientific Review of Mental Health Practice. 5 (2): 59–77. ISSN 1538-4985.
  26. Wilson, Mark C (1995). "Users' guides to the medical literature. VIII. How to use clinical practice guidelines. B. what are the recommendations and will they help you in caring for your patients? The evidence-based medicine working group". JAMA. 274 (20): 1630–1632. doi:10.1001/jama.1995.03530200066040.
  27. Hadorn, David C; Baker, David; Hodges, James S; Hicks, Nicholas (1996). "Rating the quality of evidence for clinical practice guidelines". Journal of Clinical Epidemiology. 49 (7): 749–754. doi:10.1016/0895-4356(96)00019-4. PMID 8691224.
  28. Atkins, D; Best, D; Briss, P. A; Eccles, M; Falck-Ytter, Y; Flottorp, S; Guyatt, G. H; Harbour, R. T; Haugh, M. C; Henry, D; Hill, S; Jaeschke, R; Leng, G; Liberati, A; Magrini, N; Mason, J; Middleton, P; Mrukowicz, J; O'Connell, D; Oxman, A. D; Phillips, B; Schünemann, H. J; Edejer, T; Varonen, H; Vist, G. E; Williams Jr, J. W; Zaza, S; GRADE Working Group (2004). "Grading quality of evidence and strength of recommendations". BMJ. 328 (7454): 1490. doi:10.1136/bmj.328.7454.1490. PMC 428525. PMID 15205295.
  29. Solomon M (October 2011). "Just a paradigm: evidence-based medicine in epistemological context". European Journal for Philosophy of Science. Springer. 1 (3): 451–466. doi:10.1007/s13194-011-0034-6.
  30. Rawlins M (December 2008). "De Testimonio: on the evidence for decisions about the use of therapeutic interventions". Clinical Medicine. Royal College of Physicians. 8 (6): 579–88. doi:10.7861/clinmedicine.8-6-579. PMC 4954394. PMID 19149278.
  31. Bluhm R (October 2011). "From hierarchy to network: a richer view of evidence for evidence-based medicine". Perspectives in Biology and Medicine. Johns Hopkins University Press. 48 (4): 535–47. doi:10.1353/pbm.2005.0082. PMID 16227665.
  32. Gugiu, PC; Westine, CD; Coryn, CL; Hobson, KA (3 April 2012). "An application of a new evidence grading system to research on the chronic care model". Eval Health Prof. 36 (1): 3–43. CiteSeerX 10.1.1.1016.5990. doi:10.1177/0163278712436968. PMID 22473325.
  33. Stegenga, J (2011). "Is meta-analysis the platinum standard of evidence?". Stud Hist Philos Biol Biomed Sci. 42 (4): 497–507. doi:10.1016/j.shpsc.2011.07.003. PMID 22035723.
  34. Worrall, John (2002). "What Evidence in Evidence‐Based Medicine?". Philosophy of Science. 69: S316–S330. doi:10.1086/341855.
  35. Cartwright, Nancy (2007). "Are RCTs the Gold Standard?" (PDF). BioSocieties. 2: 11–20. doi:10.1017/s1745855207005029.
  36. Upshur RE (Autumn 2005). "Looking for rules in a world of exceptions: reflections on evidence-based practice". Perspectives in Biology and Medicine. Johns Hopkins University Press. 48 (4): 477–89. doi:10.1353/pbm.2005.0098. PMID 16227661.
  37. Borgerson K (Spring 2009). "Valuing evidence: bias and the evidence hierarchy of evidence-based medicine" (PDF). Perspectives in Biology and Medicine. Johns Hopkins University Press. 52 (2): 218–33. doi:10.1353/pbm.0.0086. PMID 19395821.
  38. La Caze A (January 2011). "The role of basic science in evidence-based medicine". Biology & Philosophy. Springer. 26 (1): 81–98. doi:10.1007/s10539-010-9231-5.
  39. Concato J (July 2004). "Observational versus experimental studies: what's the evidence for a hierarchy?". NeuroRx. Springer. 1 (3): 341–7. doi:10.1602/neurorx.1.3.341. PMC 534936. PMID 15717036.

 This article incorporates public domain material from the U.S. National Cancer Institute document "Dictionary of Cancer Terms".

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.