Hierarchy of evidence


A hierarchy of evidence is a heuristic used to rank the relative strength of results obtained from scientific research. There is broad agreement on the relative strength of large-scale epidemiological studies; more than 80 different hierarchies have been proposed for assessing medical evidence. The design of the study and the endpoints measured affect the strength of the evidence. In clinical research, the best evidence for treatment efficacy comes mainly from meta-analyses of randomized controlled trials (RCTs). Systematic reviews of completed, high-quality randomized controlled trials, such as those published by the Cochrane Collaboration, typically rank as the highest quality of evidence, above observational studies, while expert opinion and anecdotal experience occupy the bottom level. Evidence hierarchies are often applied in evidence-based practices and are integral to evidence-based medicine.

Definition

In 2014, Stegenga defined a hierarchy of evidence as a "rank-ordering of kinds of methods according to the potential for that method to suffer from systematic bias". At the top of the hierarchy sits the method with the greatest freedom from systematic bias, or the best internal validity, relative to the hypothesized efficacy of the medical intervention being tested.
In 1997, Greenhalgh suggested it was "the relative weight carried by the different types of primary study when making decisions about clinical interventions".
The National Cancer Institute defines levels of evidence as "a ranking system used to describe the strength of the results measured in a clinical trial or research study. The design of the study and the endpoints measured affect the strength of the evidence."

Examples

A large number of hierarchies of evidence have been proposed, and similar protocols for the evaluation of research quality are still in development. So far, the available protocols pay relatively little attention to whether outcome research is relevant to efficacy (the outcome of a treatment performed under ideal conditions) or to effectiveness (the outcome of the treatment performed under ordinary, expectable conditions).

GRADE

The GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach is a method of assessing the certainty of evidence and the strength of recommendations. GRADE began in 2000 as a collaboration of methodologists, guideline developers, biostatisticians, clinicians, public health scientists, and other interested members.
Over 100 organizations have endorsed and/or are using GRADE to evaluate the quality of evidence and the strength of health care recommendations.
GRADE rates the quality of evidence as follows (a schematic sketch follows the list):
High: There is a lot of confidence that the true effect lies close to that of the estimated effect.
Moderate: There is moderate confidence in the estimated effect; the true effect is likely to be close to the estimated effect, but there is a possibility that it is substantially different.
Low: There is limited confidence in the estimated effect; the true effect might be substantially different from the estimated effect.
Very low: There is very little confidence in the estimated effect; the true effect is likely to be substantially different from the estimated effect.
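In GRADE methodology, evidence from randomized trials begins at high certainty and evidence from observational studies at low; the rating is then moved down for concerns such as risk of bias, inconsistency, indirectness, imprecision, or publication bias, and up for factors such as a large effect. The Python sketch below illustrates this bookkeeping; the function and its one-step level arithmetic are simplifications for illustration, not an official GRADE algorithm.

```python
# Simplified sketch of GRADE certainty bookkeeping. The four levels match
# the list above; the start-high/start-low rule and the down/up domains
# follow the published GRADE approach, but this function is only an
# illustration, not an official algorithm.

LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(randomized: bool, downgrades: int = 0, upgrades: int = 0) -> str:
    """Start at 'high' for randomized trials and 'low' for observational
    studies, then move one level per serious concern (e.g. risk of bias,
    inconsistency, indirectness, imprecision, publication bias) or per
    special strength (e.g. a large effect, a dose-response gradient)."""
    start = LEVELS.index("high") if randomized else LEVELS.index("low")
    return LEVELS[max(0, min(len(LEVELS) - 1, start - downgrades + upgrades))]

# A randomized trial rated down once for imprecision -> moderate certainty.
print(grade_certainty(randomized=True, downgrades=1))
# An observational study rated up once for a large effect -> moderate.
print(grade_certainty(randomized=False, upgrades=1))
```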

Guyatt and Sackett

In 1995, Guyatt and Sackett published the first such hierarchy.
Greenhalgh put the different types of primary study in the following order:
  1. Systematic reviews and meta-analyses of "RCTs with definitive results"
  2. RCTs with definitive results
  3. RCTs with non-definitive results
  4. Cohort studies
  5. Case-control studies
  6. Cross-sectional surveys
  7. Case reports

Saunders et al.

A protocol suggested by Saunders et al. assigns research reports to six categories, on the basis of research design, theoretical background, evidence of possible harm, and general acceptance. To be classified under this protocol, there must be descriptive publications, including a manual or similar description of the intervention. This protocol does not consider the nature of any comparison group, the effect of confounding variables, the nature of the statistical analysis, or a number of other criteria. The six categories, roughly rendered as a decision rule in the sketch after this list, are:
  1. Well-supported, efficacious treatments: there are two or more randomized controlled outcome studies comparing the target treatment to an appropriate alternative treatment and showing a significant advantage to the target treatment.
  2. Supported and probably efficacious treatments: assigned based on positive outcomes of nonrandomized designs with some form of control, which may involve a non-treatment group.
  3. Supported and acceptable treatments: interventions supported by one controlled or uncontrolled study, by a series of single-subject studies, or by work with a different population than the one of interest.
  4. Promising and acceptable treatments: interventions that have no support except general acceptance and clinical anecdotal literature; any evidence of possible harm excludes a treatment from this category.
  5. Innovative and novel treatments: interventions that are not thought to be harmful but are not widely used or discussed in the literature.
  6. Concerning treatments: treatments that have the possibility of doing harm, as well as having unknown or inappropriate theoretical foundations.
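These category assignments amount to a decision rule over features of the supporting research. The following Python sketch is an illustrative reconstruction only, not part of the published protocol; the field names, thresholds, and the ordering of the checks are assumptions chosen to mirror the prose above.

```python
from dataclasses import dataclass

@dataclass
class EvidenceProfile:
    """Hypothetical summary of the research base for one intervention."""
    randomized_studies: int        # RCTs showing an advantage over an alternative
    controlled_nonrandomized: int  # nonrandomized designs with some form of control
    other_studies: int             # uncontrolled, single-subject, or other-population work
    generally_accepted: bool       # general acceptance / clinical anecdotal literature
    evidence_of_harm: bool
    sound_theory: bool

def saunders_category(p: EvidenceProfile) -> int:
    """Assign a Saunders et al. category (1 = best supported, 6 = concerning).

    A rough paraphrase of the prose criteria; the real protocol also weighs
    descriptive publications and theoretical background in more detail.
    """
    if p.evidence_of_harm or not p.sound_theory:
        return 6  # concerning treatment
    if p.randomized_studies >= 2:
        return 1  # well-supported, efficacious
    if p.controlled_nonrandomized >= 1:
        return 2  # supported and probably efficacious
    if p.other_studies >= 1:
        return 3  # supported and acceptable
    if p.generally_accepted:
        return 4  # promising and acceptable
    return 5      # innovative and novel

# Example: two supportive RCTs, no evidence of harm -> Category 1.
print(saunders_category(EvidenceProfile(2, 0, 0, True, False, True)))
```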

Khan et al.

A protocol for the evaluation of research quality was suggested in a report from the Centre for Reviews and Dissemination, prepared by Khan et al. and intended as a general method for assessing both medical and psychosocial interventions. While strongly encouraging the use of randomized designs, this protocol noted that such designs are useful only if they meet demanding criteria, such as true randomization and concealment of the assigned treatment group from the client and from others, including the individuals assessing the outcome. The Khan et al. protocol emphasized the need to make comparisons on the basis of "intention to treat" in order to avoid problems related to greater attrition in one group. It also presented demanding criteria for nonrandomized studies, including matching of groups on potential confounding variables, adequate descriptions of groups and treatments at every stage, and concealment of treatment choice from persons assessing the outcomes. This protocol did not provide a classification of levels of evidence, but included or excluded treatments from classification as evidence-based depending on whether the research met the stated standards.
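The "intention to treat" principle means analyzing participants in the groups to which they were randomized, whether or not they completed the assigned treatment, so that differential dropout cannot flatter one arm. The following Python sketch, using invented data, shows how a per-protocol analysis can overstate an effect relative to an intention-to-treat analysis:

```python
# Minimal illustration of intention-to-treat (ITT) vs. per-protocol analysis.
# The trial data below are invented for demonstration purposes only.

patients = [
    # (assigned_group, completed_treatment, recovered)
    ("treatment", True,  True),
    ("treatment", True,  True),
    ("treatment", False, False),  # dropped out; ITT still counts this patient
    ("treatment", False, False),
    ("control",   True,  False),
    ("control",   True,  True),
    ("control",   True,  False),
    ("control",   True,  False),
]

def recovery_rate(rows):
    return sum(recovered for _, _, recovered in rows) / len(rows)

# ITT: analyze everyone in the group to which they were randomized.
itt_treated = [p for p in patients if p[0] == "treatment"]
itt_control = [p for p in patients if p[0] == "control"]

# Per-protocol: analyze only those who completed their assigned treatment,
# which can bias results when dropout is related to outcome.
pp_treated = [p for p in itt_treated if p[1]]

print(f"ITT treatment recovery:          {recovery_rate(itt_treated):.0%}")  # 50%
print(f"Per-protocol treatment recovery: {recovery_rate(pp_treated):.0%}")   # 100%
print(f"Control recovery:                {recovery_rate(itt_control):.0%}")  # 25%
```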

U.S. National Registry of Evidence-Based Practices and Programs

An assessment protocol has been developed by the U.S. National Registry of Evidence-Based Practices and Programs (NREPP). Evaluation under this protocol occurs only if an intervention has already had one or more positive outcomes reported (with a probability of less than .05), if these have been published in a peer-reviewed journal or an evaluation report, and if documentation such as training materials has been made available. The NREPP evaluation, which assigns quality ratings from 0 to 4 to certain criteria, examines the reliability and validity of outcome measures used in the research, evidence for intervention fidelity, levels of missing data and attrition, potential confounding variables, and the appropriateness of statistical handling, including sample size.
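As a rough illustration of this criterion-by-criterion scoring, the Python sketch below rates each of the criteria named above on the 0-4 scale. The criterion keys are paraphrases, and averaging the ratings into a single score is an assumption made for this sketch, not NREPP's published aggregation method.

```python
# Illustrative NREPP-style quality scoring: each criterion named in the text
# is rated on a 0-4 scale. The criterion keys are paraphrases, and averaging
# the ratings into a single score is an assumption made for this sketch.

CRITERIA = [
    "reliability_of_outcome_measures",
    "validity_of_outcome_measures",
    "intervention_fidelity",
    "missing_data_and_attrition",
    "potential_confounding_variables",
    "appropriateness_of_statistical_handling",
]

def quality_score(ratings: dict) -> float:
    """Validate the 0-4 ratings and return their mean."""
    for name in CRITERIA:
        if not 0 <= ratings[name] <= 4:
            raise ValueError(f"{name} must be rated 0-4, got {ratings[name]}")
    return sum(ratings[name] for name in CRITERIA) / len(CRITERIA)

example = {
    "reliability_of_outcome_measures": 3,
    "validity_of_outcome_measures": 3,
    "intervention_fidelity": 4,
    "missing_data_and_attrition": 2,
    "potential_confounding_variables": 2,
    "appropriateness_of_statistical_handling": 3,
}
print(f"overall quality: {quality_score(example):.2f}")  # 2.83
```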

Mercer and Pignotti

A protocol suggested by Mercer and Pignotti uses a taxonomy that classifies interventions on both research quality and other criteria:
  1. Evidence-based interventions: supported by work with randomized designs employing comparisons to established treatments, independent replications of results, blind evaluation of outcomes, and the existence of a manual.
  2. Evidence-supported interventions: supported by nonrandomized designs, including within-subjects designs, and otherwise meeting the criteria for the previous category.
  3. Evidence-informed treatments: supported by case studies or by interventions tested on populations other than the targeted group, without independent replications; a manual exists, and there is no evidence of harm or potential for harm.
  4. Belief-based interventions: have no published research reports, or only reports based on composite cases; they may be based on religious or ideological principles or may claim a basis in accepted theory without an acceptable rationale; there may or may not be a manual, and there is no evidence of harm or potential for harm.
  5. Potentially harmful treatments: harmful mental or physical effects have been documented, or a manual or other source shows the potential for harm.

History

Canada

The term was first used in a 1979 report by the Canadian Task Force on the Periodic Health Examination (CTF) to "grade the effectiveness of an intervention according to the quality of evidence obtained".
The task force used three levels, subdividing level II:
  Level I: Evidence from at least one randomized controlled trial.
  Level II-1: Evidence from at least one well-designed cohort study or case-control study, i.e. a controlled trial that is not randomized.
  Level II-2: Comparisons between times and places with or without the intervention.
  Level III: Opinions of respected authorities, based on clinical experience, descriptive studies, or reports of expert committees.
The CTF graded its recommendations on a 5-point A-E scale:
  A: Good level of evidence for the recommendation to consider a condition.
  B: Fair level of evidence for the recommendation to consider a condition.
  C: Poor level of evidence for the recommendation to consider a condition.
  D: Fair level of evidence for the recommendation to exclude the condition.
  E: Good level of evidence for the recommendation to exclude the condition from consideration.
The CTF updated its report in 1984, 1986, and 1987.

USA

In 1988, the United States Preventive Services Task Force (USPSTF) issued its guidelines based on the CTF's, using the same three levels and further subdividing level II.
Over the years, many more grading systems have been described.

UK

In September 2000, the Oxford Centre for Evidence-Based Medicine (CEBM) published its Levels of Evidence, guidelines for ranking evidence about claims of prognosis, diagnosis, treatment benefits, treatment harms, and screening. It addressed not only therapy and prevention, but also diagnostic tests, prognostic markers, and harm. The original CEBM Levels were first released for Evidence-Based On Call to make the process of finding evidence feasible and its results explicit. As published in 2009, the levels for treatment benefits are:
  1a. Systematic review (with homogeneity) of randomized controlled trials
  1b. Individual randomized controlled trial (with narrow confidence interval)
  1c. All-or-none study
  2a. Systematic review (with homogeneity) of cohort studies
  2b. Individual cohort study (including low-quality randomized controlled trials, e.g. with less than 80% follow-up)
  2c. "Outcomes" research; ecological studies
  3a. Systematic review (with homogeneity) of case-control studies
  3b. Individual case-control study
  4. Case series (and poor-quality cohort and case-control studies)
  5. Expert opinion without explicit critical appraisal, or based on physiology, bench research, or "first principles"
In 2011, an international team redesigned the Oxford CEBM Levels to make them more understandable and to take into account recent developments in evidence ranking schemes. The Levels have been used by patients and clinicians, and also to develop clinical guidelines, including recommendations for the optimal use of phototherapy and topical therapy in psoriasis and guidelines for the use of the BCLC staging system for diagnosing and monitoring hepatocellular carcinoma in Canada.

Global

In 2007, the World Cancer Research Fund's grading system described four levels: convincing, probable, possible, and insufficient evidence. All Global Burden of Disease Studies have used it to evaluate the epidemiologic evidence supporting causal relationships.

Proponents

In 1995, Wilson et al., and in 1996, Hadorn et al. and Atkins et al., described and defended various types of grading systems.

Criticism

In the 21st century, more than a decade after evidence hierarchies were established, their use was increasingly criticized. In 2011, a systematic review of the critical literature found three kinds of criticism: procedural aspects of EBM, greater-than-expected fallibility of EBM, and EBM being incomplete as a philosophy of science.
Many critics have published in journals of philosophy, which are ignored by the clinician proponents of EBM. Rawlins and Bluhm note that EBM limits the ability of research results to inform the care of individual patients, and that understanding the causes of diseases requires both population-level and laboratory research. The EBM hierarchy of evidence does not take into account research on the safety and efficacy of medical interventions. On this view, RCTs should be designed "to elucidate within-group variability, which can only be done if the hierarchy of evidence is replaced by a network that takes into account the relationship between epidemiological and laboratory research".
The ranking of evidence by study design has also been questioned, because guidelines have "failed to properly define key terms, weight the merits of certain non-randomized controlled trials, and employ a comprehensive list of study design limitations".
Stegenga has specifically criticized the placement of meta-analyses at the top of such hierarchies. The assumption that RCTs ought necessarily to be near the top of such hierarchies has been criticized by Worrall and Cartwright.
In 2005, Ross Upshur noted that EBM claims to be a normative guide to being a better physician, but is not a philosophical doctrine. He pointed out that EBM supporters displayed "near-evangelical fervor", convinced of its superiority and ignoring critics who seek to expand the borders of EBM from a philosophical point of view.
Borgerson in 2009 wrote that the justifications for the hierarchy levels are not absolute and do not epistemically justify them, and that "medical researchers should pay closer attention to social mechanisms for managing pervasive biases". La Caze noted that basic science resides on the lower tiers of EBM even though it "plays a role in specifying experiments, but also analysing and interpreting the data".
Concato argued in 2004 that the hierarchy allowed RCTs too much authority, and that not all research questions can be answered through RCTs, whether for practical or for ethical reasons. Even when evidence is available from high-quality RCTs, evidence from other study types may still be relevant. Stegenga opined that evidence assessment schemes are unreasonably constraining and less informative than other schemes now available.