As newly minted nurses, most of us had the experience of being told to implement a procedure with the explanation "that's the way we do it here." Such practice decisions often reflected little more than a one-off, usually negative, outcome that led to a unit or agency practice change. Moreover, all new nurses quickly learned that patients with the same underlying disease, treatment, or surgery received different care based on individual physician preference. The growing trend toward using evidence to direct practice is weeding out care variations based on tradition or individual preference. However, just what do nurses mean when claiming to practice "evidence-based" care? An early and still relevant call to use evidence in practice asks us to commit to using the best available evidence along with our clinical expertise while engaging patients' preferences and values in decision-making.1 This notion of evidence-based practice includes (1) accessing the best available evidence, (2) having clinical expertise, (3) addressing patient preferences, and (4) valuing patients in decisions about care. Although all of these elements are important, it is timely to examine what we mean by "best available" evidence. One-off experiences are evidence but hardly at the level necessary for directing practice change. In a hierarchy of evidence quality, systematic reviews sit at the top because they are designed to exhaustively and systematically search all available studies (published and unpublished, in all languages) and to critically appraise and synthesize the world's evidence on a particular question.2
Misunderstanding and confusion about the meaning of systematic reviews and the systematic review process are common and vary by geographic location. The United Kingdom, Western European countries, and Australia have a more consistent understanding of systematic review than is seen in the United States. In the United States, a pattern of misunderstanding threads through national organizations, academia, authors, and consumers, resulting in ill-fitting educational practices and a hodgepodge of literature claiming to be systematic reviews. Accreditors, editors, and faculty need a better understanding of systematic review. A true systematic review is data-based scholarship. It can inform research and practice and be a quality learning experience for students, but it takes a team of reviewers, trained leaders, and considerable time, perhaps a year of concerted effort. Because the quality of evidence is critical to clinical care, we need to look at the typology of reviews and their defining characteristics to better differentiate bona fide systematic reviews from other types of reviews and from pseudo-systematic reviews.
The terms "systematic review" and "integrative review" are too often used interchangeably. Integrative review has emerged as a catch-all phrase for many different approaches to a literature review that lack specificity in purpose and method. To capture a more nuanced list of review types, Grant and Booth3 reviewed papers published as reviews. For each review type, they examined its definition, search methods, quality appraisal requirements, synthesis approaches, and methods of analysis. Fourteen unique review types were identified: critical review, literature review (generic), mapping review, meta-analysis, mixed-studies review, overview, qualitative systematic review, rapid review, scoping review, state-of-the-art review, systematic review, systematic search and review, systematized review, and umbrella review. This work is highly recommended for readers interested in the similarities/differences and the identified strengths/weaknesses of the 14 review types.
Focusing on systematic reviews, we begin by saying that a systematic review is not merely a review of the literature conducted in a systematic way. Systematic reviews use a structured, or systematic, method for examining the literature to address a priori designated aim(s). A systematic review is much more than using the search approach outlined by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram. A central tenet of a valid systematic review is that at least 2 reviewers (scientists and/or clinicians) adhere to an explicit, rigorous method of developing the review protocol, identifying studies, appraising the quality of included studies, and following predetermined synthesis processes. As in primary research, rigor is necessary to minimize bias and error in the review process. In addition, as in primary research, the methods must be described in enough detail to allow others to replicate the review.
Systematic reviews require qualified review panels and are not valid when carried out in isolation. Before embarking on a valid systematic review, reviewers must create a focused question using the Population, Intervention, Comparison, Outcome (Time) [PICO(T)] format and perform an initial search to determine whether the need for the review exists. A focused question might ask, for example, whether daily chlorhexidine bathing (intervention), compared with soap-and-water bathing (comparison), reduces bloodstream infections (outcome) in hospitalized adults with central venous catheters (population) during the hospital stay (time). Next, the review team must identify appropriate search terms, create a search strategy, define the databases to be searched, list study inclusion/exclusion criteria, describe the study appraisal process and criteria, and describe the planned process for synthesizing the literature. A minimum of 2 people are expected to be involved in developing the protocol, and there is growing recognition that stakeholders and representatives of the review's population of interest need to be involved during the developmental phase. Once developed, a review protocol should undergo peer evaluation to ensure that the planned systematic review adheres to quality criteria. Just as researchers conducting clinical trials are expected to make intervention protocols available so others can judge the fidelity of their planned and actual methods, systematic reviewers are expected to publish their protocols so others can judge whether the protocol can reliably produce valid findings.
Another characteristic of valid systematic reviews is having at least 2 independent reviewers engaged in the study selection, quality appraisal, and data extraction phases of the review. The 2 reviewers independently screen each study identified by the search, examine each study's relevance using the predetermined inclusion and exclusion criteria, and decide which studies should be retrieved for further evaluation. To minimize bias, the independent reviewers then appraise the retrieved studies using the standardized criteria published with the protocol and decide whether the primary studies should be retained or excluded based on the quality of each study. In addition, the reviewers use the appraisal process to transparently depict the types and degree of risk of bias (in quantitative reviews) or threats to credibility (in qualitative reviews) when they publish the systematic review.4 If the 2 independent reviewers fail to reach agreement on inclusion, they consult a third reviewer to achieve consensus.
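One common, though not required, way to quantify between-reviewer agreement before discrepancies are resolved is Cohen's kappa; a minimal sketch, assuming 2 reviewers making include/exclude screening decisions, is

\[ \kappa = \frac{p_o - p_e}{1 - p_e}, \]

where \(p_o\) is the observed proportion of decisions on which the 2 reviewers agree and \(p_e\) is the proportion of agreement expected by chance alone. Values approaching 1 indicate strong agreement, whereas values near 0 suggest the inclusion and exclusion criteria may need refinement before screening proceeds.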
Once the reviewers decide which studies to include, they extract data relating to the review question using a predetermined extraction tool developed as part of the protocol. In this way, the reviewers extract data independently while working from an agreed-on set of data elements that serve the review's purpose.
Finally, the reviewers synthesize the evidence using the synthesis methods described in the review protocol. Depending on the type of evidence included in the review, these methods can include meta-analysis (statistical pooling of quantitative results), narrative synthesis, or meta-synthesis (interpretive integration of qualitative findings). Outcomes of this synthesis should include a discussion of what is and is not known in relation to the review's question, along with recommendations for practice and research. In addition, reviewers should involve stakeholders when examining the outcomes of the synthesis and making recommendations for practice and research.
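To illustrate the quantitative case, consider a minimal sketch of the fixed-effect (inverse-variance) model, one of several pooling approaches a protocol might specify. The effect estimates \(\hat{\theta}_1, \ldots, \hat{\theta}_k\) from \(k\) included studies are combined as

\[ \hat{\theta} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\hat{\sigma}_i^2}, \]

so that studies with more precise estimates (smaller variance \(\hat{\sigma}_i^2\)) receive proportionally greater weight in the pooled estimate. Random-effects models extend this sketch by adding a between-study variance component, \(w_i = 1/(\hat{\sigma}_i^2 + \tau^2)\), when heterogeneity across studies is expected.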
A quality systematic review will meet these essential characteristics and follow the established procedures. Unfortunately, too many publications labeled systematic reviews fail to meet these quality standards. Fortunately, there are reliable and valid ways to determine whether a review is truly systematic or is another type of review. The internationally recognized Critical Appraisal Skills Programme (CASP)5 systematic review checklist provides a concise way to examine systematic review quality and can be used in small group settings. The tool is particularly useful for committee work or as a basis for journal clubs.
Some confusion exists regarding the use of the PRISMA guidelines for evaluating systematic reviews. According to the PRISMA statement,6 these guidelines tell authors what information to include in a manuscript; they are not intended to be a method for conducting a review or for evaluating the quality of a systematic review or its findings. Systematic review and synthesis methodologies continue to evolve; the journal Research Synthesis Methods is just 1 example of efforts aimed at advancing the science of synthesis. The Agency for Healthcare Research and Quality7 and international groups such as Cochrane, the Campbell Collaboration, and the Joanna Briggs Institute continue to contribute to methodological development as well.
Systematic reviews can be powerful sources of evidence to guide clinical decision-making and research priorities. Authentic systematic reviews are characterized by rigor and transparency; pseudo- or poorly conducted systematic reviews run the risk of misleading that decision-making. Reviewers achieve rigor through peer review of the protocol and the review report and through dual, blinded decisions related to retrieval, appraisal, and inclusion of studies and to data extraction. They achieve transparency through publication of the protocol and the appraisal and extraction tools and through tracking and reporting the rationale for including and excluding studies. As consumers of systematic reviews for the purpose of evidence-based practice, we are accountable for selecting high-quality reports that meet the standards of a true systematic review. Pseudo-reviews that do not meet rigorous standards are little more than a reflection of the authors' biases and may simply be a modern interpretation of "that's how we do it here."
References