In the United States, patients receive only half of recommended medical care, and nearly a third of the care they do receive may be unnecessary. This gap between usual and evidence-based clinical practice is associated with a third of hospital deaths and an estimated $380 billion in losses each year.1 The health care industry has invested billions of dollars in electronic medical record systems in the belief that these systems would revolutionize the translation of evidence-based guidelines into clinical practice, leading to higher-quality care at lower cost and thereby improving health care value. Despite these expectations, variability in clinical practice persists for common pediatric diagnoses. Such variability often leads to unsafe, less efficient, and costly health care. However, astute clinicians, policy makers, and payers recognize that evidence-based clinical practice guidelines can make care more reliable and efficient and can narrow the evidence-practice gap between what clinicians do and what scientific evidence supports.
Current strategies to optimize health information technology as a tool for translating evidence into clinical practice include clinical decision support (CDS) tools, such as condition-specific order sets, documentation templates, and best practice advisories.2 Despite these strategies, inconsistencies remain in medical care, widening gaps in population-level health experiences and outcomes for common pediatric diagnoses, including asthma, community-acquired pneumonia, and bronchiolitis.3,4 Poor adoption by providers is an important barrier to optimizing health information technology, specifically CDS, as a tool to translate evidence-based recommendations into clinical practice.4 Provider factors, such as attitudes toward CDS, are typically not addressed during any stage of CDS development, even though they are important determinants of adoption.5
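To make the idea concrete, the sketch below shows the kind of rule logic a condition-specific best practice advisory might encode; the field names, diagnosis codes, and medication list are hypothetical illustrations rather than any vendor's actual interface.

```python
# Minimal sketch of best practice advisory (BPA) rule logic.
# Field names, codes, and medications are hypothetical illustrations.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Encounter:
    diagnosis_codes: List[str]                       # e.g., ICD-10 codes on the problem list
    active_orders: List[str] = field(default_factory=list)


def asthma_steroid_advisory(encounter: Encounter) -> Optional[str]:
    """Suggest a systemic corticosteroid when an asthma admission lacks one."""
    has_asthma = any(code.startswith("J45") for code in encounter.diagnosis_codes)
    has_steroid = any(order in {"prednisone", "prednisolone", "dexamethasone"}
                      for order in encounter.active_orders)
    if has_asthma and not has_steroid:
        return "Consider ordering a systemic corticosteroid for this asthma admission."
    return None
```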
It is important to develop strategies that foster partnerships between end users, most often clinicians, and medical informatics personnel to learn which provider factors will enhance CDS adoption. Even a CDS tool built with user-centered design is unlikely to fully address the variability in current clinical practice. Thus, we must also focus on robust implementation strategies that foster consistent adoption of these tools and sustain improvement efforts.
We have previously described a conceptual framework associated with sustained improvements in a variety of accountability measures at The Johns Hopkins Hospital.6 The model has also been implemented in 4 other hospitals in the Johns Hopkins Health System to translate evidence into practice and has achieved similar improvements in performance measures.7 This model may provide a promising approach to promoting CDS adoption and reducing variability in a sustainable manner.6 The model entails (1) clarifying and communicating the goals and measures of interest across all levels of the organization; (2) building capacity and using Lean Sigma or a similar change model for improvement; (3) reporting performance transparently and ensuring accountability; and (4) developing a sustainability process.
Here is how the model could support implementation, using pediatric asthma as an example. First, a multidisciplinary team involving a pediatric pulmonologist, a hospitalist, a respiratory therapist, nursing staff, and a pharmacist would agree on the overall goals of the CDS tool. The team would select existing performance measures or, if needed, identify complementary ones, inclusive of both process and outcome metrics. The aim would be to use national or standardized performance measures when these exist. For example, The Joint Commission has identified 3 performance measures for Children's Asthma Care that pediatric hospitalists should aim to achieve for patients hospitalized for asthma.
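As a sketch of how these measures might be represented inside a CDS tool for tracking purposes, the example below flags per-encounter compliance with the 3 Children's Asthma Care measures (relievers during hospitalization, systemic corticosteroids during hospitalization, and a home management plan of care at discharge); the field names are hypothetical simplifications of the full measure specifications.

```python
# Sketch: per-encounter compliance flags for the Children's Asthma Care (CAC) measures.
# Field names are hypothetical; the real measures have detailed inclusion and
# exclusion criteria defined in The Joint Commission specifications.

from dataclasses import dataclass
from typing import Dict


@dataclass
class AsthmaEncounter:
    received_reliever: bool            # CAC-1: reliever given during hospitalization
    received_systemic_steroid: bool    # CAC-2: systemic corticosteroid given
    home_plan_documented: bool         # CAC-3: home management plan of care at discharge


def cac_compliance(enc: AsthmaEncounter) -> Dict[str, bool]:
    """Return pass/fail flags for each measure so the CDS tool can report them."""
    return {
        "CAC-1": enc.received_reliever,
        "CAC-2": enc.received_systemic_steroid,
        "CAC-3": enc.home_plan_documented,
    }
```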
Once the multidisciplinary team identifies appropriate measures, it would partner with health care informatics personnel to integrate these measures into the CDS tool and to build a mechanism to easily track and report performance. The next step would be to implement and test the CDS tool. Implementation would be an iterative process and could use the Lean Sigma define-measure-analyze-improve-control (DMAIC)8 cycle or a similar change framework to systematically identify where failures occur and determine ways to improve. Transparent reporting of performance during this process is important to ensure accountability and to determine when goals have been consistently achieved over time (step 3). Once goals have been consistently achieved, the last step would be to develop a sustainability plan to ensure that improvements persist beyond the targeted implementation phase.
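A minimal sketch of the reporting logic that could support step 3 and the sustainability decision follows; the monthly rates, the 95% goal, and the 6-month run length are illustrative assumptions rather than recommended targets.

```python
# Sketch: decide whether a performance goal has been consistently achieved over time.
# The goal threshold, run length, and sample rates are illustrative assumptions.

from typing import List


def monthly_rate(flags: List[bool]) -> float:
    """Fraction of encounters in a month that met the measure."""
    return sum(flags) / len(flags) if flags else 0.0


def consistently_achieved(rates: List[float], goal: float = 0.95, months: int = 6) -> bool:
    """True if the most recent `months` monthly rates all meet or exceed the goal."""
    return len(rates) >= months and all(r >= goal for r in rates[-months:])


# Example: monthly home management plan (CAC-3) compliance rates reported to the team.
cac3_rates = [0.78, 0.84, 0.91, 0.96, 0.97, 0.95, 0.98, 0.96, 0.97]
print(consistently_achieved(cac3_rates))  # True -> proceed to the sustainability plan
```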
The potential of CDS tools to improve the translation of evidence-based recommendations into clinical practice has not been fully realized. Primary contributing factors have been poor provider adoption of these tools and lackluster implementation strategies. Development of CDS tools should incorporate end user feedback to facilitate adoption. A comprehensive implementation model that involves end user clinicians in the initiative could promote adoption of such tools and sustained improvement in performance measures. The authors plan to study this model as a mechanism to standardize pediatric care.
REFERENCES