LASTING IMPROVEMENT OR FLEETING SUCCESS
In "Does Root Cause Analysis Improve Patient Safety? A Systematic Review at the Department of Veterans Affairs," Shah and colleagues1 examine if root cause analysis in the Veterans Health Administration has been effective. The article shows that some investigators collect data on the impact of the improvement, which is encouraging and reassuring. Others also report the use of evidence in root cause analysis.2,3 This journal has already called to readers' attention the need for evidence-based root cause analysis.4 So, the article by Shah and colleagues is helpful in reemphasizing the point.
At the same time, this new article provides an opportunity to discuss how we know whether root cause analysis has led to lasting improvement. Some investigators argue that the quest for lasting change through addressing root causes is unhealthy.5 In events that have multiple causes, addressing one cause may lead to temporary improvement while the remaining causes continue to produce poor outcomes. In these circumstances, it is not clear what the objective of improvement teams should be. Doing some good is, of course, better than doing nothing. Yet claiming success only to find that the problem persists is frustrating and perhaps misleading. The problem of fleeting success arises when improvement teams adopt a short time horizon. Collecting data over long periods may reveal cycles of improvement followed by the return of adverse events.
We all have access to long-term data on patient safety through the Centers for Medicare & Medicaid Services' Hospital Compare website. These data are helpful in examining whether one is succeeding in reducing adverse events. Improvement teams that want to claim success should show changes in these persistent long-term rates. Some examples of trends in adverse outcomes illustrate how little progress has been made:
1. From 2014 to 2016, the median risk-adjusted mortality rate for isolated coronary artery bypass graft surgery rose and then fell, but neither change was statistically significant.6
2. From 2014 to 2016, the median risk-adjusted readmission rates for patients with chronic obstructive pulmonary disease, heart failure, pneumonia, and stroke changed, but again none of the changes were statistically significant. There was a statistically significant but small decline in readmissions for myocardial infarction.7
3. From 2005 to 2011, adverse event rates (adverse drug events, hospital-acquired infections, and post-procedure complications) did not decline for pneumonia or for conditions requiring surgery.8
4. From 1998 to 2007, 7 Patient Safety Indicators (post-operative pulmonary embolism or deep vein thrombosis; post-operative physiological or metabolic derangement; post-operative sepsis; selected infections due to medical care; decubitus ulcer; accidental puncture or laceration; and post-operative respiratory failure) increased in frequency, while 7 other Patient Safety Indicators decreased in frequency.9 In short, some outcomes got worse and some got better.
5. A 2020 systematic review of the literature found that none of the reported root cause analyses followed up to see whether their recommended actions actually reduced adverse events.10
These data suggest that requiring hospitals to conduct root cause analysis has not led to a consistent improvement in patient safety. The problems persist. Of course, exceptions exist and some adverse events have declined,11 but overall the picture is inconsistent and there is limited evidence of widespread reduction in adverse events.
What are we to do about occasional reports that root cause analysis is leading to some improvements, when hospital-level data show an inconsistent picture? The central question to ask about these reports is: for how long did the study observe a reduction in adverse events? Short time frames can be misleading; longer time frames are needed. The Centers for Medicare & Medicaid Services' Hospital Compare website now provides long-term data on adverse events. Any improvement effort that wants to claim success needs to show a change in these long-term, hospital-wide data.
The importance of a long-term perspective can be highlighted by an example. Consider, for a moment, a persistent problem in emergency departments (EDs): excessive boarding. At a Veterans Administration health center, an improvement team examined the situation and identified 2 problems: (1) staffing and (2) urgent treatment of low-priority patients. The hospital administrator paid for recruitment of a new nurse, and the entire ED was redesigned to physically separate low- and high-priority emergencies. The improvement team collected data on patient flow and showed that more personnel and better flow led to less boarding. Six months later, however, the problem returned. A root cause analysis showed that the delays had nothing to do with the efficiency of ED operations (nothing to do with staffing or low-priority care). The delays were primarily due to distal problems outside of the ED: near-capacity hospital beds, delays in discharging patients, and delays in imaging.12 The initial improvement team had data and had hypothesized that the delays were due to operations within the ED. They provided evidence, and administrators acted to address the operational issues. This was not enough. Perhaps it solved a problem, but not the central reason for the delays. The root cause analysis showed a different picture. It showed that patients and clinicians were waiting on stat imaging requests, but demand on imaging elsewhere in the health center was causing a bottleneck. Imaging delays elsewhere led to backups in the ED. Discharge delays from the hospital had made hospital beds unavailable, removing a safety valve for reducing delays in the ED: patients in the ED could not be admitted. The improvement team had presented data on its success, but those data were collected during a period when the hospital was not near capacity and there were no imaging delays. A longer-term look at the data and the causes of delay identified a different set of causes.
Not every piece of evidence of success is real: we need data that document lasting improvements. This is best done by addressing root causes. This journal has already published articles on how to empirically identify root causes.13 It is important to use these methods to know whether all indirect root causes have been addressed. Another approach, one that does not require root cause analysis, is to show that adverse outcomes have gone away, that is, gone away for good. Since nothing is permanent, one needs to show that they have stayed away for a long time, say at least a year.
Many improvement teams have a shorter time horizon. The availability of Hospital Compare data and of Patient Safety Indicators within electronic health records may enable improvement teams to take a sufficiently long look into the past. They can show that the current number of days between adverse outcomes differs from the pattern of such intervals over the last few years. The history of adverse outcomes is widely available, so it is relatively easy to establish historical patterns. Improvement teams need to take advantage of this history to show that the current situation differs from long-term patterns. Without a long-term perspective, success may remain fleeting.
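To make the comparison concrete, the following is a minimal sketch, in Python, of one way a team might contrast recent intervals between adverse events with the historical pattern. The event dates, the g-chart-style control limit, and the decision rule are illustrative assumptions, not a method reported by Shah and colleagues or prescribed by Hospital Compare.

from datetime import date
from statistics import mean

def days_between(events):
    """Intervals, in days, between consecutive adverse events."""
    ordered = sorted(events)
    return [(b - a).days for a, b in zip(ordered, ordered[1:])]

# Hypothetical adverse-event dates: several years of history, then the
# period after an improvement effort. In practice these would come from
# the electronic health record or Hospital Compare extracts.
historical = [date(2019, 1, 5), date(2019, 3, 2), date(2019, 5, 1),
              date(2019, 8, 9), date(2019, 11, 3), date(2020, 2, 6),
              date(2020, 5, 18), date(2020, 9, 1), date(2021, 1, 12)]
recent = [date(2021, 6, 1), date(2021, 10, 20), date(2022, 4, 2)]

baseline = days_between(historical)
current = days_between(recent)

# For rare, roughly independent events, intervals are approximately
# geometric, so the standard deviation is close to the mean; mean + 3*mean
# serves as a crude 3-sigma-style upper limit (a simplified g-chart bound).
upper_limit = 4 * mean(baseline)

print(f"historical mean interval: {mean(baseline):.0f} days")
print(f"recent mean interval:     {mean(current):.0f} days")
print("sustained improvement?",
      len(current) >= 2 and all(d > upper_limit for d in current))

The point of the sketch is the time horizon: the baseline spans years, and improvement is claimed only when several consecutive intervals exceed the historical limit. With the sample dates above, recent intervals are longer than the historical mean but do not clear the limit, so the verdict is still "no", which is exactly the distinction between fleeting and lasting success.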
The article by Shah and colleagues,1 in this issue, shows the uneven reliance on data within the Veterans Health Administration. This organization has a large and sophisticated performance improvement enterprise. Every medical center is expected to report rates of adverse outcomes. Furthermore, the organization keeps decades of electronic health records. It is not clear why root cause analysis in this organization does not rely more systematically on the massive, long-term, national data known to be available in the Veterans Administration Informatics and Computing Infrastructure. Quality directors and improvement teams should make a larger effort to use these data.
REFERENCES