Penman-Aguilar et al,1 scientists and statisticians from the Centers for Disease Control and Prevention (CDC) and its arm, the National Center for Health Statistics (NCHS), have attempted to provide substantial guidance on the measurement of health disparities and on how researchers should report the methods they employ to measure disparities. But they have overlooked fundamental issues, including one that NCHS statisticians attempted to address in this journal over a decade ago. Consequently, the article will do more to impede than to promote efforts to quantify differences in the circumstances of advantaged and disadvantaged groups reflected by the rates at which they experience adverse or favorable health and health care outcomes.
Patterns by Which Measures Tend to Be Affected by the Prevalence of an Outcome
A crucial shortcoming of the Penman-Aguilar et al article is the failure to recognize patterns by which standard measures of differences between outcome rates tend to be affected by the prevalence of an outcome. The rarer an outcome, the greater tends to be the relative (percentage) difference between rates at which advantaged and disadvantaged groups experience the outcome and the smaller tends to be the relative difference between rates at which such groups avoid the outcome. Thus, for example, as mortality declines, relative differences in mortality tend to increase while relative differences in survival tend to decrease; as rates of appropriate health care increase, relative differences in nonreceipt of care tend to increase while relative differences in receipt of care tend to decrease. Similarly, relative differences in adverse health and health care outcomes tend to be larger, while relative differences in the corresponding favorable outcomes tend to be smaller, where the adverse outcomes are comparatively rare than where they are comparatively common. A corollary to this pattern is that, as an outcome changes in overall prevalence, the group with the lower baseline rate tends to experience a larger proportionate change in its rate for the outcome while the other group tends to experience a larger proportionate change in its rate for the opposite outcome.2-7
Absolute (percentage point) differences between rates and differences measured by odds ratios are unaffected by which outcome one examines. But in order for a measure to effectively quantify differences in the circumstances of advantaged and disadvantaged groups (or, otherwise put, the strength of the forces causing their outcome rates to differ), the measure must remain unchanged when there occurs a general change in the prevalence of an outcome (and its opposite). And absolute differences and odds ratios also tend to change as the prevalence of an outcome changes, although in more complicated ways than the 2 relative differences.
Roughly, as an outcome goes from being rare to being common, absolute differences tend to increase; as an outcome goes from being common to being very common, absolute differences tend to decrease (the peak tending to occur where overall prevalence is near 50%). The absolute difference and both relative differences may all change in the same direction, in which case one may infer that there occurred a meaningful change in the strength of the forces causing the outcome rates to differ. But when a relative difference changes in a different direction from the absolute difference, the other relative difference will necessarily have changed in the opposite direction from the first relative difference and in the same direction as the absolute difference.2,3
As the prevalence of an outcome changes, differences measured by odds ratios tend to change in the opposite direction of absolute differences.2-4
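In symbols (notation introduced here for convenience rather than drawn from the referenced articles), let fAG and fDG denote the favorable outcome rates of the advantaged and disadvantaged groups, so that the corresponding adverse outcome rates are 1 - fAG and 1 - fDG. Writing the relative differences as rate ratios, the 4 measures discussed above may be expressed as follows:

```latex
% The 4 measures, in terms of the favorable outcome rates f_{AG} and f_{DG}
% (assuming f_{AG} > f_{DG}); notation introduced here for illustration.
\begin{align*}
\text{relative difference, favorable outcome:}\quad & \frac{f_{AG}}{f_{DG}} \\
\text{relative difference, adverse outcome:}\quad & \frac{1-f_{DG}}{1-f_{AG}} \\
\text{absolute difference:}\quad & f_{AG}-f_{DG} \\
\text{odds ratio (adverse outcome, DG vs AG):}\quad & \frac{(1-f_{DG})/f_{DG}}{(1-f_{AG})/f_{AG}}
\end{align*}
```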
Scores of illustrations of these patterns, based on real and hypothetical data, may be found in referenced articles2-7 or the places to which they direct the reader.
Table 1 illustrates the patterns in their most essential form. The table is based on a situation where the means of the underlying distributions of factors associated with experiencing an outcome (or its opposite) for an advantaged group (AG) and a disadvantaged group (DG) differ by half a standard deviation and the distributions have the same standard deviation. The table presents, at 4 levels of overall prevalence benchmarked by the AG rate, favorable outcome rates for both groups (with corresponding adverse outcome rates implied), along with (a) the ratio of AG's favorable outcome rate to DG's favorable outcome rate; (b) the ratio of DG's adverse outcome rate to AG's adverse outcome rate; (c) the absolute difference between the favorable (or adverse) outcome rates; and (d) the ratio of DG's odds of experiencing the adverse outcome to AG's odds of experiencing the adverse outcome (which is the same as, or the reciprocal of, the 3 other possible formulations of the odds ratio).
The parenthetical numbers next to each measure value reflect rankings, from largest to smallest, of the disparities according to each measure. In accordance with the aforementioned discussion, the 2 relative differences yield rankings that are the opposite of one another. The absolute difference and the odds ratio also yield rankings that are the opposite of one another but that are different from the rankings according to either relative difference.
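Because Table 1 follows directly from the latent-variable setup just described, its logic can be reproduced in a few lines. The sketch below assumes normally distributed underlying factors with equal standard deviations and a half-standard-deviation gap between group means; the AG benchmark rates are illustrative choices rather than the exact rates in Table 1. Running it shows the 2 relative differences ranking the 4 prevalence levels in opposite orders, and the absolute difference and odds ratio likewise ranking them in opposite orders that differ from the orders given by either relative difference.

```python
# Illustrative reconstruction of the Table 1 logic (not the published table itself).
# Assumes normal underlying distributions with equal SDs and a 0.5 SD gap in means;
# the AG benchmark favorable-outcome rates below are illustrative choices.
from scipy.stats import norm

GAP_SD = 0.5                              # difference between group means, in SD units
ag_favorable = [0.20, 0.40, 0.70, 0.90]   # illustrative AG favorable-outcome rates

print(f"{'AG fav':>7} {'DG fav':>7} {'RR fav':>7} {'RR adv':>7} {'Abs diff':>9} {'OR adv':>7}")
for f_ag in ag_favorable:
    # DG's favorable rate implied by shifting the AG threshold by 0.5 SD
    f_dg = norm.cdf(norm.ppf(f_ag) - GAP_SD)
    rr_fav = f_ag / f_dg                                  # ratio of favorable rates
    rr_adv = (1 - f_dg) / (1 - f_ag)                      # ratio of adverse rates
    abs_diff = f_ag - f_dg                                # percentage-point difference
    or_adv = ((1 - f_dg) / f_dg) / ((1 - f_ag) / f_ag)    # odds ratio, adverse outcome
    print(f"{f_ag:7.2f} {f_dg:7.2f} {rr_fav:7.2f} {rr_adv:7.2f} {abs_diff:9.2f} {or_adv:7.2f}")
```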
In my article,2(pp335-336) I used a version of this table to refute the notion that measures yielding opposite conclusions about the comparative size of disparities may all be valid in some sense and that one must make a value judgment in choosing between a relative difference and the absolute difference when the measures yield opposite conclusions regarding such things as the direction of changes in disparities over time (a position adopted in the Penman-Aguilar et al article). The table showed that contrasting interpretations of the comparative strength of the forces causing outcome rates to differ over time or from setting to setting cannot both be correct. I rely on that treatment here.
But I also note that, as with other works discussing value judgments in choosing between interpretations based on the relative difference one happens to be examining and the absolute difference,8 the Penman-Aguilar et al article discusses only the relative difference that yields a different conclusion from the absolute difference. By ignoring the relative difference that necessarily will have yielded the same conclusion as the absolute difference, such treatments also ignore the value judgment involved in choosing to address the relative difference that yields an opposite conclusion from the absolute difference rather than the relative difference that yields the same conclusion as the absolute difference.
An essential point of the illustration is that there is no rational basis for maintaining that forces causing outcome rates to differ vary among the rows in the table. But the table is useful for several other purposes. First, by showing how outcome rates for advantaged and disadvantaged groups can be derived from an understanding of the underlying distributions, the table also implies a sound, if imperfect, method of quantifying disparities in a way that is unaffected by the prevalence of an outcome. Specifically, one may derive from any pair of rates an estimate of the difference between the means of the underlying distributions in terms of a percentage of a standard deviation. I illustrate that approach in Table 2, discussed several paragraphs below.
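Under the same assumption of normally distributed underlying factors with equal standard deviations, that derivation is a simple calculation: convert each group's favorable outcome rate into a z-score and take the difference. A minimal sketch (the rates shown are illustrative, not figures from Table 2):

```python
# A minimal sketch of the prevalence-independent measure described above:
# the estimated difference between the means of the underlying distributions,
# in standard-deviation units, recovered from an observed pair of rates.
# Assumes normal underlying distributions with equal standard deviations.
from scipy.stats import norm

def estimated_effect_size(favorable_rate_ag: float, favorable_rate_dg: float) -> float:
    """Estimated gap between group means, in SD units, implied by the two rates."""
    return norm.ppf(favorable_rate_ag) - norm.ppf(favorable_rate_dg)

# Illustrative (hypothetical) rate pairs at very different prevalence levels;
# both imply roughly the same half-SD gap, so the measure is unaffected by prevalence.
print(estimated_effect_size(0.20, 0.09))   # ~0.50
print(estimated_effect_size(0.90, 0.78))   # ~0.51
```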
Second, the table illustrates the fallacy of drawing inferences about processes based on the comparative size of a relative difference without consideration that the other relative difference commonly yields a very different inference. My study2(pp339-341) also explains that inferences based on the comparative size of relative differences for an outcome (or the comparative size of relative changes in outcome rates), such as the "inverse equity" hypothesis referenced by Penman-Aguilar et al1(pS40) in their own effort to draw inferences about the nature of processes, reflect a failure to recognize that improvements in health will tend to cause larger proportionate reductions in adverse health outcomes for groups with lower baseline rates for the adverse outcomes while causing larger proportionate increases in the corresponding favorable outcomes for other groups.
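The point can be seen with the same hypothetical latent-variable model used for Table 1: when a uniform improvement of half a standard deviation is applied to both groups, the group with the lower baseline adverse rate (AG) experiences the larger proportionate reduction in the adverse outcome, while the other group (DG) experiences the larger proportionate increase in the favorable outcome. The sketch below, with illustrative starting rates, shows the arithmetic.

```python
# Hypothetical illustration of the corollary discussed above, assuming normal
# underlying distributions with equal SDs and a 0.5 SD gap between group means.
from scipy.stats import norm

GAP_SD = 0.5
IMPROVEMENT_SD = 0.5          # a uniform improvement applied to both groups

f_ag_before = 0.70                                     # illustrative AG favorable rate
f_dg_before = norm.cdf(norm.ppf(f_ag_before) - GAP_SD)
f_ag_after = norm.cdf(norm.ppf(f_ag_before) + IMPROVEMENT_SD)
f_dg_after = norm.cdf(norm.ppf(f_dg_before) + IMPROVEMENT_SD)

for name, before, after in [("AG", f_ag_before, f_ag_after), ("DG", f_dg_before, f_dg_after)]:
    fav_increase = (after - before) / before                       # proportionate gain, favorable outcome
    adv_decrease = ((1 - before) - (1 - after)) / (1 - before)     # proportionate drop, adverse outcome
    print(f"{name}: favorable +{fav_increase:.1%}, adverse -{adv_decrease:.1%}")
```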
Third, the table illustrates the way that observers relying on absolute differences to appraise health care disparities regarding uncommon outcomes will tend to find that (a) improvements in care increase disparities and (b) disparities are greater at higher-performing institutions than at lower-performing institutions (as reflected by comparison of scenario A with scenario B). It similarly illustrates that observers relying on absolute differences to appraise disparities regarding common outcomes will tend to find that (a) improvements in care decrease disparities and (b) disparities are smaller at higher-performing institutions than at lower-performing institutions (as reflected by comparison of scenario C with scenario D). This is a particularly important issue with regard to pay-for-performance programs.2(pp337-339)
The NCHS 2004-2005 Recognition of the Pattern by Which the 2 Relative Differences Tend to Change in Opposite Directions as the Prevalence of an Outcome Changes
In a 2005 article in this journal, NCHS statisticians Keppel and Pearcy,9 based on my articles of 2000 and 1994,5,6 recognized that as the prevalence of an outcome changes, determinations of whether health and health care disparities have increased or decreased would tend to turn on whether one examined relative differences in the favorable outcome or relative differences in the corresponding adverse outcome. The article, along with a 2004 NCHS Statistical Note and 2005 NCHS monograph coauthored by Keppel and Pearcy,10,11 reflected the first government recognition that it was even possible for the 2 relative differences to change in opposite directions as the prevalence of the outcome changes.
The forces causing adverse outcome rates of advantaged and disadvantaged groups to differ are exactly the same forces causing the corresponding favorable outcome rates to differ. Thus, Keppel and Pearcy, and NCHS, should have regarded the fact that the 2 relative differences commonly (or ever) yield opposite conclusions as to whether those forces are increasing or decreasing as calling into question the value of either relative difference for quantifying the differences in the circumstances of advantaged and disadvantaged groups. Instead, however, the 3 articles simply recommended that for purposes of appraising progress on the health disparities reduction goals of Healthy People 2010, all disparities should be measured in terms of relative differences in adverse outcomes (meaning, in the case of health care, the failure to receive appropriate care).
In a letter to the editor,12 I criticized the approach of the Keppel and Pearcy article for failing to recognize that, without taking into account the patterns by which each relative difference tends to be affected by the prevalence of an outcome, it is not possible to determine whether observed patterns reflect something other than a change in the prevalence of an outcome, a criticism that pertained to analyses of both health and health care disparities issues. And I noted that absolute differences also tend to be affected by the prevalence of an outcome, referencing a forthcoming article in which I treated the matter at greater length.4 Keppel and Pearcy13 responded but, in my view, failed to address the key problem with relying on any measure without consideration of the way the prevalence of the outcome affects the measure.
Prior to the Keppel-Pearcy/NCHS recommendations of 2004-2005, health care disparities usually were measured in terms of relative differences in favorable outcomes. Thus, improvements in care tended to be associated with reduced disparities.5 One consequence of the Keppel-Pearcy/NCHS recommendation, which the Agency for Healthcare Research and Quality (AHRQ) adopted for the National Healthcare Disparities Reports,14 was that improvements in health care now would tend to be associated with increased disparities.
While NCHS commonly made clear that disparities in health care outcomes were being measured in terms of relative differences in the adverse outcome even when the subject was described in terms of the favorable outcome, it did not suggest the substantial implications of doing so, save by reference to the aforementioned articles coauthored by Keppel and Pearcy.15-17 Thus, neither AHRQ nor CDC has ever indicated an awareness that it was possible for the 2 relative differences to change in opposite directions, much less that NCHS had in fact recognized that this would commonly occur.
And while some researchers adopted the NCHS approach, others were not even aware of it. Table 2 is based on a 2008 study by Morita et al18 of the effects of a school-entry hepatitis B vaccination requirement on racial and ethnic disparities in vaccination rates. The table illustrates the patterns by which relative and absolute differences commonly change as the prevalence of an outcome changes, as well as the implications of the Keppel-Pearcy/NCHS recommendation (or its reversal, discussed later) and the general disarray regarding measurement methodology within the federal health disparities research community. It also illustrates the quantification of disparities in a way that is not affected by the prevalence of an outcome.
The table shows vaccination rates for black and white fifth and ninth graders for the years before and following implementation of the requirement, along with the ratios of white to black vaccination rates, the ratios of black to white nonvaccination rates, the absolute differences between rates, and the measure unaffected by the prevalence of an outcome.
Relying on relative differences in vaccination rates as a measure of disparity (and evidencing no awareness that NCHS would do otherwise), the authors found that the requirement, which dramatically increased vaccination rates, dramatically reduced racial disparities for both fifth and ninth graders. NCHS, relying on relative differences in rates of failure to be vaccinated, would have found substantially increased disparities for both grades. CDC, which commonly measures vaccination disparities in terms of absolute differences between rates,19-21 would have found substantially increased disparities for fifth graders (where initial rates were quite low) and substantially decreased disparities for ninth graders (where initial rates were much higher).
The column labeled EES, for estimated effect size, indicates that, to the extent that the disparities can be effectively measured, there occurred notable reductions in both grades. That, it warrants note, is the type of change one would expect in the case of a formal requirement, which, if rigidly enforced, ought to eliminate any disparity.
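Readers who wish to see how the competing measures drive these divergent conclusions can work through a pair of before-and-after rates. The figures in the sketch below are invented for illustration (they are not the Morita et al data) but are chosen so that the measures move in the directions described above for fifth graders: the ratio of vaccination rates falls, the ratio of nonvaccination rates rises, the absolute difference rises, and the estimated effect size declines.

```python
# Hypothetical before/after vaccination rates (invented for illustration; these
# are NOT the Morita et al figures), chosen so the measures diverge in the
# directions described in the text for fifth graders.
from scipy.stats import norm

def measures(f_ag: float, f_dg: float) -> dict:
    """Ratio of favorable rates, ratio of adverse rates, absolute difference,
    and the estimated effect size (gap between underlying means, in SD units)."""
    return {
        "ratio of vaccination rates": f_ag / f_dg,
        "ratio of nonvaccination rates": (1 - f_dg) / (1 - f_ag),
        "absolute difference": f_ag - f_dg,
        "EES (SD units)": norm.ppf(f_ag) - norm.ppf(f_dg),
    }

before = measures(f_ag=0.10, f_dg=0.05)   # low pre-requirement rates (hypothetical)
after = measures(f_ag=0.85, f_dg=0.77)    # high post-requirement rates (hypothetical)

for name in before:
    print(f"{name:>30}: before {before[name]:.2f} -> after {after[name]:.2f}")
```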
The NCHS 2015-2016 Decision to Measure Health Care Disparities in Terms of Relative Differences in Favorable Outcomes
At some point in 2015, NCHS published online a guide for measuring health disparities in the achievement of Healthy People 2020 objectives.22 Reversing the approach NCHS had adopted a decade earlier, the guide provided that "for objectives expressed in terms of favorable events or conditions that are to be increased," relative differences would be calculated on the basis of the favorable outcome rates. That change effectively repudiated more than 10 years of health care disparities research, including the National Healthcare Disparities Reports and all other research that, relying on NCHS guidance or otherwise, measured health care disparities in terms of relative differences in adverse outcomes.
In February 2016, NCHS issued a Statistical Note authored by Talih and Huang23 (coauthors of the Penman-Aguilar et al article) more fully discussing Healthy People 2020 measurement issues. The document noted in the abstract that "HP2020 objectives that are expressed in terms of favorable outcomes to be increased no longer need to be re-expressed using the complementary adverse outcomes for comparisons to the best group rate," while explaining in the body of the document23(p8) that under Healthy People 2010, objectives had been reexpressed in terms of the adverse outcome in computing relative differences.
The Statistical Note offered no explanation for the change. It cited the 2004 NCHS Statistical Note and the 2005 NCHS monograph10,11 that had recognized that relative differences in favorable outcomes and relative differences in corresponding adverse outcomes tend to change in opposite directions as the prevalence of an outcome changes. But nothing in the text of the Statistical Note would alert readers that, by no longer reexpressing objectives in adverse terms for calculation purposes, one would tend to reach conclusions about the direction of changes over time that are opposite to those reached under the approach that did reexpress objectives in adverse terms. Very likely few readers would recognize that this was even possible. And certainly only the most astute and knowledgeable reader would recognize that the modification constitutes a disavowal of a great deal of prior research.
Substantive and Disclosure Failings of the Penman-Aguilar et al Article
The Penman-Aguilar et al article shows no awareness whatever of the patterns by which the measures it discusses tend to be affected by the prevalence of an outcome and hence offers no guidance on how one might appraise health disparities while taking those patterns into account. Notwithstanding the importance of the choice of relative difference for determinations of whether both health and health care disparities are increasing or decreasing, as NCHS statisticians Keppel and Pearcy recognized here in 2005, the article never mentions the existence of a second relative difference even as it gives substantial attention to choosing between the absolute difference and the relative difference that yields a different conclusion from the absolute difference.
The Penman-Aguilar et al1(pS36) article does cite the 2005 NCHS monograph among 3 articles that address "other considerations." If that reference was intended to address the pattern of relative differences recognized in the monograph, it would be sorely inadequate in that regard, even leaving aside that NCHS has now rejected the approach the monograph recommended with respect to health care disparities. And such a reference would still leave utterly unaddressed the fact, which I have discussed at length since mentioning it here in my 2006 letter2-4 and which 2 coauthors of the Penman-Aguilar et al article recently recognized,24 that absolute differences also tend to be affected by the prevalence of an outcome.
The failure to address these issues occurs, moreover, in an article that, in addition to being seemingly comprehensive, stresses the importance of health disparities researchers' making their assumptions clear and justifying their chosen approaches. That aspect of the article has the effect of affirmatively leading readers to believe that there exist no issues of the type described above. For the reader will reasonably assume that if such issues existed, an article of this nature would have mentioned them.
To put health and health care research on anything approaching a sound footing, the federal disparities research establishment must directly address the implications of the patterns NCHS recognized more than a decade ago, as well as the other patterns by which measures tend to be affected by the prevalence of an outcome.
REFERENCES