Abstract
BACKGROUND: The discriminatory capability of a measurement technique can be assessed by receiver operating characteristic (ROC) curve analysis (specifically, the area under the curve [AUC]) and by predictive modeling (predictive accuracy and positive predictive value). Theoretically, predictive accuracy is dependent on disease prevalence, whereas AUC assessments are not.
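For context, this dependence follows from standard identities and is not a result of the study itself; writing p for prevalence, Se for sensitivity, and Sp for specificity:

    \mathrm{PPV}(p) = \frac{Se \, p}{Se \, p + (1 - Sp)(1 - p)}, \qquad
    \mathrm{Accuracy}(p) = Se \, p + Sp \, (1 - p), \qquad
    \mathrm{AUC} = P(\text{score}_{\text{diseased}} > \text{score}_{\text{non-diseased}})

The first two expressions contain p explicitly (accuracy is constant in p only where Se = Sp), whereas the AUC involves only the two class-conditional score distributions.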
OBJECTIVE: To compare the effect of changes in disease prevalence on ROC AUC analysis and predictive modeling.
METHODS: For this comparison, a data set with 72 individuals with coronary artery disease (CAD) and 1,857 individuals without CAD was used. A validated CAD score with a demonstrated AUC of 0.80 was applied. Disease prevalence within the study sample was altered by randomly removing non-CAD patients from the original sample. Predictive accuracy and positive predictive value of the CAD score were calculated using 2 × 2 contingency tables. Three threshold values of the CAD score were applied, centered on the value at which sensitivity and specificity were approximately equal.
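As an illustration only (not part of the published analysis), the following minimal Python sketch mimics the described procedure on simulated stand-in data; the array names (cad_scores, has_cad), the score distributions, and the prevalence grid are hypothetical assumptions, not the study data:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def metrics_at_threshold(scores, labels, threshold):
        """Sensitivity, specificity, PPV, and accuracy from a 2 x 2 contingency table."""
        pred = scores >= threshold
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
        }

    def subsample_to_prevalence(scores, labels, target_prevalence):
        """Raise prevalence by randomly removing non-diseased individuals."""
        pos = np.flatnonzero(labels == 1)
        neg = np.flatnonzero(labels == 0)
        n_neg = int(round(len(pos) * (1 - target_prevalence) / target_prevalence))
        keep = np.concatenate([pos, rng.choice(neg, size=n_neg, replace=False)])
        return scores[keep], labels[keep]

    # Hypothetical stand-in data: 72 diseased and 1,857 non-diseased score values.
    has_cad = np.concatenate([np.ones(72, dtype=int), np.zeros(1857, dtype=int)])
    cad_scores = np.concatenate([rng.normal(68, 12, 72), rng.normal(52, 12, 1857)])

    for prevalence in (0.04, 0.10, 0.25, 0.44):
        s, y = subsample_to_prevalence(cad_scores, has_cad, prevalence)
        m = metrics_at_threshold(s, y, threshold=60)
        print(f"prevalence={prevalence:.2f}  AUC={roc_auc_score(y, s):.2f}  {m}")

Under this kind of subsampling, sensitivity, specificity, and AUC stay roughly constant while positive predictive value rises with prevalence, which is the contrast the study design is built to expose.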
RESULTS: For a chosen CAD score threshold value (eg, 60), sensitivity (0.74), specificity (0.75), and AUC (0.81) did not change significantly, whereas positive predictive value increased (from 10% to 70%) as disease prevalence increased from 4% to 44%. Changes in predictive accuracy were dependent on the selected test threshold value: predictive accuracy increased (54% to 68%), did not change (74% to 75%), or decreased (88% to 70%) with the same increase in disease prevalence for threshold values of 50, 60, and 70, respectively.
CONCLUSIONS: The ROC AUC, sensitivity, and specificity are stable diagnostic characteristics unaffected by disease prevalence, whereas positive predictive value, and predictive accuracy at most threshold values, are greatly influenced by disease prevalence.