Assessing the additional value of diagnostic markers: a comparison of traditional and novel measures
Ewout W. Steyerberg
Professor of Medical Decision Making
Dept of Public Health, Erasmus MC, Rotterdam, the Netherlands
E.Steyerberg@ErasmusMC.nl
Birmingham, July 2, 2010
Introduction: additional value of a diagnostic marker
Usefulness / clinical utility: what do we mean exactly?
Evaluation of predictions
- Ordering: concordance statistic (c, or AUC)
Evaluation of decisions
- Net Reclassification Index (NRI): very popular
- Net Benefit (NB): decision-analytic, not popular
Adding a marker to a model
- Statistical significance? Simple LR testing; not an issue
- Clinical usefulness: is the measurement worth the costs?
Overview
Hypotheses:
- NRI is closely related to the AUC
- NRI may be misleading
Addition of a marker to a model
- Typically a small improvement in discriminative ability according to the c statistic
- The c statistic is blamed for being insensitive
Net Reclassification Index:
NRI = [P(up | event) - P(down | event)] + [P(down | non-event) - P(up | non-event)]
    = improvement in sensitivity + improvement in specificity
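As a concrete illustration, a minimal Python sketch of this two-category NRI; the function name, the array-based interface, and the example threshold of 0.2 are assumptions for illustration, not taken from the slides.

```python
import numpy as np

def net_reclassification_index(risk_old, risk_new, event, threshold=0.2):
    """Two-category NRI for adding a marker, using a single risk threshold.

    risk_old, risk_new: predicted risks without / with the new marker
    event: observed outcome (1 = event, 0 = non-event)
    The 0.2 default threshold is illustrative only.
    """
    risk_old, risk_new, event = map(np.asarray, (risk_old, risk_new, event))
    up = (risk_new >= threshold) & (risk_old < threshold)    # reclassified to high risk
    down = (risk_new < threshold) & (risk_old >= threshold)  # reclassified to low risk

    events, nonevents = event == 1, event == 0
    nri_events = up[events].mean() - down[events].mean()           # improvement in sensitivity
    nri_nonevents = down[nonevents].mean() - up[nonevents].mean()  # improvement in specificity
    return nri_events + nri_nonevents
```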
Pencina example
[Reclassification table from the Pencina example; cell counts 29, 7, 173, 174; event rates 22/183 = 12% and 1/3081 = 0.03%]
Enthusiasm
History of NRI
1. Many object to the AUC
2. Cook: reclassification provides insight
3. Pencina: net reclassification is what counts
4. Many: enthusiasm
5. Objections: 8 letters to the editor, Stat Med 2008
   a) Relationships to other measures. Reply: agree
   b) Greenland + Vickers/Steyerberg: need to weight consequences. Reply: implicit weighting by prevalence
5a) NRI 'a better measure'?
- NRI requires classification
- Simplest case: binary (high vs low risk)
- If binary, sensitivity and specificity are easy to calculate
- NRI = delta sens + delta spec, reminiscent of the Youden index
- Youden index = sens + spec - 1
- NRI = delta Youden index
NRI better than AUC?
- For a binary ROC curve, AUC = (sens + spec) / 2
- NRI = delta sens + delta spec
- Hence NRI = 2 x delta AUC!
Conclusion: NRI is misleading in claiming to be 'better' than the AUC
1. it moves from predictions to classification
2. it equals 2 x delta AUC
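The algebra behind the "NRI = 2 x delta AUC" claim can be checked numerically; this small Python sketch uses made-up sensitivities and specificities purely for illustration.

```python
def binary_auc(sens, spec):
    # For a binary (high/low risk) classification the ROC "curve" is a single point,
    # and the area under the connecting line segments equals (sens + spec) / 2.
    return (sens + spec) / 2

# Hypothetical performance without and with the new marker (illustrative numbers)
sens_old, spec_old = 0.70, 0.80
sens_new, spec_new = 0.75, 0.82

nri = (sens_new - sens_old) + (spec_new - spec_old)  # = delta Youden index
delta_auc = binary_auc(sens_new, spec_new) - binary_auc(sens_old, spec_old)

assert abs(nri - 2 * delta_auc) < 1e-9  # NRI = 2 x delta AUC for binary classifications
```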
5b) Weighting 'absurd'?
Chapter 16: Google Books / order
http://www.clinicalpredictionmodels.org
http://www.springer.com/978-0-387-77243-1
Evaluation of decisions
Clinically meaningful cut-off (or threshold) for the predicted probability: p_t
- p_t reflects the relative weight of true-positive vs false-positive decisions
- e.g. if p_t = 50%, wTP = wFP; if p_t = 20%, wTP = 4 x wFP
Net Benefit: NB = (TP - w FP) / N, with w = harm / benefit = p_t / (1 - p_t) (Peirce 1884; Vickers 2006)
- If p_t = 50%, w = 0.5 / (1 - 0.5) = 1; if p_t = 20%, w = 0.2 / (1 - 0.2) = 1/4
Net Reclassification Index: NRI = improvement in sens + improvement in spec
- Implicit weighting by the non-event odds: (1 - prevalence) / prevalence
- Hence inconsistent with Net Benefit if p_t ≠ prevalence
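A minimal numerical sketch of these formulas (Python, with made-up sensitivities and specificities and a 55% prevalence chosen only for illustration) shows why the two measures agree when p_t equals the prevalence and can disagree otherwise.

```python
def net_benefit(sens, spec, prevalence, p_t):
    """Net Benefit per patient: (TP - w * FP) / N with w = p_t / (1 - p_t)."""
    w = p_t / (1 - p_t)
    return sens * prevalence - w * (1 - spec) * (1 - prevalence)

# Hypothetical effect of adding a marker (illustrative numbers only)
prev = 0.55
sens_old, spec_old = 0.70, 0.80
sens_new, spec_new = 0.68, 0.88  # small loss in sensitivity, larger gain in specificity

nri = (sens_new - sens_old) + (spec_new - spec_old)  # +0.06

for p_t in (0.20, prev):
    delta_nb = (net_benefit(sens_new, spec_new, prev, p_t)
                - net_benefit(sens_old, spec_old, prev, p_t))
    print(f"p_t = {p_t:.2f}: delta NB = {delta_nb:+.4f}, NRI = {nri:+.3f}")

# At p_t = prevalence, delta NB = prevalence * NRI, so the two measures agree in sign;
# at p_t = 20% this example's delta NB is negative while the NRI remains positive.
```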
Overview
Case study
Testicular cancer: prediction of residual tumor after chemotherapy; N = 544, 299 with tumor (55%)
Reference models:
- Postchemotherapy mass size
- ... + reduction in size + primary histology
3 tumor markers:
- AFP: abnormal vs normal
- HCG: abnormal vs normal
- LDH: abnormal vs normal, and continuous: log(LDH)
Evaluation of predictions
- LR statistic and AUC (c) show the same pattern
- The reference model matters; dichotomization of the marker harms
Evaluation of decisions at 20% and 55% thresholds
- Net Benefit and NRI are consistent at the 55% (= prevalence) threshold, but not at 20%
Conclusions
1. Judgment of additional value depends on the measure chosen, the reference model, and the coding of the marker.
2. A decision-analytic perspective is not compatible with an overall judgment as obtained from the AUC in ROC analysis, nor with the NRI.
3. The current practice of reporting AUC and NRI as measures of usefulness needs to be replaced by routinely reporting net benefit analyses.
4. Further work:
   - NRI and NB for 2 decision thresholds, e.g. CVD at 5% and 20%
   - link to decision analysis / cost-effectiveness analysis
References
Vickers AJ, Elkin EB. Decision curve analysis: a novel method for evaluating prediction models. Med Decis Making 2006;26:565-74.
Steyerberg EW, Vickers AJ. Decision curve analysis: a discussion. Med Decis Making 2008;28:146.
Steyerberg EW, Vickers AJ, Cook NR, et al. Assessing the performance of prediction models: a framework for some traditional and novel measures. Epidemiology, Jan 2010.
From one cutoff to consecutive cutoffs
- Sensitivity and specificity: ROC curve
- Net Benefit: decision curve
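As a sketch of how a decision curve extends the single-threshold Net Benefit to a range of thresholds (in the spirit of Vickers & Elkin 2006), the following Python function is illustrative; the variable names and the threshold grid are assumptions, not the original implementation.

```python
import numpy as np

def decision_curve(risk, event, thresholds):
    """Net Benefit of 'treat if predicted risk >= p_t' over a grid of thresholds.

    risk: predicted probabilities; event: observed 0/1 outcomes.
    """
    risk, event = np.asarray(risk), np.asarray(event)
    n = len(event)
    nb = []
    for p_t in thresholds:
        treat = risk >= p_t
        tp = np.sum(treat & (event == 1))
        fp = np.sum(treat & (event == 0))
        w = p_t / (1 - p_t)           # harm/benefit ratio implied by the threshold
        nb.append((tp - w * fp) / n)
    return np.array(nb)

# Typical use: plot the model's curve against 'treat all' and 'treat none'
thresholds = np.linspace(0.05, 0.95, 19)
# nb_model = decision_curve(predicted_risk, outcome, thresholds)                  # hypothetical arrays
# nb_all   = decision_curve(np.ones_like(outcome, float), outcome, thresholds)    # treat everyone
# nb_none  = np.zeros_like(thresholds)                                            # treat no one
```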
ROC curves
Decision curves