
1 Relationship between performance measures: From statistical evaluations to decision-analysis Ewout Steyerberg Dept of Public Health, Erasmus MC, Rotterdam, the Netherlands E.Steyerberg@ErasmusMC.nl Chicago, October 23, 2011

2 General issues
- Usefulness / clinical utility: what do we mean exactly?
  - Evaluation of predictions
  - Evaluation of decisions
- Adding a marker to a model
  - Statistical significance? Testing β is enough (no need to test the increase in R², AUC, IDI, …)
  - Clinical relevance: is the measurement worth the costs (patient and physician burden, financial costs)?

3 Overview
- Case study: residual masses in testicular cancer
  - Model development
  - Evaluation approach
- Performance evaluation
  - Statistical
    - Overall
    - Calibration and discrimination
  - Decision-analytic
    - Utility-weighted measures

4 www.clinicalpredictionmodels.org

5 Prediction approach
- Outcome: malignant or benign tissue
- Predictors:
  - primary histology
  - 3 tumor markers
  - tumor size (postchemotherapy, and reduction)
- Model:
  - logistic regression
  - 544 patients, 299 with malignant tissue
  - internal validation by bootstrapping
  - external validation in 273 patients, 197 with malignant tissue

6 Logistic regression results

7 Evaluation approach: graphical assessment

8 Lessons
1. Plot observed versus expected outcome, with the distribution of predictions by outcome ('validation graph').
2. Performance should be assessed in validation sets, since apparent performance is optimistic (the model is developed on the same data set as used for evaluation).
   - Preferably external validation
   - At least internal validation, e.g. by bootstrap cross-validation

9 Performance evaluation
- Statistical criteria: are predictions close to the observed outcomes?
  - Overall: consider residuals y − ŷ, or y − p
  - Discrimination: separate low risk from high risk
  - Calibration: e.g., 70% predicted = 70% observed
- Clinical usefulness: better decision-making?
  - One cut-off, defined by expected utility / the relative weight of errors
  - Consecutive cut-offs: decision curve analysis

10 Predictions close to observed outcomes? Penalty functions
- Logarithmic score: Y × log(p) + (1 − Y) × log(1 − p)
- Quadratic score: Y × (1 − p)² + (1 − Y) × p²

11 Overall performance measures
- R²: explained variation
  - Logistic / Cox model: Nagelkerke's R²
- Brier score: Y × (1 − p)² + (1 − Y) × p²
  - Brier_scaled = 1 − Brier / Brier_max
  - Brier_max = mean(p) × (1 − mean(p))² + (1 − mean(p)) × mean(p)²
  - Brier_scaled is very similar to Pearson R² for binary outcomes
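
The scoring rules on slides 10 and 11 are straightforward to compute. A minimal Python sketch (not from the original slides; function names and the toy data are illustrative):

```python
import numpy as np

def log_score(y, p):
    # Logarithmic score (slide 10): Y*log(p) + (1 - Y)*log(1 - p); higher is better.
    return np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def brier(y, p):
    # Quadratic score / Brier score: Y*(1 - p)^2 + (1 - Y)*p^2; lower is better.
    return np.mean(y * (1 - p) ** 2 + (1 - y) * p ** 2)

def brier_scaled(y, p):
    # Scale against Brier_max of a non-informative model that assigns the
    # average prediction (= event rate, if calibrated) to everyone.
    p_bar = np.mean(p)
    brier_max = p_bar * (1 - p_bar) ** 2 + (1 - p_bar) * p_bar ** 2
    return 1 - brier(y, p) / brier_max

# Toy data: binary outcomes and predicted risks
y = np.array([1, 0, 1, 1, 0])
p = np.array([0.8, 0.2, 0.6, 0.9, 0.3])
print(log_score(y, p), brier(y, p), brier_scaled(y, p))
```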

12 Overall performance in case study

13 Measures for discrimination
- Concordance statistic, or area under the ROC curve
- Discrimination slope
- Lorenz curve

14 ROC curves for case study

15 Box plots with discrimination slope for case study

16 Lorenz concentration curves: general pattern

17 Lorenz concentration curves: case study

18 Discriminative ability of testicular cancer model

19 Characteristics of measures for discrimination

20 Measures for calibration
- Graphical assessments
- Cox recalibration framework (1958)
- Tests for miscalibration
  - Cox; Hosmer-Lemeshow; Goeman-Le Cessie

21 Calibration: general principle

22 Calibration: case study

23 Calibration tests

24 Hosmer-Lemeshow test for testicular cancer model

25 Some calibration and goodness-of-fit tests

26 Lessons
1. Visual inspection of calibration is important at external validation, combined with tests for calibration-in-the-large and the calibration slope.

27 Clinical usefulness: making decisions
- Diagnostic work-up
  - Test ordering
  - Starting treatment
- Therapeutic decision-making
  - Surgery
  - Intensity of treatment

28 Decision curve analysis
Andrew Vickers, Departments of Epidemiology and Biostatistics, Memorial Sloan-Kettering Cancer Center

29 How to evaluate predictions? Prediction models are wonderful!

30 How to evaluate predictions? Prediction models are wonderful! How do you know that they do more good than harm?

31 Overview of talk
- Traditional statistical and decision-analytic methods for evaluating predictions
- Theory of decision curve analysis

32 Illustrative example
- Men with raised PSA are referred for prostate biopsy
- In the USA, ~25% of men with raised PSA have a positive biopsy
- ~750,000 unnecessary biopsies per year in the US
- Could a new molecular marker help predict prostate cancer?

33 Molecular markers for prostate cancer detection
- Assess a marker in men undergoing prostate biopsy for elevated PSA
- Create "base" model:
  - Logistic regression: biopsy result as dependent variable; PSA, free PSA, age as predictors
- Create "marker" model:
  - Add marker(s) as predictors to the base model
- Compare "base" and "marker" models

34 How to evaluate models?
- Biostatistical approach (ROC'ers):
  - P values
  - Accuracy (area under the curve: AUC)
- Decision-analytic approach (VOI'ers):
  - Decision tree
  - Preferences / outcomes

35 PSA velocity
- P value for PSAv in the multivariable model: <0.001
- PSAv an "independent" predictor
- AUC: base model = 0.609; marker model = 0.626

36 AUCs and p values
I have no idea whether to use the model or not:
- Is an AUC of 0.626 high enough?
- Is an increase in AUC of 0.017 enough to make measuring velocity worth it?

37 Decision analysis
- Identify every possible decision
- Identify every possible consequence
  - Identify the probability of each
  - Identify the value of each

38 Decision tree
- Apply model:
  - Positive → biopsy → cancer (value a, probability p1) or no cancer (b, p2)
  - Negative → no biopsy → cancer (c, p3) or no cancer (d, 1 − p1 − p2 − p3)
- Biopsy all: cancer (a, p1 + p3) or no cancer (b, 1 − (p1 + p3))
- No biopsy: cancer (c, p1 + p3) or no cancer (d, 1 − (p1 + p3))

39 Optimal decision
- Use model: p1·a + p2·b + p3·c + (1 − p1 − p2 − p3)·d
- Treat all: (p1 + p3)·a + (1 − (p1 + p3))·b
- Treat none: (p1 + p3)·c + (1 − (p1 + p3))·d
- Which gives the highest value?
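
A minimal Python sketch of this comparison (not from the original slides; the probabilities and utilities are hypothetical, with a, b, c, d the values of the TP, FP, FN, and TN outcomes):

```python
def strategy_values(p1, p2, p3, a, b, c, d):
    # Expected value of each strategy, following the decision tree of slide 38.
    return {
        "use model":  p1 * a + p2 * b + p3 * c + (1 - p1 - p2 - p3) * d,
        "treat all":  (p1 + p3) * a + (1 - (p1 + p3)) * b,
        "treat none": (p1 + p3) * c + (1 - (p1 + p3)) * d,
    }

# Hypothetical inputs: model sends 50% to biopsy; cancer prevalence p1 + p3 = 25%
values = strategy_values(p1=0.20, p2=0.30, p3=0.05, a=0.9, b=0.7, c=0.2, d=1.0)
print(max(values, key=values.get), values)  # here "use model" has the highest value
```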

40 Drawbacks of traditional decision analysis
- The p's require a cut-point to be chosen

41 Decision tree (same tree as slide 38)

42 Problems with traditional decision analysis
- The p's require a cut-point to be chosen
- Extra data are needed on the health value of the outcomes (a to d):
  - Harms of biopsy
  - Harms of delayed diagnosis
  - Harms may vary between patients

43 Decision tree (same tree as slide 38)

44 Evaluating values of health outcomes
1. Obtain data from the literature on:
   - Benefit of detecting cancer (compared to missed / delayed cancer)
   - Harms of unnecessary prostate biopsy (compared to no biopsy):
     - Burden: pain and inconvenience
     - Cost of biopsy

45 Evaluating values of health outcomes
2. Obtain data from the individual patient:
   - What are your views on having a biopsy?
   - How important is it for you to find a cancer?

46 Either way
- Investigator: "Here is a data set, is my model or marker of value?"
- Analyst: "I can't tell you, you have to go away and do a literature search first. Also, you have to ask each and every patient."

47 ROC'ers and VOI'ers
- ROC'ers' methods are simple and elegant, but useless
- VOI'ers' methods are useful, but complex and difficult to apply

48 Solving the decision tree

49 Threshold probability
- Probability of disease is p̂
- Define a threshold probability of disease as p_t
- Patient accepts treatment if p̂ ≥ p_t

50 Solve the decision tree
- p_t: cut-point for choosing whether to treat or not
- The harm:benefit ratio defines p_t:
  - Harm: d − b (FP)
  - Benefit: a − c (TP)
  - p_t / (1 − p_t) = H:B

51 If P(D = 1) = p_t
At the threshold, the expected values of treatment and no treatment are equal:
p_t·a + (1 − p_t)·b = p_t·c + (1 − p_t)·d
which rearranges to p_t / (1 − p_t) = (d − b) / (a − c) = H:B

52 Intuitively
The threshold probability at which a patient will opt for treatment reflects how that patient weighs the relative harms of false-positive and false-negative results.

53 Nothing new so far
- The equation has been used to set the threshold for a positive diagnostic test
- Work out the true harms and benefits of treatment and disease
  - E.g., if disease is 4 times worse than unnecessary treatment, treat all patients with a probability of disease > 20%
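
The threshold follows directly from the harm:benefit ratio: p_t / (1 − p_t) = H:B implies p_t = H / (H + B). A one-line sketch (illustrative, not from the slides):

```python
def threshold(harm, benefit):
    # p_t / (1 - p_t) = H:B  =>  p_t = H / (H + B)
    return harm / (harm + benefit)

# Disease 4 times worse than unnecessary treatment: H:B = 1:4 -> treat above 20% risk
print(threshold(harm=1, benefit=4))  # 0.2
```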

54 A simple decision analysis
1. Select a p_t

55 A simple decision analysis
1. Select a p_t
2. Positive test defined as p̂ ≥ p_t

56 A simple decision analysis
1. Select a p_t
2. Positive test defined as p̂ ≥ p_t
3. Count true positives (benefit), false positives (harm)

57 A simple decision analysis
1. Select a p_t
2. Positive test defined as p̂ ≥ p_t
3. Count true positives (benefit), false positives (harm)
4. Calculate "Clinical Net Benefit" as (TP − (p_t / (1 − p_t)) × FP) / N

58 Long history: Peirce 1884

59 Peirce 1884

60 Worked example at p_t = 20% (N = 2742)

Strategy               Negative  True positive  False positive  Net benefit calculation          Net benefit
Biopsy if risk ≥ 20%   346       653            1743            (653 − 1743 × 0.2/0.8) / 2742    0.079
Biopsy all men         0         710            2032            (710 − 2032 × 0.2/0.8) / 2742    0.074
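
The net benefit column can be reproduced directly from the definition. A minimal Python sketch (function name illustrative; counts taken from the slide):

```python
def net_benefit(tp, fp, n, pt):
    # Net benefit = (TP - w * FP) / N, with w = pt / (1 - pt)
    w = pt / (1 - pt)
    return (tp - w * fp) / n

n, pt = 2742, 0.20
print(round(net_benefit(653, 1743, n, pt), 3))  # 0.079: biopsy if risk >= 20%
print(round(net_benefit(710, 2032, n, pt), 3))  # 0.074: biopsy all men
```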

61 Net benefit has a simple clinical interpretation
- Net benefit of 0.079 at a p_t of 20%
- Using the model is equivalent to a strategy that identified 7.9 cancers per 100 patients with no unnecessary biopsies

62 Net benefit has a simple clinical interpretation
- Difference between model and treat all at a p_t of 20%:
  - 5/1000 more TPs for an equal number of FPs
  - Divide by the weighting: 0.005 / 0.25 = 0.02
  - 20/1000 fewer FPs for an equal number of TPs (= 20/1000 fewer unnecessary biopsies with no missed cancers)

63 Decision curve analysis
1. Select a p_t
2. Positive test defined as p̂ ≥ p_t
3. Calculate "Clinical Net Benefit" as (TP − (p_t / (1 − p_t)) × FP) / N
4. Vary p_t over an appropriate range

Vickers & Elkin, Med Decis Making 2006;26:565-574
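
A sketch of the whole procedure on simulated data (assumptions: synthetic risks and outcomes; this illustrates the Vickers & Elkin recipe, it is not their software):

```python
import numpy as np

def decision_curve(y, p, thresholds):
    # Net benefit of model-based decisions, treat-all, and treat-none at each p_t.
    y, p = np.asarray(y), np.asarray(p)
    n, prev = len(y), y.mean()
    rows = []
    for pt in thresholds:
        w = pt / (1 - pt)
        pos = p >= pt                                # "positive test" at this p_t
        tp = int((pos & (y == 1)).sum())
        fp = int((pos & (y == 0)).sum())
        nb_model = (tp - w * fp) / n
        nb_all = prev - w * (1 - prev)               # treat everyone
        rows.append((pt, nb_model, nb_all, 0.0))     # treat none: NB = 0
    return rows

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 500)     # simulated predicted risks
y = rng.binomial(1, p)         # outcomes generated to match the risks
for pt, nb_m, nb_a, nb_n in decision_curve(y, p, [0.1, 0.2, 0.3, 0.4]):
    print(f"p_t={pt:.1f}  model={nb_m:.3f}  treat_all={nb_a:.3f}  treat_none={nb_n:.1f}")
```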

64 Decision curve: theory

65 Treat none

66 Treat all [p(outcome) = 50%]

67 Decisions based on the model

68 Points in decision curves
- If treat none, NB = …
- If treat all, and threshold = 0%, NB = …
- If the cut-off is the incidence of the end point:
  - NB treat none = NB treat all = …

69 Decision curve analysis
- Decision curve analysis tells us about the clinical value of a model where accuracy metrics do not
- Decision curve analysis does not require:
  - additional data
  - individualized assessment
- Simple-to-use software to implement decision curve analysis is available at www.decisioncurveanalysis.org


73 Decision analysis in the medical research literature
- Only a moderate number of papers devoted to decision analysis
- Many thousands of papers analyzed without reference to decision making (ROC curves, p values)

74 Decision curve analysis
With thanks to: Elena Elkin, Mike Kattan, Daniel Sargent, Stuart Baker, Barry Kramer, Ewout Steyerberg

75 Illustrations

76 Clinical usefulness of the testicular cancer model
- Cut-off: 70% necrosis / 30% malignant, motivated by:
  - Decision analysis
  - Current practice: ≈ 65%

77 Net benefit calculations (development n = 544; validation n = 273)
- Resect all: NB = (299 − 3/7 × 245) / 544 = 0.357; validation 0.602
- Resect none: NB = (0 − 0) / 544 = 0; validation 0
- Model: NB = (275 − 3/7 × 143) / 544 = 0.393; validation 0.602
- Difference model − resect all: 0.036 (development); 0 (validation)
  - 36/1000 more resections of tumor at the same number of unnecessary resections of necrosis

78 Decision curves for testicular cancer model

79 Comparison of performance measures

80 Lessons
1. Clinical usefulness may be limited despite reasonable discrimination and calibration.

81 Which performance measure when?
- It depends …
- Evaluation of usefulness requires weighting and consideration of outcome incidence
  - Hilden J. Prevalence-free utility-respecting summary indices of diagnostic power do not exist. Stat Med 2000;19(4):431-40.
- Summary indices vs graphs (e.g., area vs ROC curve, validation graphs, decision curves, reclassification table vs predictiveness curve)

82 Which performance measure when?
1. Discrimination: if poor, usefulness is unlikely, but NB ≥ 0
2. Calibration: if poor in a new setting, risk of NB < 0

83 Conclusions
- Statistical evaluations are important, but may be at odds with the evaluation of clinical usefulness; is an ROC area of 0.8 good? Is 0.6 always poor? NO!
- Decision-analytic performance measures, such as decision curves, are important to consider when evaluating the potential of a prediction model to support individualized decision making

84 References
- Steyerberg EW. Clinical prediction models: a practical approach to development, validation, and updating. New York: Springer, 2009.
- Vickers AJ, Elkin EB. Decision curve analysis: a novel method for evaluating prediction models. Med Decis Making 2006;26:565-74.
- Steyerberg EW, Vickers AJ. Decision curve analysis: a discussion. Med Decis Making 2008;28:146.
- Pencina MJ, D'Agostino RB Sr, Steyerberg EW. Extensions of net reclassification improvement calculations to measure usefulness of new biomarkers. Stat Med 2011;30:11-21.
- Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW. Assessing the performance of prediction models: a framework for some traditional and novel measures. Epidemiology 2010;21:128-38.
- Steyerberg EW, Pencina MJ, Lingsma HF, Kattan MW, Vickers AJ, Van Calster B. Assessing the incremental value of diagnostic and prognostic markers: a review and illustration. Eur J Clin Invest 2011.
- Steyerberg EW, Van Calster B, Pencina MJ. Performance measures for prediction models and markers: evaluation of predictions and classifications. Rev Esp Cardiol 2011;64:788-794.

85 Evaluation of incremental value of markers

86 Case study: CVD prediction
- Cohort: 3264 participants in the Framingham Heart Study
  - Age 30 to 74 years
  - 183 developed CHD (10-year risk: 5.6%)
- Data as used in:
  - Pencina MJ, D'Agostino RB Sr, D'Agostino RB Jr, Vasan RS. Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond. Stat Med 2008;27:157-172.
  - Steyerberg EW, Van Calster B, Pencina MJ. Performance measures for prediction models and markers: evaluation of predictions and classifications. Rev Esp Cardiol 2011;64:788-794.

87 Analysis
- Cox proportional hazards models
  - Time-to-event data
- Reference model:
  - Dichotomous predictors: sex, diabetes, smoking
  - Continuous predictors: age, systolic blood pressure (SBP), total cholesterol
  - All hazard ratios statistically significant
- Add high-density lipoprotein (HDL) cholesterol
  - Continuous predictor, highly significant (hazard ratio = 0.65, P < .001)

88 How good are these models?
- Performance of the reference model
- Incremental value of HDL

89 Performance criteria
- Steyerberg EW, Van Calster B, Pencina MJ. Medidas del rendimiento de modelos de predicción y marcadores pronósticos: evaluación de las predicciones y clasificaciones. Rev Esp Cardiol 2011. doi:10.1016/j.recesp.2011.04.017

90 Case study: quality of predictions

91 Discrimination
- Area: 0.762 without HDL vs 0.774 with HDL

92 Calibration
- Internal: quite good
- External: more relevant

93 Performance
- Full range of predictions
  - ROC
  - R², …
- Classifications / decisions
  - Cut-off to define low vs high risk

94 Determine a cut-off for classification
- Data-driven cut-off
  - Youden's index: sensitivity + specificity − 1
    - E.g., sens 80%, spec 80% → Youden = …
    - E.g., sens 90%, spec 80% → Youden = …
    - E.g., sens 80%, spec 90% → Youden = …
    - E.g., sens 40%, spec 60% → Youden = …
    - E.g., sens 100%, spec 100% → Youden = …
  - Youden's index is maximized at the upper-left corner of the ROC curve
  - If predictions are perfectly calibrated, the upper-left corner corresponds to cut-off = incidence of the outcome
    - Incidence = 183/3264 = 5.6%
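
Youden's index itself is a one-liner. A minimal sketch (illustrative; it fills in only the first and last of the examples above):

```python
def youden(sens, spec):
    # Youden's index: sensitivity + specificity - 1
    return sens + spec - 1

print(youden(0.80, 0.80))  # 0.60
print(youden(1.00, 1.00))  # 1.00 for a perfect test
```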

95 Determine a cut-off for classification
- Data-driven cut-off
  - Youden's index: sensitivity + specificity − 1
- Decision-analytic cut-off
  - Determined by the clinical context
  - Relative importance ('utility') of the consequences of a true or false classification:
    - True-positive classification: correct treatment
    - False-positive classification: overtreatment
    - True-negative classification: no treatment
    - False-negative classification: undertreatment
  - Harm: net overtreatment (FP − TN)
  - Benefit: net correct treatment (TP − FN)
  - Odds of the cut-off = H:B ratio

96 Evaluation of performance
- Youden index: "science of the method"
- Net Benefit: "utility of the method"
- References: Peirce, Science 1884; Vergouwe, Semin Urol Oncol 2002; Vickers, Med Decis Making 2006

97 Net Benefit
- Net Benefit = (TP − w × FP) / N, with w = cut-off / (1 − cut-off)
  - e.g., cut-off 50%: w = 0.5/0.5 = 1; cut-off 20%: w = 0.2/0.8 = 1/4
  - w = the H:B ratio
- "The number of true-positive classifications, penalized for false-positive classifications"

98 Increase in AUC (without → with HDL)
- Cut-off 5.6%: AUC 0.696 → 0.719
- Cut-off 20%: AUC 0.550 → 0.579

99 Continuous variant
- Area: 0.762 → 0.774

100 Addition of a marker to a model
- Typically a small improvement in discriminative ability according to the AUC (or c statistic)
- The c statistic is blamed for being insensitive
- Hence: study 'reclassification'

102 Net Reclassification Index
- NRI = improvement in sensitivity + improvement in specificity
- NRI = [P(move up | event) − P(move down | event)] + [P(move down | non-event) − P(move up | non-event)]

103 Reclassification table (figure; cell counts 29, 7, 173, 174; events: 22/183 = 12%; non-events: −1/3081 = −0.03%)

104 NRI for 5.6% cut-off?
- NRI for CHD: 7/183 = 3.8%
- NRI for no CHD: 24/3081 = 0.8%
- NRI = 4.6%
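
These components follow from the NRI definition on slide 102. A minimal sketch (function name illustrative; the net reclassification counts are taken from the slide):

```python
def nri(net_up_events, n_events, net_down_nonevents, n_nonevents):
    # NRI = net proportion of events moving up + net proportion of non-events moving down
    nri_e = net_up_events / n_events
    nri_ne = net_down_nonevents / n_nonevents
    return nri_e, nri_ne, nri_e + nri_ne

# 5.6% cut-off: net 7 events reclassified up, net 24 non-events reclassified down
nri_e, nri_ne, total = nri(7, 183, 24, 3081)
print(f"{nri_e:.1%} + {nri_ne:.1%} = {total:.1%}")  # 3.8% + 0.8% = 4.6%
```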

105 NRI and sens/spec
- NRI = Δ(sens) + Δ(spec)
- Sens without HDL = 135/183 = 73.8%
- Sens with HDL = 142/183 = 77.6%

106 NRI better than ΔAUC?
- NRI = Δ(sens) + Δ(spec)
- AUC for binary classification = (sens + spec) / 2

107 NRI and ΔAUC
- NRI = Δ(sens) + Δ(spec)
- AUC for binary classification = (sens + spec) / 2
  - So ΔAUC = (Δ(sens) + Δ(spec)) / 2, and NRI = 2 × ΔAUC
- Δ(Youden) = Δ(sens) + Δ(spec)
  - So NRI = Δ(Youden)

108 NRI has ‘absurd’ weighting?

109 Decision-analytic performance: NB
- Net Benefit = (TP − w × FP) / N
- Model without HDL:
  - TP = 3 + 132 = 135
  - FP = 166 + 901 = 1067
  - w = 0.056/0.944 = 0.059
  - N = 3264
  - NB = (135 − 0.059 × 1067) / 3264 = 2.21%
- Model with HDL:
  - NB = (142 − 0.059 × 1043) / 3264 = 2.47%
- Δ(NB):
  - Increase in TP: 10 − 3 = 7
  - Decrease in FP: 166 − 142 = 24
  - Increase in NB: (7 + 0.059 × 24) / 3264 = 0.26%
- Interpretation:
  - "2.6 more true CHD events identified per 1000 subjects, at the same number of FP classifications"
  - "HDL has to be measured in 1/0.26% ≈ 385 subjects to identify one more TP"
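
These calculations are easy to verify. A minimal Python sketch (names illustrative; counts from the slide; small differences from the slide's 2.21% / 2.47% / 385 arise from rounding w to 0.059 there):

```python
def net_benefit(tp, fp, n, w):
    # Net Benefit = (TP - w * FP) / N
    return (tp - w * fp) / n

n = 3264
w = 0.056 / 0.944                        # odds of the 5.6% cut-off, ~0.059
nb_ref = net_benefit(135, 1067, n, w)    # reference model, without HDL
nb_hdl = net_benefit(142, 1043, n, w)    # model with HDL added
delta = nb_hdl - nb_ref
print(f"NB {nb_ref:.2%} -> {nb_hdl:.2%}, increase {delta:.2%}")  # ~2.2% -> ~2.5%, ~0.26%
print(f"subjects to measure per extra TP: {1 / delta:.0f}")      # roughly 385-390
```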

110 Application to FHS

111 Continuous NRI: no categories
- Considers all cut-offs; information similar to the AUC and the decision curve

