
Presentation transcript: "Selective cutoff reporting in studies of diagnostic test accuracy of depression screening tools: Comparing traditional meta-analysis to individual patient data meta-analysis"

1 Selective cutoff reporting in studies of diagnostic test accuracy of depression screening tools: Comparing traditional meta-analysis to individual patient data meta-analysis
Brooke Levis, MSc, PhD Candidate
Jewish General Hospital and McGill University, Montreal, Quebec, Canada

2 Does Selective Reporting of Data-driven Cutoffs Exaggerate Accuracy? The Hockey Analogy

3 What is Screening?
- Purpose: to identify otherwise unrecognised disease, by sorting out apparently well persons who probably have a condition from those who probably do not
- Not diagnostic: positive tests require referral for diagnosis and, as appropriate, treatment
- A program, of which a test is one component
Illustration: This information was originally developed by the UK National Screening Committee/NHS Screening Programmes (www.screening.nhs.uk) and is used under the Open Government Licence v1.0

4 The Patient Health Questionnaire (PHQ-9)
- Depression screening tool
- Scores range from 0 to 27; higher scores = more severe symptoms

5 Selective Reporting of Results Using Data-Driven Cutoffs
Extreme scenarios:
- Cutoff of ≥ 0: all subjects are at or above the cutoff, so sensitivity = 100%
- Cutoff of ≥ 27: (virtually) all subjects are below the cutoff, so specificity = 100%
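The extreme scenarios above can be sketched in a few lines of code. This is an illustration only: the scores below are hypothetical, not study data, and the helper `sens_spec` is a made-up name.

```python
# Illustrative sketch (hypothetical scores, not study data): how the choice
# of PHQ-9 cutoff trades sensitivity against specificity.

def sens_spec(scores_cases, scores_noncases, cutoff):
    """Sensitivity and specificity when 'positive' means score >= cutoff."""
    tp = sum(s >= cutoff for s in scores_cases)        # true positives
    tn = sum(s < cutoff for s in scores_noncases)      # true negatives
    return tp / len(scores_cases), tn / len(scores_noncases)

# Hypothetical PHQ-9 scores (possible range 0-27) for cases and non-cases
cases = [9, 12, 14, 15, 18, 21, 23]
noncases = [0, 1, 2, 3, 4, 5, 7, 8, 10, 13]

sens0, spec0 = sens_spec(cases, noncases, 0)     # everyone screens positive
sens28, spec28 = sens_spec(cases, noncases, 28)  # nobody screens positive
print(sens0, spec0)    # sensitivity 100%, specificity 0% at cutoff 0
print(sens28, spec28)  # sensitivity 0%, specificity 100% above the max score
```

Sweeping `cutoff` from 0 to 28 and keeping the pair with the most favourable balance is exactly the data-driven selection the talk is concerned with.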

6 Does Selective Reporting of Data-driven Cutoffs Exaggerate Accuracy?
- Sensitivity increases from a cutoff of 8 to a cutoff of 11
- For the standard cutoff of 10, 897 cases (13%) are missing
- For cutoffs of 7-9 and 11, 52-58% of data are missing
(Manea et al., CMAJ, 2012)

7 Questions
- Does selective cutoff reporting lead to exaggerated estimates of accuracy?
- Can we identify predictable patterns of selective cutoff reporting?
- Why does selective cutoff reporting appear to impact sensitivity, but not specificity?
- Does selective cutoff reporting convert high heterogeneity in sensitivity (driven by small numbers of cases) into heterogeneity in reported cutoffs, leaving accuracy estimates that appear homogeneous?

8 Methods
Data source:
- Studies included in a published traditional meta-analysis of the diagnostic accuracy of the PHQ-9 (Manea et al., CMAJ, 2012)
Inclusion criteria:
- Unique patient sample
- Published diagnostic accuracy for MDD for at least one PHQ-9 cutoff
Data transfer:
- Invited authors of the eligible studies to contribute their original (de-identified) patient data
- Received data from 13 of 16 eligible datasets (80% of patients, 94% of MDD cases)

9 Methods
Data preparation:
- For each dataset, extracted PHQ-9 scores and MDD diagnostic status for each patient, plus information pertaining to weighting
Statistical analyses (2 sets performed):
- Traditional meta-analysis: for each cutoff between 7 and 15, included data only from the studies that reported accuracy results for that cutoff in the original publication
- IPD meta-analysis: for each cutoff between 7 and 15, included data from all studies
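The data-preparation step above can be sketched as follows. The inputs are toy stand-ins: the patient tuples and the `reported_cutoffs` sets are hypothetical, standing in for the real per-study IPD and for which cutoffs each original publication happened to report.

```python
# Sketch of the data preparation step with assumed toy inputs: for each
# dataset we hold patient-level (phq9_score, mdd) pairs; for each cutoff we
# tabulate the 2x2 counts that feed the meta-analysis models.

def two_by_two(patients, cutoff):
    """Return (tp, fn, fp, tn) for 'positive' = score >= cutoff."""
    tp = sum(1 for score, mdd in patients if mdd and score >= cutoff)
    fn = sum(1 for score, mdd in patients if mdd and score < cutoff)
    fp = sum(1 for score, mdd in patients if not mdd and score >= cutoff)
    tn = sum(1 for score, mdd in patients if not mdd and score < cutoff)
    return tp, fn, fp, tn

datasets = {  # hypothetical IPD; reported_cutoffs mimics selective reporting
    "study_A": {"patients": [(12, True), (9, True), (4, False), (11, False)],
                "reported_cutoffs": {10}},
    "study_B": {"patients": [(15, True), (13, True), (7, False), (3, False)],
                "reported_cutoffs": {10, 11, 12}},
}

for cutoff in range(7, 16):
    # Traditional MA: only studies whose publication reported this cutoff
    trad = [two_by_two(d["patients"], cutoff)
            for d in datasets.values() if cutoff in d["reported_cutoffs"]]
    # IPD MA: every study contributes at every cutoff
    ipd = [two_by_two(d["patients"], cutoff) for d in datasets.values()]
```

The contrast between `trad` and `ipd` is the whole design: identical models are fit to both sets of 2x2 tables, so any difference in pooled accuracy is attributable to which studies contribute at each cutoff.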

10 Comparison of Data Availability

Cutoff   Published data (traditional MA)    All data (IPD MA)
         Studies  Patients  MDD cases       Studies  Patients  MDD cases
7        4        2094      550             13       4589      1037
8        4        2094      550             13       4589      1037
9        4        1579      309             13       4589      1037
10       11       3794      723             13       4589      1037
11       5        1253      216             13       4589      1037
12       6        1388      261             13       4589      1037
13       4        1073      186             13       4589      1037
14       3        977       150             13       4589      1037
15       4        1075      193             13       4589      1037

11 Methods
Model: bivariate random-effects* meta-analysis
- Models sensitivity and specificity at the same time
- Accounts for clustering by study
- Provides an overall pooled sensitivity and specificity for each cutoff, for the 2 sets of analyses
- Within each set of analyses, each cutoff requires its own model
- Estimates between-study heterogeneity
Note: the model accounts for correlation between sensitivity and specificity at each threshold, but not for correlation of parameters across thresholds
*Random-effects model: sensitivity and specificity are assumed to vary across primary studies

12 Questions
- Does selective cutoff reporting lead to exaggerated estimates of accuracy?
- Can we identify predictable patterns of selective cutoff reporting?
- Why does selective cutoff reporting appear to impact sensitivity, but not specificity?
- Does selective cutoff reporting convert high heterogeneity in sensitivity (driven by small numbers of cases) into heterogeneity in reported cutoffs, leaving accuracy estimates that appear homogeneous?

13 Comparison of Diagnostic Accuracy

Cutoff   Published data (traditional MA)    All data (IPD MA)
         Studies  Sens   Spec               Studies  Sens   Spec
7        4        0.85   0.73               13       0.97   0.73
8        4        0.79   0.78               13       0.93   0.78
9        4        –      0.82               13       0.89   0.83
10       11       0.85   0.88               13       0.87   0.88
11       5        0.92   0.90               13       0.83   0.90
12       6        0.82   0.92               13       0.77   0.92
13       4        0.82   0.94               13       0.67   0.94
14       3        0.71   0.97               13       0.59   0.96
15       4        0.61   0.98               13       0.52   0.97

14 Comparison of ROC Curves

15 Questions
- Does selective cutoff reporting lead to exaggerated estimates of accuracy?
- Can we identify predictable patterns of selective cutoff reporting?
- Why does selective cutoff reporting appear to impact sensitivity, but not specificity?
- Does selective cutoff reporting convert high heterogeneity in sensitivity (driven by small numbers of cases) into heterogeneity in reported cutoffs, leaving accuracy estimates that appear homogeneous?

16 Publishing Trends by Study

17 Comparison of Sensitivity by Cutoff

18 Questions
- Does selective cutoff reporting lead to exaggerated estimates of accuracy?
- Can we identify predictable patterns of selective cutoff reporting?
- Why does selective cutoff reporting appear to impact sensitivity, but not specificity?
- Does selective cutoff reporting convert high heterogeneity in sensitivity (driven by small numbers of cases) into heterogeneity in reported cutoffs, leaving accuracy estimates that appear homogeneous?

19 Comparison of Diagnostic Accuracy

Cutoff   Published data (traditional MA)    All data (IPD MA)
         Studies  Sens   Spec               Studies  Sens   Spec
7        4        0.85   0.73               13       0.97   0.73
8        4        0.79   0.78               13       0.93   0.78
9        4        –      0.82               13       0.89   0.83
10       11       0.85   0.88               13       0.87   0.88
11       5        0.92   0.90               13       0.83   0.90
12       6        0.82   0.92               13       0.77   0.92
13       4        0.82   0.94               13       0.67   0.94
14       3        0.71   0.97               13       0.59   0.96
15       4        0.61   0.98               13       0.52   0.97

20 Why Sensitivity Changes with Moving Cutoffs, but Not Specificity

21 Questions
- Does selective cutoff reporting lead to exaggerated estimates of accuracy?
- Can we identify predictable patterns of selective cutoff reporting?
- Why does selective cutoff reporting appear to impact sensitivity, but not specificity?
- Does selective cutoff reporting convert high heterogeneity in sensitivity (driven by small numbers of cases) into heterogeneity in reported cutoffs, leaving accuracy estimates that appear homogeneous?

22 Heterogeneity

23 Summary
- Selective cutoff reporting in DTA studies of depression screening tools can distort accuracy across cutoffs, leading to exaggerated estimates of accuracy.
- These distortions were relatively minor for the PHQ-9, but would likely be much larger for measures where standard cutoffs are less consistently reported and more data-driven reporting appears to occur (e.g., the HADS).
- IPD meta-analysis can address this problem and also allows subgroup-based evaluation of accuracy.

24 Summary
STARD is undergoing revision. It:
- Needs to require a precision-based sample size calculation, to avoid very small samples (particularly small numbers of cases) and unstable estimates
- Needs to require reporting of the full spectrum of cutoffs, which is easily done with online appendices

25 Acknowledgements
Brett Thombs, Andrea Benedetti, Roy Ziegelstein, Pim Cuijpers, Simon Gilbody, John Ioannidis, Alex Levis, Danielle Rice, Scott Patten, Dean McMillan, Ian Shrier, Russell Steele, Lorie Kloda, the DEPRESSD Investigators, and other contributors
