1
Combining Tests and Multivariable Decision Rules
Michael A. Kohn, MD, MPP
6/9/2011
2
Combining Tests/Diagnostic Models
Importance of test non-independence
Recursive Partitioning
Logistic Regression
Variable (Test) Selection
Importance of validation separate from derivation (calibration and discrimination revisited)
3
Combining Tests: Example
Prenatal sonographic Nuchal Translucency (NT) and Nasal Bone Exam as dichotomous tests for Trisomy 21*
*Cicero, S., G. Rembouskos, et al. (2004). "Likelihood ratio for trisomy 21 in fetuses with absent nasal bone at the 11-14-week scan." Ultrasound Obstet Gynecol 23(3): 218-223.
4
If NT ≥ 3.5 mm, call the test positive for Trisomy 21*
*What's wrong with this definition?
6
In general, don't make multi-level tests like NT into dichotomous tests by choosing a fixed cutoff. I did it here to make the discussion of multiple tests easier; I arbitrarily chose to call ≥ 3.5 mm positive.
7
One Dichotomous Test

Nuchal Translucency   Trisomy 21 (D+)   D-     LR
≥ 3.5 mm              212               478    7.0
< 3.5 mm              121               4745   0.4
Total                 333               5223

Do you see that the LR of 7.0 is (212/333)/(478/5223)?
Review of Chapter 3: What are the sensitivity, specificity, PPV, and NPV of this test? (Be careful.)
8
Nuchal Translucency
Sensitivity = 212/333 = 64%
Specificity = 4745/5223 = 91%
Prevalence = 333/(333 + 5223) = 6% (study population: pregnant women about to undergo CVS, so high prevalence of Trisomy 21)
PPV = 212/(212 + 478) = 31%
NPV = 4745/(121 + 4745) = 97.5%*
*Not that great; prior to the test, P(D-) = 94%
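These figures can be checked directly from the 2x2 counts; a minimal Python sketch (not part of the original slides):

```python
# 2x2 counts for the NT test (>= 3.5 mm = positive)
tp, fp = 212, 478     # Trisomy 21 present / absent among NT-positive
fn, tn = 121, 4745    # Trisomy 21 present / absent among NT-negative

sensitivity = tp / (tp + fn)                   # 212/333   ~ 0.64
specificity = tn / (tn + fp)                   # 4745/5223 ~ 0.91
prevalence  = (tp + fn) / (tp + fp + fn + tn)  # ~ 0.06
ppv = tp / (tp + fp)                           # 212/690   ~ 0.31
npv = tn / (tn + fn)                           # 4745/4866 ~ 0.975
```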
9
Clinical Scenario – One Test
Pre-test probability of Down syndrome = 6%
NT positive
Pre-test prob: 0.06
Pre-test odds: 0.06/0.94 = 0.064
LR(+) = 7.0
Post-test odds = pre-test odds × LR(+) = 0.064 × 7.0 = 0.44
Post-test prob = 0.44/(0.44 + 1) = 0.31
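The same odds arithmetic is easy to script; here is a minimal sketch in Python (the function names are mine, not from the slides):

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(odds):
    return odds / (1 + odds)

def post_test_prob(pre_test_prob, lr):
    """Combine a pre-test probability with a likelihood ratio."""
    return odds_to_prob(prob_to_odds(pre_test_prob) * lr)

print(post_test_prob(0.06, 7.0))   # ~0.31, as on the slide
```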
10
NT Positive
Pre-test prob = 0.06
P(Result|Trisomy 21) = 0.64
P(Result|No Trisomy 21) = 0.09
Post-test prob = ?
http://www.quesgen.com/PostProbofDisease.php
LR Slide Rule
11
Nasal bone seen (NBA = No): negative for Trisomy 21
Nasal bone absent (NBA = Yes): positive for Trisomy 21
12
Second Dichotomous Test

Nasal Bone Absent   Tri21+   Tri21-   LR
Yes                 229      129      27.8
No                  104      5094     0.32
Total               333      5223

Do you see that the LR of 27.8 is (229/333)/(129/5223)?
13
Clinical Scenario – Two Tests, Using Probabilities
Pre-test probability of Trisomy 21 = 6%
NT positive for Trisomy 21 (≥ 3.5 mm)
Post-NT probability of Trisomy 21 = 31%
Nasal bone absent
Post-NBA probability of Trisomy 21 = ?
14
Clinical Scenario – Two Tests, Using Odds
Pre-test odds of Tri21 = 0.064
NT positive (LR = 7.0)
Post-test odds of Tri21 = 0.44
Nasal bone absent (LR = 27.8?)
Post-test odds of Tri21 = 0.44 × 27.8? = 12.4?
(P = 12.4/(1 + 12.4) = 92.5%?)
15
Clinical Scenario – Two Tests
Pre-test probability of Trisomy 21 = 6%
NT ≥ 3.5 mm AND nasal bone absent
16
Question
Can we use the post-test odds after a positive Nuchal Translucency as the pre-test odds for the positive Nasal Bone Examination? That is, can we combine the positive results by multiplying their LRs?
LR(NT+, NBE+) = LR(NT+) × LR(NBE+)? = 7.0 × 27.8? = 194?
17
Answer = No

NT     NBE    Trisomy 21+   %      Trisomy 21-   %       LR
Pos    Pos    158           47%    36            0.7%    69
Pos    Neg    54            16%    442           8.5%    1.9
Neg    Pos    71            21%    93            1.8%    12
Neg    Neg    50            15%    4652          89%     0.2
Total         333           100%   5223          100%

LR(NT+, NBE+) = 69, not 194.
Post-test probability = 158/(158 + 36) = 81%, not 92.5%.
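A short sketch of the comparison, using the counts in the table above (variable names are illustrative):

```python
N_POS, N_NEG = 333, 5223   # total Trisomy 21+ and Trisomy 21- fetuses

def lr(n_pos_with_result, n_neg_with_result):
    """Likelihood ratio for a given result pattern."""
    return (n_pos_with_result / N_POS) / (n_neg_with_result / N_NEG)

lr_nt_pos  = lr(212, 478)   # NT >= 3.5 mm alone       ~ 7.0
lr_nbe_pos = lr(229, 129)   # nasal bone absent alone  ~ 27.8
lr_both    = lr(158, 36)    # both results together    ~ 69

print(lr_nt_pos * lr_nbe_pos)   # ~194: what independence would predict
print(lr_both)                  # ~69: what the data actually show
```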
18
Non-Independence
Absence of the nasal bone does not tell you as much if you already know that the nuchal translucency is ≥ 3.5 mm.
19
Clinical Scenario, Using Odds
Pre-test odds of Tri21 = 0.064
NT+/NBE+ (LR = 68.8)
Post-test odds = 0.064 × 68.8 = 4.40
(P = 4.40/(1 + 4.40) = 81%, not 92.5%)
20
Non-Independence
21
Non-Independence of NT and NBA
Apparently, even in chromosomally normal fetuses, enlarged NT and absence of the nasal bone are associated. A false positive on the NT makes a false positive on the NBE more likely: of normal (D-) fetuses with NT < 3.5 mm, only 2.0% had the nasal bone absent, whereas of normal (D-) fetuses with NT ≥ 3.5 mm, 7.5% had the nasal bone absent.
Some (but not all) of this may have to do with ethnicity. In this London study, chromosomally normal fetuses of Afro-Caribbean ethnicity had both larger NTs and more frequent absence of the nasal bone.
In Trisomy 21 (D+) fetuses, normal NT was associated with presence of the nasal bone, so a false negative on the NT was associated with a false negative on the NBE.
22
Non-Independence
Instead of looking for the nasal bone, what if the second test were just a repeat measurement of the nuchal translucency? A second positive NT would do little to increase your certainty of Trisomy 21. If it was a false positive the first time around, it is likely to be a false positive the second time.
23
Reasons for Non-Independence
Tests measure the same aspect of disease. One aspect of Down syndrome is slower fetal development; the NT decreases more slowly and the nasal bone ossifies later. Chromosomally NORMAL fetuses that develop slowly will tend to have false positives on BOTH the NT Exam and the Nasal Bone Exam.
24
Reasons for Non-Independence
Heterogeneity of disease (e.g., spectrum of severity).*
Heterogeneity of non-disease. (See EBD page 158.)
*In this example, Down syndrome is the only chromosomal abnormality considered, so disease is fairly homogeneous.
25
Unless tests are independent, we can't combine results by multiplying LRs.
26
Ways to Combine Multiple Tests
On a group of patients (derivation set), perform the multiple tests and (independently*) determine true disease status (apply the gold standard). Then:
Measure the LR for each possible combination of results
Recursive partitioning
Logistic regression
*Beware of incorporation bias
27
Determine LR for Each Result Combination

NT     NBA    Tri21+   %      Tri21-   %       LR    Post-Test Prob*
Pos    Pos    158      47%    36       0.7%    69    81%
Pos    Neg    54       16%    442      8.5%    1.9   11%
Neg    Pos    71       21%    93       1.8%    12    43%
Neg    Neg    50       15%    4652     89.1%   0.2   1%
Total         333      100%   5223     100%

*Assumes pre-test prob = 6%
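The whole table can be generated from the raw counts; a minimal Python sketch assuming a 6% pre-test probability:

```python
PRE_TEST_PROB = 0.06
pre_test_odds = PRE_TEST_PROB / (1 - PRE_TEST_PROB)

# (NT, NBA) -> (Trisomy 21+, Trisomy 21-) counts from the derivation set
combos = {
    ("Pos", "Pos"): (158, 36),
    ("Pos", "Neg"): (54, 442),
    ("Neg", "Pos"): (71, 93),
    ("Neg", "Neg"): (50, 4652),
}

for (nt, nba), (n_pos, n_neg) in combos.items():
    lr = (n_pos / 333) / (n_neg / 5223)
    post_odds = pre_test_odds * lr
    post_prob = post_odds / (1 + post_odds)
    print(f"NT {nt}, NBA {nba}: LR = {lr:5.1f}, post-test prob = {post_prob:.0%}")
```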
28
Sort by LR (Descending)

NT     NBA    Tri21+   %      Tri21-   %        LR
Pos    Pos    158      47%    36       0.70%    69
Neg    Pos    71       21%    93       1.80%    12
Pos    Neg    54       16%    442      8.50%    1.9
Neg    Neg    50       15%    4652     89.10%   0.2
29
Apply Chapter 4 – Multilevel Tests
Now you have a multilevel test (in this case, 4 levels):
You have an LR for each test result
You can create an ROC curve and calculate the AUROC
Given the pre-test probability and the treatment threshold probability (C/(B+C)), you can find the optimal cutoff
30
Create ROC Table

NT     NBE    Tri21+   Sens   Tri21-   1 - Spec   LR
(start)        –        0%     –        0%
Pos    Pos    158      47%    36       0.70%      69
Neg    Pos    71       68%    93       3%         12
Pos    Neg    54       84%    442      11%        1.9
Neg    Neg    50       100%   4652     100%       0.2
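From this table, the AUROC can be computed with the trapezoidal rule. A sketch using the raw counts; the result comes out near 0.90, broadly consistent with the 0.896 reported on the next slide:

```python
# Result categories sorted by descending LR, as in the ROC table
tri21_pos = [158, 71, 54, 50]     # D+ counts per category
tri21_neg = [36, 93, 442, 4652]   # D- counts per category

n_pos, n_neg = sum(tri21_pos), sum(tri21_neg)
sens = auc = 0.0
for d_pos, d_neg in zip(tri21_pos, tri21_neg):
    step_up    = d_pos / n_pos    # gain in sensitivity
    step_right = d_neg / n_neg    # gain in (1 - specificity)
    auc += step_right * (sens + step_up / 2)   # trapezoid under this segment
    sens += step_up

print(round(auc, 3))   # roughly 0.90
```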
31
AUROC = 0.896
32
Optimal Cutoff

NT     NBE    LR     Post-Test Prob
Pos    Pos    69     0.81
Neg    Pos    12     0.43
Pos    Neg    1.9    0.11
Neg    Neg    0.2    0.01

Assume pre-test probability = 6%; threshold for CVS is 2%.
33
Determine LR for Each Result Combination
2 dichotomous tests: 4 combinations
3 dichotomous tests: 8 combinations
4 dichotomous tests: 16 combinations
Etc.
2 3-level tests: 9 combinations
3 3-level tests: 27 combinations
Etc.
34
Determine LR for Each Result Combination
How do you handle continuous tests?
Not always practical for groups of tests.
35
Recursive Partitioning: Measure NT First
36
Recursive Partitioning: Examine Nasal Bone First
37
Do Nasal Bone Exam First
Better separates Trisomy 21 from chromosomally normal fetuses
If your threshold for CVS is between 11% and 43%, you can stop after the nasal bone exam
If your threshold is between 1% and 11%, you should do the NT exam only if the NBE is normal
38
Recursive Partitioning: Examine Nasal Bone First. CVS if P(Trisomy 21) > 5%.
40
Recursive Partitioning
Same as Classification and Regression Trees (CART).
You don't have to work out probabilities (or LRs) for all possible combinations of tests, because of tree pruning.
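As an illustration (not from the original slides), the combination counts can be expanded into an individual-level dataset and fed to scikit-learn's CART implementation; with this data the first split lands on the nasal bone exam, matching the "nasal bone first" tree above:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Reconstruct individual-level records from the combination table.
# Columns: NT (1 = >= 3.5 mm), NBA (1 = nasal bone absent); y = 1 for Trisomy 21.
cells = [  # (NT, NBA, n_trisomy21, n_normal)
    (1, 1, 158, 36),
    (1, 0, 54, 442),
    (0, 1, 71, 93),
    (0, 0, 50, 4652),
]
X, y = [], []
for nt, nba, n_pos, n_neg in cells:
    X += [[nt, nba]] * (n_pos + n_neg)
    y += [1] * n_pos + [0] * n_neg

tree = DecisionTreeClassifier(max_depth=2)
tree.fit(np.array(X), np.array(y))
print(export_text(tree, feature_names=["NT", "NBA"]))
```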
41
Recursive Partitioning
Does not deal well with continuous test results*
*when there is a monotonic relationship between the test result and the probability of disease
42
Logistic Regression
Ln(Odds(D+)) = a + b_NT(NT) + b_NBA(NBA) + b_interact(NT)(NBA)
where a positive test result is coded as 1 and a negative result as 0.
Needs a course of its own!
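A sketch of fitting this model with statsmodels on data reconstructed from the published counts (the reconstruction and variable names are mine); a negative interaction coefficient is one way the non-independence of the two tests shows up:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Individual-level records reconstructed from the combination table.
rows = []
for nt, nba, n_pos, n_neg in [(1, 1, 158, 36), (1, 0, 54, 442),
                              (0, 1, 71, 93), (0, 0, 50, 4652)]:
    rows += [{"nt": nt, "nba": nba, "tri21": 1}] * n_pos
    rows += [{"nt": nt, "nba": nba, "tri21": 0}] * n_neg
df = pd.DataFrame(rows)

# Ln(Odds(D+)) = a + b_NT*NT + b_NBA*NBA + b_interact*NT*NBA
model = smf.logit("tri21 ~ nt * nba", data=df).fit()
print(model.params)   # the nt:nba coefficient comes out negative here
```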
43
Why does logistic regression model log-odds instead of probability? Related to why the LR Slide Rule's log-odds scale helps us visualize combining test results.
44
Probability of Trisomy 21 vs. Maternal Age
45
Ln(Odds) of Trisomy 21 vs. Maternal Age
46
Combining 2 Continuous Tests: regions with > 1% vs. < 1% probability of Trisomy 21
47
Choosing Which Tests to Include in the Decision Rule
So far we have focused on how to combine the results of two or more tests, not on which of several tests to include in a decision rule.
Variable selection options include:
Recursive partitioning
Automated stepwise logistic regression
The choice of variables in the derivation data set requires confirmation in a separate validation data set.
48
Variable Selection Especially susceptible to overfitting
49
Need for Validation: Example*
Study of clinical predictors of bacterial diarrhea. Evaluated 34 historical items and 16 physical examination questions. Three questions (abrupt onset, > 4 stools/day, and absence of vomiting) best predicted a positive stool culture (sensitivity 86%; specificity 60% for all 3).
Would these 3 be the best predictors in a new dataset? Would they have the same sensitivity and specificity?
*DeWitt TG, Humphrey KF, McCarthy P. Clinical predictors of acute bacterial diarrhea in young children. Pediatrics. Oct 1985;76(4):551-556.
50
Need for Validation
Developing a prediction rule by choosing a few tests and findings from a large number of candidates takes advantage of chance variations* in the data. The predictive ability of the rule will probably disappear when you try to validate it on a new dataset. This can be referred to as overfitting.
*e.g., low serum calcium in 12 children with hemolytic uremic syndrome and bad outcomes
51
VALIDATION
No matter what technique (CART or logistic regression) is used, the tests included in a model and the way in which their results are combined must be tested on a data set different from the one used to derive the rule.
Beware of studies that use a validation set to tweak the model. This is really just a second derivation step.
52
Prognostic Tests and Multivariable Diagnostic Models
Commonly express results in terms of a probability:
--risk of the outcome by a fixed time point (prognostic test)
--posterior probability of disease (diagnostic model)
Need to assess both calibration and discrimination.
53
Validation Dataset
Measure all the variables needed for the model.
Determine disease status (D+ or D-) on all subjects.
54
VALIDATION: Calibration
--Divide the dataset into probability groups (deciles, quintiles, …) based on the model (no tweaking allowed).
--In each group, compare the actual D+ proportion to the model-predicted probability.
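A minimal calibration check in Python, assuming a validation DataFrame with a model-predicted probability column "pred" and observed disease status "d" (the column names are hypothetical):

```python
import pandas as pd

def calibration_table(df, n_groups=10):
    """Compare the observed D+ proportion with the mean predicted probability
    within predicted-probability groups (deciles by default)."""
    groups = pd.qcut(df["pred"], q=n_groups, duplicates="drop")
    return df.groupby(groups, observed=True).agg(
        n=("d", "size"),
        mean_predicted=("pred", "mean"),
        observed_proportion=("d", "mean"),
    )
```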
55
VALIDATION: Discrimination
--The test result is the model-predicted probability of disease.
--Use the Walking Man to draw the ROC curve and calculate the AUROC.
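One way to code the Walking Man construction (a sketch; ties in predicted probability are not handled specially):

```python
def walking_man_auroc(disease, predicted_prob):
    """Walk through subjects in order of decreasing predicted probability,
    stepping up 1/n_pos for each D+ and right 1/n_neg for each D-.
    The area swept out while stepping right is the AUROC."""
    n_pos = sum(disease)
    n_neg = len(disease) - n_pos
    height = area = 0.0
    for _, d in sorted(zip(predicted_prob, disease), reverse=True):
        if d:
            height += 1 / n_pos           # step up
        else:
            area += height * (1 / n_neg)  # step right
    return area
```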
56
Combining Tests/Diagnostic Models
Importance of test non-independence
Recursive Partitioning
Logistic Regression
Variable (Test) Selection
Importance of validation separate from derivation (calibration and discrimination revisited)