
Medical Epidemiology: Interpreting Medical Tests and Other Evidence



Presentation transcript of "Medical Epidemiology: Interpreting Medical Tests and Other Evidence":


Slide 1: Medical Epidemiology: Interpreting Medical Tests and Other Evidence

Slide 2: Outline
- Dichotomous model
- Developmental characteristics
  - Test parameters
  - Cut-points and Receiver Operating Characteristic (ROC)
- Clinical interpretation
  - Predictive values: keys to clinical practice
  - Bayes' Theorem and likelihood ratios
  - Pre- and post-test probabilities and odds of disease
  - Test interpretation in context
  - True vs. test prevalence
- Combination tests: serial and parallel testing
- Disease screening
- Why everything is a test!

Slide 3: Dichotomous model: simplification of scale
- A test usually yields a continuous or complex measurement.
- It is often summarized on a simpler, reductionist scale, e.g.:
  - ordinal grading (e.g. cancer staging)
  - dichotomization: yes or no, go or stop

Slide 4: Dichotomous model: test errors from dichotomization
In the standard 2x2 table (a = true positives, b = false positives, c = false negatives, d = true negatives), the types of errors are:
- False positives = positive tests that are wrong = b
- False negatives = negative tests that are wrong = c

Slide 5: Developmental characteristics: test parameters
Error rates as conditional probabilities:
- Pr(T+|D-) = false positive rate (FP rate) = b/(b+d)
- Pr(T-|D+) = false negative rate (FN rate) = c/(a+c)

Slide 6: Developmental characteristics: test parameters
The complements of the error rates are the desirable test properties:
- Sensitivity = Pr(T+|D+) = 1 - FN rate = a/(a+c). Mnemonic: sensitivity is PID, "Positive In Disease" (as in pelvic inflammatory disease).
- Specificity = Pr(T-|D-) = 1 - FP rate = d/(b+d). Mnemonic: specificity is NIH, "Negative In Health" (as in National Institutes of Health).

Slide 7: Typical setting for finding sensitivity and specificity
- Best if everyone who gets the new test also gets the "gold standard".
- That doesn't happen.
- Even the reverse doesn't happen.
- Not even a sample of each (case-control type).
- In practice: a case series of patients who happened to have both tests.

Slide 8: Setting for finding sensitivity and specificity
- Sensitivity should not be tested only in the "sickest of the sick"; the study should include the full spectrum of disease.
- Specificity should not be tested only in the "healthiest of the healthy"; the comparison group should include similar conditions.

Slides 9-17: Developmental characteristics: cut-points and the Receiver Operating Characteristic (ROC)
[Figure slides: overlapping distributions of test values in healthy and sick patients. Sliding the cut-point trades one error for the other, e.g. false positives 20% with true positives 82%; false positives 9% with true positives 70%; false positives 100% with true positives 100%; false positives 50% with true positives 90%. Plotting the true positive rate against the false positive rate for every possible cut-point traces out the ROC curve.]

Slide 18: Receiver Operating Characteristic (ROC)
- The ROC curve allows comparison of different tests for the same condition without (i.e. before) specifying a cut-point.
- The test with the largest AUC (area under the curve) is the best.
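
The AUC is literally the area under the (false positive rate, true positive rate) points swept out by the cut-point. A small sketch using the trapezoidal rule; the operating points are the illustrative percentages quoted on the preceding figure slides, not real data:

```python
def auc_trapezoid(points):
    """Area under an ROC curve given (false_pos_rate, true_pos_rate)
    operating points; sorts them by false positive rate first."""
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2   # trapezoid between cut-points
    return area

# the end-points plus the illustrated cut-points
roc = [(0.0, 0.0), (0.09, 0.70), (0.20, 0.82), (0.50, 0.90), (1.0, 1.0)]
area = auc_trapezoid(roc)   # about 0.85 for these points
```

A useless test (sensitivity always equal to the false positive rate) would trace the diagonal and score 0.5; a perfect test scores 1.0.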


Slide 20: Developmental characteristics: test parameters
Problems in assessing test parameters:
- Lack of an objective "gold standard" for testing, because it is
  - unavailable, except e.g. at autopsy, or
  - too expensive, invasive, risky, or unpleasant.
- Paucity of information on tests in the healthy:
  - too expensive, invasive, unpleasant, risky, and possibly unethical for use in healthy people;
  - since test negatives are usually not pursued with more extensive work-ups, information on false negatives is lacking.

Slide 21: Clinical interpretation: predictive values
Most test positives below are sick. But this is only because there are as many sick as healthy people overall. What if fewer people were sick, relative to the healthy?

Slide 22: Clinical interpretation: predictive values
Now most test positives below are healthy. This is because the false positives from the larger healthy group outnumber the true positives from the sick group. Thus the chance that a test positive is sick depends on the prevalence of the disease in the group tested!

Slide 23: Clinical interpretation: predictive values
But the prevalence of the disease in the group tested depends on whom you choose to test. The chance that a test positive is sick, and the chance that a test negative is healthy, are what a physician needs to know. These are not sensitivity and specificity! The numbers a physician needs are the predictive values of the test.

Slide 24: Clinical interpretation: predictive values
- Sensitivity (Se) = Pr{T+|D+} = true positives / total with the disease
- Positive predictive value (PV+, PPV) = Pr{D+|T+} = true positives / total positive on the test

Slide 25: Positive predictive value
- The predictive value of a positive test.
- If I have a positive test, does that mean I have the disease? No. Then what does it mean?
- It answers the question: "If I have a positive test, what is the chance (probability) that I have the disease?"
- It is the probability of having the disease "after" you have a positive test (the post-test probability).
- (Watch for "of": it usually precedes the denominator, and the numerator is always part of the denominator.)

Slide 26: Clinical interpretation: predictive values
[Figure: Venn diagram of T+ and D+; PV+ is the fraction of the T+ circle covered by the overlap "T+ and D+".]

Slide 27: Clinical interpretation: predictive values
- Specificity (Sp) = Pr{T-|D-} = true negatives / total without the disease
- Negative predictive value (PV-, NPV) = Pr{D-|T-} = true negatives / total negative on the test

Slide 28: Negative predictive value
- The predictive value of a negative test.
- If I have a negative test, does that mean I don't have the disease? What does it mean?
- It answers the question: "If I have a negative test, what is the chance that I don't have the disease?"

Slide 29: Mathematicians don't like PV-
- PV- is "the probability of no disease given a negative test result".
- They prefer 1 - PV-: "the probability of disease given a negative test result".
- This is also referred to as the "post-test probability" (of a negative test).
- Example: PV- = 0.95 means the post-test probability for a negative test result is 0.05.
- Example: PV+ = 0.90 means the post-test probability for a positive test result is 0.90.

Slide 30: Mathematicians don't like specificity either
- They prefer the false positive rate, which is 1 - specificity.

Slide 31: Where do you find PPV?
- In the sensitivity/specificity table? No.
- Either make a new table, or switch to odds.

Slides 32-34: [Table figures: "Use this table? NO" and "Make a new table". The table that defined sensitivity and specificity cannot simply be read the other way to get PPV; instead a new 2x2 table is built from the expected numbers of diseased and healthy patients in the population being tested.]

Slide 35: Switch to odds
- 1000 patients: 100 have the disease, 900 are healthy. Who will test positive? (Sensitivity 0.95; false positive rate 0.08.)
- Diseased: 100 × 0.95 = 95. Healthy: 900 × 0.08 = 72.
- We end up with 95 + 72 = 167 positive tests, of which 95 have the disease.
- PPV = 95/167 ≈ 0.57
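
The counting argument on this slide generalizes to any cohort size and prior. A minimal sketch using the slide's own numbers (prevalence 10%, Se 0.95, Sp 0.92):

```python
def ppv_from_counts(n, prevalence, sensitivity, specificity):
    """Expected PPV obtained by counting who tests positive in a cohort."""
    diseased = n * prevalence
    healthy = n - diseased
    true_pos = diseased * sensitivity         # e.g. 100 x 0.95 = 95
    false_pos = healthy * (1 - specificity)   # e.g. 900 x 0.08 = 72
    return true_pos / (true_pos + false_pos)

ppv = ppv_from_counts(1000, 0.10, 0.95, 0.92)   # 95/167, about 0.57
```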

Slide 36: From pre-test to post-test odds
- Diseased: 100 × 0.95 = 95. Healthy: 900 × 0.08 = 72.
- 100/900 = the pre-test odds.
- 0.95/0.08 = sensitivity / (1 - specificity) = the probability of a positive test in the diseased over the probability of a positive test in the healthy.
- 95/72 = the post-test odds. The probability is 95/(95 + 72).

Slide 37: Remember to switch back from odds to probability: probability = odds / (1 + odds).

Slide 38: What is this second fraction?
- It is the likelihood ratio positive (LR+).
- Multiplied by any patient's pre-test odds, it gives their post-test odds.
- Comparing the LR+ of different tests compares their ability to "rule in" a diagnosis.
- As specificity increases, LR+ increases and PPV increases (mnemonic: SpPIn).

Slide 39: Clinical interpretation: likelihood ratios
- Likelihood ratio = Pr{test result|disease present} / Pr{test result|disease absent}
- LR+ = Pr{T+|D+} / Pr{T+|D-} = sensitivity / (1 - specificity)
- LR- = Pr{T-|D+} / Pr{T-|D-} = (1 - sensitivity) / specificity
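
These definitions chain together into the odds form of Bayes' rule: post-test odds = pre-test odds × LR. A minimal sketch, rechecked against the running 100-diseased/900-healthy example:

```python
def lr_pos(se, sp):
    """Likelihood ratio of a positive result."""
    return se / (1 - sp)

def lr_neg(se, sp):
    """Likelihood ratio of a negative result."""
    return (1 - se) / sp

def odds(p):
    """Probability -> odds."""
    return p / (1 - p)

def prob(o):
    """Odds -> probability."""
    return o / (1 + o)

# running example: prevalence 10%, Se 0.95, Sp 0.92
pre_odds = odds(0.10)                       # 100/900 = 1/9
post_odds = pre_odds * lr_pos(0.95, 0.92)   # 95/72
post_prob = prob(post_odds)                 # 95/167, about 0.57
```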

Slide 40: Clinical interpretation: positive likelihood ratio and PV+
Let O = the pre-test odds of disease. Then post-odds(+) = O × LR+, and PV+ = post-odds(+) / (1 + post-odds(+)).

Slide 41: Likelihood ratio negative
- Diseased: 100 × 0.05 = 5. Healthy: 900 × 0.92 = 828.
- 100/900 = the pre-test odds.
- 0.05/0.92 = (1 - sensitivity) / specificity = the probability of a negative test in the diseased over the probability of a negative test in the healthy (the LR-).
- Post-test odds = 5/828; probability = 5/833 ≈ 0.6%.
- As sensitivity increases, LR- decreases and NPV increases (mnemonic: SnNOut).

Slide 42: Clinical interpretation: negative likelihood ratio and PV-
Post-odds(-) = O × LR-, where O is the pre-test odds of disease.

Slide 43: Remember to switch back to probability, and also to take 1 minus it to get PV-.

Slide 44: Post-test probability given a negative test = post-odds(-) / (1 + post-odds(-)).

Slide 45: The value of a diagnostic test depends on the prior probability of disease
- Prevalence (prior probability) 5%, sensitivity 90%, specificity 85%: PV+ = 24%, PV- = 99%. The test is not as useful when disease is unlikely.
- Prevalence (prior probability) 90%, sensitivity 90%, specificity 85%: PV+ = 98%, PV- = 49%. The test is not as useful when disease is likely.

Slide 46: Clinical interpretation of post-test probability
[Figure: a probability scale with a low threshold below which disease is ruled out and a high threshold above which disease is ruled in.]

Slide 47: Advantages of LRs
- The higher (or lower) the LR, the higher (or lower) the post-test disease probability.
- Which test will yield the highest post-test probability in a given patient? The test with the largest LR+.
- Which test will yield the lowest post-test probability in a given patient? The test with the smallest LR-.

Slide 48: Advantages of LRs
- Clear separation of test characteristics from disease probability.

Slide 49: Likelihood ratios: advantage
- They measure a test's ability to rule disease in or out independent of disease probability.
- If Test A's LR+ > Test B's LR+, then Test A's PV+ > Test B's PV+, always.
- If Test A's LR- < Test B's LR-, then Test A's PV- > Test B's PV-, always.

Slide 50: Using likelihood ratios to determine post-test disease probability [figure slide]


Slide 52: Predictive values, alternate formulation: Bayes' Theorem
- PV+ = (Se × P) / (Se × P + (1 - Sp) × (1 - P)), where P is the pre-test prevalence. Use high specificity to "rule in" disease.
- PV- = (Sp × (1 - P)) / (Sp × (1 - P) + (1 - Se) × P). Use high sensitivity to "rule out" disease.
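
Bayes' Theorem is easy to check numerically. A sketch that reproduces the PV+/PV- pairs from the "value of a diagnostic test" slide (5% vs. 90% prior, Se 90%, Sp 85%):

```python
def ppv(se, sp, p):
    """Bayes' Theorem: Pr(D+|T+) from sensitivity, specificity, prior p."""
    return se * p / (se * p + (1 - sp) * (1 - p))

def npv(se, sp, p):
    """Pr(D-|T-) from sensitivity, specificity, prior p."""
    return sp * (1 - p) / (sp * (1 - p) + (1 - se) * p)

low = (ppv(0.90, 0.85, 0.05), npv(0.90, 0.85, 0.05))    # ~0.24, ~0.99
high = (ppv(0.90, 0.85, 0.90), npv(0.90, 0.85, 0.90))   # ~0.98, ~0.49
```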

Slides 53-54: Clinical interpretation: predictive values [figure slides]

Slide 55: If predictive value is more useful, why is it not reported?
- Should labs report it? Only if everyone is tested, and even then it is doubtful.
- Instead, you take sensitivity and specificity from the literature and add YOUR OWN pre-test probability.

Slide 56: So how do you figure the pre-test probability?
- Start with disease prevalence.
- Refine to the local population.
- Refine to the population you serve.
- Refine according to the patient's presentation.
- Add in the results of history and exam (clinical suspicion).
- Also consider your own threshold for testing.

Slide 57: Why everything is a test
- Once a tentative diagnosis is formed, each piece of new information (symptom, sign, or test result) should provide evidence to rule it in or out.
- Before the new information is acquired, the physician's rational synthesis of all available information can be embodied in an estimate of pre-test prevalence.
- Rationally, the new information should update that estimate to a post-test prevalence, in the manner described above for a diagnostic test.
- In practice it is rare to proceed from precise numerical estimates. Nevertheless, an implicit understanding of this logic makes clinical practice more rational and effective.

Slide 58: Pre-test probability: clinical significance
- An expected test result means more than an unexpected one.
- The same clinical findings have different meanings in different settings (e.g. a scheduled versus an unscheduled visit): a heart sound, a tender area.
- Other examples: a neurosurgeon's exam; lupus nephritis.

Slide 59: What proportion of all patients will test positive?
- Diseased × sensitivity + healthy × (1 - specificity)
- = prevalence × sensitivity + (1 - prevalence) × (1 - specificity)
- We call this the "test prevalence", i.e. the prevalence according to the test.

Slide 60: Sens = Spec = 95%
- What if the test prevalence is 5%?
- What if it is 95%?
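
Since test prevalence t = p·Se + (1 - p)·(1 - Sp) is linear in the true prevalence p, it can be inverted. A sketch answering the slide's two questions; the inversion algebra is mine, not spelled out in the deck:

```python
def true_prevalence(test_prev, se, sp):
    """Invert test_prev = p*se + (1-p)*(1-sp) to recover p."""
    return (test_prev - (1 - sp)) / (se + sp - 1)

# Se = Sp = 0.95: a test prevalence of 5% equals the test's own
# false positive floor, so it implies essentially no true disease;
# a test prevalence of 95% implies essentially everyone is diseased.
p_low = true_prevalence(0.05, 0.95, 0.95)    # ~0
p_high = true_prevalence(0.95, 0.95, 0.95)   # ~1
```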


Slide 62: Combination tests: serial and parallel testing
Combinations of sensitivity and specificity superior to those of any single test can sometimes be achieved by strategic use of multiple tests. There are two usual ways of doing this:
- Serial testing: use more than one test in sequence, stopping at the first negative test. Diagnosis requires all tests to be positive.
- Parallel testing: use more than one test simultaneously, diagnosing if any test is positive.

Slide 63: Combination tests: serial testing
- Doing the tests sequentially, instead of together with the same decision rule, is a cost-saving measure.
- This strategy increases specificity above that of any of the individual tests, but degrades sensitivity below that of any of them singly.
- However, the sensitivity of the serial combination may still be higher than would be achievable by raising the cut-point of any single test to reach the same specificity as the serial combination.

Slide 64: Demonstration: serial testing with independent tests
- Let Se_SC and Sp_SC be the sensitivity and specificity of the serial combination.
- Se_SC = the product of all sensitivities = Se_1 × Se_2 × ...; hence Se_SC < every individual Se_i.
- 1 - Sp_SC = the product of all (1 - Sp_i); hence Sp_SC > every individual Sp_i.
- Serial testing is used to rule in disease.

Slide 65: Combination tests: parallel testing
- The usual decision strategy diagnoses if any test is positive. This increases sensitivity above that of any of the individual tests, but degrades specificity below that of any individual test.
- However, the specificity of the combination may be higher than would be achievable by lowering the cut-point of any single test to reach the same sensitivity as the parallel combination.

Slide 66: Demonstration: parallel testing with independent tests
- Let Se_PC and Sp_PC be the sensitivity and specificity of the parallel combination.
- 1 - Se_PC = the product of all (1 - Se_i); hence Se_PC > every individual Se_i.
- Sp_PC = the product of all Sp_i; hence Sp_PC < every individual Sp_i.
- Parallel testing is used to rule out disease.
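
The two demonstrations fit in one pair of helper functions, assuming conditionally independent tests. Checked against the 80%/90% exercise worked later in the deck:

```python
from math import prod

def serial(tests):
    """Positive only if ALL tests are positive: sensitivities multiply,
    false positive rates multiply (so specificity rises)."""
    se = prod(s for s, _ in tests)
    sp = 1 - prod(1 - p for _, p in tests)
    return se, sp

def parallel(tests):
    """Positive if ANY test is positive: specificities multiply,
    false negative rates multiply (so sensitivity rises)."""
    se = 1 - prod(1 - s for s, _ in tests)
    sp = prod(p for _, p in tests)
    return se, sp

tests = [(0.80, 0.80), (0.90, 0.90)]   # (Se, Sp) for each test
par = parallel(tests)   # approximately (0.98, 0.72)
ser = serial(tests)     # approximately (0.72, 0.98)
```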

Slide 67: Clinical settings for parallel testing
- Parallel testing is used to rule out serious but treatable conditions (example: rule out MI by CPK, CPK-MB, troponin, and EKG; any positive is considered positive).
- It fits when a patient has non-specific symptoms and a large list of possibilities (differential diagnosis), none with a high pre-test probability. A negative test for each possibility is enough to rule it out; any positive test is considered positive.

Slide 68: Because the specificity of the parallel combination is low, further testing (serial testing) is then required to make a diagnosis (SpPIn).

Slide 69: Clinical settings for serial testing
- When treatment is hazardous (surgery, chemotherapy), we use serial testing to raise specificity (a blood test followed by more tests, followed by imaging, followed by biopsy).

Slide 70: Calculate the sensitivity and specificity of parallel tests (the serial-testing case is the HIV CDC exercise)
- Two tests in parallel.
- 1st test: Se = Sp = 80%. 2nd test: Se = Sp = 90%.
- 1 - Se(combination) = (1 - 0.8) × (1 - 0.9) = 0.2 × 0.1 = 0.02, so Se = 98%.
- Sp(combination) = 0.8 × 0.9 = 0.72, i.e. 72%.

Slide 71: Typical setting for finding sensitivity and specificity (repeated from slide 7)
- Best if everyone who gets the new test also gets the "gold standard".
- That doesn't happen.
- Even the reverse doesn't happen.
- Not even a sample of each (case-control type).
- In practice: a case series of patients who happened to have both tests.

Slide 72: Example
- Patients who had both a stress test and cardiac catheterization.
- So what if patients were referred for catheterization based on the results of the stress test?
- Then the sample is not random, or even representative. It is a biased sample.


Slide 74: What if the test is used to decide referral for the gold standard?

             Disease   No disease   Total
  Test +        95         72        167
  Test -         5        828        833
  Total        100        900       1000

  Se = 95/100 = 0.95;  Sp = 828/900 = 0.92

Slide 75: What if the test is used to decide referral for the gold standard?
Referral thins every cell (old value → verified value): only 150 of the 167 test positives, but just 100 of the 833 test negatives, actually get catheterized.

             Disease       No disease     Total
  Test +     95 → 85       72 → 65        167 → 150
  Test -      5 →  1      828 → 99        833 → 100
  Total     100 → 86      900 → 164      1000 → 250

  Apparent Se = 85/86 = 0.99;  apparent Sp = 99/164 ≈ 0.60

Slide 76: What if the test is used to decide referral for the gold standard?

             Disease   No disease   Total
  Test +        85         65        150
  Test -         1         99        100
  Total         86        164        250

  Apparent Se = 85/86 = 0.99 (falsely high);  apparent Sp = 99/164 ≈ 0.60 (falsely low)
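
The distortion in the last two tables can be recomputed directly: the same Se/Sp formulas applied only to the verified subsample give the biased estimates. A small sketch; the cell counts are the slides' own:

```python
def apparent_parameters(a, b, c, d):
    """Se and Sp computed only among patients who got the gold standard
    (a = true pos, b = false pos, c = false neg, d = true neg)."""
    return a / (a + c), d / (b + d)

# full cohort of 1000: the test's real performance
true_se, true_sp = apparent_parameters(95, 72, 5, 828)   # 0.95, 0.92
# verified subsample of 250: referral was driven by the test result
bias_se, bias_sp = apparent_parameters(85, 65, 1, 99)    # ~0.99, ~0.60
```

Because positives are preferentially verified, sensitivity is inflated and specificity deflated; this is the hazard of letting the index test decide who receives the gold standard.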

