Studies of Diagnostic Tests Thomas B. Newman, MD, MPH October 11, 2012.



Reminders/Announcements
- OK (encouraged!) to help each other with the HW, but give credit
- Write down answers to as many of the problems in the book as you can (not just those assigned) and check your answers!
- Homework/exam problem due by 11/15 (preferably sooner)
- Final exam to be passed out 11/29, reviewed 12/6
- Tom and Michael away next week at meeting of the Society for Medical Decision Making
  – Screening lecture by Dr. Andi Marmor

Overview
- Common biases of studies of diagnostic test accuracy
- Prevalence, spectrum and nonindependence
- Meta-analyses of diagnostic tests
- Checklist & systematic approach
- Examples:
  – Pain with percussion, hopping or cough for appendicitis
  – Clinical diagnosis of pertussis

Bias #1 Example
- Study of BNP to diagnose congestive heart failure (CHF; Chapter 4, Problem 3)

Bias #1 Example
- Gold standard: determination of CHF by two cardiologists blinded to BNP
- "The best clinical predictor of congestive heart failure was an increased heart size on chest roentgenogram (accuracy, 81 percent)"*
- Is there a problem with assessing accuracy of chest x-rays to diagnose CHF in this study?
*Maisel AS, Krishnaswamy P, Nowak RM, McCord J, Hollander JE, Duc P, et al. Rapid measurement of B-type natriuretic peptide in the emergency diagnosis of heart failure. N Engl J Med 2002;347(3):161-7.

Bias #1: Incorporation bias
- Cardiologists not blinded to chest x-ray
- Probably used (incorporated) chest x-ray to make final diagnosis
- Incorporation bias for assessment of chest x-ray (not BNP)
- Biases both sensitivity and specificity upward

Bias #2 Example
- Visual assessment of jaundice in newborns
  – Study patients who are getting a bilirubin measurement
  – Ask clinicians to estimate extent of jaundice at time of blood draw
  – Compare with blood test

Visual Assessment of Jaundice*: Results
- Sensitivity of jaundice below the nipple line for bilirubin ≥ 12 mg/dL = 97%
- Specificity = 19%
- What is the problem?
Editor's Note: The take-home message for me is that no jaundice below the nipple line equals no bilirubin test, unless there's some other indication. --Catherine D. DeAngelis, MD
*Moyer et al., Arch Pediatr Adolesc Med 2000;154:391

Bias #2: Verification Bias* – 1
- Inclusion criterion for study: gold standard test was done
  – in this case, blood test for bilirubin
- Subjects with positive index tests are more likely to get the gold standard and to be included in the study
  – clinicians usually don't order a blood test for bilirubin if there is little or no jaundice
- How does this affect sensitivity and specificity?
*AKA Work-up, Referral, or Ascertainment Bias

Bias #2: Verification Bias*
                           TSB >12    TSB <12
Jaundice below nipple         a          b
No jaundice below nipple      c ↓        d ↓
(c and d are depleted: test-negative babies are less likely to be verified)
Sensitivity, a/(a+c), is biased ___. Specificity, d/(b+d), is biased ___.
*AKA Work-up, Referral, or Ascertainment Bias
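The direction of the bias can be checked with a quick simulation. All numbers here are hypothetical (a test with true sensitivity and specificity of 80%, 10% prevalence, and gold-standard verification far more likely after a positive index test); only the verified subjects enter the 2×2 table, as in the jaundice study.

```python
import random

random.seed(0)

def simulate(n=100_000, sens=0.80, spec=0.80, prev=0.10,
             p_verify_pos=0.90, p_verify_neg=0.10):
    """Simulate verification bias: only verified subjects enter the 2x2 table."""
    tp = fp = fn = tn = 0
    for _ in range(n):
        disease = random.random() < prev
        test_pos = random.random() < (sens if disease else 1 - spec)
        # Gold standard is ordered much more often after a positive index test
        verified = random.random() < (p_verify_pos if test_pos else p_verify_neg)
        if not verified:
            continue
        if disease and test_pos:
            tp += 1
        elif disease:
            fn += 1
        elif test_pos:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

sens_obs, spec_obs = simulate()
print(f"apparent sensitivity: {sens_obs:.2f}")  # ~0.97, well above the true 0.80
print(f"apparent specificity: {spec_obs:.2f}")  # ~0.31, well below the true 0.80
```

The test-negative cells (c and d) are depleted, which inflates sensitivity and deflates specificity, matching the 97%/19% pattern in the jaundice example.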

Visual Assessment of Jaundice*: Results
- Recall the "gold standard" was bilirubin ≥ 12 mg/dL
- Specificity = 19%
- This low specificity was a clue! What does it mean?
- That is: 19% of newborns who don't have a bilirubin ≥ 12 mg/dL are not jaundiced below the nipple line
- So 81% of babies with bilirubin <12 mg/dL are jaundiced below the nipple line
*Moyer et al., Arch Pediatr Adolesc Med 2000;154:391

Does This Child Have Appendicitis? JAMA. 2007;298:
RLQ Pain: Sensitivity = 96%; Specificity = 5% (1 – Specificity = 95%); Likelihood Ratio = 1.0
RLQ pain was present in 96% of those with appendicitis and 95% of those without appendicitis.
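A finding that is present almost equally often with and without the disease has a likelihood ratio near 1 and is uninformative. With the slide's numbers:

```python
def positive_likelihood_ratio(sensitivity: float, specificity: float) -> float:
    """LR+ = P(finding present | disease) / P(finding present | no disease)."""
    return sensitivity / (1 - specificity)

# RLQ pain for appendicitis: present in 96% with and 95% without the disease
lr_plus = positive_likelihood_ratio(sensitivity=0.96, specificity=0.05)
print(round(lr_plus, 2))  # 1.01 -- post-test odds barely differ from pre-test odds
```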

Bias #3
- Example: PIOPED study of accuracy of ventilation/perfusion (V/Q) scan to diagnose pulmonary embolism*
- Study population: all patients presenting to the ED who received a V/Q scan
- Test: V/Q scan
- Disease: pulmonary embolism (PE, a blood clot in the lungs)
- Gold standards:
  – 1. Pulmonary arteriogram (PA-gram) if done (more likely with more abnormal V/Q scan)
  – 2. Clinical follow-up in other patients (more likely with normal V/Q scan)
*PIOPED. JAMA 1990;263(20):

Double Gold Standard Bias
- Also called differential verification bias
- Two different "gold standards"
  – One gold standard (usually an immediate, more invasive test, e.g., angiogram, surgery) is more likely to be applied in patients with a positive index test
  – The second gold standard (e.g., clinical follow-up) is more likely to be applied in patients with a negative index test

Double Gold Standard Bias
- There are some patients in whom the two "gold standards" do not give the same answer
  – Spontaneously resolving disease (positive with immediate invasive test, but not with follow-up)
  – Newly occurring or newly detectable disease (positive with follow-up but not with immediate invasive test)

Effect of Double Gold Standard Bias 1: Spontaneously Resolving Disease
- Test result will always agree with gold standard
- Both sensitivity and specificity increase
- Example: Joe has a small pulmonary embolus (PE) that will resolve spontaneously.
  – If his V/Q scan is positive, he will get an angiogram that shows the PE (true positive)
  – If his V/Q scan is negative, his PE will resolve and we will think he never had one (true negative)
- The V/Q scan can't be wrong!

Effect of Double Gold Standard Bias 2: Newly Occurring or Newly Detectable Disease
- Test result will always disagree with gold standard
- Both sensitivity and specificity decrease
- Example: Jane has a nasty breast cancer but it is currently undetectable by biopsy
  – If her mammogram is positive, she will get biopsies that will not find the tumor (mammogram will look falsely positive)
  – If her mammogram is negative, she will return in several months and we will think the tumor was initially missed (mammogram will look falsely negative)
- The mammogram can't be right!
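A minimal sketch of the two scenarios (the helper function is hypothetical, not from the deck): under a double gold standard, the index test itself chooses the verification path, so patients with resolving disease can never be misclassified, and patients whose disease is not yet detectable can never be correctly classified.

```python
def classify(test_pos: bool, resolving: bool, detectable_now: bool) -> str:
    """Classify one diseased patient under a double gold standard:
    positive index test -> immediate invasive gold standard;
    negative index test -> clinical follow-up."""
    if test_pos:
        gold_pos = detectable_now      # invasive test reflects disease status now
    else:
        gold_pos = not resolving       # follow-up misses disease that resolved
    if test_pos:
        return "TP" if gold_pos else "FP"
    return "FN" if gold_pos else "TN"

# Joe: spontaneously resolving PE -- the V/Q scan can't be wrong
print(classify(test_pos=True,  resolving=True, detectable_now=True))   # TP
print(classify(test_pos=False, resolving=True, detectable_now=True))   # TN

# Jane: cancer not yet detectable by biopsy -- the mammogram can't be right
print(classify(test_pos=True,  resolving=False, detectable_now=False))  # FP
print(classify(test_pos=False, resolving=False, detectable_now=False))  # FN
```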

Spectrum of Disease, Nondisease and Test Results
- Disease is often easier to diagnose if severe
- "Nondisease" is easier to diagnose if the patient is well than if the patient has other diseases
- Test results will be more reproducible if ambiguous results are excluded

Spectrum Bias
- Sensitivity depends on the spectrum of disease in the population being tested.
- Specificity depends on the spectrum of non-disease in the population being tested.
- Example: absence of nasal bone (on 13-week ultrasound) as a test for chromosomal abnormality

Spectrum Bias Example: Absence of Nasal Bone as a Test for Chromosomal Abnormality*
Sensitivity = 229/333 = 69%
BUT the D+ group only included fetuses with Trisomy 21
[Table: nasal bone absent (yes/no) by D+/D–, with totals and likelihood ratios]
*Cicero et al., Ultrasound Obstet Gynecol 2004;23:

Spectrum Bias: Absence of Nasal Bone as a Test for Chromosomal Abnormality
- The D+ group excluded 295 fetuses with other chromosomal abnormalities (mainly Trisomy 18)
- Among these fetuses, the sensitivity of nasal bone absence was 32% (not 69%)
- What decision is this test supposed to help with?
  – If it is whether to test chromosomes using chorionic villus sampling or amniocentesis, these 295 fetuses should be included!

Spectrum Bias: Absence of Nasal Bone as a Test for Chromosomal Abnormality, Effect of Including Other Trisomies in the D+ Group
Sensitivity = 324/628 = 52%, vs. 69% obtained when the D+ group only included fetuses with Trisomy 21
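The arithmetic behind the two sensitivities, using only the figures from the slides:

```python
def sensitivity(true_positives: int, d_plus_total: int) -> float:
    """Sensitivity = true positives / all with the target condition (D+)."""
    return true_positives / d_plus_total

# D+ restricted to Trisomy 21:
print(f"{sensitivity(229, 333):.0%}")  # 69%

# D+ including the 295 fetuses with other abnormalities, among whom the
# nasal bone was absent in 32% (95 fetuses): 229 + 95 = 324 out of 628
print(f"{sensitivity(229 + 95, 333 + 295):.0%}")  # 52%
```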

Quiz: What if we considered nasal bone absence as a test for Trisomy 21 (only)?
- Then instead of excluding subjects with other chromosomal abnormalities or including them as D+, we should count them as D–. Compared with excluding them:
- What would happen to sensitivity?
- What would happen to positive predictive value?

Quiz: What if we considered nasal bone absence as a test for Trisomy 21?
[Table: nasal bone absent (yes/no) by D+/D–, with the other-trisomy fetuses counted as D–]
Compared with excluding patients with other trisomies:
- Sensitivity unchanged.
- PPV would decrease (95 more false positives), from 64% to 51%.

Summary of biases:
- Incorporation: gold standard incorporates index test. Sensitivity falsely high; specificity falsely high.
- Spectrum: D+ only includes the "sickest of the sick"; D– only includes the "wellest of the well." Sensitivity falsely high; specificity falsely high.
- Verification: positive index test makes gold standard more likely. Sensitivity falsely high; specificity falsely low.
- Double gold standard, disease resolves spontaneously: sensitivity and specificity both falsely high.
- Double gold standard, disease becomes detectable during follow-up: sensitivity and specificity both falsely low.

Prevalence, Spectrum and Nonindependence
- Prevalence (prior probability) of disease may be related to disease severity
- One mechanism is different spectra of disease or nondisease
- Another is that whatever is causing the high prior probability is related to the same aspect of the disease as the test

Prior Probability, Spectrum and Nonindependence: Examples
- Diseases identified by screening or incidentally – higher prevalence associated with lower severity
  – Prostate cancer
  – Thyroid cancer
- Diseases where higher prevalence is associated with greater severity
  – Fe deficiency
  – Higher prevalence of TB where HIV is more prevalent; TB also more severe there

Prior Probability, Spectrum and Nonindependence: Examples
- Symptoms of disease associated with the aspect of disease being tested: urinalysis as a test for UTI in women with more and fewer symptoms (high and low prior probability)*
*EBD Table 5.3, from Lachs, Ann Intern Med 1992;117:135-40

Overfitting

- Choosing the best cutoff based on the data (small problem)
- Choosing the best cutoffs and best combination of multiple tests (big problem; covered in 2 weeks)
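A sketch of the smaller problem, on entirely made-up data: picking the cutoff that looks best in the same sample used to evaluate it inflates apparent accuracy relative to an independent sample.

```python
import random

random.seed(1)

def accuracy(cutoff, data):
    """Fraction correctly classified by the rule 'positive if value > cutoff'."""
    return sum((value > cutoff) == diseased for value, diseased in data) / len(data)

def sample(n):
    # A weak test: diseased and non-diseased values overlap heavily
    return [(random.gauss(0.2 if diseased else 0.0, 1.0), diseased)
            for diseased in [True, False] * (n // 2)]

derivation, validation = sample(40), sample(10_000)

# Choose the cutoff that maximizes accuracy in the small derivation sample
best = max((value for value, _ in derivation),
           key=lambda c: accuracy(c, derivation))

print(f"apparent accuracy (same data):  {accuracy(best, derivation):.2f}")
print(f"accuracy in independent data:   {accuracy(best, validation):.2f}")
```

The gap between the two printed numbers is the overfitting; combining multiple tests multiplies the number of choices and makes it worse.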

Meta-analyses of Diagnostic Tests
- Systematic and reproducible approach to finding studies
- Summary of results of each study
- Investigation into heterogeneity
- Summary estimate of results, if appropriate
- Unlike other meta-analyses (risk factors, treatments), results aren't summarized with a single number (e.g., RR), but with two related numbers (sensitivity and specificity)
- These can be plotted on an ROC plane

MRI for the diagnosis of MS Whiting et al. BMJ 2006;332:875-84

SROC: Predicting post-op MI or death in elective noncardiac surgery patients
Figure 1. Summary receiver operating characteristic curves (SROC) for the 25 stress echocardiography studies (closed diamonds) and the 50 stress nuclear scintigraphy studies (open squares). Beattie WS et al. Anesth Analg 2006;102:8-16

Dermoscopy vs Naked Eye for Diagnosis of Malignant Melanoma Br J Dermatol Sep;159(3): Dermoscopy performed unequivocally better in 7 of the 9 studies. Can you call out the coordinates for the 2 studies for which this was not the case?

Studies of Diagnostic Test Accuracy: Checklist
- Was there an independent, blind comparison with a reference ("gold") standard of diagnosis?
- Was the diagnostic test evaluated in an appropriate spectrum of patients (like those in whom we would use it in practice)?
- Was the reference standard applied regardless of the diagnostic test result?
- Was the test (or cluster of tests) validated in a second, independent group of patients?
From Sackett et al., Evidence-Based Medicine, 2nd ed. (NY: Churchill Livingstone), p. 68

Systematic Approach
- Authors and funding source
- Research question
- Study design
- Study subjects
- Predictor variable
- Outcome variable
- Results & analysis
- Conclusions

A Clinical Decision Rule to Identify Children at Low Risk for Appendicitis (Problem 5.6)*
- Study design: prospective cohort study
- Subjects:
  – 4140 patients 3-18 years presenting to Boston Children's Hospital ED with abdominal pain
  – 767 (19%) received surgical consultation for possible appendicitis
  – 113 excluded (chronic diseases, recent imaging); 53 missed
  – 601 included in the study (425 in derivation set)
*Kharbanda et al. Pediatrics 2005;116(3):

A Clinical Decision Rule to Identify Children at Low Risk for Appendicitis
- Predictor variables:
  – Standardized assessment by pediatric ED attending
  – Focus on "pain with percussion, hopping or cough" (complete data in N=381)
- Outcome variable:
  – Pathologic diagnosis of appendicitis (or not) for those who received surgery (37%)
  – Follow-up telephone call to family or pediatrician 2-4 weeks after the ED visit for those who did not receive surgery (63%)
Kharbanda et al. Pediatrics 116(3):

A Clinical Decision Rule to Identify Children at Low Risk for Appendicitis
- Results: pain with percussion, hopping or cough
- 78% sensitivity and 83% NPV seem low to me. Are they valid for me in deciding whom to image?
Kharbanda et al. Pediatrics 116(3):

Checklist
- Was there an independent, blind comparison with a reference ("gold") standard of diagnosis?
- Was the diagnostic test evaluated in an appropriate spectrum of patients (like those in whom we would use it in practice)?
- Was the reference standard applied regardless of the diagnostic test result?
- Was the test (or cluster of tests) validated in a second, independent group of patients?
From Sackett et al., Evidence-Based Medicine, 2nd ed. (NY: Churchill Livingstone), p. 68

In what direction would these biases affect results?
- Sample not representative (population referred to pedi surgery)?
- Verification bias?
- Double gold standard bias?
- Spectrum bias?

For children presenting with abdominal pain to SFGH 6-M
- Sensitivity probably valid (not falsely low)
  – But whether all of the kids in the study tried to hop is not clear
- Specificity probably low
- PPV is too high
- NPV is too low
- Does not address surgical consultation decision

Does this coughing patient have pertussis?*
- RQ (for us): what are the LRs for coughing fits, whoop, and post-tussive vomiting in adults with persistent cough?
- Design (for one study we reviewed**): prospective cross-sectional study
- Subjects: 217 adults ≥18 years with cough days, no fever or other clear cause for cough, enrolled by 80 French GPs
  – In a subsample from 58 GPs, of 710 who met inclusion criteria only 99 (14%) enrolled
*Cornia et al. JAMA 2010;304(8):
**Gilberg S et al. J Inf Dis 2002;186:415-8

Pertussis Diagnosis*
- Predictor variables: "GPs interviewed patients using a standardized questionnaire."
- Outcome variable: laboratory evidence of pertussis based on any of:
  – Culture (N=1)
  – PCR (N=36)
  – ≥ 2-fold change in anti-pertussis toxin IgG (N=40)
  – Total N = 70/217 with evidence of pertussis (32%)
*Gilberg S et al. J Inf Dis 2002;186:415-8

Results
- 89% in both groups (with and without laboratory "evidence of pertussis") met CDC criteria for pertussis*
*Gilberg S et al. J Inf Dis 2002;186:415-8

Issues
- Verification bias: only 14% of eligible subjects included
  – Subjects with more pertussis symptoms probably more likely to be included
- Questionable "gold standard"

What is wrong with this picture?
- Outcome variable: evidence of pertussis based on any of:
  – Culture (N=1)
  – PCR (N=36)
  – ≥ 2-fold change in anti-pertussis toxin IgG (N=40)
  – Total N = 70/217 with evidence of pertussis (32%)
- Protocol apparently included serologic tests and PCR on all, but culture only if it could be plated in < 4 hours
- Not much overlap!

Issues
- Correlation between serologic and PCR pertussis tests (derived from Table 1 of Gilberg et al.*):

. tab PT PCR [fw=pop]

 PT_IgG    |       PCR
 change    |   POS     NEG |  Total
 ----------+---------------+-------
       POS |     6      34 |     40
       NEG |    30      53 |     83
 ----------+---------------+-------
     Total |    36      87 |    123

*Gilberg S et al. J Inf Dis 2002;186:415-8

Issues
- Nice illustration of the difficulty of doing a systematic review!
- Important take-home message: you can't judge study quality only by looking at the methods! You need to look at the results, too!

. kap PT PCR [fw=pop]

             Expected
 Agreement   Agreement     Kappa   Std. Err.       Z    Prob>Z
 --------------------------------------------------------------
    47.97%      57.25%   -0.2171
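The kappa can be recomputed from the reported margins (40 serology-positive, 36 PCR-positive, 6 positive by both, 123 tested by both); filling in the remaining cells by subtraction is an assumption made here, but it reproduces the 57.25% expected agreement in the Stata output:

```python
def cohens_kappa(both_pos: int, row_pos_only: int,
                 col_pos_only: int, both_neg: int) -> float:
    """Cohen's kappa for agreement between two binary tests (2x2 counts)."""
    n = both_pos + row_pos_only + col_pos_only + both_neg
    observed = (both_pos + both_neg) / n
    row_pos, col_pos = both_pos + row_pos_only, both_pos + col_pos_only
    # Chance-expected agreement from the marginal totals
    expected = (row_pos * col_pos + (n - row_pos) * (n - col_pos)) / n**2
    return (observed - expected) / (1 - expected)

# Serology+ 40, PCR+ 36, both+ 6, N=123  ->  cells 6, 34, 30, 53 by subtraction
k = cohens_kappa(both_pos=6, row_pos_only=34, col_pos_only=30, both_neg=53)
print(round(k, 3))  # -0.217
```

A negative kappa is what drives the take-home point: the two components of the composite "gold standard" agree less often than chance alone would predict.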

Table 1 from paper *Gilberg S et al. J Inf Dis 2002;186:415-8

Questions?

Additional slides

Double Gold Standard Bias: Effect of Spontaneously Resolving Disease
              PE+    PE–
V/Q scan +     a      b
V/Q scan –     c      d
Sensitivity, a/(a+c), biased __; specificity, d/(b+d), biased __
- Double gold standard compared with immediate invasive test for all
- Double gold standard compared with follow-up for all

Double Gold Standard Bias: Effect of Newly Occurring Cases
              PE+    PE–
V/Q scan +     a      b
V/Q scan –     c      d
Sensitivity, a/(a+c), biased __; specificity, d/(b+d), biased __
- Double gold standard compared with PA-gram for all
- Double gold standard compared with follow-up for all