DIAGNOSTIC & SCREENING: Evidence-based Medicine

Presentation transcript:

DIAGNOSTIC & SCREENING Evidence-based Medicine

Experience / empirical data; clinical problems; experience-based medicine; values of truth versus values of justification. Example: acute appendicitis and the McBurney sign (+).

Use of USG (ultrasound) vs. CT scan in diagnosing acute appendicitis: each test is characterized by its sensitivity and specificity. Negative appendectomy rate: 20-40% (Styrud et al., Intl J for Quality in Health Care, 2000).

Clinical Question (foreground question), framed as PICO:
Patient or Problem: appendicitis
Intervention: CT scan
Comparison: USG
Outcomes: more sensitive & specific

Natural History of Disease
A. Biologic onset of the condition
B. Pathologic evidence of disease detectable by screening
C. Signs and symptoms of disease
D. Health care sought
E. Diagnosis of disease
F. Treatment of disease
(Phases: preclinical, clinical, outcome)

Issues in Screening
Definition: the early detection of preclinical disease in asymptomatic persons.
Purpose of screening: to improve the outcomes of illness, i.e. to improve morbidity and mortality.

DEFINITION: Screening is the assessment or evaluation of people who have no symptoms of disease, in order to classify them according to their likelihood of having a particular disease.

Difference Between Diagnostic and Screening Tests
Diagnostic testing is used to confirm a diagnosis in a patient who is sick; a screening test is offered to subjects who are free of symptoms or signs of disease.

Objectives
Diagnostic testing aims to detect disease at any stage; screening aims to detect disease at an early stage. However, in clinical practice, diagnostic results may be in error.

Examples of Screening Tests: blood pressure, scoliosis, vision/glaucoma, mammography, Pap smears, cholesterol, diabetes, depression, nutrition screening, drug/alcohol use, lead, abuse, fall risk.

WHY WE NEED A GOOD DIAGNOSTIC TEST

Widal agglutination test for typhoid

Causes of negative Widal agglutination tests, 109 years after its invention (1896 – 2005):
1. Absence of infection by S. typhi
2. The carrier state
3. An inadequate inoculum of bacterial antigen in the host to induce antibody production
4. Technical difficulty or errors in the performance of the test
5. Previous antibiotic treatment
6. Variability in the preparation of commercial antigens
(Postgrad Med J 2000;76:80–84)

Causes of positive Widal agglutination tests:
1. The patient being tested has typhoid fever
2. Previous immunisation with Salmonella antigen
3. Cross-reaction with non-typhoidal Salmonella
4. Variability and poorly standardised commercial antigen preparations
5. Infection with malaria or other Enterobacteriaceae
6. Other diseases such as dengue

Diagnostic tool: sensitivity (%) / specificity (%)
Multi-Test Dip-S-Ticks for Serotype Typhi: –
TyphiDot: 79 / 89
TUBEX: 78 / 94
Widal testing in the hospital: –
Widal testing at the Pasteur Institute: –

Characteristics of Validity
Sensitivity: the ability of a test to correctly identify those who have the disease or condition.
Specificity: the ability of a test to correctly identify those who do not have the disease.

If an erroneous test result is negative, no treatment might be given; if an erroneous test result is positive, the wrong treatment may be given (a medical error).

A diagnostic/screening test can be accurate (highly sensitive and specific, the best diagnostic tools) or misleading (producing false positives and false negatives).

[2 x 2 comparison: the diagnostic test procedure (respiratory rate) versus the gold standard, each classifying patients as pneumonia or no pneumonia]

Gold standard for diagnosis: Disease (+) | No disease
Test (+): true positive | false positive
Test (-): false negative | true negative

Sensitivity & Specificity: how to calculate these using a 2 x 2 table.

Gold standard for diagnosis: Disease (+) | No disease
Test (+): a | b
Test (-): c | d
Column totals: a + c | b + d
Sensitivity = a / (a + c) x 100%
Sensitivity is the ability of the test to detect the presence of disease: it is the proportion of patients with disease who test positive.

Diagnostic test for anemia using ferritin: a sensitivity of 90% means that, of those who have anemia, 90% will test positive. It also means that, of those who have anemia, 10% will test negative (i.e. there is a 10% false negative rate in those with anemia).

Gold standard for diagnosis: Disease (+) | No disease
Test (+): a | b
Test (-): c | d
Column totals: a + c | b + d
Specificity = d / (b + d) x 100%
Specificity is the ability of the test to detect the absence of disease: it is the proportion of patients without disease who test negative.

A specificity of 85% means that, of those who do not have anemia, 85% will test negative. It also means that, of those who do not have anemia, 15% will test positive (i.e. there is a 15% false positive rate in those without anemia). Diagnostic test for anemia using ferritin.
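For readers who want to check the arithmetic, here is a minimal Python sketch (not part of the original slides) that computes sensitivity and specificity from the four cells of the 2 x 2 table; the counts are the serum ferritin figures used in the worked example at the end of this transcript.

# Minimal sketch (assumed helper functions, not from the slides).
def sensitivity(a, c):
    """Proportion of patients WITH disease who test positive: a / (a + c)."""
    return a / (a + c)

def specificity(b, d):
    """Proportion of patients WITHOUT disease who test negative: d / (b + d)."""
    return d / (b + d)

a, b, c, d = 731, 270, 78, 1500   # TP, FP, FN, TN from the ferritin example
print(f"sensitivity = {sensitivity(a, c):.0%}")   # 90%
print(f"specificity = {specificity(b, d):.0%}")   # 85%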

Pretest Probability is the estimated likelihood of disease before the test is done (also called the prior probability). It is the proportion of all patients who have the disease. If a defined population of patients is being evaluated, the pretest probability is equal to the prevalence of disease in that population.

Note that sensitivity is calculated based only on those who have disease, and specificity is calculated based only on those who do not have disease. Therefore, neither sensitivity nor specificity is affected by the prevalence of the target condition.

The trade-off between sensitivity and specificity: in most cases, as sensitivity increases, specificity decreases, and vice versa (i.e. they are inversely related to each other).
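A small illustration of this trade-off, using made-up ferritin-like values (the numbers below are hypothetical, chosen only to show the effect of moving the cutoff; a "positive" test is a value below the cutoff, as with ferritin in the anemia example):

# Hypothetical data only: how moving a test cutoff trades sensitivity
# against specificity.
diseased     = [12, 20, 35, 48, 60, 70]    # hypothetical values, anemic patients
non_diseased = [40, 55, 68, 90, 120, 200]  # hypothetical values, non-anemic patients

def sens_spec(cutoff):
    tp = sum(v < cutoff for v in diseased)       # anemic, test positive
    fn = len(diseased) - tp                      # anemic, test negative
    tn = sum(v >= cutoff for v in non_diseased)  # non-anemic, test negative
    fp = len(non_diseased) - tn                  # non-anemic, test positive
    return tp / (tp + fn), tn / (tn + fp)

for cutoff in (45, 65):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff < {cutoff}: sensitivity = {sens:.0%}, specificity = {spec:.0%}")

Raising the cutoff from 45 to 65 raises sensitivity (from 50% to 83% in this toy data) while lowering specificity (from 83% to 67%), which is the inverse relationship described above.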

The sensitivity/specificity trade-off: an example of when you might want high sensitivity is a screen for neonatal hypothyroidism (you wouldn't want many false negatives, which might lead to irreversible cognitive damage).

The sensitivity/specificity trade-off: an example of when you might want high specificity is a screen for HIV (you wouldn't want many false positives, because of the emotional trauma they cause).

(Knapp and Miller, 1992)
True positive: “... individuals with the condition who are correctly identified as diseased by the new test.”
False positive: “... individuals without the condition who are falsely identified as diseased by the new test.” This is also referred to as a misdiagnosis.

True negative: “... individuals without the condition who are correctly identified as disease-free by the new test.”
False negative: “... individuals with the condition who are falsely identified as disease-free by the new test.” This is also referred to as a missed diagnosis.

Predictive Value Positive (PVP): “... the probability that an individual with a positive test result has the disease.” PVP is also known as the posterior probability, positive predictive value, or post-test probability of disease.
Predictive Value Negative (PVN): “... the probability that an individual with a negative test result does not have the disease.” PVN is also known as the negative predictive value.

The predictive value of a positive test is the proportion of patients with positive tests who have disease. This is the same thing as the post-test probability of disease given a positive test. It measures how well the test rules in disease.

Using the same 2 x 2 table (test versus gold standard, cells a, b, c, d):
Positive Predictive Value (PV+) = a / (a + b) x 100%
PV+ = the post-test probability = the proportion of patients with positive tests who have disease.

Using the same 2 x 2 table (cells a, b, c, d):
Negative Predictive Value (PV-) = d / (c + d) x 100%
PV- = the proportion of true negatives among all those with negative results.

Note that positive and negative predictive values are calculated using both those with disease and those without disease. Therefore, both positive and negative predictive values are affected by the prevalence of the target condition.
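A short sketch (assumed, not from the slides) makes the prevalence dependence concrete: with the test characteristics fixed at the ferritin values (sensitivity 90%, specificity 85%), the predictive values shift as prevalence changes. The 31% prevalence is the one from the worked example later in this transcript; the 5% figure is an assumed lower-prevalence setting for contrast.

# Predictive values from sensitivity, specificity, and prevalence (Bayes).
def predictive_values(sens, spec, prevalence):
    tp = sens * prevalence              # P(test+, disease+)
    fp = (1 - spec) * (1 - prevalence)  # P(test+, disease-)
    fn = (1 - sens) * prevalence        # P(test-, disease+)
    tn = spec * (1 - prevalence)        # P(test-, disease-)
    return tp / (tp + fp), tn / (tn + fn)   # (PV+, PV-)

for prev in (0.31, 0.05):
    ppv, npv = predictive_values(0.90, 0.85, prev)
    print(f"prevalence {prev:.0%}: PV+ = {ppv:.0%}, PV- = {npv:.0%}")

With the same test, PV+ falls from about 73% at 31% prevalence to about 24% at 5% prevalence, while PV- rises from about 95% to about 99%.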

Example 2 x 2 table: respiratory rate (diagnostic test) versus chest X-ray (gold standard), each classifying patients as pneumonia or no pneumonia, with cells a, b, c, d:
Sensitivity = a / (a + c)
Specificity = d / (b + d)
PV+ = a / (a + b)
PV- = d / (c + d)

Using the same 2 x 2 table (respiratory rate versus chest X-ray for pneumonia):
Accuracy = (a + d) / N
Prevalence = (a + c) / N

Using the same 2 x 2 table (respiratory rate versus chest X-ray for pneumonia):
LR+ = [a / (a + c)] / [b / (b + d)]
LR- = [c / (a + c)] / [d / (b + d)]

Likelihood ratio positive: the probability of a positive test result among those who have the disease, divided by the probability of a positive result among those who do not have the disease.
Likelihood ratio negative: the probability of a negative test result among those who have the disease, divided by the probability of a negative result among those who do not have the disease.
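Equivalently, both ratios can be obtained from sensitivity and specificity; the sketch below (not from the slides) uses the ferritin example's 90% sensitivity and 85% specificity.

# Likelihood ratios from sensitivity and specificity.
def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)   # P(test+ | disease) / P(test+ | no disease)
    lr_neg = (1 - sens) / spec   # P(test- | disease) / P(test- | no disease)
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.90, 0.85)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")   # LR+ = 6.0, LR- = 0.12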

Diagnostic test for anemia using ferritin

Test-treatment threshold and post-test probability (figure).

LIKELIHOOD RATIO: the probability of a given test result (positive or negative) in a person with the disease, compared with the probability of the same result in a person who is free of the disease. Example: LR(+) = 12.3, LR(-) = 0.39. What does this mean?
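To see what such values mean in practice, the sketch below (illustrative only; the 30% pre-test probability is an assumed figure, not taken from the slides) converts a pre-test probability into a post-test probability through the odds.

# Applying a likelihood ratio: probability -> odds -> post-test odds -> probability.
def apply_lr(pretest_prob, lr):
    odds = pretest_prob / (1 - pretest_prob)   # pre-test odds
    post_odds = odds * lr                      # post-test odds = pre-test odds x LR
    return post_odds / (1 + post_odds)         # back to a probability

pretest = 0.30  # assumed pre-test probability for illustration
print(f"after a positive result (LR+ = 12.3): {apply_lr(pretest, 12.3):.0%}")  # ~84%
print(f"after a negative result (LR- = 0.39): {apply_lr(pretest, 0.39):.0%}")  # ~14%

So a positive result with LR+ = 12.3 raises a 30% pre-test probability to roughly 84%, while a negative result with LR- = 0.39 lowers it to roughly 14%.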

Primary questions for a diagnostic test:
1. Is this evidence about the accuracy of a diagnostic test valid?
2. Does this (valid) evidence demonstrate an important ability of this test to accurately distinguish patients who do and don't have a specific disorder?
3. Can I apply this valid, important diagnostic test to a specific patient?

Is this evidence about a diagnostic test valid?
1. Was there an independent, blind comparison with a reference (“gold”) standard of diagnosis?
2. Was the diagnostic test evaluated in an appropriate spectrum of patients (like those in whom we would use it in practice)?
3. Was the reference standard applied regardless of the diagnostic test result?
4. Was the test (or cluster of tests) validated in a second, independent group of patients?

1. Was there an independent, blind comparison with a reference (“gold”) standard of diagnosis? The diagnostic test in question (e.g. an item of history or physical examination, or a blood test) should be compared, with BLINDING, against the “gold” standard (e.g. autopsy or biopsy).


2. Example of an inappropriate spectrum: carcinoembryonic antigen (CEA) for colon/rectal cancer. CEA was positive in 35 of 36 patients with advanced colon/rectal cancer, but showed poor accuracy when applied across the full spectrum of patients: mild colon/rectal cancer, other gastrointestinal cancers, and patients with high, medium, or low clinical suspicion.


3. Was the reference standard applied regardless of the diagnostic test result? When the reference standard is invasive, applying it to patients with a NEGATIVE test result may do more harm than good; a non-invasive standard may be preferable for them.


4. Was the test (or cluster of tests) validated in a second, independent group of patients? The test should first be evaluated in the study patients (disease (+) versus disease (-)) and then validated in an independent group of patients (disease (+) versus disease (-)).

Table 2. Applying a valid diagnostic test to an individual patient
1. Is the diagnostic test available, affordable, accurate, and precise in our setting?
2. Can we generate a clinically sensible estimate of our patient's pre-test probability?
· From personal experience, prevalence statistics, practice databases, or primary studies
· Are the study patients similar to our own?
· Is it unlikely that the disease possibilities or probabilities have changed since this evidence was gathered?
3. Will the resulting post-test probabilities affect our management and help our patient?
· Could it move us across a test-treatment threshold?
· Would our patient be a willing partner in carrying it out?
· Would the consequences of the test help our patient reach his or her goals in all this?

1. Is the diagnostic test available, affordable, accurate, and precise in our setting? Consider each in turn: available? affordable? accurate? precise? And does it require an expert to perform or interpret?

2. Can we generate a clinically sensible estimate of our patient's pre-test probability? From personal experience, prevalence statistics, practice databases, or primary studies. The “pre-test” probability is what we estimate before the test; the “post-test” probability is what we estimate after the test. Diagnostic tests that produce big changes from pre-test to post-test probabilities are important and likely to be useful to us in our practice.

2. Can we generate a clinically sensible estimate of our patient's pre-test probability? Are the study patients similar to our own? Is it unlikely that the disease possibilities or probabilities have changed since this evidence was gathered? Examples: the Widal test for typhoid, the McBurney sign versus USG for appendicitis, and the criteria for hypertension.

Worked example: diagnostic test for iron deficiency anemia using serum ferritin.
Dx test result (ferritin) | Target disorder (iron deficiency anemia) present | absent | Totals
Positive (< 65 mmol/L): a = 731 | b = 270 | 1001
Negative (>= 65 mmol/L): c = 78 | d = 1500 | 1578
Totals: a + c = 809 | b + d = 1770 | 2579
Sensitivity = a/(a + c) = 731/809 = 90%
Specificity = d/(b + d) = 1500/1770 = 85%
LR+ = sensitivity/(1 - specificity) = 90%/15% = 6
LR- = (1 - sensitivity)/specificity = 10%/85% = 0.12
PPV = a/(a + b) = 731/1001 = 73%
NPV = d/(c + d) = 1500/1578 = 95%
Prevalence = (a + c)/(a + b + c + d) = 809/2579 = 31%
Pre-test odds = prevalence/(1 - prevalence) = 31%/69% = 0.45
Post-test odds = pre-test odds x likelihood ratio
Post-test probability = post-test odds/(post-test odds + 1)
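The same numbers can be reproduced directly from the four counts; the following sketch (not part of the original table) also carries the calculation through to the post-test probabilities after a positive and a negative ferritin result.

# Reproducing the ferritin worked example from the raw 2 x 2 counts.
a, b, c, d = 731, 270, 78, 1500   # TP, FP, FN, TN
n = a + b + c + d

sensitivity = a / (a + c)                     # ~0.90
specificity = d / (b + d)                     # ~0.85
lr_pos = sensitivity / (1 - specificity)      # ~6
lr_neg = (1 - sensitivity) / specificity      # ~0.12
ppv = a / (a + b)                             # ~0.73
npv = d / (c + d)                             # ~0.95
prevalence = (a + c) / n                      # ~0.31
pretest_odds = prevalence / (1 - prevalence)  # ~0.45

def posttest_prob(pretest_odds, lr):
    """Post-test odds = pre-test odds x LR; then convert odds back to a probability."""
    odds = pretest_odds * lr
    return odds / (odds + 1)

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
print(f"LR+ {lr_pos:.1f}, LR- {lr_neg:.2f}, PV+ {ppv:.0%}, PV- {npv:.0%}")
print(f"post-test probability after a positive ferritin: {posttest_prob(pretest_odds, lr_pos):.0%}")
print(f"post-test probability after a negative ferritin: {posttest_prob(pretest_odds, lr_neg):.0%}")

As expected, the post-test probability after a positive result (about 73%) equals the positive predictive value, and the post-test probability after a negative result (about 5%) equals 1 minus the negative predictive value, since both are computed from the same table.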