Critical Appraisal
Dr. Chris Hall – Facilitator
Dr. Dave Dyck, R3
March 20, 2003

Objectives:
– Review study designs and the advantages/disadvantages of each
– Review key concepts in hypothesis, measurement, and analysis
– Article appraisal: treatment articles, diagnosis articles, harm articles, overviews/meta-analyses
– Survive the next hour and still be able to smile

Study Design:
– Ecological studies
– Case reports
– Case series
– Cross-sectional studies
– Case-control and retrospective cohort studies
– Prospective cohort studies
– Randomized controlled trials

Ecological Studies:
– Studies of a group rather than individual subjects
– Supply data on exposure and disease as summary measures for the total population as an aggregate (eg. incidence studies)
– Ecological fallacy: the correlation between the variables at the group level is not necessarily the same at the individual level, so exposures cannot be linked to disease on an individual basis
– Also difficult to account for confounding variables

Case Reports:
– Submission of individual cases with rare or interesting findings
– Highly subject to bias (selection/submission and publication)
– Should not infer causality or suggest practice change

Case Series:
– A group of “consecutive cases” with unifying features
– Selection bias (what constitutes a case? is the series truly consecutive?), response bias
– Publication bias
– Measurement bias (presence of ‘disease’ or exposure may be variable)

Cross-Sectional Studies: Ie. a prevalence study. The presence or absence of a specific disease is compared with one or several variables within a defined population at a specific point in time.

Cross-Sectional Studies – disadvantages:
– Subject to selection bias (see HO)
– Cause and effect cannot be determined (see HO) (ie. we don’t know whether the exposure occurred before the outcome or the outcome before the exposure)
– Temporal trends may be missed (seasonal variations)
– Previous deaths, drop-outs, and migration are not counted, and short-lived, transient outcomes are underrepresented. Thus, cross-sectional studies are best suited to chronic, non-fatal conditions.

Cross-Sectional Studies – advantages:
– Can be done quickly
– May show enough of an association between an exposure and outcome to generate a hypothesis that can be studied by another method
– Useful for descriptive/analytical studies

Case-Control Studies:
– Start now and go back in time: begin with the outcome, then ask or find out about prior exposure
– A specific hypothesis is usually tested
– Select all cases of a specific disease during a certain time and a number of controls who represent the general population → determine exposure to the factor in each group → odds ratio
– May match controls to cases (but can never be sure of similar baseline states)

Case Control Study:

CCS cont. The odds ratio provides an estimate of the relative risk, but only when the disease is rare (< 10% of the population); thus, interpret the OR as a relative risk only for rare diseases. OR > 1 → greater risk with exposure; OR < 1 → reduced risk.
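To make the arithmetic concrete, here is a minimal sketch (the counts are invented for illustration, not from any study) of how the odds ratio falls out of a case-control 2×2 table:

```python
# Sketch: odds ratio from a case-control 2x2 table (invented counts).
#              exposed  unexposed
# cases           a=40       b=60
# controls        c=20       d=80
a, b = 40, 60   # cases: exposed, unexposed
c, d = 20, 80   # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)   # (a/b) / (c/d) simplifies to ad/bc
print(f"OR = {odds_ratio:.2f}")  # 2.67 > 1: exposure associated with greater risk
```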

CCS advantages:
– Small numbers needed (good for rare diseases or when outcomes are rare or delayed)
– Quick
– Inexpensive
– Can study many factors

CCS disadvantages:
– Problems selecting/matching controls
– Only an estimate of relative risk
– No incidence rates
– Biases (eg. unequal ascertainment of exposure between cases and controls)
– Recall bias: cases are more likely to remember exposure than controls
– Selection bias: cases and controls should be selected according to predetermined, strict, objective criteria

Cohort Study (prospective):
– Start with 2 groups free of disease and follow them forward for a period of time
– One group has the factor (eg. smoking); the other does not
– Define 1 or more outcomes (eg. lung CA)
– Tabulate the number of persons who develop the outcome
– Provides estimates of incidence, relative risk, and attributable risk

Relative risk / Attributable risk:
– Relative risk measures the strength of the association between exposure and disease
– Attributable risk measures the number of cases of disease that can be attributed to the exposure
– Given a constant relative risk, attributable risk rises with the incidence of the disease in the unexposed members of the population
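A short sketch of the two measures, again with invented cohort counts:

```python
# Sketch: relative vs attributable risk from invented cohort counts.
exposed_events, exposed_n = 30, 1000      # eg. smokers who develop the disease
unexposed_events, unexposed_n = 10, 1000  # non-smokers who develop the disease

risk_exposed = exposed_events / exposed_n        # 0.030
risk_unexposed = unexposed_events / unexposed_n  # 0.010

relative_risk = risk_exposed / risk_unexposed      # strength of association: 3.0
attributable_risk = risk_exposed - risk_unexposed  # excess risk from exposure: 0.020

# Note: at the same RR of 3.0, doubling the baseline (unexposed) risk doubles
# the attributable risk -- the point made on the slide.
print(f"RR = {relative_risk:.1f}, AR = {attributable_risk:.3f}")
```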

Cohort Study Cannot by itself establish causation, but can show an association between a factor and an outcome Generally provides stronger evidence for causation than case control studies

Cohort Study advantages:
– Lack of bias in measuring the factor (exposure is recorded before the outcome is known)
– Uncovers natural history
– Can study many diseases
– Yields incidence rates, relative risk, and attributable risk
– Allows more control of confounding variables

Cohort Study disadvantages:
– Possible bias in ascertainment of disease
– Need large numbers and long follow-up
– Easy to lose patients to follow-up (attrition of subjects); this may introduce bias if lost subjects differ from those who continue to be followed
– Hard to maintain comparable follow-up for all levels of exposure

Cohort Study disadvantages cont.:
– Expensive
– Locked into the factor(s) measured
– Measurement bias (eg. an unblinded physician who looks harder for positive outcomes in exposed patients)
– Confounding variables still present

Randomized Controlled Trials: To test the hypothesis that an intervention (treatment or manipulation) makes a difference, an experimental group is manipulated while a control group receives a placebo or standard procedure. All other conditions are kept the same between the groups.

RCT goals:
– Prevention (decrease risk of disease or death)
– Therapeutic (decrease symptoms, prevent recurrences, decrease mortality)
– Diagnostic (evaluate new diagnostic procedures)

RCT problems:
– Ethical issues
– Difficult to test an intervention that is already widely used
– Randomization
– Blinding techniques (may be difficult due to common side effects of drugs)
– Choice of control group (placebo, conventional tx, specific tx)
– Subject selection and issues of generalizability: are refusers different in some way?

Causation:

Key terms for diagnostic tests: Sensitivity = the proportion of those with the disease who are identified by the test. Specificity = the proportion of those without the disease who have a negative test.

Using the standard 2×2 table (a = true positives, b = false positives, c = false negatives, d = true negatives):
Sensitivity = a/(a + c)
Specificity = d/(b + d)

Other key terms:
Positive Predictive Value = the probability of having the disease given a positive test: a/(a + b)
Negative Predictive Value = the probability of not having the disease given a negative test: d/(c + d)
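All four test characteristics in one short sketch (the 2×2 counts are invented for illustration):

```python
# Sketch: test characteristics from a 2x2 table (invented counts).
#                 disease +   disease -
# test positive      a = 90      b = 45
# test negative      c = 10      d = 855
a, b, c, d = 90, 45, 10, 855

sensitivity = a / (a + c)  # 0.90: diseased patients the test picks up
specificity = d / (b + d)  # 0.95: non-diseased patients the test clears
ppv = a / (a + b)          # ~0.67: P(disease | positive test)
npv = d / (c + d)          # ~0.99: P(no disease | negative test)

# Unlike sensitivity/specificity, PPV and NPV shift with disease prevalence:
# rerun with a rarer disease and PPV falls even though the test is unchanged.
print(f"Sens {sensitivity:.2f}, Spec {specificity:.2f}, PPV {ppv:.2f}, NPV {npv:.2f}")
```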

Statistical Hypothesis: Null Hypothesis –Hypothesis of no difference between a test group and a control group (ie. There is no association between the disease and the risk factor in the population) Alternative Hypothesis –Hypothesis that there is some difference between a test group and control group

Measurements and Analysis: Sampling bias = selecting a sample that does not truly represent the population. Sample size = contributes to the credibility of “positive” studies and the power of “negative” studies. For a fixed alpha, increasing the sample size decreases the probability of a type II error and makes estimates more precise.

Errors: Type I Error (alpha error) = rejecting the null hypothesis when it is actually true (ie. declaring an effect to be present when it is not). Alpha is the pre-set threshold for this error; the p value is the probability of obtaining a difference at least this large by chance alone if the null hypothesis is true.

Errors cont. Type II Error (Beta error) = accepting the null hypothesis as true when it is actually false (ie. declaring a difference/effect to be absent when it is truly present)
– Beta is the probability of missing a difference that truly exists
– Reflects the power (1 - Beta) of a study
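To see both error types concretely, here is a small simulation (my own illustration, not from the talk), assuming a two-sided z-test with known variance:

```python
# Sketch: simulated type I error and power for a two-sided z-test
# (sigma assumed known = 1; invented setup for illustration).
import math, random

def two_sided_p(z):
    # p value for a standard-normal test statistic
    return math.erfc(abs(z) / math.sqrt(2))

def rejection_rate(true_diff, n=50, sims=10_000, alpha=0.05):
    rejections = 0
    for _ in range(sims):
        mean_ctl = sum(random.gauss(0, 1) for _ in range(n)) / n
        mean_tx = sum(random.gauss(true_diff, 1) for _ in range(n)) / n
        z = (mean_tx - mean_ctl) / math.sqrt(2 / n)
        rejections += two_sided_p(z) < alpha
    return rejections / sims

print(rejection_rate(0.0))  # ~0.05: under the null, rejection rate equals alpha (type I)
print(rejection_rate(0.5))  # ~0.70: power; 1 - power is the type II (beta) rate
```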

Significance: Statistical Significance: a determination by a statistical test that there is evidence against the null hypothesis. The level of significance depends on the value chosen for alpha (conventionally 0.05); power is conventionally set at 80% or more.

Significance cont. Clinical Significance: statistical significance is necessary but not sufficient for clinical significance, which reflects the meaningfulness of the difference (eg. a statistically significant 1 mm Hg BP reduction is not clinically significant). Also includes such factors as cost and side effects.

Other terms: Accuracy= how closely a measurement approaches the true value Reliability= how consistent or reproducible a measurement is when performed by different observers under the same conditions or the same observer under different conditions Validity= describes the accuracy and reliability of a test (ie. The extent to which a measurement approaches what it is designed to measure)

Validity and Reliability

Appraising an article (JAMA): 3 basic stages –1) the validity – are the conclusions justified? –2) the message – what are the results? –3) the utility – can I generalise the findings to my patients?

Are the results valid? – (therapy article) Primary guides –Was the assignment of patients to treatment randomized? –Were all patients who entered the trial properly accounted for and attributed at its conclusion? –Was follow-up complete? –Were patients analyzed in the groups to which they were randomized? Ie. Intention to treat analysis
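Since intention-to-treat analysis is the crux of the last guide above, here is a tiny hypothetical sketch (all data invented) contrasting it with a per-protocol analysis:

```python
# Sketch: intention-to-treat vs per-protocol grouping (invented patients).
# Each tuple: (randomized_arm, actually_received_treatment, died)
patients = [
    ("tx", True, False), ("tx", True, False), ("tx", False, True),     # one crossover out
    ("ctl", False, True), ("ctl", False, False), ("ctl", True, False), # one crossover in
]

def death_rate(group):
    return sum(died for _, _, died in group) / len(group)

# Intention to treat: analyze by the arm patients were RANDOMIZED to,
# preserving the balance that randomization created.
itt_tx = [p for p in patients if p[0] == "tx"]
itt_ctl = [p for p in patients if p[0] == "ctl"]

# Per protocol: analyze by what patients actually received -- crossovers
# switch groups, and randomization no longer protects against bias.
pp_tx = [p for p in patients if p[1]]
pp_ctl = [p for p in patients if not p[1]]

print("ITT:", death_rate(itt_tx), death_rate(itt_ctl))          # 0.33 vs 0.33
print("Per-protocol:", death_rate(pp_tx), death_rate(pp_ctl))   # 0.00 vs 0.67 (distorted)
```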

Are the results valid? Secondary guides: –Were patients, their clinicians, and study personnel “blind” to treatment? (avoids bias) –Were the groups similar at the start of the trial? (randomization not always effective if sample size small) –Aside from the experimental intervention, were the groups treated equally? (ie. Cointerventions)

What are the results? How large was the treatment effect? –Relative risk reduction vs absolute risk reduction

Eg. Baseline risk of death without therapy = 20/100 = 0.20 = 20% (X = 0.20)
Risk with therapy reduced to 15/100 = 0.15 = 15% (Y = 0.15)
Absolute Risk Reduction = X - Y = 0.20 - 0.15 = 0.05 (5%)
Relative Risk = Y/X = 0.15/0.20 = 0.75
Relative Risk Reduction = [1 - (Y/X)] x 100% = [1 - 0.75] x 100% = 25%

Number needed to treat (NNT): To calculate, simply take the inverse of the absolute risk reduction. In the last example, NNT = 1/0.05 = 20.
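The worked example above, ARR through NNT, expressed as a few lines of code:

```python
# The preceding worked example in code.
baseline_risk = 20 / 100  # X: risk of death without therapy = 0.20
treated_risk = 15 / 100   # Y: risk of death with therapy = 0.15

arr = baseline_risk - treated_risk  # absolute risk reduction = 0.05
rr = treated_risk / baseline_risk   # relative risk = 0.75
rrr = 1 - rr                        # relative risk reduction = 0.25
nnt = 1 / arr                       # number needed to treat = 20

print(f"ARR = {arr:.2f}, RR = {rr:.2f}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```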

What are the results? cont. How precise was the estimate of treatment effect?
– Use confidence intervals (CI) = a range of values reflecting the statistical precision of an estimate (eg. a 95% CI has a 95% chance of including the true value)
– CIs narrow as sample size increases. Eg. in the last example of 100 patients per group, with 20 pts dying in the control group and 15 in the tx group, the 95% CI for the RRR was -38% to 59%. If 1000 patients were enrolled in each group, with 200 dying in the controls and 150 in the tx group, the 95% CI for the RRR is 9% to 41%.

CI cont. If the CI crosses 0, it is generally unhelpful for drawing conclusions. When is the sample size big enough?
– If the lower boundary of the CI is still clinically significant to you (in positive studies)
– Or if the upper boundary of the CI is not clinically significant (in negative studies)

What if no CI is reported?
1) Use the p value: as the p value decreases below 0.05, the lower bound of the 95% confidence interval for the RRR rises above 0
2) If the standard error (SE) of the RRR is presented, the CI is approximately the point estimate (RRR) ± 2 x SE
3) Calculate the CI yourself or with a statistician
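If you do calculate it yourself, one common approximation uses the standard error of the log relative risk. The sketch below (my own illustration, not the talk's method) reproduces the first interval from the earlier example exactly and the second to within a few percentage points; exact bounds vary slightly with the method used:

```python
# Sketch: approximate 95% CI for the RRR via the log relative risk.
import math

def rrr_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se_log_rr = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
    rr_lo = math.exp(math.log(rr) - z * se_log_rr)
    rr_hi = math.exp(math.log(rr) + z * se_log_rr)
    return (1 - rr_hi, 1 - rr_lo)  # RRR = 1 - RR, so the bounds swap

print(rrr_ci(15, 100, 20, 100))      # ~(-0.38, 0.59): wide, crosses 0
print(rrr_ci(150, 1000, 200, 1000))  # ~(0.09, 0.38): far narrower with 10x the patients
```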

Will the results help me in caring for my patients?
– Can the results be applied to my patient population?
– Were all clinically important outcomes considered? Ie. mortality, morbidity, quality-of-life endpoints
– Are the likely treatment benefits worth the potential harm and costs? Ie. what is the patient’s baseline risk if left untreated? (NNT is helpful here)

Article about a diagnostic test:

Are the results valid? Primary guides: –Was there an independent, blind comparison with a reference standard? (ie. Gold standard) –Did the patient sample include an appropriate spectrum of patients to whom the diagnostic test will be applied in clinical practice?

Are the results valid? Secondary guides:
– Did the results of the test being evaluated influence the decision to perform the reference standard? Ie. verification bias. Eg. in PIOPED, only 69% of patients with normal, near-normal, or low-probability V/Q scans went on to pulmonary angiography, whereas 92% of those with more positive V/Q scans did
– Were the methods for performing the test described in sufficient detail to permit replication?

What are the results? Are likelihood ratios for the test results presented, or is the data necessary for their calculation included? Likelihood ratio = how many times more likely a given test result is in patients with the disease than in patients without it.

Likelihood Ratios:
– LR > 10 or < 0.1: generates large and often conclusive changes from pretest to posttest probability
– LR 5–10 or 0.1–0.2: generates moderate shifts from pretest to posttest probability
– LR 2–5 or 0.2–0.5: generates small (but sometimes important) changes in probability
– LR 1–2 or 0.5–1: generally insignificant

Bayesian analysis: Makes use of LRs to convert pretest probabilities to posttest probabilities (can use Fagan’s nomogram).
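The arithmetic behind the nomogram is simple enough to sketch directly (probability → odds, multiply by the LR, then back to probability); the pretest probability and LRs below are invented for illustration:

```python
# Sketch: likelihood ratio converting pretest to posttest probability.
def posttest_probability(pretest_prob, lr):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# eg. pretest probability 25%, positive test with LR+ = 8:
print(posttest_probability(0.25, 8))    # ~0.73: a large, possibly conclusive shift
# same pretest probability, negative test with LR- = 0.1:
print(posttest_probability(0.25, 0.1))  # ~0.03: disease effectively ruled out
```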

Will the results help me in caring for my patients? Will the reproducibility of the test result and its interpretation be satisfactory in my setting? Are the results applicable to my patient? Will the results change my management? Will patients be better off as a result of the test?

Articles about Harm: 1st, what is the study design (RCT, cohort, case-control, case series, etc.)?
– Most important is that there is an appropriate control population

Are the results valid? Were the exposures and outcomes measured in the same way in the groups being compared? (minimize recall/interviewer bias) Was follow-up sufficiently long and complete? Is the temporal relationship correct? Is there a dose response gradient?

What are the results? How strong is the association between exposure and outcome? Ie. the relative risk (>1 = an increase in risk associated with exposure; <1 = a decrease in risk associated with exposure). How precise is the estimate of risk? Ie. the CI.

What are the implications for my practice? Are the results applicable to my practice? What is the magnitude of the risk? Should I attempt to stop the exposure?

Overviews, Systematic Reviews, and Meta-analyses
– Did the overview address a focused clinical question?
– Were the criteria used to select articles for inclusion appropriate? These should be revealed in the paper
– Is it unlikely that important, relevant studies were missed? (avoids publication bias – the higher likelihood for studies with positive results to be published)
– Was the validity of the included studies appraised? (peer review does not guarantee the validity of published research)

Cont. Were assessments of studies reproducible? (better if there are more reviewers who are deciding which articles to include) Were the results similar from study to study? (can use “tests of homogeneity” statistical analysis)

Results? What are the overall results of the overview? (Are studies weighted according to their size?) There should be a summary measure that clearly conveys the practical importance of the result, eg. RRR, LR, NNT. How precise were the results? CIs are still very helpful.
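As one illustration of weighting studies by size/precision, here is a sketch of fixed-effect inverse-variance pooling of log relative risks (the study data are invented; real overviews may use other models):

```python
# Sketch: fixed-effect inverse-variance pooling of log relative risks.
import math

# (log RR, standard error of log RR) for three hypothetical studies
studies = [(-0.29, 0.31), (-0.25, 0.10), (-0.10, 0.15)]

weights = [1 / se**2 for _, se in studies]  # bigger study -> smaller SE -> more weight
pooled_log_rr = sum(w * lr for (lr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled_log_rr)
ci = (math.exp(pooled_log_rr - 1.96 * pooled_se),
      math.exp(pooled_log_rr + 1.96 * pooled_se))
print(f"pooled RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")  # ~0.81 (0.69, 0.95)
```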

Will the results help me in caring for my patients?
– Can the results be applied to my patient care? (subgroup analyses should be critiqued closely)
– Were all clinically important outcomes considered? (a clinical decision requires considering all outcomes, both good and bad)
– Are the benefits worth the harms and costs?

Does your brain ache? THE END