Introduction to Critical Appraisal January 9, 2006 Carin Gouws.


Objectives
– What is evidence-based clinical practice?
– A hierarchy of evidence
– Review of study designs and concepts
– Review of criteria for a good randomized controlled trial (RCT)
– Review of criteria for a good systematic review and meta-analysis

Evidence-Based Clinical Practice
– Using scientific evidence to inform clinical decision-making, in conjunction with experience, a strong educational background, and consideration of the patient's values and situation
– Evidence alone is never enough:
  – Framing the case correctly (e.g. correct diagnosis)
  – Value judgments, weighing costs and benefits to individuals and stakeholders
  – Ethics
– Not all evidence is equal

A Hierarchy of Evidence (weakest to strongest)
– Unsystematic clinical observation
– Physiologic studies (blood pressure, cardiac output, exercise capacity)
– Single observational study addressing patient-important outcomes
– Systematic review of observational studies addressing patient-important outcomes
– Single randomized controlled trial (RCT)
– Systematic review of RCTs
– N-of-1 RCT

Hierarchy within the Hierarchy
– Internal validity: the ability of the study results to support a cause-effect relationship between the treatment and the observed outcome
– External validity: the generalizability of the study results to patients outside the study

Steps to Evidence-Based Practice
1. Identify the question (yours and the study's)
   – Who are the patients?
   – What is the intervention?
   – What is the outcome of interest?
2. Collect the data
3. Critically appraise the evidence
4. Integrate what we have learned into clinical practice

Research Design Elements
– Experimental vs. observational: whether or not the intervention is controlled by the researcher
– Prospective vs. retrospective: collecting data after the start of a study (forward) vs. looking at data already collected (backward)
– Concurrent control vs. historical control: comparing the outcome of a control group that participated in the study at the same time as the treatment group vs. comparing the results of the treatment group with patient data that already exist from previous studies
– Other elements: dependent variable, independent variable, control groups, confounders, interactions, bias, surrogate endpoints, composite endpoints

Research Design Elements (continued)
– Superiority, non-inferiority, equivalence (judged against lower and upper thresholds around a difference of zero)
– Statistical significance vs. clinical significance
– Efficacy (explanatory) trial vs. effectiveness (management) trial: can the treatment work under ideal circumstances vs. does it work in the real world?
  – What does a positive or negative explanatory (efficacy) trial tell you? What does a positive or negative management (effectiveness) trial tell you?

Research Designs
– Theory building
  – Descriptive observational: case reports, case series, descriptive statistics, surveys, correlation studies, qualitative studies
– Hypothesis testing
  – Experimental: random assignment, e.g. RCTs
  – Quasi-experimental: non-random assignment
  – Analytical observational: no intervention; cohort (incidence), case-control (proportions, using odds ratios), cross-sectional (prevalence)
– Evidence summaries
  – Systematic reviews, meta-analyses
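
As a small illustration of the case-control arithmetic mentioned above: a sketch computing an odds ratio from a 2×2 exposure table. All counts and the function name are invented for the example, not taken from any study.

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """OR for a 2x2 table: (a/b) / (c/d) = (a*d) / (b*c),
    where a,b are exposed/unexposed cases and c,d are exposed/unexposed controls."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical case-control data: 40/100 cases exposed, 20/100 controls exposed
or_est = odds_ratio(40, 60, 20, 80)
print(round(or_est, 2))  # (40*80)/(60*20) = 2.67
```

A case-control study cannot estimate incidence directly, which is why the odds ratio, rather than a risk ratio, is the natural summary measure here.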

Critical Appraisal
– Approach 1: a general approach that can be used for most types of studies
– Approach 2: specific to RCTs assessing treatment outcomes
– Approach 3: specific to systematic reviews and meta-analyses
  – What is the difference between a systematic review and a meta-analysis?

Critical Appraisal 1: Most Studies
1. Clearly stated objectives and appropriate study design?
2. Is the study relevant (external validity)?
3. Inclusions and exclusions described and justified?
4. Group allocation: is there a control group? Is allocation random?
5. Procedures appropriately defined and applied?
6. Equal scrutiny of ALL groups ("blinding" and objective measures)?
7. Outcome measures and follow-up adequate?
8. Biases and confounders dealt with?

Critical Appraisal 1: Most Studies (continued)
9. Statistical analysis: appropriate tools?
10. Statistical analysis: proper interpretation?
11. Sample size or power described? The required "n" increases with a SMALLER treatment effect, LARGER variation among individuals, MORE covariates, and MORE groups
12. Clear and complete results?
13. Conclusions appropriate and complete?
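
The sample-size rule in point 11 can be made concrete with the standard normal-approximation formula for comparing two proportions. This is a sketch; the function name and the example event rates are mine, chosen only to show how "n" grows as the treatment effect shrinks.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided comparison of two
    proportions: n = (z_{1-a/2} + z_{power})^2 * (p1(1-p1)+p2(1-p2)) / (p1-p2)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Halving the absolute treatment effect (10 -> 5 percentage points)
# roughly quadruples the required sample size:
print(n_per_group(0.30, 0.20), n_per_group(0.30, 0.25))  # 291 1248
```
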

Critical Appraisal 2: RCT
INTERNAL VALIDITY: Are the results valid?
– Were patients randomized?
  – Adequacy of the allocation sequence
  – Simple and restricted randomization help balance prognostic factors
– Was randomization concealed?
  – Guards against selection bias and confounding bias
– Were patients, clinicians, and outcome assessors aware of group allocation?
  – "Blinding" guards against the placebo effect, interviewer bias, and bias of interpretation
  – Trials where blinding was inadequate demonstrate, on average, a 17% overestimation of the treatment effect
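
The "restricted randomization" mentioned above can be illustrated with a permuted-block scheme, which keeps group sizes balanced throughout enrollment. A minimal sketch under assumed parameters (function name, block size, and labels are mine):

```python
import random

def block_randomize(n_blocks, block_size=4, seed=42):
    """Permuted-block randomization: each block contains equal numbers of
    treatment ("T") and control ("C") assignments in a shuffled order, so the
    arms never differ in size by more than block_size/2 at any point."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    sequence = []
    for _ in range(n_blocks):
        block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

seq = block_randomize(5)
print(seq.count("T"), seq.count("C"))  # 10 10 -- balanced by construction
```

In practice the sequence would be generated centrally and concealed from the enrolling clinicians, which is the "allocation concealment" the slide refers to.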

Critical Appraisal 2: RCT (continued)
INTERNAL VALIDITY: Are the results valid?
– Were patients in treatment and control groups similar?
  – Balance of known prognostic factors (e.g. age, gender)
– Were patients analyzed in the groups to which they were randomized?
  – Intention-to-treat analysis: more conservative, preserves the balance of prognostic factors, minimizes type I error, greater generalizability
– Was follow-up complete?
  – Techniques for missing data: exclusion, assuming the worst-case scenario, last observation carried forward, growth curve analysis, hot-deck method, regression, multiple imputation methods
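
Of the missing-data techniques listed above, "last observation carried forward" (LOCF) is the simplest to show concretely. A sketch with invented blood-pressure-like values (`None` marks a missed visit):

```python
def locf(values):
    """Last observation carried forward: replace each missing value (None)
    with the most recent observed value. Simple, but it can bias results
    when dropout is related to the treatment, which is why it is often
    criticized in favor of multiple imputation."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

print(locf([140, 135, None, None, 128]))  # [140, 135, 135, 135, 128]
```
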

Critical Appraisal 2: RCT (continued)
What are the results?
– How large was the treatment effect?
– How precise was the estimate of the treatment effect?
EXTERNAL VALIDITY: How can I apply the results to patient care?
– Were the study patients similar to the patients in my practice?
– Were all clinically important outcomes considered?
– Are the likely treatment benefits worth the potential harms and costs?
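
"How large" and "how precise" can both be read off an absolute risk reduction with its confidence interval, and the number needed to treat (NNT = 1/ARR). A sketch using a normal approximation and invented event counts (function name and numbers are mine):

```python
from math import sqrt
from statistics import NormalDist

def risk_difference_ci(events_t, n_t, events_c, n_c, alpha=0.05):
    """Absolute risk reduction (control risk minus treatment risk) with a
    normal-approximation confidence interval, plus NNT = 1/ARR."""
    p_t, p_c = events_t / n_t, events_c / n_c
    arr = p_c - p_t
    se = sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return arr, (arr - z * se, arr + z * se), 1 / arr

# Hypothetical trial: 20/200 events on treatment vs. 40/200 on control
arr, ci, nnt = risk_difference_ci(20, 200, 40, 200)
print(round(arr, 2), round(nnt, 1))  # 0.1 10.0
```

The width of `ci` is the precision question: a large effect with a confidence interval crossing zero is far less persuasive than a modest effect estimated tightly.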

Critical Appraisal 2: RCT – Additional Elements
Subgroup analysis (interactions)
– Is there indirect evidence supporting the hypothesized interaction?
– Did the hypothesis precede the analysis?
– How many comparisons were made?
– Is the effect statistically significant given appropriate analysis (one test across both groups)?
– Magnitude of the effect: large differences may arise by chance alone, especially with a small sample size, so look at the precision (confidence intervals)
– Is the interaction consistent across studies?
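
Why "how many comparisons were made?" matters can be shown with a line of arithmetic: with k independent subgroup tests each at alpha = 0.05, the chance of at least one spurious "significant" finding is 1 − (1 − 0.05)^k. A small illustration:

```python
# Family-wise false-positive probability for k independent tests at alpha=0.05
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(k, f"{p_any:.2f}")
# 1 0.05 / 5 0.23 / 10 0.40 / 20 0.64
```

So a trial reporting twenty unplanned subgroup analyses has better-than-even odds of at least one false-positive subgroup effect even if the treatment does nothing.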

Critical Appraisal 2: RCT – Additional Elements (continued)
Surrogate endpoints
– Valid if there is an established causal relationship between the surrogate and a patient-important outcome
Composite endpoints
– Are the component endpoints of similar patient importance?
– Do the more and less important endpoints occur with similar frequency?
– Is the underlying biology for each outcome similar enough that comparable risk reductions are expected?
– Are the point estimates of each component similar, and are the confidence intervals narrow?

Critical Appraisal 3: Systematic Reviews
– A priori question/hypothesis formulation
– Appropriate and relevant eligibility criteria
– Complete search of the literature
– Selection of relevant studies, with disagreements about inclusion resolved appropriately
– Assessment of publication bias
– Quality assessment of included studies, with appropriate resolution of disagreement between assessors
  – Blinding was the only quality criterion with a significant influence on effect size (overestimation by 25%) – not concealment of randomization or loss to follow-up
  – Concealment of randomization may be a greater threat to internal validity in placebo-controlled trials
  – Blinding may be a greater threat to internal validity in trials using subjective outcomes
  – Completeness of follow-up may be a greater threat to internal validity when there is a higher expected rate of loss to follow-up, or when loss is related to the treatment

Critical Appraisal 3: Systematic Reviews (continued)
– Blinded extraction of the data, with agreement between independent assessors
– Analysis and presentation of results
  – Original patient data vs. summary data
  – Pooling the data: fixed-effects and random-effects models
  – Choice of summary measure (RR, OR, etc.)
  – Forest plot
  – Sensitivity analysis: examination of the robustness of the statistical findings – how does the estimate of effect change?
  – Differences in study results: heterogeneity and the appropriateness of pooling
  – Subgroup analysis
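
The fixed-effects pooling mentioned above is typically inverse-variance weighting on the log scale. A sketch with three hypothetical studies (the log risk ratios and standard errors are invented; the function name is mine):

```python
from math import exp, log, sqrt

def fixed_effect_pool(log_effects_and_ses):
    """Inverse-variance fixed-effect pooling of study effects on the log
    scale (e.g. log risk ratios). Each study's weight is 1/SE^2, so
    larger, more precise studies dominate the pooled estimate."""
    weights = [1 / se ** 2 for _, se in log_effects_and_ses]
    pooled = sum(w * e for (e, _), w in zip(log_effects_and_ses, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies: (log RR, standard error of log RR)
studies = [(log(0.80), 0.10), (log(0.75), 0.15), (log(0.90), 0.20)]
pooled, se = fixed_effect_pool(studies)
print(round(exp(pooled), 2))  # pooled RR ~ 0.80
```

A random-effects model would add a between-study variance term to each weight; which model is appropriate depends on the heterogeneity question raised in the slide.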

Summary
– There is always evidence. In critical appraisal, the estimated treatment effect, the quality of the evidence, and the direction of influence of any biases must all be considered in assessing what the true treatment effect might be
– Evidence alone is never enough: when applying results, clinical judgment, patient values, and the correct clinical context are important

Preparation for Next Session
– February meeting: workshop in critical appraisal
– Materials: I will select 3–4 studies on relevant topics and make them available to you in advance
– Please read and analyze them according to the most suitable approach, or using your own criteria (slides will be available in the resource centre)
– Be ready to discuss the studies

Conclusion
Questions? Feedback?
Further reading:
– Guyatt, G., and Rennie, D. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. USA: AMA Press (originally published as a series in JAMA).
– Sackett, D.L., Haynes, R.B., and Tugwell, P. Clinical Epidemiology: A Basic Science for Clinical Medicine. 2nd ed. Boston: Little, Brown & Co., 1991.