Presentation transcript:

Study design
P. Olliaro (WHO/TDR & University of Oxford), November 2004

Study designs: observational vs. experimental studies
- What happened? → Case-control study
- What's happening? → Cross-sectional study
- What will happen? → Cohort study, Clinical trial

What happened? Case-control study
[Diagram: a time axis beginning at the onset of study; the direction of enquiry runs backwards from cases and controls to their exposed or non-exposed status]
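As a concrete illustration of the effect measure this backwards-looking design yields, here is a minimal Python sketch computing an odds ratio with an approximate (Woolf) confidence interval; all counts are invented for illustration only.

```python
import math

# Hypothetical case-control 2x2 table (counts are illustrative only)
exposed_cases, unexposed_cases = 40, 60        # cases: outcome present
exposed_controls, unexposed_controls = 20, 80  # controls: outcome absent

# Odds ratio: odds of exposure among cases / odds of exposure among controls
odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Approximate 95% CI on the log-odds scale (Woolf's method)
se_log_or = math.sqrt(1/exposed_cases + 1/unexposed_cases
                      + 1/exposed_controls + 1/unexposed_controls)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")  # OR = 2.67
```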

What is happening? Cross-sectional study
[Diagram: at the onset of study, with no direction of enquiry along the time axis, subjects selected for study are classified as with or without the outcome]
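A cross-sectional study yields prevalence: the proportion of surveyed subjects who have the outcome at the time of the survey. A trivial sketch with invented numbers:

```python
# Hypothetical cross-sectional survey (numbers are illustrative only)
n_surveyed = 1200      # subjects selected for study
n_with_outcome = 180   # found to have the outcome at the survey

prevalence = n_with_outcome / n_surveyed
print(f"Point prevalence = {prevalence:.1%}")  # 15.0%
```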

What will happen? Cohort study
[Diagram: the direction of enquiry runs forward in time from the onset of study; a cohort selected for study is divided into exposed subjects and unexposed controls, each followed until classified as with or without the outcome]
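The forward-looking cohort design compares the incidence of the outcome between exposed and unexposed groups, typically as a risk ratio or risk difference. A minimal sketch with invented counts:

```python
# Hypothetical cohort follow-up counts (illustrative only)
exposed_total, exposed_with_outcome = 500, 50
unexposed_total, unexposed_with_outcome = 500, 25

risk_exposed = exposed_with_outcome / exposed_total        # cumulative incidence, exposed
risk_unexposed = unexposed_with_outcome / unexposed_total  # cumulative incidence, unexposed

risk_ratio = risk_exposed / risk_unexposed
risk_difference = risk_exposed - risk_unexposed
print(f"RR = {risk_ratio:.2f}, risk difference = {risk_difference:.1%}")  # RR = 2.00, 5.0%
```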

Randomised Controlled Clinical Trial
[Diagram: subjects meeting entry criteria are allocated by the intervention to experimental subjects or controls (treated or untreated); the direction of enquiry runs forward from the onset of study, and each arm is followed until subjects are with or without the outcome]
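Allocation in such a trial is decided by chance rather than by the investigator or the patient. One common way to do this is 1:1 permuted-block randomisation; the block size and seed below are arbitrary illustrative choices, not taken from the slides.

```python
import random

def permuted_block_allocation(n_subjects, block_size=4, seed=2004):
    """Generate a 1:1 allocation list ('experimental'/'control') in permuted blocks."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_subjects:
        block = ["experimental"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)  # random order within each block keeps the arms balanced
        allocations.extend(block)
    return allocations[:n_subjects]

schedule = permuted_block_allocation(20)
print(schedule.count("experimental"), schedule.count("control"))  # 10 10
```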

Trial profile for controlled clinical trials (e.g. malaria): patient attrition
[Flow diagram:
- Total patient population (# screened); # non-eligible excluded (reasons: …)
- Total # patients in trial (# eligible, randomised), split into # treated with the test intervention and # controls (placebo or standard treatment)
- For each arm: withdrawals (# treatment failures, # lost to follow-up, # adverse events, # others) and # with outcome on day X]
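The point of a trial profile is that every screened and randomised patient is accounted for. A small bookkeeping sketch (all counts hypothetical) that checks the numbers for one arm add up:

```python
# Hypothetical attrition counts for one arm of a malaria trial (illustrative only)
arm = {
    "randomised": 150,
    "treatment_failure": 12,
    "lost_to_follow_up": 8,
    "adverse_event": 3,
    "other_withdrawal": 2,
    "with_outcome_on_day_x": 125,
}

withdrawals = (arm["treatment_failure"] + arm["lost_to_follow_up"]
               + arm["adverse_event"] + arm["other_withdrawal"])

# Every randomised patient should be either withdrawn or assessed on day X
assert arm["randomised"] == withdrawals + arm["with_outcome_on_day_x"]
print(f"{withdrawals} withdrawals, {arm['with_outcome_on_day_x']} assessed on day X")
```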

Issues in design & interpretation of clinical trials
- Randomisation
  - Treatments are still developed and recommended without properly randomised trials
- Overemphasis on significance testing
  - The "magical" p = 0.05 barrier: p-values are only a guideline to the strength of evidence contradicting the null hypothesis of no treatment difference, NOT proof of treatment efficacy
  - Use interval estimation methods, e.g. confidence intervals
  - Trials often generate too many data (e.g. interim & subgroup analyses, multiple endpoints) and too many significance tests
- Size of trial
  - Trials often do not have enough patients to allow a reliable comparison
  - At the planning stage, power calculations should be used realistically, but they often produce sample sizes far larger than the number of patients available (see the sketch below)
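To make the points about power and sample size concrete, here is a sketch of a standard sample-size calculation for comparing two proportions at 80% power and two-sided alpha = 0.05 (normal approximation); the 85% vs 95% cure rates are invented planning assumptions, not figures from the slides.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients per arm to detect p1 vs p2 (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical planning values: 85% cure on standard treatment vs 95% expected on test drug
print(sample_size_two_proportions(0.85, 0.95))  # roughly 140 patients per arm
```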

Checklist for Assessing Clinical Trials: General Characteristics
- Reasons the study is needed
- Purpose/objectives: major & subsidiary
- Type: experimental, observational
- Phase: I, II, III, IV, other
- Design: controlled, uncontrolled

Checklist for Assessing Clinical Trials: Population
- Type (healthy volunteers; patients)
- How chosen/recruited?
- Entry/eligibility criteria: inclusion, exclusion (see the sketch below)
- Comparability of treatment groups: demography, prognostic criteria, stage of disease, associated disease, etc.
- Similarity of participants to the usual patient population
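As a small illustration of how entry/eligibility criteria are applied, a hypothetical screening check (the criteria and the patient record are invented, not from the slides):

```python
def is_eligible(patient):
    """Hypothetical inclusion/exclusion check for a treatment trial."""
    meets_inclusion = patient["age_years"] >= 18 and patient["confirmed_diagnosis"]
    meets_exclusion = patient["pregnant"] or patient["severe_disease"]
    return meets_inclusion and not meets_exclusion

candidate = {"age_years": 34, "confirmed_diagnosis": True,
             "pregnant": False, "severe_disease": False}
print(is_eligible(candidate))  # True
```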

Checklist for Assessing Clinical Trials: Treatments compared
- Dose rationale & details
- Dosage form & route of administration
- Ancillary therapy
- Biopharmaceutics: source, lot no. (test & standard medications/placebo)

Checklist for Assessing Clinical Trials: Experimental Design
- Controls (active/inactive; concurrent/historical)
- Assignment of treatment: randomised?
- Timing

Checklist for Assessing Clinical Trials: Procedures
- Terms & measures
- Data quality
- Common procedural biases:
  - Procedure bias
  - Recall bias
  - Insensitive measure bias
  - Detection bias
  - Compliance bias

Checklist for Assessing Clinical Trials: Study outcomes & interpretation
- Reliability of assessment
- Appropriate sample size
- Statistical methods
  - Used for what? Questions re: differences? associations? predictions?
  - "Fishing expedition"
  - Multiple significance tests (see the sketch below)
  - Migration bias
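The warnings about "fishing expeditions" and multiple significance tests can be made concrete: with many endpoints or subgroups, some p-values fall below 0.05 by chance alone, so adjusted thresholds are one safeguard. A sketch of the Holm-Bonferroni procedure with invented p-values:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: which hypotheses survive Holm-Bonferroni correction."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    rejected = [False] * len(p_values)
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (len(p_values) - rank):
            rejected[idx] = True
        else:
            break  # once one test fails, all larger p-values also fail
    return rejected

# Hypothetical p-values from multiple endpoints / subgroup analyses
print(holm_bonferroni([0.001, 0.012, 0.040, 0.300]))  # [True, True, False, False]
```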

Checklist for Assessing Clinical Trials: Data Collection
- Measurements used to assess goal attainment (appropriate type? sensitivity? timing?)
- Observers (who? variable?)
- Methods of collection (standard? reproducible?)
- Adverse events: subjective (volunteered, elicited?); objective (laboratory, ECG, etc.)

Checklist for Assessing Clinical Trials: Bias control
- Bias = measurement or systematic errors (≠ random errors)
- Subject selection
  - Prevalence or incidence (Neyman) bias: e.g. early fatalities, "silent" cases
  - Admission rate bias (Berkson's fallacy): distortions in RR
  - Non-response bias or volunteer effect
  - Procedure selection bias
- Concealment of allocation
- Blinding: subjects, observers, others

Checklist for Assessing Clinical Trials: Results
- Primary outcome measures
- Secondary outcome measures
- Drop-outs (reasons, effects on results)
- Compliance: participants (with treatment); investigators (with protocol)
- Subgroup analysis
- Predictors of response

Checklist for Assessing Clinical Trials: Data analysis
- Comparability of treatment groups
- Missing data
- Statistical tests: if differences are observed, are they clinically meaningful? If no difference is found, was the power insufficient? (see the sketch below)
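On the last point, a confidence interval for the observed difference often answers the power question better than a p-value alone: if the interval still spans clinically important effects, the trial was inconclusive rather than negative. A sketch with invented cure rates:

```python
import math

# Hypothetical results: 82/100 cured on the test drug vs 78/100 on control (illustrative only)
p_test, n_test = 82 / 100, 100
p_ctrl, n_ctrl = 78 / 100, 100

diff = p_test - p_ctrl
se = math.sqrt(p_test * (1 - p_test) / n_test + p_ctrl * (1 - p_ctrl) / n_ctrl)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"Difference = {diff:.1%}, 95% CI {ci_low:.1%} to {ci_high:.1%}")
# An interval running from about -7% to +15% cannot rule out an important benefit (or harm)
```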

[Diagram: data flow linking the study participant, hospital files, the CRF, double data entry (Entry 1 and Entry 2), analysis, the report, and publication]
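The two data-entry steps in the diagram imply a comparison of the independently keyed records before analysis, with discrepancies resolved against the CRF or hospital file. A minimal sketch of such a check (field names and values are invented):

```python
# Hypothetical records keyed by (participant ID, field) from two independent data entries
entry_1 = {("P001", "temp_day3"): "37.2", ("P001", "parasitaemia"): "1200",
           ("P002", "temp_day3"): "38.5"}
entry_2 = {("P001", "temp_day3"): "37.2", ("P001", "parasitaemia"): "1300",
           ("P002", "temp_day3"): "38.5"}

# Flag every field where the two entries disagree, for resolution against the source document
discrepancies = [key for key in entry_1 if entry_1[key] != entry_2.get(key)]
print(discrepancies)  # [('P001', 'parasitaemia')]
```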