Study design
P. Olliaro, Nov 2004
Study designs: observational vs. experimental studies
- What happened? Case-control study
- What is happening? Cross-sectional study
- What will happen? Cohort study; Clinical trial
What happened? Case-control study
[Diagram: at the onset of the study, cases and controls are identified; the direction of enquiry runs backwards in time to classify each group as exposed or non-exposed.]
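Case-control data are usually summarised with an odds ratio comparing the odds of exposure among cases with the odds among controls. A minimal sketch, with made-up counts purely for illustration (not from the slides):

```python
import math

# Hypothetical 2x2 table from a case-control study (illustrative counts only)
#                         exposed   non-exposed
cases_exposed, cases_unexposed = 40, 60
controls_exposed, controls_unexposed = 20, 80

# Odds ratio: odds of exposure among cases / odds of exposure among controls
odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Approximate 95% confidence interval on the log scale (Woolf method)
se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed +
                      1 / controls_exposed + 1 / controls_unexposed)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```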
What is happening? Cross-sectional study
[Diagram: at the onset of the study, subjects selected for study are classified at that single point in time as with or without the outcome; there is no direction of enquiry over time.]
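Because a cross-sectional study observes everyone at a single point in time, its natural summary is the prevalence of the outcome. A small sketch with hypothetical numbers (not from the slides):

```python
import math

# Hypothetical cross-sectional survey (illustrative numbers only)
n_subjects = 500        # subjects selected for study
n_with_outcome = 60     # found with the outcome at the time of the survey

prevalence = n_with_outcome / n_subjects

# Normal-approximation (Wald) 95% confidence interval for a proportion
se = math.sqrt(prevalence * (1 - prevalence) / n_subjects)
ci_low, ci_high = prevalence - 1.96 * se, prevalence + 1.96 * se

print(f"Prevalence = {prevalence:.1%} (95% CI {ci_low:.1%} to {ci_high:.1%})")
```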
What will happen? Cohort study
[Diagram: at the onset of the study, a cohort is selected and divided into exposed subjects and unexposed controls; the direction of enquiry runs forward in time, and each group is followed up to see who develops the outcome and who does not.]
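Because a cohort study follows exposed and unexposed groups forward in time, incidence can be measured directly and compared as a risk ratio. A minimal sketch, again with illustrative counts only:

```python
import math

# Hypothetical cohort followed forward in time (illustrative counts only)
exposed_with_outcome, exposed_total = 30, 200
unexposed_with_outcome, unexposed_total = 15, 300

risk_exposed = exposed_with_outcome / exposed_total
risk_unexposed = unexposed_with_outcome / unexposed_total
risk_ratio = risk_exposed / risk_unexposed

# Approximate 95% confidence interval on the log scale
se_log_rr = math.sqrt((1 - risk_exposed) / exposed_with_outcome +
                      (1 - risk_unexposed) / unexposed_with_outcome)
ci_low = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)

print(f"RR = {risk_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```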
Randomised Controlled Clinical Trial
[Diagram: subjects meeting the entry criteria are allocated to an experimental (intervention) group or a control group (treated or untreated); the direction of enquiry runs forward from the onset of the study, and both groups are followed up to see who develops the outcome and who does not.]
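The defining feature of this design is that allocation to the experimental or control group is decided by randomisation rather than by the investigator. One common way to prepare an allocation list is permuted blocks; the sketch below only illustrates the idea (block size, arm labels, and seed are arbitrary choices, not taken from the slides):

```python
import random

def block_randomise(n_subjects, block_size=4, arms=("test", "control"), seed=2004):
    """Permuted-block allocation list: each block contains equal numbers of
    each arm in random order, keeping group sizes balanced over time."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocations = []
    while len(allocations) < n_subjects:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_subjects]

# The list is prepared in advance and kept concealed from recruiting staff
# (allocation concealment), e.g. via sealed envelopes or a central service.
print(block_randomise(10))
```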
Trial profile for controlled clinical trials (e.g. malaria): patient attrition
- Total patient population (# screened)
  - # non-eligible (reasons: …)
- Total # patients in trial (# eligible, randomised)
- Test intervention arm (# treated) and control arm (# controls: placebo or standard treatment), each reporting:
  - Withdrawals: # treatment failures, # lost to follow-up, # adverse events, # others
  - # with outcome on day X
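The profile above is essentially bookkeeping: every screened patient must be accounted for, so that the denominator behind the day-X outcome is explicit. A trivial sketch of that accounting for one arm, with invented counts:

```python
# Hypothetical counts for one arm of a trial (illustrative only)
screened = 420
non_eligible = 120                      # reasons recorded separately
randomised = screened - non_eligible    # total patients in trial

test_arm = randomised // 2              # assuming 1:1 allocation
withdrawals = {"treatment failure": 8, "lost to follow-up": 12,
               "adverse event": 3, "other": 2}

with_outcome_day_x = test_arm - sum(withdrawals.values())
print(f"Screened {screened}, randomised {randomised}, test arm {test_arm}, "
      f"with outcome on day X: {with_outcome_day_x}")
```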
Issues in design & interpretation of clinical trials
- Randomisation: treatments are still developed and recommended without properly randomised trials.
- Overemphasis on significance testing: the "magical" p=0.05 barrier. P-values are only a guideline to the strength of evidence contradicting the null hypothesis of no treatment difference, NOT proof of treatment efficacy. Use interval estimation methods, e.g. confidence intervals. Trials often generate too many data and significance tests (e.g. interim & subgroup analyses, multiple endpoints).
- Size of trial: trials often do not have enough patients to allow a reliable comparison. At the planning stage, power calculations should be used realistically (but they often produce sample sizes much larger than the number of patients available!).
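To make the last point concrete, the usual planning-stage calculation for comparing two proportions (e.g. cure rates) uses the normal approximation n = (z_{1-α/2} + z_{1-β})² · [p₁(1-p₁) + p₂(1-p₂)] / (p₁ - p₂)² per arm. A small sketch, with cure rates chosen purely for illustration:

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a difference between two proportions,
    using the standard normal-approximation formula."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)
    z_beta = z(power)
    n = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(n)

# e.g. detecting a drop in cure rate from 95% to 85% (illustrative figures):
# roughly 138 patients per arm, before allowing for losses to follow-up.
print(sample_size_two_proportions(0.95, 0.85))
```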
Checklist for Assessing Clinical Trials: General Characteristics
- Reasons the study is needed
- Purpose/objectives: major & subsidiary
- Type: experimental, observational
- Phase: I, II, III, IV, other
- Design: controlled, uncontrolled
Checklist for Assessing Clinical Trials: Population
- Type (healthy volunteers; patients)
- How chosen/recruited?
- Entry/eligibility criteria: inclusion, exclusion
- Comparability of treatment groups: demography, prognostic criteria, stage of disease, associated disease, etc.
- Similarity of participants to the usual patient population
Checklist for Assessing Clinical Trials: Treatments Compared
- Dose rationale & details
- Dosage form & route of administration
- Ancillary therapy
- Biopharmaceutics: source, lot no. (test & standard medications/placebo)
Checklist for Assessing Clinical Trials: Experimental Design
- Controls (active/inactive; concurrent/historical)
- Assignment of treatment: randomised?
- Timing
Checklist for Assessing Clinical Trials: Procedures
- Terms & measures
- Data quality
- Common procedural biases: procedure bias, recall bias, insensitive-measure bias, detection bias, compliance bias
Checklist for Assessing Clinical Trials: Study Outcomes & Interpretation
- Reliability of assessment
- Appropriate sample size
- Statistical methods: used for what? Questions re: differences? Associations? Predictions?
- "Fishing expedition": multiple significance tests (see the sketch below)
- Migration bias
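On the "fishing expedition" point: with many endpoints or subgroups, some p-values below 0.05 are expected by chance alone. A simple (if conservative) guard is the Bonferroni correction; the endpoints and p-values below are invented for illustration:

```python
# Illustrative p-values for several endpoints of one hypothetical trial
p_values = {"fever clearance": 0.04, "parasite clearance": 0.01,
            "haemoglobin recovery": 0.20, "gametocyte carriage": 0.03}

alpha = 0.05
threshold = alpha / len(p_values)   # Bonferroni-adjusted significance threshold

for endpoint, p in p_values.items():
    verdict = "significant" if p < threshold else "not significant"
    print(f"{endpoint}: p = {p:.2f} -> {verdict} at adjusted alpha = {threshold:.4f}")
```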
Checklist for Assessing Clinical Trials: Data Collection
- Measurements used to assess goal attainment (appropriate type? sensitivity? timing?)
- Observers (who? variable?)
- Methods of collection (standard? reproducible?)
- Adverse events: subjective (volunteered, elicited?); objective (laboratory, ECG, etc.)
Checklist for Assessing Clinical Trials: Bias Control
Bias = measurement or systematic errors (≠ random errors)
- Subject selection biases:
  - Prevalence or incidence (Neyman) bias: e.g. early fatalities, "silent" cases
  - Admission-rate bias (Berkson's fallacy): distortions in relative risk
  - Non-response bias or volunteer effect
  - Procedure selection bias
- Concealment of allocation
- Blinding: subjects, observers, others
Checklist for Assessing Clinical Trials: Results
- Primary outcome measures
- Secondary outcome measures
- Drop-outs (reasons, effects on results)
- Compliance: participants (with treatment); investigators (with protocol)
- Subgroup analysis
- Predictors of response
Checklist for Assessing Clinical Trials: Data Analysis
- Comparability of treatment groups
- Missing data
- Statistical tests: if differences are observed, are they clinically meaningful? If no difference is found, was the power insufficient?
[Diagram: data flow from the study participant through hospital files and the CRF, via double data entry (Entry 1 and Entry 2), to analysis, report, and publication.]
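The two entry steps in this flow suggest double data entry: the same CRFs are keyed twice, the files are compared, and discrepancies are resolved against the source documents before analysis. A minimal sketch with hypothetical records:

```python
# Two independent entries of the same (hypothetical) CRF data
entry_1 = {"P001": {"age": 34, "day_x_outcome": "cure"},
           "P002": {"age": 27, "day_x_outcome": "failure"}}
entry_2 = {"P001": {"age": 34, "day_x_outcome": "cure"},
           "P002": {"age": 72, "day_x_outcome": "failure"}}  # keying error: 27 vs 72

# Report every field where the two entries disagree
for patient, record in entry_1.items():
    for field, value in record.items():
        other = entry_2.get(patient, {}).get(field)
        if other != value:
            print(f"Discrepancy for {patient}.{field}: {value!r} vs {other!r}")
```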