Meta-analysis: summarising data for two-arm trials and other simple outcome studies. Steff Lewis, statistician.


When can/should you do a meta-analysis?
- When more than one study has estimated an effect
- When there are no differences in the study characteristics that are likely to substantially affect the outcome
- When the outcome has been measured in similar ways
- When the data are available (take care with interpretation when only some data are available)

Types of data
- Dichotomous/binary data
- Counts of infrequent events
- Short ordinal scales
- Long ordinal scales
- Continuous data
- Censored data

What to collect
You need the total number of patients in each treatment group, plus:
- Binary data: the number of patients who had the relevant outcome in each treatment group
- Continuous data: the mean and standard deviation of the effect for each treatment group

Then enter the data into RevMan or MIX (easy to use and free), R (harder to use and free), Stata (harder to use and costs money), etc.

Summary statistic for each study
Calculate a single summary statistic to represent the effect found in each study:
- Binary data: risk ratio, with the rarer event as the outcome
- Continuous data: difference between means
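The two summary statistics above, together with their standard errors (needed later for weighting), can be sketched in Python. The function names are illustrative, and the standard-error formulas are the usual large-sample ones (for the risk ratio, the SE is on the log scale):

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio and the standard error of its log (binary outcomes)."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Large-sample SE of log(RR)
    se_log_rr = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    return rr, se_log_rr

def mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Difference between means and its standard error (continuous outcomes)."""
    md = mean_t - mean_c
    se_md = math.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)
    return md, se_md
```

For example, 10/100 events in the treatment group versus 20/100 in the control group gives a risk ratio of 0.5.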

Meta-analysis

Averaging studies
Starting with the summary statistic for each study, how should we combine these? A simple average gives each study equal weight. This seems intuitively wrong: some studies are more likely to give an answer closer to the 'true' effect than others.

Weighting studies
Give more weight to the studies which give us more information:
- More participants
- More events
- Lower variance
Weight is closely related to the width of the study's confidence interval: a wider confidence interval means less weight.
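The weighting scheme described above is inverse-variance weighting; a minimal fixed-effect pooling sketch (the function name is illustrative):

```python
def inverse_variance_pool(estimates, ses):
    """Pool per-study estimates, weighting each by 1/SE^2.

    Studies with a narrower confidence interval (smaller SE) get more weight.
    """
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    se_pooled = (1 / sum(weights)) ** 0.5  # shrinks as studies are added
    return pooled, se_pooled
```

Pooling two equally precise studies halves the variance, so the pooled confidence interval is narrower than either study's on its own.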

Displaying results graphically
RevMan (the Cochrane Collaboration's free meta-analysis software) and MIX produce forest plots, as do R, Stata and some other packages.

Heterogeneity

What is heterogeneity? Heterogeneity is variation between the studies’ results

Causes of heterogeneity
Differences between studies with respect to:
- Patients: diagnosis, inclusion and exclusion criteria, etc.
- Interventions: type, dose, duration, etc.
- Outcomes: type, scale, cut-off points, duration of follow-up, etc.
- Quality and methodology: randomised or not, allocation concealment, blinding, etc.

How to deal with heterogeneity
1. Do not pool at all
2. Ignore heterogeneity: use a fixed effect model
3. Allow for heterogeneity: use a random effects model
4. Explore heterogeneity: meta-regression (tricky)

How to assess heterogeneity from a forest plot

Statistical measures of heterogeneity
The chi-squared (χ²) test measures the amount of variation in a set of trials, and tells us whether it is more than would be expected by chance.

[Forest plot: Corticosteroids for preterm birth (neonatal death); trials from the Cochrane logo (Liggins 1972, Block 1977, Morrison 1978, Taeusch 1979, Papageorgiou 1979, Schutte 1979, Collaborative Group 1981). Odds ratio estimates with 95% confidence intervals; pooled odds ratio 0.61 (0.46 to 0.81). Heterogeneity test: Q = 11.2 (6 d.f.), p = 0.08.]

[Forest plot: Corticosteroids for preterm birth (neonatal death); odds ratio estimates with 95% confidence intervals for the same trials. Heterogeneity tests shown: Q = 44.7 (27 d.f.), p = 0.02 and Q = 11.2 (6 d.f.), p = 0.08.]

I squared quantifies heterogeneity:

I² = 100% × (Q − df) / Q

where Q is the heterogeneity χ² statistic and df its degrees of freedom. I² can be interpreted as the proportion of total variability explained by heterogeneity, rather than chance.

Roughly, I² values of 25%, 50%, and 75% could be interpreted as indicating low, moderate, and high heterogeneity. For more information see: Higgins JPT et al. Measuring inconsistency in meta-analyses. BMJ 2003;327:
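Cochran's Q and I² can be computed directly from the per-study estimates and standard errors (a sketch; use the log scale for ratio measures):

```python
def heterogeneity(estimates, ses):
    """Cochran's Q and I2 = 100% * (Q - df) / Q, truncated at 0."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    # Q: weighted squared deviations of each study from the pooled estimate
    q = sum(w * (e - pooled)**2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, 100 * (q - df) / q) if q > 0 else 0.0
    return q, i2
```

Two equally precise studies with estimates 0 and 2 (SE 1 each) give Q = 2 on 1 degree of freedom, hence I² = 50%.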

Fixed and random effects

Fixed effect
Philosophy behind the fixed effect model:
- there is one real value for the treatment effect
- all trials estimate this one value
Problems with ignoring heterogeneity: confidence intervals are too narrow.

Random effects
Philosophy behind the random effects model:
- there are many possible real values for the treatment effect (depending on dose, duration, etc.)
- each trial estimates its own real value

Example

Could we just add the data from all the trials together?
One approach to combining trials would be to add all the treatment groups together, add all the control groups together, and compare the totals. This is wrong for several reasons, and it can give the wrong answer.

If we add up the columns we get 34.3% vs 32.5%, an RR of 1.06, i.e. a higher chance of death in the steroids group. From a meta-analysis, we get RR = 0.96, a lower chance of death in the steroids group.

Problems with simple addition of studies:
- it breaks the power of randomisation
- imbalances within trials introduce bias

The Pitts trial contributes 17% (201/1194) of all the data to the experimental column, but only 8% (74/925) to the control column. It therefore contributes more information to the average chance of death in the experimental column than in the control column. The chance of death in this trial is high, so the chance of death for the experimental column ends up higher than for the control column.
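The mechanism can be reproduced with two hypothetical trials (made-up numbers, not the corticosteroid data): within each trial the treatment looks beneficial, yet naively adding the columns reverses the direction, because the larger, higher-risk trial is unbalanced towards the treatment arm.

```python
# (events_t, n_t, events_c, n_c): hypothetical trials for illustration only
trials = [
    (90, 200, 25, 50),    # high-risk trial, treatment arm four times larger
    (10, 100, 30, 250),   # low-risk trial, control arm larger
]

# Within each trial, randomisation is respected: both risk ratios favour treatment
per_trial_rr = [(et / nt) / (ec / nc) for et, nt, ec, nc in trials]

# Naive addition across trials compares non-randomised totals
events_t = sum(t[0] for t in trials); total_t = sum(t[1] for t in trials)
events_c = sum(t[2] for t in trials); total_c = sum(t[3] for t in trials)
naive_rr = (events_t / total_t) / (events_c / total_c)
```

Here both per-trial risk ratios are below 1 (0.90 and 0.83), but the naive pooled risk ratio is about 1.82, wrongly suggesting harm.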

Interpretation

Interpretation - “Evidence of absence” vs “Absence of evidence” If the confidence interval crosses the line of no effect, this does not mean that there is no difference between the treatments It means we have found no statistically significant difference in the effects of the two interventions

In the example below, as more data are included, the overall odds ratio remains the same but the confidence interval narrows. It is not true that there is 'no difference' shown in the first rows of the plot; there just isn't enough data to show a statistically significant result.

Interpretation - Weighing up benefit and harm When interpreting results, don't just emphasise the positive results. A treatment might cure acne instantly, but kill one person in 10,000 (very important, as acne is not life threatening).

Interpretation - Quality Rubbish studies = unbelievable results If all the trials in a meta-analysis were of very low quality, then you should be less certain of your conclusions. Instead of “Treatment X cures depression”, try “There is some evidence that Treatment X cures depression, but the data should be interpreted with caution.”

Summary
- Choose an appropriate effect measure
- Collect data from trials and do a meta-analysis if appropriate
- Interpret the results carefully:
  - evidence of absence vs absence of evidence
  - benefit and harm
  - quality
  - heterogeneity

Sources of statistics help and advice
- Cochrane Handbook for Systematic Reviews of Interventions
- The Cochrane distance learning material
- The Cochrane RevMan user guide