Lecture 10: Meta-analysis of intervention studies
– Introduction to meta-analysis
– Selection of studies
– Abstraction of information
– Quality scores
– Methods of analysis and presentation
– Sources of bias
Definitions
Traditional (narrative) review:
– Selective, biased
Systematic review (overview):
– Synthesis of studies of a research question
– Explicit methods for study selection, data abstraction, and analysis (repeatable)
Meta-analysis:
– Quantitative pooling of study results
[Figure] Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233
Protocol preparation
Research question
Study “population”:
– search strategy
– inclusion/exclusion criteria
Protocol: Search strategy
– Computerized databases (e.g. Medline, CINAHL, Embase, Cochrane clinical trials database, PubMed, PsycINFO); test the sensitivity and predictive value of the search strategy
– Hand-searches (reference lists, relevant journals, colleagues)
– “Grey” (unpublished) literature: pro, counters publication bias; con, results less reliable
– Search strategy should be reliable (repeatable)
Identifying relevant studies for systematic reviews of RCTs in vision research (Dickersin, in Systematic Reviews, BMJ, 1995)
Sensitivity and “precision” of Medline searches
Gold standard:
– registry of RCTs in vision research, built from extensive computer and hand searches, with contacts with investigators to clarify design
Sensitivity:
– proportion of known RCTs identified by the search
“Precision” (PV+):
– proportion of publications identified by the search that were RCTs
(A worked example of the two metrics follows below.)
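To make the two search metrics concrete, here is a minimal Python sketch; the counts are hypothetical, not Dickersin's actual data:

```python
# Illustrative calculation of search sensitivity and "precision" (PV+).
# All counts below are hypothetical.

known_rcts = 200        # RCTs in the gold-standard registry
retrieved = 150         # publications returned by the Medline search
true_positives = 120    # retrieved publications that are known RCTs

sensitivity = true_positives / known_rcts  # share of known RCTs found
precision = true_positives / retrieved     # share of hits that are RCTs

print(f"Sensitivity: {sensitivity:.0%}")    # Sensitivity: 60%
print(f"Precision (PV+): {precision:.0%}")  # Precision (PV+): 80%
```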
[Figure] Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995
Protocol preparation
Study “population”:
– inclusion/exclusion criteria: language, study design, outcome of interest, etc.
Source: Data abstraction form for meta-analysis project
Protocol preparation
Data collection:
– standardized abstraction form (a hypothetical record sketch follows below)
– number of abstractors
– blinding of abstractors
– rules for resolving discrepancies (consensus, other)
– use of quality scores
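One way to picture a standardized abstraction form is as a fixed record structure. The fields below are purely illustrative; the actual form used in the lecture's meta-analysis project defines its own items:

```python
from dataclasses import dataclass

# Hypothetical sketch of a standardized abstraction record.
@dataclass
class TrialRecord:
    study_id: str
    year: int
    design: str            # e.g. "parallel-group RCT"
    n_treated: int
    n_control: int
    events_treated: int
    events_control: int
    quality_score: float   # from the chosen rating scale
    abstractor: str        # who abstracted, for comparing duplicate records

# Per such a protocol, two blinded abstractors each complete a TrialRecord;
# discrepant fields are then resolved by consensus.
```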
[Figure] Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233
Analysis
Measures of effect (see the worked example after this list):
– odds ratio, risk/rate ratio
– risk/rate difference
– relative risk reduction
Graphical methods:
– conventional (individual studies)
– cumulative
– exploring heterogeneity
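The four effect measures all come from a trial's 2x2 table. A minimal Python sketch, using hypothetical counts:

```python
# Effect measures from one trial's 2x2 table; the counts are hypothetical.
#              events   non-events
#   treated      a          b
#   control      c          d
a, b = 15, 85    # treated: 15/100 with the event
c, d = 30, 70    # control: 30/100 with the event

risk_treated = a / (a + b)
risk_control = c / (c + d)

odds_ratio = (a * d) / (b * c)                 # (15*70)/(85*30) ~= 0.41
risk_ratio = risk_treated / risk_control       # 0.15 / 0.30 = 0.50
risk_difference = risk_treated - risk_control  # -0.15
relative_risk_reduction = 1 - risk_ratio       # 0.50, i.e. a 50% reduction

print(odds_ratio, risk_ratio, risk_difference, relative_risk_reduction)
```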
Analyses
Pooling results:
– is it appropriate?
– equivalent to pooling results from multi-centre trials
– fixed-effect methods (e.g., Mantel-Haenszel) assume that all trials have the same underlying treatment effect
– random-effects methods (e.g., DerSimonian & Laird) allow for heterogeneity of treatment effects (both approaches are sketched below)
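A minimal sketch of the two pooling approaches on the log odds ratio scale, with hypothetical study data. Note one substitution: the slide names Mantel-Haenszel as the fixed-effect example, but for brevity this sketch uses the generic inverse-variance fixed-effect estimator, followed by DerSimonian & Laird random effects:

```python
import numpy as np

# Hypothetical studies: log odds ratio and its within-study variance.
y = np.array([-0.4, -0.9, -0.1])   # study log odds ratios
v = np.array([0.04, 0.09, 0.02])   # within-study variances

w = 1 / v                           # fixed-effect (inverse-variance) weights
fixed = np.sum(w * y) / np.sum(w)   # pooled log OR, fixed effect

# Cochran's Q and the DerSimonian & Laird estimate of the
# between-study variance tau^2.
k = len(y)
Q = np.sum(w * (y - fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

w_star = 1 / (v + tau2)                       # random-effects weights
random = np.sum(w_star * y) / np.sum(w_star)  # pooled log OR, random effects

print(f"fixed effect:   OR = {np.exp(fixed):.2f}")
print(f"random effects: OR = {np.exp(random):.2f} (tau^2 = {tau2:.3f})")
```

With heterogeneity present (tau^2 > 0), the random-effects weights are more equal across studies, so small studies pull the pooled estimate more than under the fixed-effect model.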
[Figure] Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995
[Figure] Source: Chalmers + Altman, Systematic Reviews, BMJ Publishing Group, 1995
Heterogeneity
Sources of heterogeneity:
– Study design
– Study population
– Intervention
– Methodological features
Approaches:
– Descriptive and graphical analyses
– Meta-regression: effect size is the outcome (a minimal sketch follows below)
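A minimal meta-regression sketch: each study's effect size is regressed on a study-level covariate, weighting by inverse variance. The effect sizes, variances, and covariate (mean patient age) are all hypothetical:

```python
import numpy as np

# Hypothetical studies: log odds ratios, variances, and a study-level covariate.
y = np.array([-0.4, -0.9, -0.1, -0.6])   # effect sizes (log OR), the outcome
v = np.array([0.04, 0.09, 0.02, 0.05])   # within-study variances
x = np.array([55.0, 62.0, 48.0, 58.0])   # covariate, e.g. mean age per study

w = 1 / v
X = np.column_stack([np.ones_like(x), x])        # intercept + covariate
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted least squares

print(f"intercept = {beta[0]:.2f}, slope per covariate unit = {beta[1]:.3f}")
```

A slope clearly different from zero suggests the covariate explains some of the between-study variation in effect size.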
[Figure] Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233
Quality scores
Rating scales and checklists to assess the methodological quality of RCTs
How should they be used?
– Qualitative assessment
– Exclusion of weaker studies
– Weighting of estimates (a sketch of quality-weighted pooling follows below)
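One of several proposed uses of quality scores is to fold them into the pooling weights, the kind of quality-weighted analysis examined by Moher et al (1998). A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical studies: log odds ratios, variances, and quality scores.
y = np.array([-0.4, -0.9, -0.1])   # study log odds ratios
v = np.array([0.04, 0.09, 0.02])   # within-study variances
q = np.array([0.8, 0.3, 0.9])      # quality scores rescaled to 0-1

w = q / v                          # down-weight lower-quality trials
pooled = np.sum(w * y) / np.sum(w)
print(f"quality-weighted pooled log OR = {pooled:.2f}")
```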
Does quality of trials affect estimates of intervention efficacy? (Moher et al, 1998)
Random sample of 11 meta-analyses comprising 127 RCTs
Replicated the analyses using quality scales/measures
Results:
– masked abstraction produced higher quality scores than unmasked abstraction
– low-quality trials showed stronger treatment effects than high-quality trials
– quality-weighted analysis resulted in lower statistical heterogeneity
[Figure] Source: Moher et al, Lancet 1998, 352: 609-13
[Figure] Source: Moher et al, Lancet 1998, 352: 609-13
Unresolved questions about meta-analysis
Apples and oranges?
– Between-study differences in study population, design, outcome measures, etc.
Inclusion of weak studies?
Publication bias:
– methods to evaluate its impact, particularly with small studies (a funnel-plot asymmetry sketch follows below)
Is it better to do good original studies?
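One common way to probe for small-study effects such as publication bias is Egger's regression test for funnel-plot asymmetry: regress the standardized effect (y/se) on precision (1/se); an intercept far from zero suggests asymmetry. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical studies: log odds ratios and their standard errors.
y = np.array([-0.4, -0.9, -0.1, -0.6, -1.2])    # study log odds ratios
se = np.array([0.20, 0.30, 0.14, 0.22, 0.45])   # their standard errors

# Egger's regression: standardized effect on precision.
slope, intercept = np.polyfit(1 / se, y / se, 1)
print(f"Egger intercept = {intercept:.2f}")
# A formal test would use the intercept's standard error and a t-test;
# this sketch only computes the point estimate.
```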
Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996)
Selected meta-analyses from Medline and the Cochrane pregnancy and childbirth database with at least 1 “large” study and 2 smaller studies:
– sample size approach (n = 1000+): 79 meta-analyses
– statistical power approach (adequate size to detect the treatment effect from the pooled analysis): 61 meta-analyses
Results:
– agreement between larger trials and meta-analyses was 82-90% using random effects models
– greater disagreement using fixed effects models
Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996), continued
Conclusions:
– large and small trial results generally agree
– each type of trial has advantages and disadvantages: large trials provide more stable estimates of effect; small trials may better reflect the heterogeneity of clinical populations
[Figure] Risk ratios from large studies vs pooled smaller studies (left: sample size approach; right: statistical power approach)
Source: Cappelleri et al, JAMA 1996, 276: 1332-1338
[Figure] Source: Cappelleri et al, JAMA 1996, 276: 1332-1338
Discrepancies between meta-analyses and subsequent large RCTs (LeLorier et al, 1997)
Compared the results of 12 large (n = 1000+) RCTs with the results of 19 prior meta-analyses on the same topics
[Figure] Source: LeLorier et al, NEJM 1997, 337: 536-42
[Figure] Source: LeLorier et al, NEJM 1997, 337: 536-42