Lecture 10: Meta-analysis of intervention studies
– Introduction to meta-analysis
– Selection of studies
– Abstraction of information
– Quality scores
– Methods of analysis and presentation
– Sources of bias
Definitions
Traditional (narrative) review:
– selective, biased
Systematic review (overview):
– synthesis of studies of a research question
– explicit methods for study selection, data abstraction, and analysis (repeatable)
Meta-analysis:
– quantitative pooling of study results
Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233
Protocol preparation
Research question
Study “population”:
– search strategy
– inclusion/exclusion criteria
Protocol preparation
Search strategy:
– computerized databases (Medline, CINAHL, PsycINFO, etc.): test the sensitivity and predictive value of the search strategy
– hand-searches (reference lists, relevant journals, colleagues)
– “grey” (unpublished) literature: pro: counters publication bias; con: results less reliable
Identifying relevant studies for systematic reviews of RCTs in vision research (Dickersin, in Systematic Reviews, BMJ, 1995)
Sensitivity and “precision” of Medline searching
Gold standard:
– registry of RCTs in vision research, built from extensive computer and hand searches, with contacts with investigators to clarify design
Sensitivity:
– proportion of known RCTs identified by the search
“Precision”:
– proportion of publications identified by the search that were RCTs (both measures are worked through in the sketch below)
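Both measures reduce to set overlaps between the search results and the gold-standard registry. A minimal sketch in Python; all study identifiers and counts below are invented for illustration, not Dickersin's data:

    # Sensitivity and "precision" of a literature search, computed as set
    # overlaps against a gold-standard registry of known RCTs.
    # All identifiers are invented for illustration.

    gold_standard = {"rct01", "rct02", "rct03", "rct04", "rct05"}  # known RCTs
    retrieved = {"rct01", "rct02", "rct04", "obs07", "rev09"}      # search hits

    true_positives = retrieved & gold_standard

    # Sensitivity: proportion of known RCTs that the search found.
    sensitivity = len(true_positives) / len(gold_standard)

    # "Precision" (positive predictive value): proportion of retrieved
    # publications that were actually RCTs.
    precision = len(true_positives) / len(retrieved)

    print(f"sensitivity = {sensitivity:.2f}")  # 0.60
    print(f"precision   = {precision:.2f}")    # 0.60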
Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995
Protocol preparation
Study “population”:
– inclusion/exclusion criteria: language, study design, outcome of interest, etc.
Source: Data abstraction form for meta-analysis project
Protocol preparation
Data collection:
– standardized abstraction form
– number of abstractors
– blinding of abstractors
– rules for resolving discrepancies (consensus, other)
– use of quality scores
Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233
Analysis
Measure of effect (each is computed in the sketch below):
– odds ratio, risk/rate ratio
– risk/rate difference
– relative risk reduction
Graphical methods:
– conventional (individual studies)
– cumulative
– exploring heterogeneity
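For a single trial with a binary outcome, all of these effect measures come from the same 2x2 table of counts. A minimal sketch; the event counts are invented for illustration:

    # Effect measures for one trial from a 2x2 table of counts.
    # a/n1: events/total in the treated arm; c/n0: events/total in controls.
    # The counts are invented for illustration.

    a, n1 = 15, 100   # treated: 15 events out of 100
    c, n0 = 30, 100   # control: 30 events out of 100

    risk1, risk0 = a / n1, c / n0

    risk_ratio = risk1 / risk0                      # 0.50
    risk_difference = risk1 - risk0                 # -0.15
    relative_risk_reduction = 1 - risk_ratio        # 0.50
    odds_ratio = (a / (n1 - a)) / (c / (n0 - c))    # (15/85)/(30/70) = 0.41

    print(f"RR  = {risk_ratio:.2f}")
    print(f"RD  = {risk_difference:.2f}")
    print(f"RRR = {relative_risk_reduction:.2f}")
    print(f"OR  = {odds_ratio:.2f}")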
Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995
Analyses
Pooling results:
– is it appropriate?
– equivalent to pooling results from multi-centre trials
– fixed-effect methods (e.g., Mantel-Haenszel) assume that all trials share the same underlying treatment effect
– random-effects methods (e.g., DerSimonian & Laird) allow for heterogeneity of treatment effects (both approaches are contrasted in the sketch below)
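To make the fixed- vs random-effects contrast concrete, here is a minimal sketch using inverse-variance weights with the DerSimonian & Laird method-of-moments estimate of between-study variance. Note the slide names Mantel-Haenszel for the fixed-effect case; plain inverse-variance weighting is substituted here for brevity, and the log odds ratios and variances are invented:

    import math

    # Per-study log odds ratios and their variances (invented for illustration).
    y = [-0.40, -0.10, -0.65, 0.05]   # log(OR) per study
    v = [0.04, 0.09, 0.12, 0.06]      # variance of each log(OR)
    k = len(y)

    # Fixed effect: inverse-variance weights; assumes one true effect.
    w = [1 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

    # Cochran's Q measures heterogeneity around the fixed-effect estimate.
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))

    # DerSimonian & Laird: moment estimate of between-study variance tau^2,
    # truncated at zero.
    tau2 = max(0.0, (Q - (k - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))

    # Random effects: each study's weight also absorbs tau^2, so small
    # studies get relatively more weight when trials disagree.
    w_re = [1 / (vi + tau2) for vi in v]
    random_eff = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

    print(f"fixed-effect pooled OR   = {math.exp(fixed):.2f}")
    print(f"Q = {Q:.2f}, tau^2 = {tau2:.3f}")
    print(f"random-effects pooled OR = {math.exp(random_eff):.2f}")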
Source: Chalmers & Altman, Systematic Reviews, BMJ Publishing Group, 1995
Source: l’Abbé et al, Ann Intern Med 1987, 107: 224-233
Quality scores
Rating scales and checklists to assess the methodological quality of RCTs
How should they be used?
– qualitative assessment
– exclusion of weaker studies
– weighting of estimates (see the sketch below)
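One way to operationalize the weighting option, in the spirit of the quality-weighted analyses discussed later (Moher et al), is to scale each study's inverse-variance weight by its quality score. This scheme, and all numbers below, are illustrative assumptions, not a method prescribed by the slide:

    # Quality-weighted pooling: scale each study's inverse-variance weight
    # by a normalized quality score in [0, 1].
    # Effects, variances, and scores are invented for illustration.

    y = [-0.50, -0.20, -0.80]   # log risk ratios
    v = [0.05, 0.08, 0.20]      # variances
    q = [0.9, 0.7, 0.3]         # normalized quality scores

    w_plain = [1 / vi for vi in v]
    w_qual = [qi / vi for qi, vi in zip(q, v)]   # down-weight weak studies

    pooled_plain = sum(w * e for w, e in zip(w_plain, y)) / sum(w_plain)
    pooled_qual = sum(w * e for w, e in zip(w_qual, y)) / sum(w_qual)

    print(f"unweighted pooled log effect       = {pooled_plain:.3f}")
    print(f"quality-weighted pooled log effect = {pooled_qual:.3f}")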
Does quality of trials affect the estimate of intervention efficacy? (Moher et al, 1998)
Random sample of 11 meta-analyses covering 127 RCTs
Replicated the analyses using quality scales/measures
Results:
– masked abstraction produced higher quality scores than unmasked abstraction
– low-quality trials found stronger effects than high-quality trials
– quality-weighted analysis resulted in lower statistical heterogeneity
Source: Moher et al, Lancet 1998, 352: 609-13
Unresolved questions about meta-analysis
Apples and oranges?
– between-study differences in study population, design, outcome measures, etc.
Inclusion of weak studies?
Publication bias:
– methods to evaluate its impact, particularly with small studies (one such method is sketched below)
Is it better to do good original studies?
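One widely used method for evaluating the impact of publication bias (not named on the slide, so this is an illustrative choice) is Egger's regression test for funnel-plot asymmetry: regress each study's standardized effect on its precision; an intercept far from zero suggests small-study effects. A minimal sketch with invented data:

    # Egger's test sketch: regress standardized effect (y/se) on precision
    # (1/se); the intercept estimates small-study asymmetry.
    # Effects and standard errors are invented for illustration.

    effects = [-0.80, -0.50, -0.45, -0.30, -0.25]   # log odds ratios
    ses = [0.40, 0.30, 0.25, 0.15, 0.10]            # standard errors

    x = [1 / se for se in ses]                      # precision
    z = [e / se for e, se in zip(effects, ses)]     # standardized effect

    # Ordinary least squares by hand.
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    slope = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = mz - slope * mx   # near 0 if the funnel plot is symmetric

    print(f"Egger intercept = {intercept:.2f}")
    print(f"slope (precision-adjusted effect) = {slope:.2f}")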
Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996)
Selected meta-analyses from Medline and the Cochrane pregnancy and childbirth database with at least 1 “large” trial and 2 smaller trials:
– sample-size approach (n ≥ 1000): 79 meta-analyses
– statistical-power approach (adequate size to detect the treatment effect from the pooled analysis): 61 meta-analyses
Results:
– agreement between larger trials and meta-analyses of 82-90% using random-effects models
– greater disagreement using fixed-effects models
Large trials vs meta-analyses of smaller trials (Cappelleri et al, 1996), continued
Conclusions:
– large and small trial results generally agree
– each type of trial has advantages and disadvantages: large trials provide more stable estimates of effect; small trials may better reflect the heterogeneity of clinical populations
Risk ratios from large studies vs pooled smaller studies (Cappelleri et al, 1996) (left: sample-size approach; right: statistical-power approach)
Source: Cappelleri et al, JAMA 1996, 276: 1332-1338
Source: Cappelleri et al, JAMA 1996, 276: 1332-1338
Discrepancies between meta-analyses and subsequent large RCTs (LeLorier et al, 1997)
Compared the results of 12 large (n ≥ 1000) RCTs with the results of 19 prior meta-analyses (M-A) on the same topics
For a total of 40 primary and secondary outcomes, agreement between large trials and M-A was only fair (kappa = 0.35, 95% CI 0.06 to 0.64; kappa is worked through in the sketch below)
Positive predictive value of M-A = 68%
Negative predictive value of M-A = 67%
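For readers unfamiliar with kappa, it is chance-corrected agreement: kappa = (p_observed - p_expected) / (1 - p_expected). The 2x2 counts below are hypothetical, not LeLorier et al's actual data, though they are chosen so kappa comes out near the reported 0.35:

    # Cohen's kappa for agreement between large-trial and meta-analysis
    # verdicts (positive/negative). Counts are hypothetical, NOT the
    # LeLorier et al data.

    #                  M-A positive  M-A negative
    agree_pos, ma_only = 13, 6       # large trial positive
    trial_only, agree_neg = 7, 14    # large trial negative

    n = agree_pos + ma_only + trial_only + agree_neg
    p_observed = (agree_pos + agree_neg) / n

    # Expected agreement by chance, from the marginal proportions.
    p_trial_pos = (agree_pos + ma_only) / n
    p_ma_pos = (agree_pos + trial_only) / n
    p_expected = (p_trial_pos * p_ma_pos
                  + (1 - p_trial_pos) * (1 - p_ma_pos))

    kappa = (p_observed - p_expected) / (1 - p_expected)
    print(f"observed = {p_observed:.2f}, expected = {p_expected:.2f}, "
          f"kappa = {kappa:.2f}")   # kappa = 0.35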
Source: LeLorier et al, NEJM 1997, 337: 536-42