1 The Campbell Collaboration (www.campbellcollaboration.org)
C2 Training: May 9–10, 2011
Introduction to Systematic Reviews

2 C2 Training Materials – Oslo – May 2011 (www.campbellcollaboration.org)
Part 1: The science of research synthesis
Summarize existing empirical research to:
– Inform policy and practice
– Provide directions for further research
Using empirical evidence about the reliability and validity of review methods
– Cochrane Collaboration (methodological reviews), Campbell Collaboration

3 Evidence for practice and policy (adapted from Gibbs, 2003; Davies, 2004)

4 Evidence for practice and policy (adapted from Gibbs, 2003; Davies, 2004)

5 Efficacy and effectiveness
Much attention paid to these issues
– Not more important than other topics, but effects matter
Much room for improvement in how we analyze, synthesize, and understand treatment effects
– Even though many reviews deal with these topics

6 The problem: studies pile up
“What can you build with thousands of bricks?” (Lipsey, 1997)
Many studies are conducted on the same topic
– Which one(s) do we use? How do we use them?

7 Rationale for research synthesis
Combining the results of multiple studies:
1. Provides more compelling evidence than the results of any single study
– Single studies can have undue influence on practice and policy
– We don’t use single-subject (N = 1) designs to assess public opinion; we shouldn’t rely on single studies to answer important questions (e.g., about treatment effects)

8 Rationale for research synthesis (cont.)
2. Provides new opportunities to investigate:
– What works best for whom under what conditions
– Why results may vary across studies that differ in research designs, sample characteristics (populations/problems), intervention/implementation, comparison conditions, measures, or geographic/cultural context and setting
Using analyses that capitalize on natural variations across studies

9 What can we build with thousands of bricks?

10 How do we build evidence?
What are our blueprints?
What are the raw materials?
What methods are used to combine results across studies?

11 Blueprints
Plans for the review
Reviews vary in amount of planning, transparency, and rigor
Three approaches:
– Traditional narrative reviews (still very common in the social and behavioral sciences)
– Systematic reviews
– Meta-analysis

12 Traditional reviews
Convenience samples of published studies
Narrative description of studies
Cognitive algebra or “vote counting” to synthesize results
– Relies on statistical significance in primary studies, which may be “underpowered” (too small or too weak to detect effects)
Decision rules are not transparent
Vulnerable to many sources of bias…
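The underpowered-studies pitfall of vote counting can be made concrete with a quick power calculation. This is a minimal sketch: the effect size, sample size, and two-sample z-test approximation are illustrative assumptions, not figures from the slides.

```python
from statistics import NormalDist

def two_group_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided two-sample z-test for a
    standardized mean difference d, with n_per_group cases per arm."""
    nd = NormalDist()
    se = (2.0 / n_per_group) ** 0.5          # SE of the difference, in d units
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)   # two-sided critical value
    return 1.0 - nd.cdf(z_crit - d / se)     # upper-tail term dominates

# A real but modest effect (d = 0.3) studied with 20 cases per group:
power = two_group_power(0.3, 20)             # roughly 0.15
```

With power around 15%, roughly 85% of such studies report a non-significant result even though the effect is real, so a vote count of significance tests would wrongly conclude “no effect.”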

13 Publication bias
Studies with statistically significant, positive results are approx. 3 times more likely to be published than similar studies with null or negative results (Song et al., 2009, inception cohort)
– i.e., the likelihood of publication is related to the direction and significance of results, net of the influence of other variables (Dickersin, 2005; Scherer et al., 2004; Song et al., 2009; Torgerson, 2006)
Sources of publication bias are complex
– Investigators are less likely to submit null results for conference presentations (Song et al., 2009) and publication (Dickersin, 2005; Song et al., 2009)
– Peer reviewers and editors less likely to accept/publish null results? (Mahoney, 1977 vs. Song et al., 2009)

14 Dissemination biases
Studies with significant results are:
– Published faster (Hopewell et al., 2001)
– Cited and reprinted more often (Egger & Smith)
– More likely to be published in English than in other languages (Egger, Zellweger-Zahner et al.)
– Easier to locate (esp. in English)

15 Outcome reporting bias
Within studies with mixed results, significant results are more likely to be:
– Reported (mentioned at all)
– Fully reported (i.e., data provided)
(Chan et al., 2004, 2005; Williamson et al., 2006; see also a recent article on Tamiflu in the New York Review of Books)

16 Confirmation bias
Tendency to seek and accept information that confirms prior expectations (hypotheses) and to ignore evidence to the contrary (Bacon, 1621/1960; Watson, 1960, 1968; Mahoney, 1977; Fugelsang et al., 1994; Nickerson, 1998; Schrag, 1999)
Allegiance bias
– Researchers’ preferences predict results (Luborsky et al., 1999)

17 Other sources of bias
Selection bias
– Trivial properties of studies or reports affect recall and evaluation of information
– Memorable titles (Bushman & Wells, 2001)

18 Problems
Publication and reporting biases are cumulative (Altman, 2006)
– Tend to inflate estimates of effects
– Serve to maintain orthodoxy (popular theories/treatments)
These biases are ubiquitous, but often ignored

19 Better blueprints: systematic reviews
Aim to minimize bias and error in the review process
Develop and follow a pre-determined plan (protocol)
Use transparent (well-documented, replicable) procedures to locate, analyze, and synthesize the results of previous studies

20 Systematic reviews (SRs)
Steps to reduce bias and error:
– Set explicit inclusion/exclusion criteria
– Develop and document strategies for locating all relevant studies (regardless of publication status)
– Inter-rater agreement (reliability) on key decisions, data extraction, coding
– Formal study quality assessment (risk of bias)
– Meta-analysis (when possible) to synthesize results across studies
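The inter-rater agreement step is commonly quantified with Cohen’s kappa, a chance-corrected agreement statistic. A minimal sketch; the two screeners’ include/exclude decisions below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal rates."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum((count_a[c] / n) * (count_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1.0 - expected)

# Two screeners' decisions on ten abstracts (hypothetical data):
a = ["in", "in", "out", "out", "in", "out", "out", "in", "out", "out"]
b = ["in", "out", "out", "out", "in", "out", "out", "in", "out", "out"]
kappa = cohens_kappa(a, b)   # about 0.78 (one disagreement in ten)
```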

21 Meta-analysis (MA)
Set of statistical procedures used to assess:
– Averages across studies
– Variations across studies
– Potential sources of variation (moderators)
– Risk of bias (e.g., tests for publication & small-sample bias)

22 Systematic reviews don’t always include meta-analysis
– Might include narrative synthesis (or no synthesis)
– Can include multiple meta-analyses
Meta-analyses are not always based on systematic reviews
– Many use convenience samples of published studies
– Vulnerable to publication and dissemination biases

23 Some “systematic reviews” aren’t
Evidence-based standards for SRs & MA are based on methodological research (Cochrane Library)
Standards for the conduct of SRs developed by the Cochrane and Campbell Collaborations (Higgins & Green, 2009)
Standards for reporting SRs & MA: PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses; Moher et al., 2009)
Standards not followed by US Evidence-based Practice Centers, most peer-reviewed journals, etc.

24 Quality of raw materials matters
High-quality materials (studies) are needed to produce a strong, reliable product
“Best” materials depend on what we are building (aims)

25 What are studies made of?
Multiple concepts and measures

26 Building evidence
A simple example: one study, 30 outcome measures
– What did investigators make of the results?
– What did reviewers make of the results? (Littell, 2008)

27 An example (Brunk et al., 1987): Parent training vs multisystemic therapy
43 families of abused/neglected children were randomly assigned to parent training (PT) groups or multisystemic therapy (MST)
33/43 families completed treatment and provided data on outcomes immediately after treatment
30 outcomes (scales and subscales)

28 Results expected by chance (30 outcomes)
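The arithmetic behind “expected by chance” is basic multiple-testing math. A sketch that assumes 30 independent tests at alpha = .05; Brunk et al.’s scales and subscales are correlated, so this is only a rough benchmark.

```python
def chance_results(n_tests: int, alpha: float = 0.05):
    """Expected number of false positives, and the probability of at
    least one, when there are no true effects and tests are independent."""
    expected = n_tests * alpha
    p_any = 1.0 - (1.0 - alpha) ** n_tests
    return expected, p_any

expected, p_any = chance_results(30)   # 1.5 expected; p_any is about 0.79
```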

29 Results obtained (Brunk et al., 1987)

30 Results obtained (Brunk et al., 1987): Parent training vs multisystemic therapy (outcome data chart)

31 What did the investigators make of these results?
Data provided on all 7 statistically significant results
Data provided on 12 of 22 non-significant results
– Outcome reporting bias

32 What did the investigators make of these results? (Abstract)
Both groups showed decreased psychiatric symptoms, reduced stress, and reduced severity of identified problems.
MST was more effective than PT at restructuring parent–child relations.
PT was more effective than MST at reducing identified social problems.

33 What did the investigators build?
A balanced report (with some missing data)

34 What did the published reviews build?
“Parents in both groups reported decreases in psychiatric [symptoms] and reduced overall stress.... both groups demonstrated decreases in the severity of the identified problems.... [MST] improved [parent-child] interactions, implying a decreased risk for maltreatment of children in the MST condition” (p. 293).

35 What did the published reviews build? (outcome data chart)

36 What did the published reviews build? (outcome data chart)

37 What did the published reviews build? (outcome data chart)

38 Summary: Published reviews describing Brunk et al. (outcome data chart)

39 Summary: Published reviews describing Brunk et al.
Most reviews used a single phrase to characterize the results of this study, highlighting the advantages of one approach (MST)
– Ignoring valuable information on the relative advantages, disadvantages, and equivalent results of different approaches

40 Reviews include multiple studies
How do reviewers add them up (synthesize results)?

41 What methods do reviewers use?
Analysis of reviews of research on the effects of MST published after 1996 (Littell, 2008)
– 86+ reviews – more reviews than studies!
– Assessed 66 reviews
– Many “lite” reviews (rely on other reviews)
– 37 reviews cited one or more primary studies

42 What methods do reviewers use?
37 reviews cited one or more primary studies (Littell, 2008)
– Most were traditional, narrative summaries of convenience samples of published reports
– Most concluded that MST “works” (is consistently more effective than alternatives)
– Some concluded that MST is effective across problems, populations, and settings, citing Brunk et al. (1987) [only] as the evidence for effects in cases of child abuse and neglect (Burns et al., 2000; Kazdin, 2003; Kazdin & Weisz, 1998; Lansverk, 2007)

43 Better ways to build evidence
Using the science of research synthesis
An example…

44 A Cochrane/Campbell review
Multisystemic Therapy for social, emotional, and behavioral problems in youth aged 10–17 (Littell, Popa, & Forsythe, 2004, 2005)
– Protocol published in 2004; review published in 2005 in the Cochrane Library and the Campbell Library
– Series of articles in Children and Youth Services Review (includes a debate with MST developers)
– Update underway now

45 Campbell/Cochrane review
Search for relevant studies regardless of publication status
– Information retrieval specialists
– Multiple strategies for locating grey literature
Search for missing data from published studies
– Contacts with investigators
Formal data extraction & coding
– Reliability checks
Formal study quality assessment
– Identified methodological problems that had not been mentioned in the literature
– Ranked studies by methodological quality
Separate syntheses (meta-analyses) for conceptually distinct outcomes

46 Out-of-home placement

47 Self-reported delinquency

48 Family cohesion

49 Summary: SR of MST
Effects are not consistent across studies
– Few studies, most conducted by program developers in the USA
– All studies have mixed results across outcomes, except those that have null results on all outcomes
Contrary to the conclusions of most published reviews
– Which suggest that the effectiveness of MST is well established and consistent across studies

50 Why traditional reviews and well-meaning experts can be misleading
Scholars are human
– They rely on “natural” methods to filter and synthesize data
The human brain is
– Good at detecting patterns, maintaining homeostasis, defending territory
– Bad at complex math and revising beliefs (Runciman, 2007)
Research synthesis is too complex for informal methods and “cognitive algebra”

51 Conclusions
Different review methods produce different results
– Traditional methods are “haphazard” (Petticrew & Roberts, 2006) and can lead to the wrong conclusions
– Scientific methods are needed to minimize bias and error
“Science is cumulative but scientists rarely cumulate evidence scientifically” (Chalmers, Hedges, & Cooper, 2002)
We can use scientific principles and methods to synthesize evidence…

