Replication in Prevention Science Valentine, et al.


1 Replication in Prevention Science Valentine, et al

2 Expanding on the Flay et al. 2005 article by…
– Addressing the role of replication in prevention research and the ways in which replication should influence decisions regarding the suitability of programs & policies for dissemination
– "Does Study B replicate Study A?"

3 Document foundation:
1. Replication is an ongoing process of assembling a body of empirical evidence that speaks to the authenticity, robustness, & size of an effect.
   – "What does the available evidence say about the size of the effect attributable to Intervention A?"
   – The use of meta-analysis principles (even with only 2 studies)
2. Questions concerning replication vary a great deal; therefore this article:
   – Explores reasons for conducting replication
   – Identifies the types of research that can be considered replications
   – Considers factors that could influence the extent of replication research

4 Replication Research (RR) Context: Reproducibility implies:
a. The specifics of a study's design & implementation are reported at a level of detail that allows other researchers to completely repeat the experiment
b. The results are equivalent (& not merely that "both studies reject the null")
What challenges do social scientists face in replicating studies?

5 Replications are vital:
a. In efforts to identify effective interventions
b. To spur development of new interventions based on theoretical & empirical considerations
Understanding RR is critical in the evolution of prevention science & practice.

6 Types of RR:
a. Statistical replication: Testing whether the effects found in one study were due to chance
b. Generalizability replication: Testing whether a relationship observed in one study would generalize to conditions not observed in that study
c. Implementation replication: Testing the effects of variations in program implementation
d. Theory development replication: Testing the causal mechanisms underlying an intervention
e. Ad hoc replication: Changes to study conditions are not systematic, or covary with other changes

7 Interpreting RR:
a. Can a study be considered a replicate of another?
   – Subjective logic
   – Empirical evidence
b. Inferential framework
   – Ongoing evaluation necessary
c. Statistical framework
d. Important background assumptions
   – All relevant studies are available (publication bias is an issue)
   – Comparable study designs
e. Reframing the question about replication
   – "What does the available evidence say about the size of the effect attributable to Intervention A?" (i.e., a focus on effect size & the range of intervention effects)

8 Statistical Options for Results of a Small Number of Studies – Do 2 studies agree?

Vote counting based on statistical significance:
1. Most studies must be statistically significant to claim the intervention works
2. The statistical conclusions reached in individual studies are too dependent on the statistical assumptions used

Comparing the directions of the effects:
1. Direction is considered without reference to other information
2. Statistical power improves as information accumulates, even when power is low in the individual studies
3. With sufficient studies, this yields a reasonable approximation of the population effect size

Comparability of effect sizes & the role of confidence intervals:
1. Comparable effect sizes = study results replicated
2. CIs show the likely range of a population effect
3. Determine whether the mean from an attempted replication fell within the CI of the mean from the original study
4. Applicable with only 2 studies
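The confidence-interval option above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' procedure; the effect sizes and standard error are invented, and a normal approximation is assumed for the 95% CI:

```python
# Hypothetical standardized effect sizes for an original study
# and an attempted replication (numbers invented for illustration).
d_original, se_original = 0.45, 0.12
d_replication = 0.30

# 95% CI for the original study's effect (normal approximation, z = 1.96)
ci_low = d_original - 1.96 * se_original
ci_high = d_original + 1.96 * se_original

# The check from the slide: does the replication's point estimate
# fall within the original study's confidence interval?
replicated = ci_low <= d_replication <= ci_high
print(f"Original 95% CI: [{ci_low:.2f}, {ci_high:.2f}]")
print(f"Replication effect {d_replication:.2f} inside CI: {replicated}")
```

One caveat worth noting: a small original study produces a wide CI, so nearly any replication result will "fall inside" it; the check is lenient precisely when the original evidence is weakest.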

9 Statistical Options for Results of a Small Number of Studies – What does the available evidence say about the size of the effect attributable to Intervention A?

Combining effects using techniques borrowed from fixed-effects meta-analysis:
1. All studies are presumed to estimate the same population parameter & to yield sample statistics that differ only because of random error
2. Advantages: larger studies' effect sizes receive proportionally more weight and smaller studies' proportionally less; focuses attention on the weighted average effect size and its CI
3. Limitations: not good for ad hoc replications

Combining effects using techniques borrowed from random-effects meta-analysis:
1. Effects are presumed not to share the same underlying effect size, owing to unknown study characteristics
2. Advantages: can be used for ad hoc replications
3. Limitations: statistical power is often low, which is problematic when there are few studies; reduces to a fixed-effects approach if the study effects differ by no more than expected given subject-level sampling error alone; adds uncertainty to the estimation of the weighted average effect size
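The fixed-effects option above amounts to an inverse-variance weighted average. A minimal sketch with two hypothetical studies (all numbers invented for illustration):

```python
import math

# Hypothetical effect sizes and standard errors for two studies
effects = [0.45, 0.30]
std_errs = [0.12, 0.10]

# Fixed-effects (inverse-variance) pooling: each study is weighted by
# 1/SE^2, so more precise studies count proportionally more.
weights = [1.0 / se**2 for se in std_errs]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% CI for the weighted average effect size (normal approximation)
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled effect: {pooled:.3f}, 95% CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
```

A random-effects analysis would follow the same pattern but add an estimate of between-study variance to each study's weight, which is why it reduces to the fixed-effects result when the studies differ by no more than sampling error.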

10 Statistical Options for Results of a Small Number of Studies – Using multiple inferential strategies:
Combine the options above when considering the state of the cumulative evidence generated from ad hoc replications

11 Group Exercise- Part 2: Refer to the 6 case studies using multiple inferential strategies. Work in groups of 4 to come up with a case study that uses multiple inferential strategies.

12 A Non-Statistical Approach
Proximal similarity: A program implementer uses their own judgment to make a decision.
– Assumes that sample characteristics moderate the intervention effect
– Statistical approaches may be just as effective

13 Investigator Independence in RR: Group Exercise – Part 3
Work in groups of 4. Briefly answer the following questions:
1. Scientists are human. Why is this a problem for RR?
2. Should scientists/program developers be trusted?
3. Why is it important to disclose financial incentives?
4. Why are replications an appropriate stage for independent investigators in prevention science?

14 Highest standards of investigator independence:
a. Funded by a body unrelated to the program under investigation & its developer
b. Foresee no involvement with the development of the program being evaluated
Investigators involved in RR should strive for this.

15 Fostering a Replication-Friendly Environment
a. Incentives & disincentives for doing replications
   1. Braided funding
   2. Place priority on programs with replicated results
   3. Value RR & therefore publish RR
   4. Reward scientists doing RR
   5. Guide program, policy, & practice relevant to public health
b. Improve reporting standards
   – CONSORT
   – SPR's Standards for Efficacy, Effectiveness, & Dissemination
   – Finding ways to report negative results (e.g., the Journal of Negative Results in Biomedical Research)

16 Replication & Dissemination of Evidence-Based Practice
Replication can be done efficiently:
– As an early stage of testing an effective program in a new community
– Via dynamic waitlist designs
– The early stage of partnership building is critical
– Integrating replication into dissemination ensures the effectiveness of the program & training

17 Summary
Prevention science can impact the public's health if:
– More replications are conducted
– Replications are systematic, thoughtful, & conducted with full knowledge of the trials that have preceded them
– State-of-the-art techniques are used to summarize the body of evidence on the effects of interventions
