Replication in Prevention Science (Valentine et al.)

Expanding on the Flay et al. (2005) article by addressing the role of replication in prevention research and the ways in which replication should influence decisions about the suitability of programs & policies for dissemination:
– "Does Study B replicate Study A?"

Document foundation:
1. Replication is an ongoing process of assembling a body of empirical evidence that speaks to the authenticity, robustness, & size of an effect.
– "What does the available evidence say about the size of the effect attributable to Intervention A?"
– The use of meta-analysis principles (even with only 2 studies)
2. Questions concerning replication vary a great deal; therefore this article:
– Explores reasons for conducting replications
– Identifies the types of research that can be considered replications
– Considers factors that could influence the extent of replication research

Replication Research (RR) Context:
Reproducibility implies:
a. Specifics of a study's design & implementation are reported at a level of detail that allows other researchers to repeat the experiment completely
b. Results are equivalent (and not merely "both studies reject the null")
Discussion: What challenges do social scientists face in replicating studies?

Replications are vital:
a. In efforts to identify effective interventions
b. To spur development of new interventions based on theoretical & empirical considerations
Understanding RR is critical to the evolution of prevention science & practice.

Types of RR:
a. Statistical replication: Testing whether the effects found in one study were due to chance
b. Generalizability replication: Testing whether a relationship observed in one study generalizes to conditions not observed in that study
c. Implementation replication: Testing the effects of variations in program implementation
d. Theory development replication: Testing the causal mechanisms underlying an intervention
e. Ad hoc replication: Study conditions are not systematic or covary with other changes

Interpreting RR:
a. Can a study be considered a replication of another?
– Subjective logic
– Empirical evidence
b. Inferential framework
– Ongoing evaluation necessary
c. Statistical framework
d. Important background assumptions
– All relevant studies are available (publication bias is an issue)
– Comparable study designs
e. Reframing the question about replication
– "What does the available evidence say about the size of the effect attributable to Intervention A?" (i.e., a focus on the size & range of intervention effects)
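
To make the reframed question concrete, here is a minimal sketch of computing an effect size and its confidence interval for a single study. All numbers are hypothetical, and the formulas are the standard Cohen's d with its common large-sample variance approximation, not a procedure prescribed by Valentine et al.:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    # Standard large-sample approximation to the variance of d.
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return d, var_d

# Hypothetical Study A: intervention vs. control on a prevention outcome.
d, var_d = cohens_d(mean_t=12.4, mean_c=10.9, sd_t=4.1, sd_c=4.3,
                    n_t=120, n_c=118)
lo, hi = d - 1.96 * math.sqrt(var_d), d + 1.96 * math.sqrt(var_d)
print(f"d = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")  # d = 0.36, CI (0.10, 0.61)
```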

Statistical Options for Results of a Small Number of Studies: Do 2 studies agree?

Option 1: Vote counting based on statistical significance
1. Most studies have to be statistically significant to claim the intervention works
2. Statistical conclusions reached in individual studies are too dependent on the statistical assumptions used

Option 2: Comparing the directions of the effects
1. Direction is considered without reference to other information
2. Statistical power improves as information accumulates, even when power is low in the individual studies
3. With sufficient studies, this yields a reasonable approximation of the population effect size

Option 3: Comparability of effect sizes & the role of confidence intervals
1. Comparable effect sizes = study results replicated
2. CIs show the likely range of a population effect
3. Determine whether the mean from an attempted replication falls within the CI of the mean from the original study
4. Applicable with only 2 studies
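
A minimal sketch of the "do 2 studies agree?" checks from the options above: comparing the directions of the effects, and testing whether the replication's effect size falls within the original study's confidence interval. The effect sizes, variances, and helper names are hypothetical:

```python
import math

def falls_within_ci(d_orig, var_orig, d_rep, z=1.96):
    """Option 3 above: does the replication's effect size fall within
    the 95% CI around the original study's effect size?"""
    half_width = z * math.sqrt(var_orig)
    return (d_orig - half_width) <= d_rep <= (d_orig + half_width)

# Hypothetical effect sizes: Study A (original) and Study B (replication).
d_a, var_a = 0.36, 0.017
d_b = 0.21

print("Directions agree:", (d_a > 0) == (d_b > 0))   # Option 2: sign comparison
print("B within A's CI:", falls_within_ci(d_a, var_a, d_b))  # True
```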

Statistical Options for Results of a Small Number of Studies: What does the available evidence say about the size of the effect attributable to Intervention A?

Option 1: Combining effects using techniques borrowed from fixed effects meta-analysis
1. All studies are presumed to estimate the same population parameter & to yield sample statistics that differ only because of random error
2. Advantages: Larger studies' effect sizes get proportionally more weight and smaller studies' effect sizes proportionally less; focuses attention on the weighted average effect size and its CI
3. Limitations: Not good for ad hoc replications

Option 2: Combining effects using techniques borrowed from random effects meta-analysis
1. Effects are presumed not to share the same underlying effect size, owing to unknown study characteristics
2. Advantages: Can be used for ad hoc replications
3. Limitations: Statistical power is often low, which is problematic when there are few studies; reduces to a fixed effects approach if the study effects differ by no more than expected given subject-level sampling error alone, and this adds uncertainty to the estimation of the weighted average effect size
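
A sketch of both pooling approaches, assuming each study is summarized by an effect size and its variance. The fixed-effect pool is the standard inverse-variance weighted average; the random-effects pool uses the common DerSimonian-Laird estimator of between-study variance (one conventional choice, not necessarily the authors'). Note how it collapses to the fixed-effect answer when the observed differences are no larger than sampling error alone, as the limitations row above describes:

```python
import math

def pool_fixed(effects, variances):
    """Fixed-effect pooling: inverse-variance weighted average, so
    larger (lower-variance) studies get proportionally more weight."""
    w = [1.0 / v for v in variances]
    d_bar = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    return d_bar, 1.0 / sum(w)  # pooled effect and its variance

def pool_random(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird estimate of
    between-study variance (tau^2). When study effects differ by no
    more than subject-level sampling error, tau^2 = 0 and the result
    reduces to the fixed-effect answer."""
    w = [1.0 / v for v in variances]
    d_fe, _ = pool_fixed(effects, variances)
    q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    d_re = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    return d_re, 1.0 / sum(w_star), tau2

# Hypothetical pair of studies: effect sizes and their variances.
effects, variances = [0.36, 0.21], [0.017, 0.024]
d_fe, v_fe = pool_fixed(effects, variances)
d_re, v_re, tau2 = pool_random(effects, variances)
print(f"fixed:  d = {d_fe:.2f} +/- {1.96 * math.sqrt(v_fe):.2f}")
print(f"random: d = {d_re:.2f} +/- {1.96 * math.sqrt(v_re):.2f}, tau^2 = {tau2:.3f}")
```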

Statistical Option for Results of a Small Number of Studies: Using multiple inferential strategies
Description: Combine the options above when considering the state of the cumulative evidence generated from ad hoc replications (see the sketch below).
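
A sketch of what "multiple inferential strategies" might look like in practice, reusing the hypothetical helpers from the earlier sketches and reporting the checks side by side rather than leaning on any single decision rule:

```python
# Reuses falls_within_ci, pool_fixed, and pool_random from the sketches
# above (all hypothetical helpers, not names from the article).
def cumulative_evidence(effects, variances):
    """Run several inferential strategies and report all of them."""
    d_fe, _ = pool_fixed(effects, variances)
    d_re, _, tau2 = pool_random(effects, variances)
    return {
        "directions agree": len({d > 0 for d in effects}) == 1,
        "replication within original CI":
            falls_within_ci(effects[0], variances[0], effects[1]),
        "pooled d (fixed)": round(d_fe, 2),
        "pooled d (random)": round(d_re, 2),
        "between-study variance tau^2": round(tau2, 3),
    }

print(cumulative_evidence([0.36, 0.21], [0.017, 0.024]))
```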

Group Exercise (Part 2): Refer to the 6 case studies that use multiple inferential strategies. Work in groups of 4 to come up with a case study that uses multiple inferential strategies.

A Non-Statistical Approach
Proximal similarity: The program implementer uses their own judgment to make a decision.
– Assumes that sample characteristics moderate the intervention effect
– Statistical approaches may be just as effective

Investigator Independence in RR: Group Exercise (Part 3)
Work in groups of 4. Briefly answer the following questions:
1. Scientists are human. What problems does this pose for RR?
2. Should scientists/program developers be trusted?
3. Why is it important to disclose financial incentives?
4. Why are replications an appropriate stage for independent investigators in prevention science?

Highest standards of investigator independence:
a. Funded by a body unrelated to the program under investigation & its developer
b. Foresee no involvement with the development of the program being evaluated
Investigators involved in RR should strive for this.

Fostering a Replication-Friendly Environment
a. Incentives & disincentives for doing replications
1. Braided funding
2. Place priority on programs with replicated results
3. Value RR & therefore publish RR
4. Reward scientists doing RR
5. Guide program, policy, & practice relevant to public health
b. Improve reporting standards
– CONSORT
– SPR's Standards for Efficacy, Effectiveness, & Dissemination
– Finding ways to report negative results (e.g., Journal of Negative Results in Biomedical Research)

Replication & Dissemination of Evidence-Based Practice
Replication can be done efficiently:
– As an early stage of testing an effective program in a new community
– Through dynamic waitlist designs
– The early stage of partnership building is critical
– Integrating replication into dissemination ensures the effectiveness of the program & training

Summary
Prevention science can impact the public's health if:
– More replications are conducted
– Replications are systematic, thoughtful, & conducted with full knowledge of the trials that preceded them
– State-of-the-art techniques are used to summarize the body of evidence on the effects of interventions