Regression Discontinuity Design William Shadish University of California, Merced

Regression Discontinuity Design Units are assigned to conditions based on a cutoff score on a measured covariate. For example, communities that exceed a certain cutoff on arrests for drunk driving among young drivers per 100,000 receive the treatment, and communities below that cutoff serve as the comparison condition. The effect is measured as the discontinuity between the treatment and control regression lines at the cutoff (not as the group mean difference).
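As a concrete illustration of how that discontinuity is estimated, here is a minimal simulated sharp RD analysis in Python. The data, variable names, cutoff value, and effect size are all hypothetical assumptions added for illustration; this is a sketch of the standard estimator (regress the outcome on a treatment indicator and the cutoff-centered assignment variable), not an analysis taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
assign = rng.uniform(0, 100, n)           # assignment variable, e.g., arrests per 100,000 (hypothetical)
cutoff = 50.0                             # illustrative cutoff
treat = (assign >= cutoff).astype(float)  # sharp assignment rule at the cutoff

# Simulated outcome: linear in the assignment variable plus a jump of 5.0 at the cutoff.
y = 10.0 + 0.2 * assign + 5.0 * treat + rng.normal(0.0, 2.0, n)

# Center the assignment variable at the cutoff so the coefficient on `treat`
# is the discontinuity at the cutoff itself, not a group mean difference.
x = assign - cutoff
X = np.column_stack([np.ones(n), treat, x, treat * x])  # separate slopes on each side
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated discontinuity at the cutoff: {beta[1]:.2f}")  # should be near 5.0
```

Because the assignment variable is centered at the cutoff, the coefficient on the treatment indicator is the estimated jump at the cutoff, which is the RD treatment effect.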

Advantages When properly implemented and analyzed, RD yields an unbiased estimate of the treatment effect (see Rubin, 1977). Communities are assigned to treatment based on their need for treatment, which is consistent with the way many policies are implemented.

Disadvantages Statistical power is considerably lower than in a randomized experiment of the same size, so careful attention to power is crucial. Effects are unbiased only if the functional form of the relationship between the assignment variable and the outcome variable is correctly modeled (see the sketch below), including:
–Nonlinear relationships
–Interactions (e.g., a different slope on each side of the cutoff)
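A minimal sketch of what "modeling the functional form" can look like in practice, assuming the same simulated variables as above. The function name, its arguments, and the polynomial-plus-interaction specification are illustrative, not the only defensible choice; if the true relationship is curved but only a straight line is fit, the curvature can masquerade as a jump at the cutoff and bias the estimate.

```python
import numpy as np

def rd_estimate(y, assign, treat, cutoff, degree=1):
    """Estimate the discontinuity at the cutoff using polynomials of the given
    degree in the centered assignment variable, with treatment interactions so
    the curve can differ on each side. degree=1 is the usual linear RD model;
    degree=2 adds quadratic terms, and so on."""
    x = assign - cutoff
    cols = [np.ones_like(x), treat]
    for d in range(1, degree + 1):
        cols.append(x ** d)            # polynomial term in the assignment variable
        cols.append(treat * x ** d)    # interaction: lets the shape differ by side
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # coefficient on the treatment indicator = estimated jump

# Example usage with hypothetical data: rd_estimate(y, assign, treat, cutoff=50.0, degree=2)
```

Comparing estimates across degrees (or against a plot of the data) is one informal way to check sensitivity to the assumed functional form.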

Citations to Med/PH Examples
Cullen, K. W., Koehly, L. M., Anderson, C., Baranowski, T., Prokhorov, A., Basen-Engquist, K., Wetter, D., & Hergenroeder, A. (1999). Gender differences in chronic disease risk behaviors through the transition out of high school. American Journal of Preventive Medicine, 17, 1-7.
Finkelstein, M. O., Levin, B., & Robbins, H. (1996a). Clinical and prophylactic trials with assured new treatment for those at greater risk: I. A design proposal. American Journal of Public Health, 86.
Finkelstein, M. O., Levin, B., & Robbins, H. (1996b). Clinical and prophylactic trials with assured new treatment for those at greater risk: II. Examples. American Journal of Public Health, 86.

Improvements to the Design
–Modeling of the functional form is improved if it can be observed prior to implementation of the treatment (e.g., if archival data are used).
–Use all the standard methods to improve power (e.g., adding covariates; see the sketch after this list).
–Combine randomized and nonrandomized designs.
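One of those standard power improvements, adding a baseline covariate, amounts to one extra column in the RD regression. The sketch below assumes a hypothetical pre-treatment measure called `pretest` that is correlated with the outcome; the function name and arguments are illustrative.

```python
import numpy as np

def rd_estimate_with_covariate(y, assign, treat, cutoff, pretest):
    """Linear RD estimate of the discontinuity at the cutoff, adjusted for a
    baseline covariate (`pretest`) to reduce residual variance and improve power."""
    x = assign - cutoff
    X = np.column_stack([np.ones_like(x), treat, x, treat * x, pretest])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # discontinuity at the cutoff, adjusted for the pretest
```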

Using Regression Discontinuity as a Design Element
–For those excluded from the experiment by a quantitative eligibility cutoff, continue to measure their outcomes; they can then be added to the design to increase power.
–For those falling below a cutoff on a measure of the outcome, or of receipt of treatment, give a booster and reanalyze that part of the data as an RD design.

Summary Of the designs being considered for this intervention, RD is the only one that yields an unbiased estimate. RD can be used with both archival data and original data. However, there is a question about whether it can be implemented with sufficient power in this case.