Differences-in-Differences

Differences-in-Differences

Identifying Assumption: Whatever happened to the control group over time is what would have happened to the treatment group in the absence of the program.

[Graph: outcome over time for the treatment (T) and control (C) groups, pre- and post-intervention]

Effect of program using only pre- & post-data from the T group: ignores the general time trend.

Effect of program using only the T & C comparison from the post-intervention period: ignores pre-existing differences between the T & C groups.

Effect of program using the difference-in-differences: takes into account both the pre-existing differences between T & C and the general time trend.
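The three comparisons above can be sketched numerically. This is a minimal illustration with hypothetical group means, not data from any actual program:

```python
# Hypothetical mean outcomes for treatment (T) and control (C), pre and post.
y_T_pre, y_T_post = 10.0, 18.0
y_C_pre, y_C_post = 8.0, 12.0

# Naive pre/post comparison for T alone (ignores the general time trend).
naive_time = y_T_post - y_T_pre

# Naive T vs. C comparison post-intervention (ignores the pre-existing gap).
naive_cross = y_T_post - y_C_post

# Difference-in-differences: nets out both the time trend and the gap.
did = (y_T_post - y_T_pre) - (y_C_post - y_C_pre)

print(naive_time, naive_cross, did)  # 8.0 6.0 4.0
```

Here the control group's change over time (4.0) is subtracted from the treatment group's change (8.0), so both naive comparisons overstate the program effect.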

Uses of Diff-in-Diff

The simple two-period, two-group comparison is very useful in combination with randomization, matching, or regression discontinuity (RD). Can also do much more complicated “cohort” analysis, comparing many groups over many time periods.
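The two-period, two-group case is usually estimated as a regression with a group-by-period interaction, whose coefficient is the diff-in-diff estimate. A sketch on simulated data (all coefficients and the seed are illustrative assumptions):

```python
import numpy as np

# Simulate y = b0 + b1*treat + b2*post + b3*(treat*post) + noise,
# where b3 is the true program effect the DiD regression should recover.
rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, n)   # group indicator
post = rng.integers(0, 2, n)    # period indicator
true_effect = 4.0
y = 8.0 + 2.0 * treat + 4.0 * post + true_effect * treat * post \
    + rng.normal(0, 1, n)

# OLS via least squares; the interaction coefficient is the DiD estimate.
X = np.column_stack([np.ones(n), treat, post, treat * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"DiD estimate (b3): {beta[3]:.2f}")  # close to 4.0
```

The interaction term does exactly what the four-cell arithmetic does, but the regression form extends naturally to covariates and to many groups and periods.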

Cohort Analysis Example #1

Bleakley’s paper on malaria eradication in the Americas.
Treated vs. Control: those who were (or were not) children in malaria-endemic regions.
Pre vs. Post: DDT spraying.
“In both absolute terms and relative to the comparison group of non-malarious areas, cohorts born after eradication had higher income and literacy as adults than the preceding generations.”

Cohort Analysis Example #2

Frankenburg et al.’s paper on the expansion of health services (midwives).
Treated vs. Control: those who were (or were not) children in areas that eventually received midwives.
Pre vs. Post: government placement of midwives in villages.
“[T]he nutritional status of children fully exposed to a midwife during early childhood is significantly better than that of their peers of the same age and cohort in communities without a midwife. These children are also better off than children measured at the same age from the same communities, but who were born before the midwife arrived.”
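A cohort-style diff-in-diff of this kind can be written as a regression with birth-cohort fixed effects and an area-by-exposure interaction. The sketch below uses simulated data in the spirit of the malaria example; all magnitudes, years, and variable names are hypothetical, not taken from the paper:

```python
import numpy as np

# Simulated cohorts: adult income depends on a malarious-area penalty, a
# smooth cohort trend, and a gain for cohorts born after eradication in
# malarious areas (the effect a cohort DiD should recover).
rng = np.random.default_rng(1)
n = 5000
malarious = rng.integers(0, 2, n)           # childhood area endemicity
birth_year = rng.integers(1920, 1960, n)    # birth cohort
post = (birth_year >= 1940).astype(float)   # born after (hypothetical) eradication
true_effect = 2.0
income = (5.0 - 1.0 * malarious + 0.05 * (birth_year - 1920)
          + true_effect * malarious * post + rng.normal(0, 1, n))

# Regress income on the area indicator, birth-cohort fixed effects (1920 as
# the base year), and the interaction; the cohort dummies absorb the "post"
# main effect, so only the interaction is added.
cohort_fe = (birth_year[:, None] == np.arange(1921, 1960)).astype(float)
X = np.column_stack([np.ones(n), cohort_fe, malarious, malarious * post])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)
print(f"cohort DiD estimate: {beta[-1]:.2f}")  # close to true_effect
```

This is the "many groups, many periods" generalization: the cohort fixed effects play the role of the pre/post dummy, and the area indicator plays the role of the treatment group.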

Robustness Checks

If possible, use data on multiple pre-program periods to show that the difference between the treated & control groups is stable. (It is not necessary for the trends to be parallel, only to know the trend function for each group.)

If possible, use data on multiple post-program periods to show that an unusual difference between treated & control appears only concurrent with the program.

Alternatively, use data on multiple indicators to show that a response to the program appears only where we expect it to (e.g., the diff-in-diff estimate of the impact of ITN distribution on diarrhea should be zero).
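The multiple-pre-period check is often run as a placebo test: re-estimate the diff-in-diff using only pre-program periods, pretending the program started earlier. If the identifying assumption holds, the placebo estimate should be near zero. A sketch on simulated data (the trend, gap, effect size, and timing are all assumptions):

```python
import numpy as np

# Periods 0..3; the simulated "program" reaches the treated group at t = 2.
rng = np.random.default_rng(2)
n_per_cell = 500

def cell_mean(treated, t):
    """Sampled mean outcome for one group-period cell."""
    trend = 1.5 * t                        # common time trend
    gap = 3.0 if treated else 0.0          # stable pre-existing difference
    effect = 4.0 if (treated and t >= 2) else 0.0
    return gap + trend + effect + rng.normal(0, 1, n_per_cell).mean()

def did(t_pre, t_post):
    """Two-by-two diff-in-diff between two chosen periods."""
    m = {(g, t): cell_mean(g, t) for g in (0, 1) for t in (t_pre, t_post)}
    return (m[1, t_post] - m[1, t_pre]) - (m[0, t_post] - m[0, t_pre])

placebo = did(0, 1)  # both periods pre-program: expect roughly 0
actual = did(1, 2)   # straddles the program start: expect roughly 4
print(f"placebo: {placebo:.2f}, actual: {actual:.2f}")
```

A placebo estimate far from zero would signal diverging pre-trends, undermining the identifying assumption before any post-program comparison is made.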