Identification: Difference-in-Difference estimator


Identification: Difference-in-Difference estimator Presented by Ziyodullo Parpiev at Regional Summer School for Young Economists June 19, 2017

Today's Class
- Non-experimental methods: difference-in-differences
- Understanding how it works
- How to test the assumptions
- Some problems and pitfalls

Experiments are good, but what if we don't have an experiment?
We would like to find a group that is exactly like the treatment group but didn't get the treatment. This is hard to do because:
- there are lots of unobservables
- data is limited
- there is selection into treatment

Water pump on Broadwick street, Soho, London

Background Information
- Water was supplied to households by competing private companies
- Sometimes different companies supplied households in the same street
- In south London there were two main companies:
  - Lambeth Company (water supply from Thames Ditton, 22 miles upstream)
  - Southwark and Vauxhall Company (water supply from the Thames in London)

In the 1853/54 cholera outbreak, death rates per 10,000 people by water company were:
- Lambeth: 10
- Southwark and Vauxhall: 150
This might be the water, but perhaps other factors were at work. Snow therefore compared death rates in the 1849 epidemic:
- Lambeth: 150
- Southwark and Vauxhall: 125
In 1852, the Lambeth Company had moved its supply from Hungerford Bridge upstream to Thames Ditton.

The effect of clean water on cholera death rates (deaths per 10,000 people):

                          1849    1853/54   Difference
  Lambeth                  150       10        -140
  Southwark and Vauxhall   125      150          25
  Difference                25     -140        -165

Counterfactual 1: the pre-treatment difference between treatment and control (25); assume this difference is fixed over time.
Counterfactual 2: the 'control' group's change over time (+25); assume this change would also have held for the 'treatment' group.
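The arithmetic behind the table can be reproduced in a few lines. A minimal sketch using the slide's figures (variable names are illustrative):

```python
# Difference-in-differences arithmetic for Snow's cholera comparison
# (death rates per 10,000 people, as reported on the slide).
lambeth = {"1849": 150, "1853/54": 10}               # treatment: clean supply from 1852
southwark_vauxhall = {"1849": 125, "1853/54": 150}   # control: unchanged supply

# Before-after change within each group
d_treat = lambeth["1853/54"] - lambeth["1849"]                          # -140
d_control = southwark_vauxhall["1853/54"] - southwark_vauxhall["1849"]  # +25

# D-in-D: treatment change minus control change
did = d_treat - d_control
print(did)  # -165: estimated effect of the clean water supply
```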

This is the basic idea of differences-in-differences. We have already seen the idea of using differences to estimate causal effects with treatment/control groups in experimental data. We need a counterfactual because we don't observe the outcome of the treatment group had they not been treated, i.e. (Y0 | T=1). Often we would like to find 'treatment' and 'control' groups that can be assumed to be similar in every way except receipt of the treatment.

A Weaker Assumption is...
Assume that, in the absence of treatment, the difference between the 'treatment' and 'control' groups is constant over time. With this assumption we can use observations on the treatment and control groups pre- and post-treatment to estimate the causal effect.
Idea:
- The difference pre-treatment is the 'normal' difference
- The difference post-treatment is the 'normal' difference + the causal effect
- The difference-in-differences is therefore the causal effect

A Graphical Representation
[Figure: outcome y against time (pre- and post-treatment), with lines for the treatment group, the control group, and the treatment group's counterfactual. A is the treatment group's post-treatment outcome, B the control group's, and C the counterfactual's.]
- A − B = standard differences estimator
- C − B = counterfactual 'normal' difference
- A − C = difference-in-differences estimate

Assumption of the D-in-D Estimate
The D-in-D estimate assumes that trends in the outcome variable are the same for the treatment and control groups: a fixed difference over time. This is not testable, because we never observe the counterfactual. Is it reasonable? With two periods we can't do anything to check; with more periods we can see whether the control and treatment groups 'trend together'.

Some Notation
Define μit = E(yit), where i=0 is the control group and i=1 the treatment group, and t=0 is the pre-period and t=1 the post-period.
The standard 'differences' estimate of the causal effect is an estimate of: μ11 − μ01
The 'differences-in-differences' estimate of the causal effect is an estimate of: (μ11 − μ10) − (μ01 − μ00)

How to Estimate?
We can write the D-in-D estimate as: (μ11 − μ10) − (μ01 − μ00)
This is simply the difference in the changes of the treatment and control groups, so it can be estimated as:
- the before-after difference for the 'treatment' group, minus
- the before-after difference for the 'control' group
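With individual-level data, the estimator is just the difference of the two before-after cell means. A minimal sketch on simulated data (the data-generating process, seed, and names are illustrative, not from the slides):

```python
import numpy as np

# Simulate a 2x2 design: group (treatment/control) x period (before/after)
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)   # 1 = treatment group, 0 = control group
post = rng.integers(0, 2, n)    # 1 = after period, 0 = before period
true_effect = 2.0
y = 1.0 + 0.5 * group + 1.5 * post + true_effect * group * post + rng.normal(0, 1, n)

# Cell mean mu_it for group i in period t
mu = lambda g, t: y[(group == g) & (post == t)].mean()

# (mu_11 - mu_10) - (mu_01 - mu_00)
did = (mu(1, 1) - mu(1, 0)) - (mu(0, 1) - mu(0, 0))
print(did)  # close to the true effect of 2.0
```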

Can we do this? It is simply the 'differences' estimator applied to the differenced data. To implement it this way, however, we need repeat observations on the same individuals, and we may not have them: the individuals observed pre- and post-treatment may be different.

In this case we can instead estimate a regression with a treatment-group dummy, an after-period dummy, and their interaction. The coefficient on the treatment dummy is the main effect of the treatment group (in the before period, because the after dummy is 0); the coefficient on the after dummy is the main effect of the after period (for the control group, because the treatment dummy is 0); and the coefficient on the interaction is the D-in-D estimate.
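The regression version can be sketched with a plain dummy-variable design matrix; because the model is saturated, the interaction coefficient reproduces the D-in-D of the four cell means exactly. Simulated data; all names and parameters are illustrative:

```python
import numpy as np

# Regression form of D-in-D:
#   y = b0 + b1*Treat + b2*Post + b3*(Treat*Post) + e
# b1: treatment-group effect in the before period (Post = 0)
# b2: time effect for the control group (Treat = 0)
# b3: the D-in-D estimate
rng = np.random.default_rng(1)
n = 2000
treat = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
y = 1.0 + 0.5 * treat + 1.5 * post + 2.0 * treat * post + rng.normal(0, 1, n)

# OLS via least squares on [1, Treat, Post, Treat*Post]
X = np.column_stack([np.ones(n), treat, post, treat * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[3])  # close to the true effect of 2.0
```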

Recap: Assumptions for Diff-in-Diff
- Additive structure of effects: we are imposing a linear model in which the group- and time-specific effects enter only additively.
- No spillover effects: the treatment group received the treatment and the control group did not.
- Parallel time trends: there are fixed differences over time. If the differences vary over time, our second difference will still include a time effect.

Issue 1: Other Regressors
- We can include other regressors as usual, but think about the way they enter the estimating equation. E.g. if the level of W affects the level of y, then the differenced version should include ΔW.
- Conditional comparisons may be useful if you think some groups are more comparable, or have different trends, than others.

Issue 2: Differential Trends in Treatment and Control Groups
The key assumption underlying the validity of the D-in-D estimate is that the difference between the treatment and control groups would have remained constant in the absence of treatment. We can never test this directly. With only two periods we get no idea of its plausibility, but with more than two periods we can.
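With several pre-treatment periods, one informal check is to compare the groups' pre-treatment slopes. A sketch on simulated group means (the data-generating process and all numbers are illustrative; in this simulation both groups share a common trend of 0.3 per period before treatment):

```python
import numpy as np

# Simulated group means over 5 periods; treatment starts at period 0
rng = np.random.default_rng(2)
periods = np.arange(-3, 2)
treat_means, control_means = [], []
for t in periods:
    effect = 2.0 if t >= 0 else 0.0   # treatment effect only after t = 0
    treat_means.append(1.0 + 0.3 * t + effect + rng.normal(0, 0.05))
    control_means.append(0.5 + 0.3 * t + rng.normal(0, 0.05))

# Fit a line to each group's pre-treatment means; similar slopes
# support the 'trending together' assumption
pre = periods < 0
treat_slope = np.polyfit(periods[pre], np.array(treat_means)[pre], 1)[0]
control_slope = np.polyfit(periods[pre], np.array(control_means)[pre], 1)[0]
print(treat_slope, control_slope)  # both near the common trend of 0.3
```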

Differences-in-Differences: Summary
- A very useful and widespread approach.
- Its validity depends on the assumption that trends would have been the same in the absence of treatment.
- We often need more than 2 periods to test this: examine pre-treatment trends for the treatment and control groups to see whether the 'fixed differences' assumption is plausible, and check for an Ashenfelter dip.