Impact Evaluation using DID


Impact Evaluation using DID: Credit Seminar by Gourav Kumar Vani, Roll No. 10678

Content

- Introduction
- Monitoring & Evaluation Framework
- Basics of Impact Evaluation
- Problem of Counterfeit Counterfactual
- Methods to Overcome Bias
- DID and its Assumptions
- Advantages
- Limitations of DID
- Case Study
- Cases Where DID is Applicable
- Alternatives to DID

INTRODUCTION

Introduction

Projects are part of every organization engaged in making lives better for people in the project area. A project is defined as "an investment activity meant to provide returns to a specific clientele group through a specific activity, with a specific objective, for the development of a specific area (subject to time and budget constraints). It should facilitate analysis in planning, financing, implementation, monitoring, controlling and evaluation."

Source: Agricultural Economics (2004), S. Subba Reddy et al., pp. 467-468.

Project Cycle

The project cycle consists of Identification, Formulation, Appraisal, Implementation, Monitoring and Evaluation; the last two stages together make up Monitoring & Evaluation (M&E).

Source: Agricultural Economics (2004), S. Subba Reddy et al., p. 468.

MONITORING & EVALUATION FRAMEWORK

Monitoring & Evaluation Framework

- Identify the "goals" the project is designed to achieve. Example: poverty reduction.
- Identify "key indicators" that can be used to monitor progress against these goals. Example: the proportion of individuals living on less than $1 a day.
- Set "targets" that quantify the level of the indicators to be reached. Example: to halve the proportion of people living on less than $1 a day.
- Establish a "monitoring system" to track progress towards achieving the targets.
- Evaluation: a systematic and objective assessment of the results achieved by the project. It seeks to establish whether changes in the target are due only to the specific policies undertaken. Its main forms are process evaluation, cost-benefit analysis and impact evaluation.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 7-11.

Monitoring & Evaluation Framework: The Results Chain

- Allocation: the allocation of resources to the project.
- Inputs: resources at the disposal of the project, including staff and budget.
- Activities: actions taken or work performed to convert inputs into outputs.
- Outputs: tangible goods and services that the project activities produce; they are directly under the control of the implementing agency.
- Outcomes: results likely to be achieved once the beneficiary population uses the project outputs; achieved in the short to medium term.
- Impact: the final objective of the project, i.e., its long-term goals.

Source: Impact Evaluation in Practice (2011), Gertler et al., pp. 24-26.

BASICS OF IMPACT EVALUATION

Basics of Impact Evaluation (IE)

Impact evaluation asks whether changes in well-being are indeed due only to the project or program intervention and not to other causes.

- Treatment: the program intervention. Example: participation in a training program.
- Treatment group: the group receiving the treatment. Example: the group receiving training.
- Control group (comparison group): the group not receiving the treatment but (statistically) identical in all respects to the treatment group except for the treatment.

A good counterfactual must satisfy three conditions:
- In the absence of the program (treatment), the treatment group must be identical to the counterfactual, on average, with respect to its characteristics.
- The counterfactual must react to the treatment in the same way as the treatment group would.
- The counterfactual must not be exposed to any treatment or program, other than the one under consideration, that the treatment group is not also exposed to.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 17-19.

Basics of Impact Evaluation (IE)

- The counterfactual is the outcome an individual from the treated group would have had, had he or she not participated in the program.
- The control group can be treated as a proper (good) counterfactual provided the treatment is allotted randomly to individuals.
- Program effect ~ impact, or treatment effect.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 17-19.

Basics of Impact Evaluation (IE)

- Program effect for an individual: the difference between the individual's outcome under treatment and his or her counterfactual outcome.
- Program effect for a group: the difference between the average outcome of the treated group and the average of the counterfactual outcomes for that group.
- In reality, counterfactuals are not observed. Impact evaluation is therefore all about finding a good counterfactual for the participants.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 17-19.
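The same definitions written out in potential-outcomes notation, as a brief aside (the symbols $Y_i(1)$, $Y_i(0)$ and ATT are standard shorthand, not used explicitly in the slides):

```latex
% Program effect for individual i: treated outcome minus counterfactual outcome
\Delta_i = Y_i(1) - Y_i(0)

% Program effect for the treated group: average outcome of the treated minus the
% average of their counterfactuals (the average treatment effect on the treated)
\mathrm{ATT} = E\left[ Y_i(1) - Y_i(0) \mid T_i = 1 \right]
```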

PROBLEM OF COUNTERFEIT COUNTERFACTUAL

Let's Begin with a Research Question

- A government program provided subsidized fertilizer to all paddy-growing farmers in an area on a first-come, first-served basis.
- The goal was food sufficiency for the area.
- The target was to increase paddy yield in the area by at least 0.5 quintal per hectare.
- The government had a budget of ₹40,00,000.
- At the end of the program, the government wants to know: what is the impact of the fertilizer subsidy on paddy yield?

Source: presenter's own example.

Problem of Counterfeit Counterfactuals: With-Without Comparison

[Figure: paddy yield over time for the treatment group (beneficiaries), the control group (non-beneficiaries) and the proper counterfactual (the beneficiaries had they not availed the subsidy from the government). After the program the treatment group reaches yield Y4, the control group Y3 and the proper counterfactual Y2; Y0 and Y1 are the baseline yields.]

- The real program effect, measured against the proper counterfactual, is Y4 - Y2.
- The program effect under the with-without comparison is Y4 - Y3.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 22-23.

Problem of Counterfeit Counterfactuals: With-Without Comparison

Y4 - Y2 = (Y4 - Y3) + (Y3 - Y2),
i.e., the real program effect (proper counterfactual) equals the program effect under the with-without comparison plus a bias term (Y3 - Y2). Because of this bias, the control group in this case is termed a counterfeit counterfactual. Possible sources of the bias include:

- very optimistic, highly motivated beneficiaries;
- spillover effects;
- better networking, or corruption, in obtaining the subsidy;
- imperfect information flow;
- initial differences between the two groups;
- having received the subsidy may have enabled beneficiaries to apply extra doses of other inputs.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 22-23.

Problem of Counterfeit Counterfactuals: Before-After Comparison

[Figure: paddy yield of the beneficiaries over time. Their yield before availing the subsidy is Y0 (this pre-program observation serves as the 'control group'), their yield after availing the subsidy is Y2 (the treatment group), and the proper counterfactual (the beneficiaries had they not availed the subsidy) would have reached Y1.]

- The program effect under the before-after comparison is Y2 - Y0.
- The real program effect, measured against the proper counterfactual, is Y2 - Y1.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 23-24.

Problem of Counterfeit Counterfactuals: Before-After Comparison

Y2 - Y1 = (Y2 - Y0) + (Y0 - Y1) = (Y2 - Y0) - (Y1 - Y0),
i.e., the real program effect (proper counterfactual) equals the program effect under the before-after comparison minus a bias term (Y1 - Y0), the change that would have occurred even without the program. Because of this bias, the 'control group' in this case is again a counterfeit counterfactual. Possible sources of the bias include:

- the weather and the institutional environment may have changed over time;
- farmers' financial, technical and social conditions may have changed over time;
- having received the subsidy may have enabled beneficiaries to apply extra doses of other inputs.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 23-24.

METHODS TO OVERCOME BIAS

How to Overcome Bias? Regression Analysis

$Y = \beta X + \gamma T + \varepsilon$

where
- $Y$ is the outcome variable;
- $T$ is the treatment dummy: 1 if treatment is given/received, 0 if not;
- $X$ is the vector of observed variables that need to be controlled for to obtain a proper estimate of the program effect;
- $\varepsilon$ is the error term, which captures the factors not accounted for in the regression equation.

(A sketch of this regression in Python follows below.)

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 25-27.
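A minimal sketch of the dummy-variable regression above, in Python on simulated data (the variable names, the simulated numbers and the use of statsmodels are illustrative assumptions, not part of the slides):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated data: X is an observed control, T the treatment dummy, Y the outcome
X = rng.normal(size=n)
T = rng.binomial(1, 0.5, size=n)
Y = 2.0 * X + 1.5 * T + rng.normal(size=n)   # true program effect set to 1.5

# Regress Y on a constant, X and the treatment dummy T
design = sm.add_constant(np.column_stack([X, T]))
fit = sm.OLS(Y, design).fit()

# The last coefficient (on T) estimates gamma, the program effect
print(fit.params)
```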

Impact Evaluation under Two Types of Comparison

Under the before-after comparison:
$Y = \beta_0 + \beta' X + \delta_0\, d2 + e$, where $d2$ is the time dummy: 1 for outcomes after receiving treatment, 0 for outcomes before.
Expected outcome before: $\beta_0 + \beta' X$; after: $\beta_0 + \beta' X + \delta_0$; impact: $\delta_0$.

Under the with-without comparison:
$Y = \beta_0' + \beta X + \beta_1\, dT + \varepsilon$, where $dT$ is the treatment dummy: 1 for the treatment group, 0 for the control group (without treatment).
Expected outcome without: $\beta_0' + \beta X$; with: $\beta_0' + \beta X + \beta_1$; impact: $\beta_1$.

Source: Impact Evaluation in Practice (2011), Gertler et al., pp. 40-47.
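The expectation algebra behind the two 'impact' entries above, written out as a short aside (a restatement of the slide's tables, holding $X$ fixed):

```latex
% Before-after comparison: the impact is the coefficient on the time dummy
E[Y \mid d2 = 1, X] - E[Y \mid d2 = 0, X]
    = (\beta_0 + \beta'X + \delta_0) - (\beta_0 + \beta'X) = \delta_0

% With-without comparison: the impact is the coefficient on the treatment dummy
E[Y \mid dT = 1, X] - E[Y \mid dT = 0, X]
    = (\beta_0' + \beta X + \beta_1) - (\beta_0' + \beta X) = \beta_1
```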

Problems with Such Regression Analysis

Unobserved variables that are important but unaccounted for may be related to both program participation and the outcome. In that case $\mathrm{cov}(dT, \varepsilon) \neq 0$ and $\mathrm{cov}(d2, e) \neq 0$, violating a critical assumption of regression analysis. This causes a confounding problem: the coefficient on the dummy variable will not reflect the true program effect.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 25-27.

Randomization as a Possible Solution

- Selection bias can be eliminated at the level at which randomization is carried out.
- With-without comparisons are valid under randomization.
- It is the most robust technique available to date.
- It has certain limitations, however: ethical concerns, limited external validity, partial or complete non-compliance, selective attrition, spillovers, and the fact that not every treatment can be randomized (a road construction program, for example).

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 33, 38.

DID AND ITS ASSUMPTIONS

DID/DD/Diff-in-Diff as a Possible Solution

- The difference-in-differences (DID), or double-difference (DD), method.
- Key assumption: the unobserved factors that affect program participation are time-invariant.
- It uses panel data to estimate the program effect.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 28, 71.

What is Panel Data?

- A panel dataset contains observations on multiple entities (individuals), where each entity is observed at two or more points in time:
  $Y_{it} = f(x_{1it}, x_{2it}, \ldots, x_{kit})$, $i = 1, \ldots, n$; $t = 1, \ldots, T$,
  where $n$ is the number of units (subjects) and $T$ is the number of time periods (years).
- Panel data combine cross-section data (across units) and time-series data (over time) on the same units.
- When the restriction of following the same units is relaxed to include comparable units, the data are called pooled data.
- Longitudinal data: a study over time of a variable or a group of subjects. Variants include pooled data, longitudinal data and micro-panel data.

Source: presenter's own definition and notation.
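A minimal illustration of the long-format panel structure in Python with pandas (the farmers, years and yield figures are invented for illustration):

```python
import pandas as pd

# Toy panel: 3 farmers (units i) observed in 2 years (times t), one row per (i, t) pair
panel = pd.DataFrame({
    "farmer": [1, 1, 2, 2, 3, 3],
    "year":   [2014, 2015, 2014, 2015, 2014, 2015],
    "yield_qtl_ha": [20.1, 21.5, 18.3, 19.0, 22.4, 23.9],
})

# Index by (unit, time) to make the panel structure explicit
panel = panel.set_index(["farmer", "year"])
print(panel)
```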

Fixed Effects Model

$Y_{it} = \beta_0 + \beta X_{it} + \alpha_i + v_{it}$, $i = 1, \ldots, n$; $t = 1, \ldots, T$,

where
- $Y_{it}$ is the program outcome $Y$ for the $i$-th individual at time $t$;
- $X_{it}$ is the vector of explanatory variables;
- $\alpha_i$ captures the unobserved time-invariant factors for the $i$-th individual;
- $v_{it}$ is the random error term for the $i$-th individual at time $t$;
- $\varepsilon_{it} = \alpha_i + v_{it}$ is the composite error term.

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 456-457.
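A minimal sketch of estimating the fixed effects model by the within (entity-demeaning) transformation in Python, on simulated data (the names and numbers are illustrative assumptions; subtracting each unit's means removes $\alpha_i$ before running OLS):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, T = 100, 4

# Simulated panel with a unit-specific, time-invariant effect alpha_i
df = pd.DataFrame({
    "i": np.repeat(np.arange(n), T),
    "t": np.tile(np.arange(T), n),
})
alpha = rng.normal(size=n)
df["x"] = rng.normal(size=len(df)) + alpha[df["i"].to_numpy()]  # x correlated with alpha_i
df["y"] = 1.0 + 2.0 * df["x"] + alpha[df["i"].to_numpy()] + rng.normal(size=len(df))

# Within transformation: subtract unit means, which wipes out alpha_i
demeaned = df[["y", "x"]] - df.groupby("i")[["y", "x"]].transform("mean")

# OLS on the demeaned data recovers the slope on x (no constant needed after demeaning)
fe_fit = sm.OLS(demeaned["y"], demeaned[["x"]]).fit()
print(fe_fit.params)   # should be close to the true slope of 2.0
```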

DID Estimator of the Program Effect

$Y_{it} = \beta_0 + \delta_0\, d2 + \beta_1\, dT + \delta_1\, (d2 \cdot dT) + \beta X_{it} + \alpha_i + v_{it}$, $i = 1, \ldots, n$; $t = 1, \ldots, T$,

where
- $dT$ is the treatment dummy: 1 for the treatment group, 0 for the control group (without treatment);
- $d2$ is the time dummy: 1 for outcomes after receiving treatment, 0 for outcomes before;
- $\delta_1$ is also called the average treatment effect, because it measures the effect of the "treatment" (program intervention) on the average outcome $Y$.

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 450-454.

DID Estimator of the Program Effect

In the same equation,
- $\beta_0$ is the average paddy yield of the control farmers in the base period;
- $\delta_0$ is the average change in paddy yield of the control farmers between the two time periods (under the parallel-trend assumption, the time trend common to all farmers);
- $\beta_1$ is the average difference in paddy yields between the treatment and control farmers in the base period.

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 450-454.

DID Estimator of the Program Effect

Without other factors in the regression, $Y = \beta_0 + \delta_0\, d2 + \beta_1\, dT + \delta_1\, (d2 \cdot dT) + v$, and the average outcomes in the four cells are:

- Control, before ($dT = 0$, $d2 = 0$): $\beta_0 = \bar{Y}_{0,C}$
- Control, after ($dT = 0$, $d2 = 1$): $\beta_0 + \delta_0 = \bar{Y}_{1,C}$, so the after-before change for the control group is $\delta_0 = \bar{Y}_{1,C} - \bar{Y}_{0,C}$
- Treatment, before ($dT = 1$, $d2 = 0$): $\beta_0 + \beta_1 = \bar{Y}_{0,T}$, so the before-period treatment-control difference is $\beta_1 = \bar{Y}_{0,T} - \bar{Y}_{0,C}$
- Treatment, after ($dT = 1$, $d2 = 1$): $\beta_0 + \delta_0 + \beta_1 + \delta_1 = \bar{Y}_{1,T}$, so the after-before change for the treatment group is $\delta_0 + \delta_1 = \bar{Y}_{1,T} - \bar{Y}_{0,T}$ and the after-period treatment-control difference is $\beta_1 + \delta_1 = \bar{Y}_{1,T} - \bar{Y}_{1,C}$

The DID estimator $\delta_1$ can therefore be written as
$\delta_1 = (\bar{Y}_{1,T} - \bar{Y}_{1,C}) - (\bar{Y}_{0,T} - \bar{Y}_{0,C})$, or equivalently
$\delta_1 = (\bar{Y}_{1,T} - \bar{Y}_{0,T}) - (\bar{Y}_{1,C} - \bar{Y}_{0,C})$.
(A sketch in Python follows below.)

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 450-454.
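A minimal sketch in Python of both ways of computing $\delta_1$ on simulated data: as the double difference of the four cell means and as the interaction coefficient in the regression above (all data, true parameter values and variable names here are invented for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400

# Simulated two-period data in long form: dT = treatment group, d2 = post period
df = pd.DataFrame({
    "dT": np.repeat(rng.binomial(1, 0.5, n), 2),
    "d2": np.tile([0, 1], n),
})
# True parameters: baseline 20, common trend 1.0, baseline gap 0.5, program effect 2.0
df["y"] = (20 + 1.0 * df["d2"] + 0.5 * df["dT"]
           + 2.0 * df["d2"] * df["dT"] + rng.normal(0, 1, len(df)))

# (1) Double difference of the four cell means
means = df.groupby(["dT", "d2"])["y"].mean()
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print("DID from cell means:", did)

# (2) Regression with the d2*dT interaction; the interaction coefficient is delta_1
fit = smf.ols("y ~ d2 * dT", data=df).fit()
print("DID from regression:", fit.params["d2:dT"])
```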

DID Estimator of the Program Effect

- First difference ($\Delta Y_{control}$): the before-after change in the control group. It is free of the effect of any time-invariant factors, so this change in outcomes is due purely to time-varying factors.
- Second difference ($\Delta Y_{treat}$): the before-after change in the treatment group, which likewise purges the second-period outcome of time-invariant factors, leaving only time-varying factors (and the program effect).

The problem with the with-without comparison was that the two sets of units may have different characteristics, and it may be those characteristics, rather than the program, that explain the difference in outcomes between the two groups; the unobserved differences are the most worrying. The DID method helps resolve this problem to the extent that many characteristics (observed and unobserved) of the units/individuals can reasonably be assumed constant over time. By taking the first difference we cancel out all the characteristics that are unique to an individual and do not change over time, so we control not only for observed time-invariant characteristics but also for unobserved ones.

Source: Impact Evaluation in Practice (2011), Gertler et al., pp. 95-103.

DID Estimator of the Program Effect

- For a precise estimate of the program effect we take the further difference of these two differences: $\delta_1 = \Delta Y_{treat} - \Delta Y_{control}$.
- The counterfactual being estimated here is the change in outcomes for the comparison group, i.e., the first difference.

Source: Impact Evaluation in Practice (2011), Gertler et al., pp. 95-103.

DID Estimator of the Program Effect

- The DID approach thus combines the two counterfeit counterfactuals (the before-after and with-without comparisons).
- Although DID allows us to take care of differences between the treatment and control groups that are constant over time, it does not eliminate differences between the two groups that change over time.
- For the method to provide a valid counterfactual, we must assume that no such time-varying differences exist between the treatment and comparison groups.

Source: Impact Evaluation in Practice (2011), Gertler et al., pp. 95-103.

DID Estimator of the Program Effect

In the absence of the program, the outcomes of the treatment and comparison groups would need to move in tandem, so that the difference between them stays constant over time. This assumption is known as the equal-trend, or parallel-trend, assumption.

Source: Impact Evaluation in Practice (2011), Gertler et al., pp. 95-103.

DID Estimator of the Program Effect

[Figure: paddy yield over time (yield levels Y-2 through Y4), showing the treatment group (beneficiaries), the control group (non-beneficiaries) and the proper counterfactual (the beneficiaries had they not availed the subsidy), from the baseline through the program period. Under the parallel-trend assumption the counterfactual moves parallel to the control group.]

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., p. 75.

How to Test the Parallel-Trend Assumption?

- There is no statistical test that directly establishes the validity of the assumption.
- There are, however, ways to assess its plausibility. For instance, using two pre-intervention observations per group we can draw the two pre-program trend lines; if the lines are parallel, the assumption is supported.

Source: Impact Evaluation in Practice (2011), Gertler et al., pp. 95-103. # Presenter's own suggestion.

How to Test the Parallel-Trend Assumption?

- Perform a "placebo test" using either a fake treatment group or a fake outcome (a sketch in Python follows below).
- Try different comparison groups to see which group gives a parallel trend.
- If a large number of time-series observations is available, estimate the trend with and without the control variables#.

Source: Impact Evaluation in Practice (2011), Gertler et al., pp. 95-103. # Presenter's own suggestion.
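A minimal sketch of a placebo test on pre-intervention data (simulated panel; the fake 'post' period and all variable names are illustrative assumptions): regress the pre-period outcome on a fake post dummy interacted with the treatment indicator; a large and significant interaction casts doubt on the parallel-trend assumption.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

# Simulated pre-intervention panel: two periods (t = 0, 1), both before the real program
pre = pd.DataFrame({
    "id": np.repeat(np.arange(n), 2),
    "t": np.tile([0, 1], n),
    "treat": np.repeat(rng.binomial(1, 0.5, n), 2),
})
pre["y"] = 10 + 2.0 * pre["treat"] + 1.0 * pre["t"] + rng.normal(size=len(pre))

# Placebo DID: pretend period t = 1 is 'post'. Under parallel pre-trends the
# interaction coefficient should be close to zero and statistically insignificant.
placebo = smf.ols("y ~ treat * t", data=pre).fit()
print(placebo.params["treat:t"], placebo.pvalues["treat:t"])
```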

Other Assumptions

- Additive structure of effects: we impose a linear model in which the group-specific and time-specific effects enter only additively.
- No spillover effects: the treatment group received the treatment and the control group did not, and the control group did not receive the benefits of the treatment in any form that affects its outcome.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 71-82.

Advantages

- The DD method is an improvement over both the with-without and the before-after comparisons.
- It can be used in both experimental and non-experimental settings.
- The treatment and comparison groups do not necessarily need to have the same pre-intervention conditions.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 71-82.

LIMITATIONS OF DID

Limitations of DID

- DID is generally less robust than randomized selection methods.
- If the outcome trends differ between the treatment and comparison groups, the treatment effect estimated by DID is invalid (biased); DID does not take care of time-varying differences between treatment and control.
- Even when the trends are parallel before the start of the intervention, bias may still appear, because DID attributes to the intervention any difference in trends between the treatment and comparison groups that arises once the intervention begins. If any other factor affects the difference in trends between the two groups, the estimate will be invalid or biased.

Source: Impact Evaluation in Practice (2011), Gertler et al., p. 104.

CASE STUDY

A Story of a Garbage Incinerator

- An incinerator is a furnace for burning garbage (trash).
- When a garbage incinerator is installed in an area, the surrounding area is polluted by the ash and smoke it creates.
- This is what happened in North Andover, Massachusetts (USA): after 1978, a rumour started that a new incinerator would be built in North Andover.

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 450-454.

A Story of a Garbage Incinerator

- Construction of the incinerator began in 1981, and it started operating in 1985.
- Hypothesis: the price of houses located near the incinerator would fall relative to the price of more distant houses.
- A house is considered 'near' the incinerator if it is within three miles of it.

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 450-454.

A Story of a Garbage Incinerator

- Outcome variable: the housing price adjusted for inflation (real price), rprice.
- nearinc: 1 if the house is near the incinerator, 0 if it is far away.
- y81: 1 if the observation is from 1981, 0 if it is from 1978.

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 450-454.

A Story of a Garbage Incinerator

The with-without estimate of the incinerator effect can be obtained from the 1981 data alone:
$rprice = \gamma_0 + \gamma_1\, nearinc + u$,
$\widehat{rprice} = 101{,}307.5 - 30{,}688.27\, nearinc$.

The same with-without equation estimated on the 1978 data gives
$\widehat{rprice} = 82{,}517 - 18{,}824.37\, nearinc$.

How, then, can we tell whether building a new incinerator depresses housing values?

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 450-454.

A Story of a Garbage Incinerator

One way is to take the difference between the coefficients on nearinc from the two estimates (1981 and 1978); this equals the double difference:
incinerator effect $= -30{,}688.27 - (-18{,}824.37) = -11{,}863.9$.

Alternatively, run a single regression on the pooled data (a sketch in Python follows below):
$rprice = \beta_0 + \delta_0\, y81 + \beta_1\, nearinc + \delta_1\, (y81 \cdot nearinc) + u$,
$\widehat{rprice} = 82{,}517.23 + 18{,}790.29\, y81 - 18{,}824.37\, nearinc - 11{,}863.90\, (y81 \cdot nearinc)$.

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, pp. 450-454.
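A minimal sketch of this pooled DID regression in Python, assuming the house-price data are available locally as a CSV (the file name hprice_kielmc.csv and the exact column names rprice, y81, nearinc are assumptions; the underlying dataset is the KIELMC data distributed with Wooldridge's textbook):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed local copy of the KIELMC data with columns rprice, y81 and nearinc
houses = pd.read_csv("hprice_kielmc.csv")

# Pooled DID regression: the y81:nearinc coefficient is the incinerator effect
did_fit = smf.ols("rprice ~ y81 + nearinc + y81:nearinc", data=houses).fit()
print(did_fit.params)

# Double difference computed directly from the four group means (should match)
means = houses.groupby(["nearinc", "y81"])["rprice"].mean()
print((means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0]))
```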

CASES WHERE DID IS APPLICABLE

Where Can We Apply the DID Technique?

- The method is applicable when the data arise from a natural experiment (or quasi-experiment).
- A natural experiment occurs when some exogenous event, often a change in government policy, changes the environment in which individuals, families, firms or cities operate.
- A natural experiment always has a control group, which is not affected by the policy change, and a treatment group, which is.
- Unlike in a true experiment, the treatment allocation is not random; the control and treatment groups arise from the particular policy change.

Source: Introductory Econometrics: A Modern Approach (4e, 2013), Jeffrey M. Wooldridge, p. 453.

ALTERNATIVES TO DID

Alternatives to DID

Alternatives (and refinements) include DID combined with propensity score matching (PSM), PSM on its own, and triple differencing.

Source: Handbook on Impact Evaluation: Quantitative Methods and Practices (2010), Khandker et al., pp. 71-82.

Thank you for your patience. Any questions?