Applying Impact Evaluation Tools: Hypothetical Fertilizer Project


1 Applying Impact Evaluation Tools: Hypothetical Fertilizer Project
Emmanuel Skoufias, The World Bank, PRMPR. PREM Learning Week: April 21-22, 2008

2 5-second review: To do an impact evaluation, we need a treatment group and a comparison group. The comparison group should be as similar as possible, in both observable and unobservable dimensions, to those receiving the program, and it should not receive spillover benefits.

3 We observe an outcome indicator
[Figure: the outcome indicator plotted over time, with the intervention marked]

4 and its value rises after the program
[Figure: the outcome indicator rises after the intervention]

5 Having the “ideal” counterfactual…
[Figure: the counterfactual path the outcome would have followed without the program]

6 allows us to estimate the true impact

7 The Problem of Selection Bias

8 Selection bias: participants typically differ from non-participants in ways that also affect outcomes, so a naive comparison mixes the program's impact with these pre-existing differences.

9 Example: Providing fertilizer to farmers
The intervention: provide fertilizer to farmers in a poor region of a country (call it region A). The program targets poor areas. Farmers have to enroll at the local extension office to receive the fertilizer. The program starts in 2002 and ends in 2004; we have data on yields for farmers in the poor region and in another region (region B) for both years. A synthetic version of this setup appears below.
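To make the later methods concrete, here is a minimal synthetic version of this setup in Python. Everything in it (variable names like yield_2002, the 2.0-hectare eligibility cut-off, the 0.5 true impact) is invented for illustration; it is not data from the presentation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000

# Hypothetical farmer-level data; every name and number is invented.
df = pd.DataFrame({
    "farmer_id": np.arange(n),
    "region": rng.choice(["A", "B"], size=n),
    "land_size": rng.lognormal(mean=0.5, sigma=0.4, size=n),
    "ability": rng.normal(size=n),          # unobserved in real data
})
df["eligible"] = (df["region"] == "A") & (df["land_size"] < 2.0)

# Enrollment is voluntary: higher-ability eligible farmers enroll more
# often, which is exactly the selection problem discussed below.
enroll_prob = np.where(df["eligible"], 0.3 + 0.3 * (df["ability"] > 0), 0.0)
df["treated"] = rng.random(n) < enroll_prob

# Yields in 2002 (baseline) and 2004 (after the program; true impact 0.5).
df["yield_2002"] = 2.0 - 0.4 * (df["region"] == "A") + 0.3 * df["ability"] + rng.normal(0, 0.5, n)
df["yield_2004"] = df["yield_2002"] + 0.1 + 0.5 * df["treated"] + rng.normal(0, 0.5, n)
```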

10 How to construct a comparison group – building the counterfactual
1. Randomization
2. Matching
3. Before and After
4. Difference-in-Difference
5. Instrumental variables
6. Regression discontinuity

11 1. Randomization
Individuals/communities/firms are randomly assigned into participation.
Counterfactual: the randomized-out group.
Advantages:
- Often called the “gold standard”: by design, selection bias is zero on average and the mean impact is revealed.
- Perceived as a fair process for allocating limited resources.
Disadvantages:
- Ethical issues and political constraints.
- Internal validity (exogeneity): people might not comply with the assignment (selective non-compliance).
- Unable to estimate entry effects.
- External validity (generalizability): controlled experiments are usually run as small-scale pilots, making it difficult to extrapolate the results to a larger population.

12 Randomization in our example…
Simple answer: randomize farmers within a community to receive fertilizer (a minimal assignment sketch follows below).
Potential problems?
- Run-off (contamination) between neighboring plots, so control for this.
- Take-up (what question are we answering?).
- Generalizability: how comparable are these farmers to the rest of the area we would consider providing this project to?
- Randomization wasn't done right.
- Given more fertilizer, farmers may plant more land (and not use the right application rate), so monitor well…
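A minimal sketch of how the within-community randomization might be implemented; the roster, community sizes, and 50% assignment share are all hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical roster: 5 communities with 40 farmers each.
roster = pd.DataFrame({
    "farmer_id": np.arange(200),
    "community": np.repeat(np.arange(5), 40),
})

# Stratified assignment: within each community, exactly half the
# farmers (chosen at random) receive fertilizer.
roster["treated"] = roster.groupby("community")["farmer_id"].transform(
    lambda s: rng.permutation(np.arange(len(s)) < len(s) // 2)
)
print(roster.groupby("community")["treated"].mean())  # 0.5 in every community
```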

13 The experimental/randomized design
In a randomized design, the control group (randomly assigned out of the program) provides the counterfactual: what would have happened to the treatment group without the program. Randomization equalizes the mean selection bias between the T and C groups.
Suppose you have somehow chosen a comparison/control group (without the program) and you compare the mean value of the outcome indicator Y in the two groups (treatment T and control C) at a given point in time after the start of the program. Then:
mean(Y_T) - mean(Y_C) = true mean impact + mean selection bias.
Under randomization, the mean selection bias is zero, so the simple difference in means reveals the mean impact.
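Under randomization the impact estimate is therefore just the difference in mean outcomes between T and C. A minimal sketch on simulated data (the 0.5 true impact and all other numbers are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated randomized experiment: true impact of fertilizer on yield = 0.5.
n = 1000
treated = rng.random(n) < 0.5
yields = 2.0 + 0.5 * treated + rng.normal(0, 0.8, n)

# Difference in means is an unbiased impact estimate by design.
impact = yields[treated].mean() - yields[~treated].mean()
t_stat, p_value = stats.ttest_ind(yields[treated], yields[~treated])
print(f"estimated impact: {impact:.3f} (p = {p_value:.3f})")
```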

14 2. Matching
Match participants with non-participants from a larger survey.
Counterfactual: the matched comparison group.
Each program participant is paired with one or more non-participants who are similar on observable characteristics.
Assumes that, conditional on the set of observables, there is no selection bias based on unobserved heterogeneity.
When the set of variables to match on is large, one often matches on a summary statistic: the probability of participation as a function of the observables (the propensity score).

15 2. Matching
Advantages:
- Does not require randomization or a baseline (pre-intervention) survey.
Disadvantages:
- Strong identification assumptions.
- Requires very good quality data: need to control for all factors that influence program placement.
- Requires a significantly large sample size to generate the comparison group (and administering the same survey to comparison and treatment groups is important).

16 Matching in our example…
Using statistical techniques, we match a group of non-participants to the participants on variables like gender, household size, education, experience, land size, rainfall (to control for drought), and irrigation: as many observable characteristics not affected by the fertilizer as possible. A propensity-score sketch follows below.
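A minimal propensity-score matching sketch, assuming simulated data and one-to-one nearest-neighbor matching on the estimated score; the covariates and coefficients are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 2000

# Simulated observational data: enrollment depends only on observables
# here, which is exactly what matching must assume.
X = pd.DataFrame({
    "land_size": rng.lognormal(0.5, 0.4, n),
    "education": rng.integers(0, 12, n).astype(float),
    "irrigation": (rng.random(n) < 0.3).astype(float),
})
p_enroll = 1.0 / (1.0 + np.exp(2.0 - 0.5 * X["land_size"] - 0.1 * X["education"]))
treated = (rng.random(n) < p_enroll).to_numpy()
yields = (2.0 + 0.2 * X["land_size"] + 0.05 * X["education"]
          + 0.5 * treated + rng.normal(0, 0.5, n))

# 1. Estimate the propensity score from the observables.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. Pair each participant with the non-participant whose score is closest.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# 3. Average the matched outcome differences: the effect on the treated.
att = (yields[treated].to_numpy() - yields[~treated].to_numpy()[idx.ravel()]).mean()
print(f"matched ATT estimate: {att:.3f}")  # close to the true 0.5
```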

17 Matching in our example… two scenarios
Scenario 1: We show up afterwards, so we can only match (within region) those who got fertilizer with those who did not. Problem? Farmers select into the program on expected gains and/or ability (unobservable).
Scenario 2: The program is allocated based on historical crop choice and land size. We show up afterwards and match those eligible in region A with comparable farmers in region B. Problem? The same issues with individual unobservables, though lessened because we compare eligible with potentially eligible farmers; but now there may be unobservable differences across regions.

18 3. Before and After
[Figure: the outcome for participants before and after the intervention]

19 Before and After (BA) comparisons
In BA comparisons, the comparison group is the farmer herself before the treatment.
Selection bias is not a problem (it is removed by differencing), since we compare the same person, with the same unobserved ability, before and after.
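The BA estimator is just the average change for participants. The sketch below uses simulated data with a 0.1 secular trend and a 0.5 true impact (both invented) to foreshadow the shortcoming on the next slide: the trend gets counted as impact.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Simulated participants: yields would have risen by 0.1 anyway (trend),
# and the program adds a true impact of 0.5.
yield_2002 = 2.0 + rng.normal(0, 0.5, n)
yield_2004 = yield_2002 + 0.1 + 0.5 + rng.normal(0, 0.5, n)

# BA attributes the whole change to the program.
ba_estimate = (yield_2004 - yield_2002).mean()
print(f"BA estimate: {ba_estimate:.3f} (true impact 0.5; the 0.1 trend is counted too)")
```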

20 Shortcomings of Before and After (BA) comparisons
Not different from results-based (RB) monitoring.
Attributes all changes over time to the program (i.e., assumes there would have been no trend and no changes in outcomes in the absence of the program).
Typically overestimates impacts.
Difference-in-difference may be thought of as a method that tries to improve upon the BA method.

21 4. Difference-in-difference
Observed changes over time for non-participants provide the counterfactual for participants. Steps:
1. Collect baseline data on non-participants and (probable) participants before the program. Note: there is no particular assumption about how the non-participants are selected; one could use an arbitrary comparison group, or one selected via PSM/RDD.
2. Compare with data collected after the program.
3. Subtract the two differences, or use a regression with a dummy variable for participants (a sketch follows below).
This allows for selection bias, but the bias must be time-invariant and additive.
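A minimal dif-in-dif sketch using the standard interaction regression; the data are simulated, with an invented 0.4 fixed participant gap, 0.1 common trend, and 0.5 true impact.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1000

# Simulated two-period panel in long form. Participants start 0.4
# higher (time-invariant selection bias), everyone trends up by 0.1,
# and the true program impact is 0.5.
participant = np.tile(rng.random(n) < 0.5, 2).astype(int)
post = np.repeat([0, 1], n)
y = (2.0 + 0.4 * participant + 0.1 * post
     + 0.5 * participant * post + rng.normal(0, 0.5, 2 * n))

df = pd.DataFrame({"y": y, "participant": participant, "post": post})

# The interaction coefficient is the dif-in-dif estimate; it nets out
# both the fixed gap and the common trend.
model = smf.ols("y ~ participant * post", data=df).fit()
print(model.params["participant:post"])  # close to 0.5
```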

22 Implementing difference-in-differences in our example…
When does the double difference give more or less reliable impact estimates?

23 Difference-in-difference: Interpretation 1
Dif-in-dif removes the trend effect from the impact estimated with the BA method:
True impact = measured impact in the treatment group (BA) - trend.
The change in the control group provides an estimate of the trend; subtracting the trend from the change in the treatment group yields the true impact of the program.
This assumes that the trend in the C group accurately represents the trend that would have prevailed in the T group in the absence of the program, an assumption that cannot be tested (or is very hard to test).
What if the trend in the C group is not an accurate representation of the trend that would have prevailed in the T group in the absence of the program? Then we need observations on Y one period before the baseline period.

24 Difference-in-difference: Interpretation 2
The dif-in-dif estimator eliminates selection bias under the assumption that selection bias enters additively and does not change over time.

25 Diff-in-diff requires that the bias is additive and time-invariant

26 The method fails if the comparison group is on a different trajectory

27 Or… China: targeted poor areas have intrinsically lower growth rates (Jalan and Ravallion)

28 Poor area programs: areas not targeted yield a biased counterfactual
[Figure: income over time in targeted vs. not-targeted areas]
The growth process in non-treatment areas is not indicative of what would have happened in the targeted areas without the program. Example from China (Jalan and Ravallion).

29 5. Instrumental Variables
Identify a variable that affects participation in the program but not outcomes conditional on participation (the exclusion restriction).
Counterfactual: the causal effect is identified out of the exogenous variation of the instrument.
Advantages:
- Does not require the exogeneity assumption of matching.
Disadvantages:
- The estimated effect is local: IV identifies the effect of the program only for the sub-population of those induced to take up the program by the instrument.
- Therefore, different instruments identify different parameters, and you can end up with different magnitudes of the estimated effect.
- The validity of the instrument can be questioned but cannot be tested.

30 IV in our example
It turns out that outreach was done randomly, so the timing of farmers' intake into the program is essentially random. We can use this as an instrument (see the two-stage sketch below).
Problems? Is it really random? (roads, etc.)
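A minimal two-stage least squares sketch built by hand with OLS so the two stages are visible (dedicated IV routines exist and give correct standard errors); the random-outreach instrument and all numbers are simulated assumptions, not the presentation's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000

# Simulation: random outreach (the instrument) raises take-up; unobserved
# ability raises both take-up and yields, so naive OLS is biased upward.
outreach = (rng.random(n) < 0.5).astype(float)
ability = rng.normal(size=n)
take_up = (rng.random(n) < 0.1 + 0.4 * outreach + 0.2 * (ability > 0)).astype(float)
yields = 2.0 + 0.5 * take_up + 0.4 * ability + rng.normal(0, 0.5, n)

# Stage 1: regress take-up on the instrument, keep the fitted values.
stage1 = sm.OLS(take_up, sm.add_constant(outreach)).fit()

# Stage 2: regress yields on fitted take-up. The slope is the IV (local)
# estimate of the effect for farmers whose take-up the outreach shifted.
# (Standard errors from this two-step shortcut are not valid; real IV
# routines adjust them.)
stage2 = sm.OLS(yields, sm.add_constant(stage1.fittedvalues)).fit()
print(stage2.params[1])  # close to the true 0.5
```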

31 6. Regression discontinuity design
Exploits a rule that assigns the program only to individuals above (or below) a given threshold. Assumes a discontinuity in participation, but not in counterfactual outcomes, at the threshold.
Counterfactual: individuals just below the cut-off who did not participate.
Advantages:
- Identification is built into the program design.
- Delivers the marginal gain from the program around the eligibility cut-off, which is important for program expansion.
Disadvantages:
- The threshold has to be applied in practice, and individuals should not be able to manipulate the score used to determine eligibility.

32 RDD in our example…
Back to the eligibility criteria: land size and crop history. We use those right below the cut-off and compare them with those right above (see the sketch below)…
Problems: How well was the rule enforced? Can the rule be manipulated? The estimated effect is local to the cut-off.
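A minimal sharp-RDD sketch: keep farmers within a bandwidth of the cut-off and fit a local linear regression with separate slopes on each side. The 2.0-hectare cut-off, the 0.5 bandwidth, and the data are all invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 5000

# Simulation: only farmers with land below 2.0 ha are eligible (sharp rule),
# and the program adds a true impact of 0.5 at the cut-off.
land = rng.uniform(0.5, 3.5, n)
treated = land < 2.0
yields = 2.0 + 0.3 * land + 0.5 * treated + rng.normal(0, 0.4, n)

df = pd.DataFrame({
    "y": yields,
    "treated": treated.astype(int),
    "dist": land - 2.0,           # running variable centered at the cut-off
})

# Keep only observations near the cut-off and allow separate slopes on
# each side; the coefficient on 'treated' is the jump at the threshold.
h = 0.5
local = df[df["dist"].abs() < h]
model = smf.ols("y ~ treated * dist", data=local).fit()
print(model.params["treated"])    # close to the true 0.5
```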

33 To sum up
Use the best method you can; the choice will be influenced by local context, political considerations, budget, and program design.
Watch for unobservables, but don't forget observables.
Keep an eye on implementation, monitor well, and be ready to adapt.

34 Thank you

