
Applying impact evaluation tools: A hypothetical fertilizer project

5-second review: To do an impact evaluation, we need a treatment group and a comparison group. The comparison group should be as similar as possible, in both observable and unobservable dimensions, to those receiving the program, and it should not receive spillover benefits from the program.

Example: providing fertilizer to farmers
The intervention: provide fertilizer to farmers in a poor region of a country (call it region A)
–Program targets poor areas
–Farmers have to enroll at the local extension office to receive the fertilizer
–Starts in 2002, ends in 2004; we have data on yields for farmers in the poor region and another region (region B) for both years

How to construct a comparison group – building the counterfactual
1. Randomization
2. Matching
3. Difference-in-differences
4. Instrumental variables
5. Regression discontinuity

1. Randomization
Individuals/communities/firms are randomly assigned into participation
Counterfactual: the randomized-out group
Advantages:
–Often called the "gold standard": by design, selection bias is zero on average and the mean impact is revealed
–Perceived as a fair process of allocation with limited resources
Disadvantages:
–Ethical issues, political constraints
–Internal validity (exogeneity): people might not comply with the assignment (selective non-compliance)
–Unable to estimate entry effect
–External validity (generalizability): the controlled experiment is usually run on a small-scale pilot, so it is difficult to extrapolate the results to a larger population

Randomization in our example…
Simple answer: randomize farmers within a community to receive fertilizer... Potential problems?
–Run-off (contamination) – so control for this
–Take-up (what question are we answering?)
–Generalizability – how comparable are these farmers to the rest of the area we would consider providing this project to?
–Randomization wasn't done right
–If farmers are given more fertilizer, they may plant more land (and not use the right application rate) – monitor well…
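Under randomized assignment, the impact estimate is simply the difference in mean yields between treated and control farmers. A minimal sketch in Python, using simulated data with hypothetical variable names (yields, treated); the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated farmers: baseline yield plus a hypothetical +0.5 t/ha fertilizer effect
n = 1000
treated = rng.integers(0, 2, size=n)          # random assignment (0/1)
yields = 2.0 + 0.5 * treated + rng.normal(0, 1.0, size=n)

# With randomization, the difference in means is an unbiased impact estimate
impact = yields[treated == 1].mean() - yields[treated == 0].mean()

# Standard error of the difference in means (unequal-variance formula)
se = np.sqrt(yields[treated == 1].var(ddof=1) / (treated == 1).sum()
             + yields[treated == 0].var(ddof=1) / (treated == 0).sum())

print(f"Estimated impact: {impact:.2f} (SE {se:.2f})")
```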

2. Matching
Match participants with non-participants from a larger survey
Counterfactual: the matched comparison group
Each program participant is paired with one or more non-participants who are similar based on observable characteristics
Assumes that, conditional on the set of observables, there is no selection bias based on unobserved heterogeneity
When the set of variables to match on is large, we often match on a summary statistic: the probability of participation as a function of the observables (the propensity score)

2. Matching
Advantages:
–Does not require randomization, nor a baseline (pre-intervention data)
Disadvantages:
–Strong identification assumptions
–Requires very good quality data: need to control for all factors that influence program placement
–Requires a sufficiently large sample from which to draw the comparison group (and administering the same survey to the comparison and treatment groups is important)

Matching in our example…
Using statistical techniques, we match a group of non-participants with participants on variables like gender, household size, education, experience, land size, rainfall (to control for drought), and irrigation – as many observable characteristics not affected by the fertilizer as possible

Matching in our example… 2 scenarios
–Scenario 1: We show up afterwards; we can only match (within region) those who got fertilizer with those who did not. Problem?
Problem: selection on expected gains and/or ability (unobservable)
–Scenario 2: The program is allocated based on historical crop choice and land size. We show up afterwards and match those eligible in region A with those in region B. Problem?
Problems: the same issues with individual unobservables, though lessened because we compare eligibles with potential eligibles; now there are also unobservable differences across regions
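A minimal sketch of propensity-score matching, assuming hypothetical simulated arrays for the observables (X), program take-up (take_up), and post-program yields (yields); the data and coefficients are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated data: observables (e.g. land size, education) drive both take-up and yields
n = 2000
X = rng.normal(size=(n, 2))                       # hypothetical observables
take_up = (X @ np.array([0.8, -0.5]) + rng.normal(size=n) > 0).astype(int)
yields = 2.0 + 0.5 * take_up + X @ np.array([0.3, 0.2]) + rng.normal(size=n)

# Step 1: estimate the propensity score P(participation | observables)
pscore = LogisticRegression().fit(X, take_up).predict_proba(X)[:, 1]

# Step 2: match each participant to the non-participant with the closest propensity score
treat_idx = np.where(take_up == 1)[0]
ctrl_idx = np.where(take_up == 0)[0]
matches = ctrl_idx[np.argmin(np.abs(pscore[ctrl_idx][None, :]
                                    - pscore[treat_idx][:, None]), axis=1)]

# Step 3: average treatment effect on the treated (ATT) from matched pairs
att = (yields[treat_idx] - yields[matches]).mean()
print(f"Matched ATT estimate: {att:.2f}")
```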

3. Difference-in-differences
Observations over time: compare observed changes in the outcomes for a sample of participants and non-participants
Identification assumption: the selection bias is time-invariant ('parallel trends' in the absence of the program)
Counterfactual: changes over time for the non-participants
Constraint: requires at least two cross-sections of data, pre-program and post-program, on participants and non-participants
–Need to think about the evaluation ex ante, before the program
Can in principle be combined with matching to adjust for pre-treatment differences that affect the growth rate

Implementing difference-in-differences in our example…
–Some arbitrary comparison group
–Matched diff-in-diff
–Randomized diff-in-diff
What would we want to look out for? These are listed in order from most to fewest problems – think about this as we look at it graphically

As long as the bias is additive and time-invariant, diff-in-diff will work…

What if the observed changes over time are themselves affected, i.e. trends differ between participants and non-participants?
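A minimal sketch of the two-period difference-in-differences estimate, using simulated pre/post yields for region A (participants) and region B (comparison); all names and numbers are hypothetical. It also illustrates the point above: an additive, time-invariant bias differences out.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Simulated yields for 2002 (pre) and 2004 (post)
bias = 0.4          # time-invariant level difference between regions
trend = 0.3         # common trend affecting both regions
effect = 0.5        # hypothetical true fertilizer effect

a_pre = 2.0 + bias + rng.normal(0, 1, n)
a_post = 2.0 + bias + trend + effect + rng.normal(0, 1, n)
b_pre = 2.0 + rng.normal(0, 1, n)
b_post = 2.0 + trend + rng.normal(0, 1, n)

# DiD: (change for participants) - (change for non-participants).
# Both the additive bias and the common trend cancel out of this contrast.
did = (a_post.mean() - a_pre.mean()) - (b_post.mean() - b_pre.mean())
print(f"Diff-in-diff estimate: {did:.2f}")
```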

4. Instrumental variables
Identify a variable that affects participation in the program, but not outcomes conditional on participation (exclusion restriction)
Counterfactual: the causal effect is identified from the exogenous variation of the instrument
Advantages:
–Does not require the exogeneity assumption of matching
Disadvantages:
–The estimated effect is local: IV identifies the effect of the program only for the sub-population of those induced to take up the program by the instrument
–Therefore different instruments identify different parameters and yield different magnitudes of the estimated effects
–Validity of the instrument can be questioned and cannot be tested

IV in our example
It turns out that outreach was done randomly… so the timing/intake of farmers into the program is essentially random. We can use this as an instrument. Problems?
–Is it really random? (roads, etc.)
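With a binary instrument (randomly assigned outreach), the IV estimate reduces to the Wald ratio: the difference in mean yields by outreach status divided by the difference in take-up rates. A minimal sketch with simulated data and hypothetical names (outreach, take_up, yields); the selection-on-ability term is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Unobserved ability drives both take-up and yields (selection bias),
# while outreach (the instrument) was assigned randomly
ability = rng.normal(0, 1, n)
outreach = rng.integers(0, 2, size=n)
take_up = ((1.5 * outreach + ability + rng.normal(0, 1, n)) > 0.5).astype(int)
yields = 2.0 + 0.5 * take_up + 0.8 * ability + rng.normal(0, 1, n)

# Naive comparison of participants vs non-participants is biased by ability
naive = yields[take_up == 1].mean() - yields[take_up == 0].mean()

# Wald / IV estimate: reduced-form effect of outreach on yields divided by its
# first-stage effect on take-up. This is a LATE: the effect for farmers whose
# take-up was induced by the outreach.
reduced_form = yields[outreach == 1].mean() - yields[outreach == 0].mean()
first_stage = take_up[outreach == 1].mean() - take_up[outreach == 0].mean()

print(f"Naive difference:   {naive:.2f}")
print(f"IV (Wald) estimate: {reduced_form / first_stage:.2f}")
```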

5. Regression discontinuity design
Exploit the rule generating assignment into a program given to individuals only above a given threshold
–Assumes a discontinuity in participation but not in counterfactual outcomes
Counterfactual: individuals just below the cut-off who did not participate
Advantages:
–Identification is built into the program design
–Delivers marginal gains from the program around the eligibility cut-off point; important for program expansion
Disadvantages:
–The threshold has to be applied in practice, and individuals should not be able to manipulate the score used in the program to become eligible

RDD in our example…
Back to the eligibility criteria: land size and crop history
We use those right below the cut-off and compare them with those right above…
Problems:
–How well enforced was the rule?
–Can the rule be manipulated?
–Local effect
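A minimal sketch of a sharp RDD on the land-size rule: fit a line separately on each side of the cut-off within a bandwidth and take the jump at the cut-off as the impact estimate. The data are simulated and all names (land_size, cutoff, yields) and the 2 ha threshold are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3000

cutoff = 2.0                                   # eligibility: land size below 2 ha
land_size = rng.uniform(0.5, 3.5, n)
eligible = (land_size < cutoff).astype(int)    # sharp design: rule fully enforced
yields = 1.5 + 0.4 * land_size + 0.5 * eligible + rng.normal(0, 0.5, n)

# Local linear regression on each side of the cut-off within a bandwidth
bandwidth = 0.5
left = (land_size >= cutoff - bandwidth) & (land_size < cutoff)     # treated side
right = (land_size >= cutoff) & (land_size <= cutoff + bandwidth)   # comparison side

# Fit yield on (land_size - cutoff) on each side; the intercepts are the limits
# at the cut-off, and their difference is the local impact estimate
b_left = np.polyfit(land_size[left] - cutoff, yields[left], 1)
b_right = np.polyfit(land_size[right] - cutoff, yields[right], 1)
rdd_estimate = np.polyval(b_left, 0.0) - np.polyval(b_right, 0.0)
print(f"RDD estimate at the cut-off: {rdd_estimate:.2f}")
```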

To sum up
–Use the best method you can – this will be influenced by local context, political considerations, budget and program design
–Watch for unobservables, but don't forget observables
–Keep an eye on implementation, monitor well and be ready to adapt

Thank you