An introduction to Impact Evaluation
Markus Goldstein, Africa Region Gender Practice & Development Research Group

Knowledge is the most democratic source of power. – Alvin Toffler

A world in which there are two types of people:
1. Those who know
2. Those who know that they don't know

So how can we know?
– Monitoring
– Evaluation
– Impact evaluation

Outline
– Monitoring and impact evaluation
– Why do impact evaluation
– Why we need a comparison group
– Methods for constructing the comparison group
– Microfinance example of why it matters
– When to do an impact evaluation

Monitoring and evaluation
– Monitoring: collection, analysis, and use of data on indicators at different levels (inputs, outputs, outcomes)
– Evaluation: focus on processes and on understanding why indicators are moving the way they are

Monitoring - levels

Monitoring and causality
– The government/program production function runs INPUTS → OUTPUTS → OUTCOMES → IMPACTS; users meet service delivery at the output stage
– Program impacts are confounded by local, national, and global effects, hence the difficulty of showing causality

Impact evaluation
– Goes by many names (e.g., Rossi et al. call this impact assessment), so focus on the concept
– Impact is the difference between outcomes with the program and without it
– The goal of impact evaluation is to measure this difference in a way that attributes it to the program, and only the program

Why it matters
We want to know if the program had an impact and the average size of that impact:
– Understand if policies work
  - Justification for the program (big $$)
  - Scale up or not: did it work?
  - Compare different policy options within a program
  - Meta-analyses: learning from others
– (With cost data) understand the net benefits of the program
– Understand the distribution of gains and losses

What we need
– The difference in outcomes with the program versus without the program, for the same unit of analysis (e.g., an individual)
– Problem: individuals only have one existence
– Hence we have a missing counterfactual: a problem of missing data
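
One way to write this down (standard potential-outcomes notation, added here for clarity rather than taken from the slides):

```latex
% Potential outcomes: Y_i(1) with the program, Y_i(0) without it.
% The impact for unit i is
\Delta_i = Y_i(1) - Y_i(0)
% but only one of the two outcomes is ever observed, depending on
% treatment status D_i:
Y_i = D_i\,Y_i(1) + (1 - D_i)\,Y_i(0), \qquad D_i \in \{0,1\}
% so evaluation methods instead target an average, such as
\mathrm{ATE} = E\left[\,Y_i(1) - Y_i(0)\,\right]
```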

Thinking about the counterfactual
– Why not compare individuals before and after (the reflexive comparison)?
  - The rest of the world moves on, and you are not sure what was caused by the program and what by the rest of the world
– We need a control/comparison group that allows us to attribute any change in the "treatment" group to the program (causality)

Comparison group issues
Two central problems:
– Programs are targeted → program areas will differ in observable and unobservable ways precisely because the program intended this
– Individual participation is (usually) voluntary → participants will differ from non-participants in observable and unobservable ways
Hence, a comparison of participants and an arbitrary group of non-participants can lead to heavily biased results

Example: providing fertilizer to farmers
The intervention: provide fertilizer to farmers in a poor region of a country (call it region A)
– The program targets poor areas
– Farmers have to enroll at the local extension office to receive the fertilizer
– It starts in 2008 and ends in 2010; we have data on yields for farmers in the poor region and in another region (region B) for both years
We observe that the farmers we provide fertilizer to have a decrease in yields from 2008 to 2010

Did the program not work?
– Further study reveals there was a national drought, and everyone's yields went down (failure of the reflexive comparison)
– We compare the farmers in the program region to those in region B and find that our "treatment" farmers have a larger decline. Did the program have a negative impact?
– Not necessarily (program placement):
  - Farmers in region B have better quality soil (unobservable)
  - Farmers in region B have more irrigation, which is key in this drought year (observable)

OK, so let's compare the farmers in region A
– We compare "treatment" farmers with their neighbors, where we think the soil is roughly the same
– Say we observe that treatment farmers' yields decline by less than comparison farmers'. Did the program work?
  - Not necessarily. Farmers who went to register with the program may have more ability, and thus could manage the drought better than their neighbors, even if the fertilizer was irrelevant (individual unobservables)
– Say we observe no difference between the two groups. Did the program not work?
  - Not necessarily. What little rain there was caused the fertilizer to run off onto the neighbors' fields (spillover/contamination)

The comparison group
In the end, with these naïve comparisons, we cannot tell if the program had an impact. We need a comparison group that is as identical as possible, in observable and unobservable dimensions, to those receiving the program, and one that will not receive spillover benefits.

How to construct a comparison group: building the counterfactual
1. Randomization
2. Matching
3. Difference-in-differences
4. Instrumental variables
5. Regression discontinuity

1. Randomization
Individuals/communities/firms are randomly assigned into participation
Counterfactual: the randomized-out group
Advantages:
– Often called the "gold standard": by design, selection bias is zero on average and the mean impact is revealed
– Perceived as a fair process of allocation when resources are limited
Disadvantages:
– Ethical issues, political constraints
– Internal validity (exogeneity): people might not comply with the assignment (selective non-compliance)
– Unable to estimate entry effects
– External validity (generalizability): controlled experiments are usually run on a small-scale pilot, so it is difficult to extrapolate the results to a larger population
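
A minimal sketch of why randomization works: with random assignment, unobservables such as ability are balanced across groups on average, so a simple difference in means recovers the impact. The data and numbers below are simulated for illustration, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
treated = rng.binomial(1, 0.5, n)            # coin-flip assignment to fertilizer
ability = rng.normal(size=n)                 # unobserved; balanced by randomization
y = 2.5 + 0.4 * treated + 0.8 * ability + rng.normal(size=n)  # true impact = 0.4

# With randomization, the difference in mean outcomes is unbiased for the impact
impact = y[treated == 1].mean() - y[treated == 0].mean()
print(f"Estimated impact: {impact:.2f}  (true value: 0.40)")
```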

Randomization in our example…
Simple answer: randomize farmers within a community to receive fertilizer... Potential problems?
– Run-off (contamination), so control for this
– Take-up (what question are we answering?)

1a. Randomized phase-in
Take the case of a program that will cover the whole country. What would the control group be? One option:
– Capacity, funding, and other constraints mean that you can't start everywhere at once
– If you can divide the program into a suitable number of small administrative units, randomize the order in which the program is phased in

Randomized phase-in… cont.
Advantages:
– Gives you a handle on big programs, where randomizing individuals/households in and out isn't possible
– Rigor of a randomized design
Disadvantages:
– There are likely to be some additional implementation costs (you likely won't be moving through contiguous administrative units)
– Cannot look at long-term effects (all are treated in the end)

2. Matching
Match participants with non-participants from a larger survey
Counterfactual: the matched comparison group
– Each program participant is paired with one or more non-participants that are similar based on observable characteristics
– Assumes that, conditional on the set of observables, there is no selection bias based on unobserved heterogeneity
– When the set of variables to match on is large, we often match on a summary statistic: the probability of participation as a function of the observables (the propensity score)
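
A minimal propensity-score matching sketch (simulated data; the variable names, the three observables, and the true effect of 2.0 are hypothetical illustrations, not from the slides):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))               # observables, e.g. land size, education, rainfall
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # participation depends on observables
y = 2.0 * treated + X @ np.array([1.0, 0.5, 0.3]) + rng.normal(size=n)  # true impact = 2.0

# 1. Estimate the propensity score P(participation | X)
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each participant to the nearest non-participant on the propensity score
trt = np.where(treated == 1)[0]
ctrl = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[ctrl].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[trt].reshape(-1, 1))

# 3. Average outcome gap across matched pairs (treatment effect on the treated)
att = (y[trt] - y[ctrl[idx.ravel()]]).mean()
print(f"Matched estimate of impact: {att:.2f}")
```

Note the identifying assumption doing the work: participation depends only on the observables in X. If an unobservable (ability, expected gains) also drives participation, the matched estimate is biased, exactly as the scenarios below illustrate.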

2. Matching
Advantages:
– Does not require randomization or a baseline (pre-intervention data)
Disadvantages:
– Strong identification assumptions
– Requires very good quality data: need to control for all factors that influence program placement
– Requires a significantly large sample size to generate the comparison group

Matching in our example…
Using statistical techniques, we match a group of non-participants with participants on variables like gender, household size, education, experience, land size, rainfall (to control for drought), and irrigation (as many observable characteristics not affected by the fertilizer as possible)

Matching in our example… 2 scenarios
– Scenario 1: We show up afterwards, so we can only match (within region) those who got fertilizer with those who did not. Problem?
  - Problem: selection on expected gains and/or ability (unobservable)
– Scenario 2: The program is allocated based on historical crop choice and land size. We show up afterwards and match those eligible in region A with those in region B. Problem?
  - Problems: the same issues of individual unobservables, but lessened because we compare eligibles to potential eligibles; the concern is now unobservables across regions

An extension of matching: pipeline comparisons
– Idea: compare those just about to get an intervention with those getting it now
– Assumption: the stopping point of the intervention does not separate two fundamentally different populations
– Example: extending irrigation networks

3. Difference-in-differences
Observations over time: compare observed changes in the outcomes for a sample of participants and non-participants
Identification assumption: the selection bias is time-invariant ("parallel trends" in the absence of the program)
Counterfactual: the changes over time for the non-participants
Constraint: requires at least two cross-sections of data, pre-program and post-program, on participants and non-participants
– Need to think about the evaluation ex ante, before the program
– Can in principle be combined with matching to adjust for pre-treatment differences that affect the growth rate
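
A minimal numeric sketch of the diff-in-diff computation, using the fertilizer-and-drought story (the yield numbers are invented for illustration, not from the slides):

```python
# Mean yields (tons/ha, hypothetical) before and after the program
treat_pre, treat_post = 2.0, 1.6   # program farmers: yields fell in the drought
comp_pre, comp_post = 2.2, 1.5     # comparison farmers: yields fell even further

# DiD nets out the common shock (the drought) under the parallel-trends assumption
did = (treat_post - treat_pre) - (comp_post - comp_pre)
print(f"Diff-in-diff impact estimate: {did:+.2f} tons/ha")  # prints +0.30
```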

Implementing difference-in-differences in our example…
– Some arbitrary comparison group
– Matched diff-in-diff
– Randomized diff-in-diff
These are in order from more problems to fewer problems; think about this as we look at each graphically

As long as the bias is additive and time-invariant, diff-in-diff will work…

What if the observed changes over time are affected (i.e., trends are not parallel)?

4. Instrumental variables
Identify a variable that affects participation in the program, but not outcomes conditional on participation (the exclusion restriction)
Counterfactual: the causal effect is identified from the exogenous variation of the instrument
Advantages:
– Does not require the exogeneity assumption of matching
Disadvantages:
– The estimated effect is local: IV identifies the effect of the program only for the sub-population of those induced to take up the program by the instrument
– Therefore different instruments identify different parameters, and you end up with different magnitudes of the estimated effects
– The validity of the instrument can be questioned and cannot be tested
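
A minimal IV sketch with one binary instrument, using the Wald estimator (reduced form over first stage). The data are simulated; the instrument here anticipates the random-outreach story on the next slide, and all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.binomial(1, 0.5, n)                 # instrument: e.g. randomized outreach
ability = rng.normal(size=n)                # unobserved confounder
d = (0.8 * z + ability + rng.normal(size=n) > 0.5).astype(float)  # program take-up
y = 1.5 * d + ability + rng.normal(size=n)  # true (local) effect = 1.5

# Wald estimator: effect of Z on Y divided by effect of Z on take-up D.
# Naive comparison of y by d would be biased upward by ability; this is not.
wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(f"IV (Wald) estimate: {wald:.2f}")
```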

IV in our example
It turns out that outreach was done randomly… so the timing of farmers' intake into the program is essentially random. We can use this as an instrument. Problems?
– Is it really random? (roads, etc.)

5. Regression discontinuity design
Exploit a rule generating assignment into a program, e.g. given to individuals only above a given threshold
– Assumes a discontinuity in participation but not in counterfactual outcomes
Counterfactual: individuals just below the cut-off who did not participate
Advantages:
– Identification is built into the program design
– Delivers the marginal gain from the program around the eligibility cut-off point, which is important for program expansion
Disadvantages:
– The threshold has to be applied in practice, and individuals should not be able to manipulate the score used in the program to become eligible
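
A minimal sharp-RDD sketch: fit a line on each side of the eligibility cut-off within a bandwidth and take the gap between the two fits at the cut-off. The data are simulated; the cut-off, bandwidth, and true jump of 1.0 are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
land = rng.uniform(0, 10, n)                   # running variable: land size (ha)
cutoff = 5.0
d = (land < cutoff).astype(float)              # program goes to farms below the cut-off
y = 1.0 * d + 0.3 * land + rng.normal(size=n)  # true jump at the cut-off = 1.0

h = 1.0                                        # bandwidth around the cut-off
below = (land >= cutoff - h) & (land < cutoff)
above = (land >= cutoff) & (land <= cutoff + h)

# Local linear fit on each side; the impact is the gap in predictions at the cut-off
b_lo = np.polyfit(land[below], y[below], 1)
b_hi = np.polyfit(land[above], y[above], 1)
jump = np.polyval(b_lo, cutoff) - np.polyval(b_hi, cutoff)
print(f"RDD estimate at the cut-off: {jump:.2f}")
```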

Example from Buddelmeyer and Skoufias, 2005

RDD in our example…
Back to the eligibility criteria: land size and crop history. We use those right below the cut-off and compare them with those right above…
Problems:
– How well enforced was the rule?
– Can the rule be manipulated?
– Local effect

You can also use RDD in physical space

What difference do unobservables make? Microfinance in Thailand
– 2 NGOs in north-east Thailand
– Village banks with loans of about US$300 (in baht)
– Borrowers (women) form peer groups, which guarantee individual borrowing
– What would we expect the impacts to be?

Comparison group issues in this case:
– Program placement: villages selected for the program are different in observable and unobservable ways
– Individual self-selection: households that choose to participate in the program are different in observable and unobservable ways (e.g., entrepreneurship)
– Design solution: allow membership, but no loans at first

Results from Coleman (JDE 1999)

                          FE model      Non-FE model   Naive model    Super naive
Women's land value        42.5 (93.3)   87.5 (65.3)    121** (54.6)   6916*** (1974)
Women's self-emp sales    (504)         174 (364)      542* (296)     545* (295)
Women's ag sales          76.5 (101)    162 (73.9)     101* (59.5)    113* (59.9)
Unobserved village char   X
Observed village char     X             X
Member obs & unobs char   X             X
Member land 5 years ago   X             X              X

Standard errors in parentheses.

Prioritizing for impact evaluation
It is not cheap, relative to monitoring. Possible prioritization criteria:
1. We don't know if the policy is effective (e.g., conditional cash transfers)
2. Politics (e.g., the Argentina workfare program)
3. It's a lot of money
Note that 2 and 3 are variants of not "knowing" in this context

Summing up: methods
– No clear "gold standard" in reality: do what works best in the context
– Watch for unobservables, but don't forget observables
– Be flexible, be creative, use the context
– IE requires good monitoring, and monitoring will help you understand the effect size

Human knowledge and human power meet in one; for where the cause is not known the effect cannot be produced. – Francis Bacon

Thank you

Impact Evaluation CN Template
1. What is the main question we want to answer?
2. What are the indicators we will use to capture this?
3. How will we set up the evaluation (evaluation method, strategy)?
4. What will be our source of data?
5. Who will be responsible for what?

Impact Evaluation CN Template (cont.)
6. What is the work plan/timeline? Consider important policy milestones
7. How will we pay for it?
8. What are the plans for dissemination?