

Evaluating Anti-Poverty Programs: Concepts and Methods Martin Ravallion Development Research Group, World Bank

1. Introduction
2. The evaluation problem
3. Generic issues
4. Single difference: randomization
5. Single difference: controls for observables
6. Single difference: exploiting program design
7. Double difference
8. Higher-order differencing
9. Instrumental variables
10. Learning more from evaluations

1. Introduction
Assigned programs: some units (individuals, households, villages) get the program; some do not. Examples:
- A social fund selects from applicants.
- Workfare: gains to workers and benefiting communities; others get nothing.
- Cash transfers to eligible households only.
This is ex-post evaluation. But ex post does not mean start late!

2. The evaluation problem
What do we mean by "impact"? Impact = the difference between the relevant outcome indicator with the program and that without it. However, we can never observe someone in two different states of nature at the same time. While a post-intervention indicator is observed, its value in the absence of the program is not, i.e., it is a counterfactual. So all evaluation is essentially a problem of missing data, which calls for counterfactual analysis.

Naïve comparisons can be deceptive. Common practices:
- compare outcomes after the intervention to those before, or
- compare units (people, households, villages) that receive the program with those that do not.
Potential biases from failure to control for:
- other changes over time under the counterfactual, or
- unit characteristics that influence program placement.

Naïve comparison 1: before vs. after. We observe an outcome indicator, and its value rises after the program. However, we need to identify the counterfactual, since only then can we determine the impact of the intervention. [Chart: outcome indicator over time, before and after the intervention, against the counterfactual path.]

Naïve comparison 2: "with" vs. "without". Impacts on poverty?

Percent not poor        Case 1        Case 2
Without (n=56)          43            43
With (n=44)             80            66
% increase (t-test)     87% (2.29)    54% (2.00)

Case 1: the program yields a 20% gain. Case 2: the program yields no gain.

How can we do better? The missing-data problem in evaluation:
- For each unit (person, household, village, ...) there are two possible values of the outcome variable: the value under the treatment, and the value under the counterfactual.
- However, we cannot observe both for all units: we cannot observe the counterfactual outcomes for the treated units, or the outcomes under treatment for the untreated units.
- So evaluation is essentially a problem of missing data => "counterfactual analysis."

Archetypal formulation

The evaluation problem
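In the standard potential-outcomes notation (a minimal sketch of the archetypal setup; the symbols are the conventional ones, not necessarily those used on the original slides):

```latex
% Each unit reveals only one of its two potential outcomes:
Y_i = D_i Y_i^T + (1 - D_i)\, Y_i^C , \qquad D_i \in \{0,1\}
% Gain (impact) for unit i, never directly observed:
G_i = Y_i^T - Y_i^C
% Mean impact parameters:
\mathrm{ATE} = E[G_i], \qquad \mathrm{ATET} = E[G_i \mid D_i = 1]
% The naive difference in means equals ATET plus a selection-bias term:
E[Y_i \mid D_i = 1] - E[Y_i \mid D_i = 0]
  = \mathrm{ATET} + \big( E[Y_i^C \mid D_i = 1] - E[Y_i^C \mid D_i = 0] \big)
```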

Alternative solutions 1: Experimental evaluation ("social experiment")
The program is randomly assigned, so that everyone has the same probability of receiving the treatment. In theory, this method is assumption-free, but in practice many assumptions are required. Pure randomization is rare for anti-poverty programs in practice, since randomization precludes purposive targeting, although it is sometimes feasible to partially randomize.

Alternative solutions 2: Non-experimental evaluation ("quasi-experimental"; "observational studies")
One of two (non-nested) conditional independence assumptions:
1. Placement is independent of outcomes given X => single-difference methods assuming conditionally exogenous program placement. Or placement is independent of outcome changes => double-difference methods.
2. A correlate of placement is independent of outcomes given D and X => instrumental-variables estimator.

3. Generic issues: selection bias; spillover effects.

Selection bias in the outcome difference between participants and non-participants:
$B = E[Y^C \mid D = 1] - E[Y^C \mid D = 0]$, which equals 0 with exogenous program placement.

Two sources of selection bias:
- Selection on observables: data; linearity in controls?
- Selection on unobservables: participants have latent attributes that yield higher/lower outcomes.
One cannot judge if exogeneity is plausible without knowing whether one has dealt adequately with observable heterogeneity. That depends on the program, setting and data.

Spillover effects: hidden impacts for non-participants? Spillover effects can stem from:
- markets
- non-market behavior of participants/non-participants
- behavior of intervening agents (governmental/NGO)
Example: the Employment Guarantee Scheme, an assigned program, but with no valid comparison group.

4. Randomization: the "randomized out" group reveals the counterfactual
As long as the assignment is genuinely random, mean impact is revealed: the ATE is consistently estimated (nonparametrically) by the difference between the sample mean outcomes of participants and non-participants. Pure randomization is the theoretical ideal for the ATE, and the benchmark for non-experimental methods. More common: randomization conditional on X.
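A minimal sketch of the randomized estimator on simulated data (all numbers are illustrative, not from any actual program):

```python
# Under genuinely random assignment the difference in sample means
# consistently estimates the ATE. Simulated data, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
d = rng.integers(0, 2, size=n)           # randomized assignment (0/1)
y_c = 50 + 10 * rng.standard_normal(n)   # counterfactual outcomes
y = y_c + 5 * d                          # true impact set to 5

ate_hat = y[d == 1].mean() - y[d == 0].mean()
t_stat, p_val = stats.ttest_ind(y[d == 1], y[d == 0])
print(f"ATE estimate: {ate_hat:.2f} (t = {t_stat:.2f})")
```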

Examples for developing countries:
- PROGRESA in Mexico: conditional cash transfer scheme; 1/3 of the original 500 communities selected were retained as a control; public access to data; impacts on health, schooling, consumption.
- Proempleo in Argentina: wage subsidy + training. Wage subsidy: impacts on employment, but not incomes. Training: no impacts, though selective compliance.

Lessons from practice 1: Ethical objections and political sensitivities
- Deliberately denying a program to those who need it, and providing the program to some who do not. Yes, there are too few resources to go around; but is randomization the fairest solution to limited resources?
- What does one condition on in conditional randomizations?
- Intention-to-treat helps alleviate these concerns => randomize assignment, but leave people free to not participate. But even then, the "randomized out" group may include people in great need.
=> Implications for design: choice of conditioning variables; sub-optimal timing of randomization; selective attrition + higher costs.

Lessons from practice 2: Internal validity: selective compliance
- Some of those assigned the program choose not to participate.
- Impacts may only appear if one corrects for selective take-up, using randomized assignment as an IV for participation.
- Proempleo example: impacts of training only appear once one corrects for selective take-up.

Lessons from practice 3: External validity: inference for scaling up
- Systematic differences between the characteristics of people normally attracted to a program and those randomly assigned ("randomization bias": Heckman-Smith).
- One ends up evaluating a different program from the one actually implemented => difficulty in extrapolating results from a pilot experiment to the whole population.

5. Controls: regression controls and matching
5.1 OLS regression: the ordinary least squares (OLS) estimator of impact with controls for selection on observables.

Even with controls…

5.2 Matching: matched comparators identify the counterfactual
Match participants to non-participants from a larger survey. The matches are chosen on the basis of similarities in observed characteristics. This assumes no selection bias based on unobservable heterogeneity. The mean impact on the treated (ATET) is nonparametrically identified.

Propensity-score matching (PSM): match on the probability of participation.
Ideally we would match on the entire vector X of observed characteristics. However, this is practically impossible: X could be huge. PSM: match on the basis of the propensity score (Rosenbaum and Rubin), $P(X) = \Pr(D = 1 \mid X)$. This assumes that participation is independent of outcomes given X. If there is no bias given X, then there is no bias given P(X).

Steps in score matching:
1. Representative, highly comparable surveys of the non-participants and participants.
2. Pool the two samples and estimate a logit (or probit) model of program participation. The predicted values are the "propensity scores".
3. Restrict samples to assure common support. Failure of common support is an important source of bias in observational studies (Heckman et al.).

[Charts: density of propensity scores for participants, and density of scores for non-participants.]

Steps in PSM, cont.:
5. For each participant, find a sample of non-participants that have similar propensity scores.
6. Compare the outcome indicators. The difference is the estimate of the gain due to the program for that observation.
7. Calculate the mean of these individual gains to obtain the average overall gain. (Various weighting schemes are possible.)

The mean impact estimator:
$\widehat{G} = \frac{1}{n_T} \sum_{i \in T} \big( Y_i - \sum_{j \in C} W_{ij} Y_j \big)$
where T is the set of matched participants, C the set of matched comparators, and $W_{ij}$ the weight given to comparator j for participant i.
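A minimal sketch of steps 2-7 and this estimator in Python, using scikit-learn for the logit and the nearest-neighbour search (the data frame and the column names d, y, X_cols are hypothetical):

```python
# Sketch of PSM steps 2-7. The data frame `df` pools participants and
# non-participants; `d` is the participation dummy, `y` the outcome,
# `X_cols` the observed characteristics.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_atet(df, X_cols, d="d", y="y"):
    X = df[X_cols].to_numpy()
    # Step 2: logit of participation -> propensity scores
    scores = LogisticRegression(max_iter=1000).fit(X, df[d]).predict_proba(X)[:, 1]
    df = df.assign(pscore=scores)
    treated = df[df[d] == 1]
    controls = df[df[d] == 0]
    # Step 3: common support (drop treated units outside the control range)
    lo, hi = controls["pscore"].min(), controls["pscore"].max()
    treated = treated[treated["pscore"].between(lo, hi)]
    # Steps 5-6: nearest neighbour on the score; individual gains
    nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    gains = treated[y].to_numpy() - controls[y].to_numpy()[idx.ravel()]
    # Step 7: mean of individual gains = mean impact on the treated
    return gains.mean()
```

Single nearest-neighbour matching is only one of the weighting schemes alluded to above; kernel or radius matching would change how the weights $W_{ij}$ are formed.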

Propensity-score weighting
- PSM removes bias under the conditional exogeneity assumption.
- However, it is not the most efficient estimator.
- Hirano, Imbens and Ridder show that weighting the control observations according to their propensity score yields a fully efficient estimator.
- Regression implementation for the common-impact model: weighted least squares with weights of unity for the treated units and $\hat{P}(X_i)/(1 - \hat{P}(X_i))$ for the controls.
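A sketch of that weighted-regression implementation (arrays y, d and the fitted scores p are assumed to come from the matching step above):

```python
# Sketch of the Hirano-Imbens-Ridder weighted regression: weights of 1
# for treated units and p/(1-p) for controls, then weighted least
# squares of y on d.
import numpy as np
import statsmodels.api as sm

def psw_impact(y, d, p):
    w = np.where(d == 1, 1.0, p / (1.0 - p))  # odds weights for controls
    res = sm.WLS(y, sm.add_constant(d), weights=w).fit()
    return res.params[1]  # coefficient on d = common impact estimate
```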

How does PSM compare to an experiment?
- PSM is the observational analogue of an experiment in which placement is independent of outcomes.
- The difference is that a pure experiment does not require the untestable assumption of independence conditional on observables.
- Thus PSM requires good data.
- Example of Argentina's Trabajar program: plausible estimates using single-difference matching on good data; implausible estimates using weaker data.

How does PSM differ from OLS?
- PSM is a non-parametric method (fully non-parametric in outcome space; optionally non-parametric in assignment space).
- Restricting the analysis to common support => PSM weights the data very differently from standard OLS regression.
- In practice, the results can look very different!

How does PSM perform relative to other methods?
- In comparisons with the results of a randomized experiment on a US training program, PSM gave a good approximation (Heckman et al.; Dehejia and Wahba); better than the non-experimental regression-based methods studied by Lalonde for the same program.
- However, its robustness has been questioned (Smith and Todd).

Lessons on matching methods
- When neither randomization nor a baseline survey is feasible, careful matching is crucial to control for observable heterogeneity.
- The validity of matching methods depends heavily on data quality: highly comparable surveys; similar economic environment.
- Common support can be a problem (especially if treatment units are lost).
- Look for heterogeneity in impact: the average impact may hide important differences in the characteristics of those who gain or lose from the intervention.

6. Exploiting program design 1: discontinuity designs
Participate if score M < m. Impact:
$\lim_{M \uparrow m} E[Y \mid M] - \lim_{M \downarrow m} E[Y \mid M]$
Key identifying assumption: no discontinuity in counterfactual outcomes at m. Strict eligibility rules alone do not make this plausible (e.g., geography and local government). "Fuzzy" discontinuities in the probability of participation.
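A sketch of a sharp-discontinuity estimate via local linear regressions on either side of the cutoff (variable names and the bandwidth h are illustrative):

```python
# Sharp discontinuity design: fit local linear regressions within
# bandwidth h on each side of the cutoff m; the impact is the jump
# between the two fitted values at m.
import statsmodels.api as sm

def rdd_impact(score, y, m, h):
    left = (score >= m - h) & (score < m)    # participants: M < m
    right = (score >= m) & (score <= m + h)  # non-participants
    def value_at_cutoff(s, yy):
        # intercept of a regression of yy on (s - m) = fitted value at m
        return sm.OLS(yy, sm.add_constant(s - m)).fit().params[0]
    return value_at_cutoff(score[left], y[left]) - value_at_cutoff(score[right], y[right])
```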

Exploiting program design 2: pipeline comparisons
- Applicants who have not yet received the program form the comparison group.
- Assumes exogenous assignment amongst applicants.
- Reflects latent selection into the program.

Lessons from practice
- Know your program well: program design features can be very useful for identifying impact.
- Know your setting well too: is it plausible that outcomes are continuous under the counterfactual?
- But what if you end up changing the program to identify impact? Then you have evaluated something else!

7. Difference-in-difference
Observed changes over time for non-participants provide the counterfactual for participants. Steps:
1. Collect baseline data on non-participants and (probable) participants before the program.
2. Compare with data after the program.
3. Subtract the two differences, or use a regression with a dummy variable for participants.
This allows for selection bias, but the bias must be time-invariant and additive.

Outcome indicator: $Y_{it} = Y_{it}^C + G_{it} D_{it}$, for t = 0, 1, where $G_{it}$ is the impact ("gain") and $Y_{it}^C$ is the counterfactual, estimated from the comparison group.

Difference-in-difference:
$DD = \big(E[Y_1 \mid D=1] - E[Y_1 \mid D=0]\big) - \big(E[Y_0 \mid D=1] - E[Y_0 \mid D=0]\big)$
(the post-intervention difference in outcomes minus the baseline difference in outcomes). Or, equivalently:
$DD = \big(E[Y_1 \mid D=1] - E[Y_0 \mid D=1]\big) - \big(E[Y_1 \mid D=0] - E[Y_0 \mid D=0]\big)$
(the gain over time for the treatment group minus the gain over time for the comparison group).
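The second expression translates directly into code; a minimal sketch with illustrative array names:

```python
# Double difference from the four group-by-period means.
# y0, y1: NumPy arrays of outcomes at baseline (t=0) and follow-up (t=1)
# for the same units; d marks eventual participants.
def double_difference(y0, y1, d):
    gain_treated = y1[d == 1].mean() - y0[d == 1].mean()
    gain_comparison = y1[d == 0].mean() - y0[d == 0].mean()
    return gain_treated - gain_comparison
```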

Diff-in-diff identifies the gain if: (i) the change over time for the comparison group reveals the counterfactual, $E[Y_1^C - Y_0^C \mid D = 1] = E[Y_1^C - Y_0^C \mid D = 0]$; and (ii) the baseline is uncontaminated by the program, $G_{i0} = 0$.

Selection bias: $B_t = E[Y_t^C \mid D = 1] - E[Y_t^C \mid D = 0]$. Diff-in-diff requires that this bias is additive and time-invariant ($B_1 = B_0$), so that it differences out.

The method fails if the comparison group is on a different trajectory => DD overestimates impact

Or the reverse: DD underestimates impact. A common problem in assessing the impacts of development projects?

Example of poor-area programs: areas not targeted yield a biased counterfactual. The growth process in non-treatment areas is not indicative of what would have happened in the targeted areas without the program. Example from China (Jalan and Ravallion). [Chart: income over time for targeted vs. not-targeted areas.]

Matched double difference
Matching helps control for time-varying selection bias. Score-match participants and non-participants based on observed characteristics in the baseline:
- initial conditions (including outcomes)
- prior outcome trajectories
Then do a double difference. This deals with observable heterogeneity in initial conditions that can influence subsequent changes over time.

Propensity-score weighted version of "matched diff-in-diff"
- Weighting the control observations according to their propensity score yields a fully efficient estimator (Hirano, Imbens and Ridder).
- Regression implementation: weighted least squares with weights of unity for the treated units and $\hat{P}(X_i)/(1 - \hat{P}(X_i))$ for the controls, where $\hat{P}(X_i)$ is the propensity score.

"Fixed effects" model
- Fixed-effects model on a balanced panel: $Y_{it} = \alpha + G D_{it} + \eta_i + \varepsilon_{it}$, where $\eta_i$ is a time-invariant individual effect.
- Note: adding $\eta_i$ picks up any differences in time-mean latent factors.
- One does not require a balanced panel to estimate DD.
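A sketch of the regression implementation using unit dummies (LSDV), which is equivalent to the fixed-effects estimator; the column names are hypothetical:

```python
# Fixed-effects (LSDV) implementation: unit dummies absorb the
# time-invariant latent factors, so the coefficient on d is the
# double-difference estimate of the gain G.
import pandas as pd
import statsmodels.formula.api as smf

def dd_fixed_effects(panel: pd.DataFrame) -> float:
    # panel columns: y (outcome), d (treated-and-post dummy),
    # t (period), unit (panel identifier)
    res = smf.ols("y ~ d + C(t) + C(unit)", data=panel).fit()
    return res.params["d"]
```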

Lessons from practice
- Single-difference matching can be severely contaminated by selection bias: latent heterogeneity in factors relevant to participation.
- Tracking individuals over time allows a double difference: this eliminates all time-invariant additive selection bias.
- Combining double difference with matching allows us to eliminate observable heterogeneity in factors relevant to subsequent changes over time.

8. Higher-order differencing
Pre-intervention baseline data may be unavailable, e.g., for a safety-net intervention in response to a crisis. Can impact be inferred by observing participants' outcomes in the absence of the program after the program?

New issues
- Selection bias from two sources: (1) the decision to join the program; (2) the decision to stay or drop out.
- There are observed and unobserved characteristics that affect both participation and income in the absence of the program.
- Past participation can bring current gains for those who leave the program.

Double-matched triple difference
1. Match participants with a comparison group of non-participants.
2. Match leavers and stayers.
3. Compare gains to continuing participants with those who drop out (Ravallion et al.).
Triple difference (DDD) = DD for stayers minus DD for leavers.

Outcomes for participants, with "stayers" still in the program in period 2 and "leavers" having dropped out in period 2:
Single difference: $SD_t = E[Y_t \mid D = 1] - E[Y_t \mid D = 0]$
Double difference: $DD = SD_2 - SD_1$
Triple difference: $DDD = DD_{\text{stayers}} - DD_{\text{leavers}}$
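A minimal sketch of the computation (the group labels are illustrative; with a common comparison group its trend cancels from the triple difference, but it is kept explicit here to mirror the two DDs):

```python
# Triple difference: DD for stayers minus DD for leavers, each taken
# relative to matched non-participants. y0, y1 are NumPy arrays of
# outcomes in the two rounds; `group` labels each unit as
# "stayer", "leaver", or "comparison".
def triple_difference(y0, y1, group):
    def dd(g):
        gain = y1[group == g].mean() - y0[group == g].mean()
        base = y1[group == "comparison"].mean() - y0[group == "comparison"].mean()
        return gain - base
    return dd("stayer") - dd("leaver")
```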

Joint conditions for DDD to estimate impact: no current gain to ex-participants; no selection bias in who leaves the program.

Test for whether DDD identifies the gain to current participants: a third round of data allows a test, since mean gains in round 2 should be the same whether or not one drops out in round 3 (gain in round 2 for stayers in round 3 = gain in round 2 for leavers in round 3).

Lessons from practice
1. Tracking individuals over time: addresses some of the limitations of single difference on weak data; allows us to study the dynamics of recovery.
2. The "baseline" can be after the program, but one must address the extra sources of selection bias.
3. Single difference for leavers vs. stayers can work well if there is an exogenous program contraction.

9. Instrumental variables
Identifying exogenous variation using a third variable.
Outcome regression: $Y_i = \alpha + \beta D_i + \varepsilon_i$ (D = 0, 1 is our program, which is not random).
The "instrument" (Z) influences participation, but does not affect outcomes given participation (the "exclusion restriction"). This identifies the exogenous variation in outcomes due to the program.
Treatment regression: $D_i = \gamma + \delta Z_i + \nu_i$

Reduced-form outcome regression:
$Y_i = (\alpha + \beta\gamma) + \beta\delta Z_i + (\varepsilon_i + \beta\nu_i)$
Instrumental variables (two-stage least squares) estimator of impact:
$\hat{\beta}^{IV} = \widehat{\beta\delta} / \hat{\delta}$
Or: regress Y on the predicted D, purged of its endogenous part.
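A sketch of the two-stage procedure (point estimate only; the second-stage standard errors would need the usual 2SLS correction; the array names are illustrative):

```python
# Two-stage least squares: the first stage regresses d on the
# instrument z; the second stage regresses y on the predicted d-hat,
# which is purged of the endogenous part of d.
import statsmodels.api as sm

def two_sls(y, d, z):
    Z = sm.add_constant(z)
    d_hat = sm.OLS(d, Z).fit().predict(Z)   # first stage
    W = sm.add_constant(d_hat)
    return sm.OLS(y, W).fit().params[1]     # coefficient on d-hat
```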

Problems with IVE
1. Finding valid IVs: it is usually easy to find a variable that is correlated with treatment; however, the validity of the exclusion restrictions is often questionable.
2. Impact heterogeneity due to latent factors.

Sources of instrumental variables
- Partially randomized designs as a source of IVs.
- Non-experimental sources of IVs: geography of program placement (Attanasio and Vera-Hernandez); "dams" example (Duflo and Pande); political characteristics (Besley and Case; Paxson and Schady); discontinuities in survey design.

Endogenous compliance: instrumental variables estimator
D = 1 if treated, 0 if control; Z = 1 if assigned to treatment, 0 if not.
Compliance regression: $D_i = \gamma + \delta Z_i + \nu_i$
Outcome regression ("intention-to-treat effect"): $Y_i = \alpha' + \beta' Z_i + \varepsilon_i'$
2SLS estimator: $\hat{\beta} = \hat{\beta}' / \hat{\delta}$ (= ITT deflated by the compliance rate).

Essential heterogeneity and IVE
- The common-impact specification is not harmless.
- Heterogeneity in impact can arise from differences between treated units and the counterfactual in latent factors relevant to outcomes.
- For consistent estimation of the ATE, we must assume that selection into the program is unaffected by latent, idiosyncratic factors determining the impact (Heckman et al.).
- However, likely "winners" will no doubt be attracted to a program, or be favored by the implementing agency.
- => IVE is biased even with "ideal" IVs.

IVE is only a "local" effect
- IVE identifies the effect for those induced to switch by the instrument ("local average effect"). Suppose Z takes two values; then the effect of the program is:
$\beta^{LATE} = \dfrac{E[Y \mid Z=1] - E[Y \mid Z=0]}{E[D \mid Z=1] - E[D \mid Z=0]}$
- Take care in extrapolating to the whole population when there is latent heterogeneity.

Local instrumental variables
- LIV directly addresses the latent-heterogeneity problem.
- The method entails a nonparametric regression of outcomes Y on the propensity score.
- The slope of the regression function gives the marginal impact at that data point. This slope is the marginal treatment effect (Björklund and Moffitt), from which any of the standard impact parameters can be calculated (Heckman and Vytlacil).
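A rough sketch of the LIV idea: smooth Y on the estimated propensity score and differentiate the fit numerically (lowess from statsmodels; assumes distinct score values):

```python
# LIV sketch: nonparametric (lowess) regression of y on the propensity
# score p, then a numerical derivative of the fit; the slope at each
# score is the marginal treatment effect (MTE).
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def mte_curve(y, p, frac=0.3):
    fit = lowess(y, p, frac=frac, return_sorted=True)  # cols: p, E[y|p]
    grid, smoothed = fit[:, 0], fit[:, 1]
    return grid, np.gradient(smoothed, grid)  # MTE(p) = d E[y|p] / dp
```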

Lessons from practice
- Partially randomized designs offer a great source of IVs.
- The bar has risen in standards for non-experimental IVE: past exclusion restrictions were often questionable in developing-country settings; however, defensible options remain in practice, often motivated by theory and/or other data sources.
- Future work is likely to emphasize latent heterogeneity of impacts, especially using LIV.

10. Learning from evaluations
- Can the lessons be scaled up?
- What determines impact?
- Is the evaluation answering the relevant policy questions?

Scaling up?
- Institutional context => impact: "in certain settings anything works, in others everything fails".
- Local contextual factors in development impact. Example of Bangladesh's Food-for-Education program: the same program works well in one village, but fails hopelessly nearby.
- Partial-equilibrium assumptions are fine for a pilot but not when scaled up: PE greatly overestimates the impact of a tuition subsidy once relative wages adjust (Heckman).

What determines impact?
- Replication across differing contexts. Example of Bangladesh's FFE: inequality etc. within the village => outcomes of the program. Implications for sample design => a trade-off between the precision of overall impact estimates and the ability to explain impact heterogeneity.
- Intermediate indicators. Example of China's SWPRP: small impact on consumption poverty, but a large share of the gains were saved.
- Qualitative research/mixed methods: test the assumptions ("theory-based evaluation"), but a poor substitute for assessing impacts on final outcomes.

Policy-relevant questions?
- Choice of counterfactual.
- Policy-relevant parameters? Mean vs. poverty (marginal distribution); average vs. marginal impact; the joint distribution of $Y^T$ and $Y^C$, especially if some participants are worse off: ATE only gives the net gain for participants.
- "Black box" vs. structural parameters: simulate changes in program design. PROGRESA example (Attanasio et al.; Todd and Wolpin): modeling schooling choices using randomized assignment for identification; a budget-neutral switch from a primary to a secondary subsidy would increase impact.

Conclusion: What not to do!
You know a bad evaluation when you see it:
- The evaluation process is poorly linked to the project: the evaluator did not know enough about the setting and project; data collection started too late; data collection did not cover the right outcomes and did not allow for adequate controls; too many "monitoring indicators" + too few outcomes and controls; the evaluation did not address, or even ask, the right questions!
- No basis for assessing the counterfactual: no comparison group; no historical data; too little data for addressing endogenous program placement; not enough thought about selection bias and spillovers.