Impact Evaluation for Evidence-Based Policy Making
Arianna Legovini, Lead Specialist, Africa Impact Evaluation Initiative

2 How to turn this child…

3 …into this child

4 Why Evaluate?
Fiscal accountability
– Allocate limited budget to what works best
Program effectiveness
– Managing by results: do more of what works
Political sustainability
– Negotiate budget
– Inform constituents

5 Traditional M&E and Impact Evaluation
[Results-chain diagram: $$$ → INPUTS → OUTPUTS → BEHAVIOR → OUTCOMES]
Monitoring tracks implementation efficiency (input → output): MONITOR EFFICIENCY
Impact evaluation measures effectiveness (output → outcome): EVALUATE EFFECTIVENESS

6 Question types and methods
M&E (monitoring & process evaluation): descriptive analysis
▫ Is the program being implemented efficiently?
▫ Is the program targeting the right population?
▫ Are outcomes moving in the right direction?
Impact Evaluation: causal analysis
▫ What was the effect of the program on outcomes?
▫ How would outcomes change under alternative program designs?
▫ Is the program cost-effective?

7 Answer with traditional M&E or IE?
Are nets being delivered as planned? → M&E
Do IPTs increase cognitive ability? → IE
What is the correlation between HIV treatment and prevalence? → M&E
How does HIV testing affect prevention behavior? → IE

8 Efficacy & Effectiveness
Efficacy:
– Proof of concept
– Pilot under ideal conditions
Effectiveness:
– At scale
– Normal circumstances & capabilities
– Lower or higher impact?
– Higher or lower costs?

9 Use impact evaluation to…
Test innovations
Scale up what works (e.g. de-worming)
Cut or change what does not (e.g. HIV counseling)
Measure the effectiveness of programs (e.g. JTPA)
Find the best tactics to change people's behavior (e.g. bringing children to school)
Manage expectations

10 What makes a good impact evaluation?

11 The evaluation problem
Ideally, compare the same individual with and without the program at the same point in time. BUT we never observe the same individual both with and without the program at the same point in time.
Formally, the impact of the program is: α = (Y | P=1) − (Y | P=0)
Example: how much does an anti-malaria program lower under-five mortality?
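To make the formula concrete, here is a minimal sketch in Python (all numbers invented): each unit has two potential outcomes, but real data only ever reveals one of them per unit, which is why a comparison group is needed.

```python
# Minimal sketch (illustrative numbers only) of the evaluation problem:
# every unit has two potential outcomes, but we observe only one per unit.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

y_without = rng.normal(100, 10, n)             # Y | P=0, e.g. under-five mortality
y_with = y_without - 15 + rng.normal(0, 5, n)  # Y | P=1, program lowers mortality

alpha = (y_with - y_without).mean()            # α = (Y | P=1) − (Y | P=0)
print(f"true impact (knowable only in simulation): {alpha:+.1f}")  # ~ -15

enrolled = rng.random(n) < 0.5                 # in the field, only one side is seen
y_observed = np.where(enrolled, y_with, y_without)
```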

12 Solving the evaluation problem
Counterfactual: what would have happened without the program
Estimate the counterfactual, i.e. find a control or comparison group
Counterfactual criteria:
– Treated & counterfactual groups have identical average characteristics at baseline
– The only reason for the difference in outcomes is the intervention

13 “Counterfeit” counterfactuals
Before and after:
– The same individual before the treatment
Non-participants:
– Those who chose not to enroll in the program, or
– Those who were not offered the program
– Problem: we cannot determine why some are treated and some are not

14 Before-and-after example: food aid
– Compare mortality before and after the program
– Observe that mortality increases
– Did the program fail?
– “Before” was a normal year, but “after” was a famine year
⇒ Cannot separate (identify) the effect of food aid from the effect of the drought

15 Before & After
Compare Y before & after the intervention: the before & after counterfactual is B, so the estimated impact is A − B
Once time-varying factors are controlled for, the true counterfactual is C and the true impact is A − C
So A − B under-estimates the true impact
[Figure: outcome Y over time from t−1 (before) to t (after treatment), showing observed outcome A, naive counterfactual B, and true counterfactual C]
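A small simulation (invented numbers) of the same point: when something else changes over time, such as a famine, the before/after comparison attributes that change to the program.

```python
# Illustrative simulation: a before/after comparison confounds the program
# effect with anything else that changed over time (here, a famine).
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

y_before = rng.normal(50, 5, n)   # mortality before the program
famine_shock = +20                # the famine raises mortality for everyone
program_effect = -10              # food aid truly lowers mortality
y_after = y_before + famine_shock + program_effect + rng.normal(0, 2, n)

before_after = y_after.mean() - y_before.mean()
print(f"before/after estimate: {before_after:+.1f}")    # ~ +10, wrong sign
print(f"true program effect:   {program_effect:+.1f}")  # -10
```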

16 Non-participants
Compare non-participants to participants
Counterfactual: non-participant outcomes
Problem: why did they not participate?
Estimated impact: α_i = (Y_it | P=1) − (Y_kt | P=0)
Hypothesis: (Y_kt | P=0) = (Y_it | P=0)

17 Exercise: why might participants and non-participants differ?
Mothers who came to the health unit for ORT and mothers who did not?
– Child had diarrhea; access to clinic
Communities that applied for funds for IRS and communities that did not?
– Coastal vs. mountain; epidemic vs. non-epidemic
People who receive ART and people who do not?
– People with HIV; access to clinic

18 Health program example
Treatment is offered. Who signs up?
– Those who are sick
– Areas with epidemics
These have lower health status than those who do not sign up
Healthy people/communities are a poor estimate of the counterfactual

19 What's wrong? Selection bias
People choose to participate for specific reasons
These reasons are often directly related to the outcome of interest
We then cannot separately identify the impact of the program from these other factors/reasons
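A minimal simulation (invented numbers) of this selection problem: when the sick self-select into the program, the hypothesis (Y_kt | P=0) = (Y_it | P=0) from slide 16 fails and the naive comparison is badly biased.

```python
# Illustrative simulation: selection bias when sicker people self-select
# into a health program, so non-participants are a poor counterfactual.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

health = rng.normal(0, 1, n)      # latent baseline health
signs_up = health < 0             # the sick are the ones who enroll
program_effect = 5.0              # the program truly improves outcomes

outcome = 50 + 10 * health + program_effect * signs_up + rng.normal(0, 1, n)

naive = outcome[signs_up].mean() - outcome[~signs_up].mean()
print(f"naive participant vs non-participant gap: {naive:+.1f}")  # ~ -11, not +5
print(f"true program effect:                      {program_effect:+.1f}")
```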

20 Need to know…
Why some get assigned to the treatment group and others to the control group, i.e. the process by which the data are generated
If these reasons are correlated with the outcome, we cannot separately identify the program impact from these other “selection” factors

21 Possible solutions…
Guarantee comparability of treatment and control groups, so that the ONLY remaining difference is the intervention
How?
– Experimental design/randomization
– Quasi-experiments:
· Regression discontinuity
· Double differences
– Instrumental variables

22 These solutions all involve…
EITHER randomization:
– Give everyone an equal chance of being in the control or treatment group
– Guarantees that all factors/characteristics will be equal, on average, between the groups
– The only difference is the intervention
OR transparent & observable criteria for assignment into the program
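A minimal sketch (synthetic data) of why randomization works: because assignment is a coin flip, baseline characteristics balance on average, and a simple difference in means recovers the true effect that selection bias hid above.

```python
# Illustrative simulation: randomization balances characteristics across
# groups, so the difference in means estimates the true program effect.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

health = rng.normal(0, 1, n)      # same latent baseline health as before
treat = rng.random(n) < 0.5       # coin-flip assignment, independent of health
program_effect = 5.0

outcome = 50 + 10 * health + program_effect * treat + rng.normal(0, 1, n)

balance = health[treat].mean() - health[~treat].mean()
estimate = outcome[treat].mean() - outcome[~treat].mean()
print(f"baseline health gap between groups: {balance:+.3f}")   # ~ 0
print(f"difference in means:                {estimate:+.2f}")  # ~ +5, the true effect
```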

23 Finding controls: opportunities
Budget constraints:
– Eligible who get the program = potential treatments
– Eligible who do not = potential controls
Roll-out capacity:
– Those who go first = potential treatments
– Those who go later = potential controls
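One way to operationalize the roll-out idea, sketched below (the district names and phase sizes are hypothetical): randomize the order in which eligible units are phased in, so that units treated later serve as controls for units treated first.

```python
# Sketch of a randomized phase-in (roll-out) design: everyone eligible is
# treated eventually; a random ordering decides who goes first.
import random

random.seed(42)
eligible = [f"district_{i:02d}" for i in range(1, 13)]  # hypothetical units

random.shuffle(eligible)          # transparent: equal chance of going first
phase_1, phase_2, phase_3 = eligible[0:4], eligible[4:8], eligible[8:12]

print("treated in year 1 (treatment group):", phase_1)
print("treated later (controls for year 1):", phase_2 + phase_3)
```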

24 Finding controls: ethical considerations
Do not delay benefits: roll out based on budget/capacity constraints
Equity: equally deserving populations deserve an equal chance of going first
Use a transparent & accountable method:
– Give everyone eligible an equal chance
– If ranking is based on criteria, the criteria should be measurable and public

25 Thank you