Implementation Challenges

Implementation Challenges
Malte Lierl
Africa Impact Evaluation Initiative, World Bank
April 14, 2009

Problems in implementation can invalidate the evaluation. Given the costs of the evaluation, second-best solutions should be avoided.
Key steps:
- The best remedy is prevention: do we really have a program design that is politically and administratively feasible, in the short and long run?
- Monitor the roll-out carefully and demand high standards of compliance with the original design (small details make a difference), e.g. accessibility of communities
- Make adjustments immediately as needed

Implementation Steps
Design:
- Identify learning priorities
- Determine priority interventions
- Think about the level of analysis
- Decide on evaluation design and roll-out plan, given operational realities
Implementation:
- Collect and analyze baseline data
- Roll out the intervention
- Collect follow-up data
Utilization:
- Learn from the results, improve operations

Design: Identify learning priorities
Testing new strategies:
- Weak links in the results chain?
- Assumptions donors might challenge?
- Ex.: Is CDD (community-driven development) more effective than capacity building for local administration?
Improving performance of existing programs:
- Ex.: Does competition for grants between villages in a commune lead to better targeting, or is a fixed allocation to each village better?

Design: Identify priority interventions
- Unknown benefits
- Costly intervention
- New intervention
- National or regional priorities
Priority outcomes of interest:
- Intermediate
- Final
Feasible evaluation question

Design: Level of analysis
Individuals:
- Behavior (citizens, elites)
- Welfare (vulnerable groups, …)
Community dynamics?
Public services / microprojects:
- Implementation performance (short-term)
- Sustainability, maintenance (long-term)
Institutions?

Design: Evaluation Strategy
How do we obtain a good counterfactual at our level of analysis?
- Randomization (small samples): need to convince stakeholders
- Matching, diff-in-diff (large sample, good baseline data): politically less difficult
- Discontinuities (need lots of data points around the discontinuity)
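
To make the counterfactual options concrete, here is a minimal sketch (not from the original deck) of how a difference-in-differences estimate could be computed on simulated two-period data; the variable names (`outcome`, `treated`, `post`) and the simulated effect size are illustrative assumptions.

```python
# Minimal difference-in-differences sketch on simulated two-period data.
# Variable names and the simulated effect size are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = treatment community
    "post": np.tile([0, 1], n // 2),    # 0 = baseline round, 1 = follow-up round
})
true_effect = 2.0
df["outcome"] = (
    1.0 * df["treated"] + 0.5 * df["post"]
    + true_effect * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])
```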

Design: Roll-out plan
Is random assignment (or random phase-in) feasible? At what level?
- The unit of intervention is often simplest
- Trade-off: a higher level means a bigger sample
Alternative assignment strategies?
- If the intervention must be targeted, use clearly defined criteria and, if feasible, phase in randomly within the eligible population / villages / regions
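
As an illustration of random phase-in at the community level, the sketch below assigns hypothetical communities to three roll-out waves, stratified by region; the data, column names, and number of waves are assumptions, not part of the program design.

```python
# Sketch: assign communities to random phase-in waves, stratified by region.
# The data frame, column names, and number of waves are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2009)
communities = pd.DataFrame({
    "community_id": np.arange(1, 91),
    "region": np.repeat(["A", "B", "C"], 30),
})

# Within each region (stratum), shuffle communities and deal them into 3 waves.
parts = []
for region, group in communities.groupby("region"):
    order = rng.permutation(len(group))
    group = group.iloc[order].copy()
    group["wave"] = np.arange(len(group)) % 3 + 1  # 1 = first phase-in wave
    parts.append(group)

assignment = pd.concat(parts).sort_values("community_id")
print(assignment.groupby(["region", "wave"]).size())
```

Stratifying by region keeps each wave balanced across regions; the same pattern extends to other strata, such as district or eligibility category.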

Summary: Impact Evaluation Design
- Identify learning priorities
- Determine priority interventions
- Think about the level of analysis
- Decide on evaluation design and roll-out plan, given operational realities

Implementation
- Collecting baseline data
- Rolling out the intervention
- Collecting follow-up data

Implementation: Baseline data
Why collect baseline data?
- Ensure that the treatment and control groups are well balanced
- Non-experimental approaches: baseline data are needed to construct the counterfactual
- Calculate the sample size for the follow-up
- Feed into the program’s MIS
- Analyze the targeting of the program
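
One common use of the baseline is a balance check between treatment and control. The sketch below assumes a baseline file, a 0/1 treatment indicator, and a few covariates; the file name, indicator, and covariate names are all hypothetical.

```python
# Sketch: check baseline balance between treatment and control groups.
# File name, treatment indicator, and covariate names are hypothetical.
import pandas as pd
from scipy import stats

baseline = pd.read_csv("baseline_survey.csv")   # assumed layout
covariates = ["household_size", "assets_index", "distance_to_road_km"]

for var in covariates:
    treat = baseline.loc[baseline["treatment"] == 1, var].dropna()
    control = baseline.loc[baseline["treatment"] == 0, var].dropna()
    t_stat, p_value = stats.ttest_ind(treat, control, equal_var=False)
    print(f"{var}: treat mean={treat.mean():.2f}, "
          f"control mean={control.mean():.2f}, p={p_value:.3f}")
```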

Implementation: Baseline data
- Take advantage of existing data
- Take the design of data collection instruments seriously: discuss, get advice, test the instruments in the field
Who collects the data?
- Bureau of Statistics: integrates with existing data
- Program: integrates with operational logistics
- Private agency: sometimes quality monitoring is easier

Implementation: Program roll-out
Experimental or quasi-experimental evaluation: deviation from the roll-out plan can invalidate the evaluation strategy.
- Stick to the roll-out plan as much as possible
- Carefully monitor program roll-out
- Plan ahead to link MIS and impact evaluation data
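
A small sketch of what linking MIS and impact evaluation data could look like in practice: merging the program's roll-out records onto the evaluation sample by a shared community identifier so that deviations from the plan can be flagged. The file and column names are assumptions.

```python
# Sketch: link program MIS roll-out records to the evaluation sample
# so actual exposure can be compared against the planned assignment.
# File and column names are hypothetical.
import pandas as pd

sample = pd.read_csv("evaluation_sample.csv")   # community_id, assigned_wave
mis = pd.read_csv("mis_rollout.csv")            # community_id, actual_start_date

linked = sample.merge(mis, on="community_id", how="left", validate="one_to_one")
linked["started"] = linked["actual_start_date"].notna()

# Communities assigned to the first wave that never appear in the MIS
# flag deviations from the roll-out plan.
deviations = linked[(linked["assigned_wave"] == 1) & (~linked["started"])]
print(deviations[["community_id", "assigned_wave"]])
```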

Gather information on roll-out
Who, in reality, receives which benefits, and when? This could affect the impacts measured: variation in exposure to treatment.
- Example: a local NGO operator does not deliver
Does the intervention involve something other than initially planned?
- Example: you learn that local NGO operators use assemblies to provide sanitation information; the measured program impact now includes the village sanitation intervention

Challenge: Access to Treatment and Control Areas
Because of new constraints, or constraints that were not planned for, some of the communities in the study are not fully accessible, either for treatment, for data collection, or both.
- Randomized evaluation: if randomization is compromised, then the whole impact evaluation strategy is compromised
- If the sample is compromised, use a statistical correction (see the sketch after this slide)
- Carefully plan for it (and to avoid it), and make an effort to stick with the original design
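
The slide does not prescribe a specific statistical correction; one common option when part of the sample becomes unreachable is inverse-probability weighting for follow-up response, sketched below with hypothetical file, variable, and model names.

```python
# Sketch: re-weight the reachable sample by the inverse probability of being
# reached at follow-up, predicted from baseline characteristics. This is one
# possible correction, not the one the deck prescribes; all names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("followup_with_baseline.csv")   # assumed merged file
# 'reached' = 1 if the community/household could be surveyed at follow-up.
reach_model = smf.logit("reached ~ distance_to_road_km + assets_index",
                        data=df).fit()
df["p_reached"] = reach_model.predict(df)

surveyed = df[df["reached"] == 1].copy()
surveyed["weight"] = 1.0 / surveyed["p_reached"]
# In practice, extreme weights would usually be trimmed or winsorized.

# Weighted treatment-control comparison of the outcome.
impact = smf.wls("outcome ~ treatment", data=surveyed,
                 weights=surveyed["weight"]).fit()
print(impact.params["treatment"])
```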

Challenge: Contamination
What if some of the benefits are shared with the control group?
- CDD example: a treatment community decides to build a road that connects it to a control community
What if all of the control group receives some other benefit?
- Example: an NGO targets the control communities to receive a similar program
Solutions:
- If contamination is small, you can still estimate the effect of being assigned to the treatment group; this estimates a different parameter, which may still be of interest (see the sketch after this slide)
- If control communities received benefits, but fewer or different ones, then measure the impact of the additional benefits (redefine the question, but make sure it is still relevant)
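
A minimal sketch of the intention-to-treat idea mentioned above: regress the outcome on the original random assignment rather than on actual receipt, so crossovers between groups do not drive the estimate. The data file and column names are hypothetical.

```python
# Sketch: under contamination, estimate the intention-to-treat (ITT) effect
# by regressing the outcome on the original random assignment, not on who
# actually ended up with the benefit. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("followup_survey.csv")   # assigned, received, outcome

# ITT: effect of being assigned to the treatment group.
itt = smf.ols("outcome ~ assigned", data=df).fit()

# A naive comparison by actual receipt would be biased if control communities
# also received benefits; it is shown here only for contrast.
naive = smf.ols("outcome ~ received", data=df).fit()

print("ITT estimate:", itt.params["assigned"])
print("Naive 'as-received' estimate:", naive.params["received"])
```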

Challenge: Heterogeneous treatment
Sometimes the quality of the intervention is not the same across all recipients.
- Example: different NGO operators in different areas
If quality differences are not randomly assigned, this may require some reinterpretation of the results, even if the differences are observed.
In the end, we have to focus on the question of interest and direct the evaluation towards it: what type of quality can we deliver in practice?

Challenge: Timing of benefits and multiple treatments
- Treatment 1: CDD only
- Treatment 2: CDD + information for accountability
Interventions should be rolled out at the same time.
- Suppose the program rolls out to Group 1 in June but to Group 2 in December, with a follow-up survey in January: the measured effect is now a different treatment plus a different length of time with the treatment
- Example: the effects of long-term exposure to the CCT (conditional cash transfer) program in Mexico cannot be measured
Other examples?

Implementation: Follow-up data
Collect follow-up data for both the treatment and control groups, at appropriate intervals. Consider how long it should take for outcomes to change:
- Short-term outcomes (social dynamics, behavior, citizen-authority relations): use these to adjust the program
- Medium-term outcomes (impact of local investment projects)
- After the end of the program: do effects endure? What happens once the community support has phased out?

Challenge: Timing and delays
Problem: the program is expected to roll out in January but does not roll out until August, for example because of a delay in funding.
Concern: seasonality effects. Baseline and follow-up data collection should take place at the same time of the year.
- Examples: Eritrea Malaria and ECD, Nigeria Malaria
Other examples?

Summary: Implementation Challenges
Collecting baseline data:
- Accessibility of treatment and control areas
Rolling out the intervention:
- Contamination
- Heterogeneous treatment
- Timing of treatment / multiple interventions
Collecting follow-up data:
- Delays, seasonality

Thank you!