AADAPT Workshop Latin America
Brasilia, November 16-20, 2009
Nandini Krishnan, Africa Impact Evaluation Initiative, World Bank
April 14, 2009



• Know your sector
• Examine the sector plan
▪ Poverty reduction
▪ Long-term strategy for agricultural growth and sustainable rural livelihoods
▪ Governance and accountability initiatives
• Identify the highest priorities for learning in agriculture, rural development, or local governance

• Agriculture
▪ Technology adoption: input vouchers, matching grants, agricultural advisory services, and associated implementation mechanisms
▪ Irrigation: large-scale or small-scale, construction or rehabilitation, financial arrangements and sustainability
▪ Access to markets: information, complementary infrastructure
• Community-driven development and local governance
▪ Accountability interventions: information; devolution of funds, functions, and functionaries
▪ Participation interventions: capacity building, conditional budgetary support, women's associations, community assemblies

• Priority interventions:
• Unknown benefits
• Costly intervention
• New intervention
• National or regional policy thrust: resources focused on… / scaling up
• Priority outcomes of interest
• Intermediate
• Final

• Useful to:
• Make next year's decisions
• Justify changes to the program
• Negotiate your budget
• Justify expansion

• Agriculture
• Sector priority: increasing commercialization of agricultural products
• Intervention: grants for value-added projects
• Priority for learning: What level and type of training and support are needed to help farmer associations succeed? What level of subsidy is most cost-effective? For which product lines is the grant most effective?
• Governance
• Sector priority: improving local accountability
• Intervention: budgetary support conditional on participatory decision-making
• Priority for learning: Which rules of the game are most conducive to driving decisions toward public and away from private goods?

• How will the program be rolled out? Different interventions?
• Piloted in a random sample of households, communities, or regions?
• Rolled out nationwide?
• Rolled out in communities/populations/regions satisfying certain criteria?
• Rolled out to a targeted high-potential or high-poverty population/area?
• Understand TARGETING and PROGRAM PARTICIPATION
• Each roll-out strategy yields distinct opportunities for impact evaluation

• Keep in mind:
• The needs of the intervention: target population / high-priority areas
• The evaluation: take advantage of opportunities for random assignment or phased rollout
• Example: 10,000 eligible households in high-potential areas to receive vouchers for improved seeds and fertilizers
• Randomly assign 3,000 to Year 1 and 3,500 each to Years 2 and 3
• Or identify the 5,000 neediest (using clearly defined criteria) and assign them to Years 1 and 2
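A randomized phased rollout like the one above can be sketched in a few lines of Python. This is a hypothetical illustration: household IDs are synthetic, and the phase sizes (3,000 / 3,500 / 3,500) are chosen so the three phases exactly cover 10,000 eligible households.

```python
# Hypothetical sketch: randomly assign eligible households to rollout phases.
# IDs and phase sizes are illustrative, not from an actual program.
import random

def assign_phases(household_ids, phase_sizes, seed=42):
    """Shuffle households, then split them into consecutive phases."""
    if sum(phase_sizes) != len(household_ids):
        raise ValueError("phase sizes must cover all households exactly")
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible and auditable
    shuffled = household_ids[:]
    rng.shuffle(shuffled)
    phases, start = [], 0
    for size in phase_sizes:
        phases.append(shuffled[start:start + size])
        start += size
    return phases

year1, year2, year3 = assign_phases(list(range(10_000)), [3_000, 3_500, 3_500])
```

Households not yet phased in (Years 2 and 3) serve as the comparison group for Year 1 outcomes.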

• Random encouragement: use random assignment of promotional activities to evaluate
• the effect of promotion activities on take-up
• the effect of grants and incentives on productivity (use random encouragement as an instrument)
• Randomize in the call-for-proposals pipeline:
• Call for "expressions of interest"
• Select at least twice as many "expressions of interest" as you can fund
• Randomly select half of them and ask them to submit full proposals (treatment)
• The other half serve as control (either forever or until the next call)
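The pipeline randomization described above can be sketched as a simple random split of the shortlist. The EOI identifiers below are invented for illustration.

```python
# Hypothetical sketch: from a shortlist of "expressions of interest",
# invite a random half to submit full proposals (treatment); the rest
# serve as the control group until the next call.
import random

def split_pipeline(shortlist, seed=7):
    rng = random.Random(seed)
    shuffled = shortlist[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

treatment, control = split_pipeline([f"EOI-{i:03d}" for i in range(200)])
```

Because both groups cleared the same shortlisting bar, the control group is a valid counterfactual for the funded projects.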

• Determine scale: large scale or pilot?
• Universal scale with imperfect take-up: encouragement design
• Universal scale with perfect take-up: difficult
• Large scale with representative sample: more costly, more informative
• Large scale with purposeful sample: less costly, good for a first instance, may require more evaluation later
• Small pilot (e.g., in two districts): easier to implement, not as informative, may need to use all beneficiaries
• Some programs are too small to evaluate

• Unit of assignment is the unit of intervention
• Is random assignment feasible?
• Large-scale irrigation project: not feasible to assign farm households or communities randomly; the intervention is determined by the location of canals
• Input vouchers: can randomly assign at the region, community, or household level
▪ Contamination? Spillovers?
• Trade-off: a higher-level unit of intervention means a bigger survey sample (need multiple households to obtain one observation for a community)
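When the unit of assignment is the community rather than the household, randomization happens over communities and every household inherits its community's status. A minimal sketch, with invented village names:

```python
# Hypothetical sketch of cluster (community-level) assignment: randomize
# half of the communities into treatment; all households in a treated
# community are treated, which contains spillovers within clusters.
import random

def assign_communities(communities, seed=1):
    rng = random.Random(seed)
    shuffled = communities[:]
    rng.shuffle(shuffled)
    treated = set(shuffled[:len(shuffled) // 2])
    return {c: (c in treated) for c in communities}

status = assign_communities([f"village-{i}" for i in range(40)])
# every household in a village shares status[village]
```

The cost of clustering is statistical: with 40 villages there are only 40 independent units, so the household survey must cover several households per village to recover precision.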

• If the intervention must be targeted, think about a valid counterfactual
• If eligibility criteria are not clearly defined:
▪ phase in the program randomly within the eligible population / villages / regions
• If targeting specific regions or communities:
▪ randomize at the district/village level or use clear eligibility criteria

• Random assignment: implies control and treatment groups are identical
• Still need to check for balance
• If not balanced, reassign to correct
• A baseline insures against mishaps
• Regression discontinuity design and other quasi-experimental methods:
• Baseline essential
• Matching on observables and checking for balance ex post
• By-products of baseline analysis:
• Informs project design and implementation: Who was targeted? Did the program mostly benefit participants who were poor or at high risk at baseline? How well were they targeted?
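A balance check on a baseline covariate usually comes down to a two-sample t-test of the treatment-control difference in means. The sketch below uses only the standard library and simulated data; in practice you would run it on each covariate from your baseline survey.

```python
# Illustrative balance check after random assignment. The baseline
# covariate (e.g. farm size in hectares) is simulated here, not real data.
import random
import statistics

def t_statistic(a, b):
    """Welch two-sample t-statistic for a difference in means."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

rng = random.Random(0)
treat = [rng.gauss(2.0, 1.0) for _ in range(500)]  # simulated baseline farm size
ctrl = [rng.gauss(2.0, 1.0) for _ in range(500)]

t = t_statistic(treat, ctrl)
balanced = abs(t) < 1.96  # rough 5% threshold; larger |t| flags imbalance
```

With many covariates, expect roughly 1 in 20 to show |t| > 1.96 by chance even under perfect randomization, which is why a single starred difference is not by itself alarming.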

• Include areas essential to the monitoring system and the impact evaluation
• Ultimate outcomes we care most about: yield, consumption, incomes
• Intermediate outcomes we expect to change first: input use
• Other outcomes that the intervention may affect: schooling, labor
• Characteristics that might affect outcomes: farm size, household size, education
• In short, outcomes of interest AND variables that help understand how the intervention affects different populations

• Take advantage of the opportunity to collect essential sector data
• Existing land use, crop choice, input use
• Who collects it?
• Bureau of Statistics: integrate with existing data
• Ministry concerned: Ministry of Agriculture / Water Resources / Rural Development
• Private agency: sometimes higher quality, more dependable

• The IE team (not the data collection agency) should:
• Design the questionnaire and sample
• Define terms of reference for the data collection agency
• Train enumerators
• Conduct a pilot
• Supervise data collection

• Do treatment and control groups look similar at baseline?
• If not, all is not lost!
• Even in the absence of perfect balance, baseline data can be used to adjust the analysis or re-assign

              Poverty  Female-headed  Children in  Formal
                       households     household    sector job
Treatment     70%      64%            3.1          20%
Control       68%      66%            2.9          18%
Significance  -        *              -            -

• Monitor the roll-out to ensure the evaluation is not compromised
• What if the benefits are accidentally rolled out to everyone, all at once?
• Example: input vouchers were to be randomly assigned to households in pre-identified communities, but were rolled out to the entire community
• Contamination: some treatment households sell all or part of their vouchers for cash to control households
▪ Is the evaluation compromised? Need to monitor!
▪ Spillovers are interesting and can be measured

• What if the entire control group receives some other benefit?
• Example: an NGO targets control communities to receive vouchers
• This changes the evaluation: it becomes a comparison between your program and the NGO program

• In reality, who receives which benefits, and when?
• Could affect the impacts measured: variation in exposure to treatment
• Example: the voucher program rolls out in some communities before harvest, in others after
• Does the intervention involve something other than initially planned?
• Example: you learn that the input suppliers who distributed vouchers also gave detailed training on appropriate input use
• The measured program impact now includes the training

• Collect follow-up data for both the treatment and control groups
• At appropriate intervals
• Consider how long it should take for outcomes to change
• One year, or at the next harvest:
▪ provides initial outcomes
▪ adjust the program if needed
• Two years: changes in longer-term outcomes?
• After the end of the program: do effects endure?
▪ What happens once the input voucher program has phased out?

• Randomization: simply compare average outcomes for the treatment and comparison groups
• Other methods: make statistical assumptions to estimate the impact of the program
• Combinations of methods:
• Random encouragement and instrumental variables (IV)
• Matching with difference-in-differences
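The difference-in-differences estimator mentioned above is just two subtractions: the change over time in the treatment group minus the change over time in the control group. All numbers below are invented for illustration (say, mean maize yield in tons per hectare).

```python
# Minimal difference-in-differences sketch. Inputs are group means at
# baseline and follow-up; values are hypothetical, not program data.
def diff_in_diff(treat_before, treat_after, ctrl_before, ctrl_after):
    """Impact = (treatment change over time) - (control change over time)."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

impact = diff_in_diff(treat_before=1.8, treat_after=2.6,
                      ctrl_before=1.7, ctrl_after=2.0)
# (2.6 - 1.8) - (2.0 - 1.7) = 0.8 - 0.3 = 0.5 t/ha
```

Subtracting the control group's change nets out trends that affect everyone (weather, prices), which is what makes the method credible even when the two groups start at slightly different levels.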

• Are the effects statistically significant?
• A basic statistical test tells whether differences are due to the program or to noisy data
• Are they significant in real terms?
• If the input voucher scheme costs a million dollars and has only a tiny positive effect, it may not be worthwhile
• Are they sustainable?
• If input use falls to pre-program levels when the intervention ends, the program is not financially sustainable in its current form

• Are you thinking about this just now? Start dissemination today
• If no one knows about it, it won't make a difference to policy!
• Make sure the information gets into the right policy discussions
• Ownership by government, capacity building
• Forums
• Real-time discussions
• Workshops
• Reports
• Policy briefs

• Identify the next learning opportunity
• Test variations:
• Alternative subsidy amounts
• Alternative packages of inputs
• Alternative implementation and targeting mechanisms: government extension workers or input dealers? Beneficiary selection?
• Test other interventions that affect the same outcomes:
• Matching grants for technology adoption
• Training in the use of improved technologies
• Improving access to markets and providing complementary infrastructure to increase the share of marketed output