
1
AADAPT Workshop South Asia
Goa, India, December 17-21, 2009
Arianna Legovini
Manager, Development Impact Evaluation Initiative
The World Bank

2
 To secure operational relevance of IE and change the culture, the learning agenda must be set bottom-up
 This requires capacity development for IE in implementing agencies
    Some formal training
    Mainly application and learning by doing, as part of the evaluation team
 Objective
    Use impact evaluation as an internal and routine management tool
    Secure policy feedback

3
 A learning agenda is a set of questions of high policy and operational importance, the answers to which will:
    Secure policy continuity
    Improve the effectiveness of programs
    Win the approval of the policy establishment and the electorate
 How?
    Know your sector
    Involve all relevant parties in the discussion
    Include evaluators to help structure the discussion and the resulting analytical design
    Identify the program(s) with the highest potential, political sensitivity, or fiscal expenditures
    Develop a forward-looking learning agenda
    Include policy (what) and operational (how-to) questions
    Disseminate the learning agenda to relevant parties (minister, constituencies)

4
 Question the design choices of the program
    Institutional arrangements, delivery mechanisms, packages, pricing/incentive schemes
 Use randomized trials to test alternatives (see the sketch below)
 Measure effects on short-term outcomes (as leading indicators of other things to come)
    Take-up rates, use, adoption
 Follow-up data collection and analysis
    3, 6, and 12 months after exposure
 Measure the impact of alternative treatments on short-term outcomes and identify the "best"
 Change the program to adopt the best alternative
 Start over
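Testing alternatives can start from a reproducible random draw over the eligible units. A minimal sketch in Python, where the arm labels, unit counts, and column names are all illustrative assumptions rather than from the workshop:

```python
# A minimal sketch of randomizing units across alternative program designs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)              # fixed seed: reproducible draw

# 400 eligible farmer groups (illustrative)
units = pd.DataFrame({"unit_id": range(1, 401)})

# Two design alternatives plus a control group (labels assumed)
arms = ["voucher", "matching_grant", "control"]
labels = np.repeat(arms, -(-len(units) // len(arms)))[: len(units)]
units["arm"] = rng.permutation(labels)

print(units["arm"].value_counts())                # arms should be near-balanced
```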

5
 How much does the program deliver? Is it cost-effective?
 Use the most rigorous evaluation method possible
 Focus on higher-level outcomes
    Agricultural productivity, health status, income
 Measure the impact of the operation on its stated objectives and on a metric of common outcomes
    One-, two-, or three-year horizon
 Compare with results from other programs
 Inform the budget process and allocations

6
 Make next year's decisions
 Justify changes to the program
 Negotiate your budget
 Justify expansion

7
 Agriculture
    Sector priority: increasing commercialization of agricultural products
    Intervention: grants for value-added projects
    Priorities for learning:
       What training is needed to help farmer associations succeed?
       What level of subsidy is most cost-effective?
       For which product lines are the grants most effective?
 Governance
    Sector priority: improving local accountability
    Intervention: budgetary support conditional on participatory decision-making
    Priorities for learning:
       What rules of the game are most conducive to driving decisions toward public and away from private goods?
       How much effort should be exerted to ensure the participation of women, the poor, etc.?

8
 Determine scale: large scale or pilot?
    Universal scale with imperfect take-up: encouragement design
    Universal scale with perfect take-up: difficult
    Large scale with a representative sample: more costly, more informative
    Large scale with a purposeful sample: less costly, good for a first instance, may require more evaluation later
    Small pilot (e.g., in two districts): easier to implement, not as informative, may need to use all beneficiaries
 Some programs are too small to evaluate

9
 Identify eligibility criteria and targeting
    Rolled out in communities/populations/regions satisfying certain criteria? Possibility to use regression discontinuity
    Rolled out to targeted high-potential or high-poverty populations/areas?
 Understand the roll-out (timing and geography)
    Piloted in a sample of households, communities, or regions? Possibility for random assignment
    Rolled out nationwide? In how many phases? How many villages per phase? Possibility for random phase-in (see the sketch below)
    Universal program with imperfect take-up? Possibility for random encouragement
 Investigate budget constraints: possibility for random assignment
 Each roll-out strategy yields distinct opportunities for impact evaluation
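Random phase-in means the order in which villages enter the program is randomized, so phases not yet treated serve as controls for earlier ones. A minimal sketch, with the number of villages and phases assumed for illustration:

```python
# A minimal sketch of random phase-in: villages are randomly ordered into
# roll-out phases; later phases act as controls for earlier ones.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)
villages = pd.DataFrame({"village_id": range(1, 121)})   # 120 villages (assumed)

n_phases = 3
villages["phase"] = rng.permutation(len(villages)) % n_phases + 1

print(villages.groupby("phase").size())                  # 40 villages per phase
```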

10
 Each question requires a separate design or "identification strategy"
 For each question there will be a method that is best, in the sense that it provides the most precise estimate and is operationally feasible
 When unsure, plan to use more than one method
 Keep ethical considerations in mind:
    Do not deny benefits that we know work
    Test interventions before scale-up when we do not know
 Discuss the design with the authorizing environment

11
 Random encouragement: use random assignment of promotional activities to evaluate
    the effect of promotion activities on take-up
    the effect of grants and incentives on productivity (use random encouragement as an instrument to create exogenous variation in take-up)
 Randomize in the call-for-proposals pipeline (see the sketch below):
    Call for "expressions of interest"
    Select at least twice as many "expressions of interest" as you can fund
    Randomly select half of them and ask them to submit full proposals (treatment)
    The other half serve as control (either forever or until the next call)
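A minimal sketch of the pipeline randomization described above; the file and column names are assumptions for illustration:

```python
# A minimal sketch: randomly invite half of the shortlisted "expressions of
# interest" to submit full proposals; the rest serve as the control group.
import pandas as pd

eoi = pd.read_csv("expressions_of_interest.csv")     # assumed shortlist file

invited = eoi.sample(frac=0.5, random_state=2009)    # reproducible random half
eoi["invited"] = eoi.index.isin(invited.index)

eoi.to_csv("eoi_with_assignment.csv", index=False)   # keep a record of the draw
```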

12
 Power calculations are needed to determine the size of the sample used in the evaluation (see the sketch below)
 What is the unit of intervention?
 The required sample grows as the expected effects shrink
 If clustered (village, community):
   ▪ Power decreases with high (intra-cluster) correlation of outcomes, so more clusters are needed
   ▪ Power is mostly determined by the number of clusters, not the number of observations within each cluster
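The cluster logic above can be made concrete with a design-effect calculation. A minimal sketch assuming a standard two-arm comparison; the effect size, intra-cluster correlation, and cluster size are illustrative values:

```python
# A minimal sketch of a cluster-adjusted sample size calculation.
from statsmodels.stats.power import NormalIndPower

# Individuals per arm for a simple (unclustered) random sample
n_simple = NormalIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)

icc = 0.05                       # assumed intra-cluster correlation of outcomes
m = 20                           # households interviewed per village (assumed)
deff = 1 + (m - 1) * icc         # design effect from clustering
n_clustered = n_simple * deff    # individuals per arm under clustering

print(f"simple random sample, per arm: {n_simple:.0f}")
print(f"clustered design, per arm:     {n_clustered:.0f} "
      f"(~{n_clustered / m:.0f} villages)")
```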

13
 Cost items:
    Capacity development
    Project team time to manage the IE
    Analytical services for design and analysis
    Field coordination
    Data collection
    Discussions and dissemination

14
 The IE team should include:
    Government (program manager, economist/statistician)
    World Bank project team (task manager or substitute)
    Research team (lead researcher, co-researchers, field coordinator)
    Data collection agency

15
 The smallest unit of assignment is the unit of intervention
    Pension: individual
    Health insurance: individual or household
    Female president of the local council: village
 Assignment can be at a higher aggregation level, but this is costly
 Treatment units are assigned the intervention and control units are not; create a listing and records of the assignment (see the sketch below)
 Assignment must be discussed and explained to all parties who make implementation decisions, to avoid contamination later on
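A minimal sketch of creating the listing and records mentioned above, assigning at the village level and stratifying by district; all names and counts are assumptions:

```python
# A minimal sketch: assign at the unit of intervention (villages), stratified
# by district, and write out a listing to share with implementers.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=17)
listing = pd.DataFrame({
    "village_id": range(1, 97),                       # 96 villages (assumed)
    "district": np.repeat(["A", "B", "C", "D"], 24),  # 4 districts of 24 (assumed)
})

# Within each district, randomize half the villages to treatment
listing["status"] = "control"
for district, rows in listing.groupby("district").groups.items():
    treated = rng.choice(rows, size=len(rows) // 2, replace=False)
    listing.loc[treated, "status"] = "treatment"

listing.to_csv("assignment_listing.csv", index=False)  # shared record of the draw
print(listing.groupby(["district", "status"]).size())
```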

16
 Collect baseline data only AFTER the impact evaluation design is ready
 The impact evaluation design determines the sample (use power calculations)
 Random assignment: control and treatment are identical at baseline
    A baseline is strictly not needed for the analysis; use it to check for balance, to reassign if needed, and as a fallback if randomization is contaminated
 Quasi-experimental methods: a baseline is essential
 Baseline analysis informs project design and implementation and improves targeting

17
 The IE team (not the data collection agency) should:
    Design the questionnaire and sample
    Define the terms of reference for the data collection agency
    Train enumerators
    Conduct a pilot
    Supervise data collection

18
 Design questionnaire(s) and a sample that meet both monitoring and impact evaluation data needs:
    Final outcomes: yield, consumption, incomes
    Intermediate outcomes we expect to change first: input use, technology adoption
    Other outcomes the intervention may affect: schooling, labor
    Characteristics that might affect outcomes: farm size, household size, education
 In short, outcomes of interest AND variables that help us understand how the intervention affects different populations

19
 Who?
    Bureau of Statistics: integrate with existing data
    Ministry concerned: Ministry of Agriculture/Water Resources/Rural Development
    Private agency: sometimes higher quality, more dependable

20
 Do treatment and control groups look similar at baseline?
 If not, all is not lost!
 Even in the absence of perfect balance, baseline data can be used to adjust the analysis or to re-assign (see the sketch below)

               Poverty   Female-headed   Children per   Formal-sector
                         households      household      job
Treatment      70%       64%             3.1            20%
Control        68%       66%             2.9            18%
Significance   -         *               -              -
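A minimal sketch of producing a balance table like the one above with simple two-sample t-tests; the data file and variable names are assumptions:

```python
# A minimal sketch of a baseline balance check using two-sample t-tests.
import pandas as pd
from scipy import stats

df = pd.read_csv("baseline.csv")   # assumed survey with a 'status' column

for var in ["poverty", "female_head", "n_children", "formal_job"]:
    t = df.loc[df["status"] == "treatment", var]
    c = df.loc[df["status"] == "control", var]
    _, p = stats.ttest_ind(t, c, nan_policy="omit")
    flag = "*" if p < 0.05 else "-"
    print(f"{var:12s} treatment={t.mean():.2f} control={c.mean():.2f} "
          f"p={p:.3f} {flag}")
```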

21
 Intensive monitoring of the roll-out ensures the evaluation is not compromised
 What if both treatment and control groups receive the benefits?
   ▪ Input vouchers are randomly assigned to households but rolled out to the entire community, or treatment households sell their vouchers to control households
   ▪ Is the evaluation compromised? Monitoring is needed to find out!
 What if the control group receives some other benefit?
   ▪ An NGO targets control communities to receive vouchers
   ▪ This changes the evaluation: it becomes a comparison between your program and the NGO program

22
 Collect follow-up data with the same sample and questionnaire as the baseline
 Choose appropriate intervals: consider how long it should take for outcomes to change
    One year, or at the next harvest
      ▪ Provides initial outcomes
      ▪ Adjust the program if needed
    Two years: changes in longer-term outcomes?
    After the end of the program: do the effects endure?
      ▪ What happens once the input voucher program has phased out?

23
 Randomization: simply compare average outcomes for the treatment and comparison groups
 Other methods: econometric analysis is required, taking the identifying assumptions into account, to estimate the impact of the program
 Combinations of methods (see the sketch below):
    Random encouragement with IV
    Matching with difference-in-differences
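A minimal sketch of the first two approaches using statsmodels formulas; the panel file and variable names are assumptions, with `treated` and `post` as 0/1 indicators:

```python
# A minimal sketch: difference in means under randomization, and a
# difference-in-differences regression on a two-round panel.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")   # assumed household panel: baseline + follow-up

# (1) Randomization: simple comparison of follow-up means via OLS
simple = smf.ols("yield_kg ~ treated", data=df[df["post"] == 1]).fit()

# (2) Difference-in-differences: treated x post interaction
did = smf.ols("yield_kg ~ treated * post", data=df).fit()

print(simple.params["treated"])     # follow-up difference in means
print(did.params["treated:post"])   # diff-in-diff impact estimate
```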

24
 Are the effects statistically significant?
    A basic statistical test tells whether differences are due to the program or to noisy data
 Are they significant in real terms? (see the sketch below)
    If the input voucher scheme costs a million dollars and has a positive but tiny effect, it may not be worthwhile
 Are they sustainable?
    If input use falls to pre-program levels when the intervention ends, the program is not financially sustainable in its current form
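"Significant in real terms" can be checked with simple arithmetic on cost per unit of impact; all numbers below are illustrative assumptions:

```python
# A minimal sketch of a cost-effectiveness calculation (illustrative numbers).
total_cost = 1_000_000    # program cost in dollars (assumed)
n_treated = 5_000         # treated households (assumed)
effect_per_hh = 12.0      # estimated yield gain per household, in kg (assumed)

cost_per_kg = total_cost / (n_treated * effect_per_hh)
print(f"${cost_per_kg:.2f} per additional kg of output")
```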

25
 Are you thinking about this only now???
 Discuss the policy implications of the results
    What actions should be taken?
    How should they be presented to the authorizing environment to justify changes, budget, or scale-up?
 Talk to policy-makers and disseminate to a wider audience
    If no one knows about it, it won't make a difference
 Make sure the information gets into the right policy discussions:
    Real-time discussions
    Workshops
    Reports
    Policy briefs

26
 Identify the next learning opportunity
 Test variations:
    Alternative subsidy amounts
    Alternative packages of inputs
    Alternative implementation and targeting mechanisms: government extension workers or input dealers? Beneficiary selection?
 Test other interventions that affect the same outcomes:
    Matching grants for technology adoption
    Training in the use of improved technologies
    Improving access to markets and providing complementary infrastructure to increase the share of marketed output


