Impact Evaluation and the Project Cycle Arianna Legovini PREM WEEK May 2, 2006 This presentation is based on work by the Thematic Group on Impact Evaluation.


Objective of the presentation  Walk you through what it takes to do an impact evaluation for your project from Identification to ICR  Persuade you that impact evaluation will add value to your project

We will talk about…  Objective of the evaluation  General Principles  Evaluation activities – the core issues for evaluation design and implementation, and  Housekeeping activities—procedural, administrative and financial management issues

Why do an impact evaluation of your project? Provide a sound basis for policy development
 Measure the impact of the project on intended and unintended outcomes, for:
   - making budget decisions and reallocating resources (fiscal accountability)
   - scaling up (provisos apply)
 Measure the relative effectiveness of alternatives (modes of delivery, packages, pricing schemes) to:
   - modify project features over time (managing by results)
   - inform future project designs

Some general principles  Government ownership—what matters is institutional buy-in  Relevance and applicability—asking the right questions  Flexibility and adaptability  Horizon matters

Country ownership
 Ensure Government involvement at all stages to build institutional capacity and a culture of managing-by-results.
 Agree on a dissemination plan to maximize use of results for policy development.
 Identify entry points in project and policy cycles:
   - midpoint and closing, for the project;
   - sector reporting, CGs, MTEF, budget, for policy.
 Use partnerships with local academics to build local capacity for impact evaluation.
 Example: Kenya education

Relevance and Applicability
 For an evaluation to be relevant, it must be designed to respond to the policy questions that are of importance to the clients.
 Clarifying early what it is that the client wants to learn, and designing the evaluation to that end, will go some way toward ensuring that the recommendations of the evaluation feed into policy making.
 Example: devolution and decentralization in Zambia

Flexibility and adaptability
 The evaluation must be tailored to the specific project and adapted to the specific institutional context.
 The project design must be flexible enough to secure our ability to learn in a structured manner, feed evaluation results back into the project, and change the project mid-course to improve project end results.
 Example: Ethiopia energy
This is an important point: in the past, projects have been penalized for making mid-course changes in project design. Now we want to make change part of the project design.

Horizon matters
 The time it takes to achieve results is an important consideration for timing the evaluation. Conversely, the timing of the evaluation will determine which outcomes should be the focus.
   - Early evaluations should focus on outcomes that are quick to show change.
   - For long-term outcomes, evaluations may need to span beyond the project cycle. Example: Madagascar early childhood development
 Think through how things are expected to change over time and focus on what is within the time horizon for the evaluation.
Do not confuse the importance of an outcome with the time it takes for it to change: some important outcomes are obtained instantaneously!

Identification to PCN

Get an Early Start How do you get started?
 Get help and access to resources: contact the person in your region or sector responsible for impact evaluation and/or the Thematic Group on Impact Evaluation and HDN CEO
 Define the timing for the various steps of the evaluation to ensure you have enough lead time for preparatory activities (e.g. the baseline goes to the field before program activities start)
 The evaluation will require support from policy-makers: start building and maintaining constituencies, open a dialogue with the relevant parts of government, build a broad base of support, and include stakeholders

Build the Team
 Select the impact evaluation team and define the responsibilities of:
   - program managers (government),
   - project team and other donors,
   - lead researcher (impact evaluation specialist),
   - local research team, and
   - data collection agency or firm
Selection of the lead researcher is critical for ensuring the quality of the product, and so is the capacity of the data collection agency
 Partner with local researchers and research institutes to build local research capacity

Shift Paradigm
 From a project design based on “we know what’s best”
 To a project design based on the notion that “we can learn what’s best in this context, and adapt to new knowledge as needed”
Work iteratively:
   - Discuss what the team knows and what it needs to learn (the questions for the evaluation) to deliver on project objectives
   - Discuss translating this into a feasible project design
   - Figure out which questions can feasibly be addressed
Housekeeping: include these first thoughts in a paragraph in the PCN

Example: Incorporate learning in an energy access project in Ethiopia
 One component of the project will distribute Compact Fluorescent Light bulbs (CFLs) at subsidized prices to increase energy efficiency and reduce energy costs for the poor.
 The team wants to know how to deliver the greatest number of CFLs under budget constraints. What is the optimal subsidy value? What is the best delivery mechanism?
 During the first six months of CFL distribution, the electric company will experiment with alternative subsidy values (high, medium, low) and distribution mechanisms (local market, company distribution) in different localities. Localities will be randomly assigned to one or the other treatment.
 At the end of the first six months, the company will evaluate which model was most cost-efficient (use of CFLs per dollar spent), and implement that model for the rest of the project.
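
The random assignment step in this example can be sketched in a few lines. This is an illustrative sketch, not the project's actual procedure: the locality names, arm labels, and the balanced round-robin assignment are all assumptions.

```python
import random

def assign_arms(localities, seed=0):
    """Randomly assign each locality to one of six (subsidy, mechanism) arms."""
    arms = [(subsidy, mechanism)
            for subsidy in ("high", "medium", "low")
            for mechanism in ("local_market", "company_distribution")]
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = localities[:]
    rng.shuffle(shuffled)
    # Deal shuffled localities round-robin across arms to keep arm sizes equal.
    return {loc: arms[i % len(arms)] for i, loc in enumerate(shuffled)}

# Twelve hypothetical localities: with six arms, each arm gets exactly two.
assignment = assign_arms([f"locality_{i}" for i in range(12)])
```

In practice the researcher would typically also stratify localities by observable characteristics (region, size) before randomizing, so the arms stay balanced on those traits.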

Preparation through appraisal

Define project development objectives and results framework
 This activity:
   - clarifies the results chain for the project, and
   - identifies the outcomes of interest, the indicators best suited to measure changes in those outcomes, and the expected time horizon for changes in those outcomes.
 This will provide the lead researcher with the project-specific variables that must be included in the survey questionnaire and a notion of timing for scheduling data collection.

Work out project design features that will affect evaluation design
 Target population and rules of selection
   - This provides the evaluator with the universe for the treatment and comparison sample
 Roll-out plan
   - This provides the evaluation with a framework for timing data collection and, possibly, an opportunity to define a comparison group
 The impact evaluator will work iteratively with the team to agree on selection and roll-out that allow for rigorous evaluation (F&A)

Narrow down the questions for the evaluation  Questions aimed at measuring the impact of the project on a set of outcomes, and  Questions aimed at measuring the relative effectiveness of different features of the project

Questions aimed at measuring the impact of the project are relatively straightforward
 What is your hypothesis? (Results framework)
   - By expanding water supply, the use of clean water will increase, water-borne disease will decline, and health status will improve
 What is the evaluation question?
   - Does improved water supply result in better health outcomes?
 How do you test the hypothesis?
   - The government might randomly assign areas for expansion in water supply during the first and second phase of the program
 What will you measure?
   - Measure the change in health outcomes in phase I areas relative to the change in outcomes in phase II areas. Outcomes will include use of safe water (S-T), incidence of diarrhea (S/M-T), and health status (L-T, depending on when phase II occurs). Add other outcomes.
 What will you do with the results?
   - If the hypothesis proves true, go to phase II; if false, modify the policy.
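
The measurement step above, comparing the change in phase I areas with the change in phase II areas, is a difference-in-differences. A minimal sketch, with invented numbers for the share of households using safe water:

```python
def diff_in_diff(treat_before, treat_after, comp_before, comp_after):
    """Impact = change in treated (phase I) areas minus change in comparison (phase II) areas."""
    return (treat_after - treat_before) - (comp_after - comp_before)

# Hypothetical shares of households using safe water, before and after phase I.
impact = diff_in_diff(treat_before=0.40, treat_after=0.70,
                      comp_before=0.42, comp_after=0.47)
print(round(impact, 2))  # 0.25: a 25-percentage-point gain net of the background trend
```

Subtracting the comparison-area change nets out what would have happened anyway, which is exactly why the phase II areas are valuable to the evaluation.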

Questions aimed at measuring the relative effectiveness of different project features require identifying the tough design choices on the table…
 What is the issue?
   - What is the best package of products or services?
 Where do you start from?
   - What package is the government delivering now?
 Which changes do you or the government think could be made to improve effectiveness?

 How do you test it?
   - The government might agree to provide one package to a randomly selected group of households and another package to another group of households to see how the two packages perform
 What will you measure?
   - The average change in relevant outcomes for households receiving one package versus the same for households receiving the other package
 What will you do with the results?
   - The package that is most effective in delivering desirable outcomes becomes the one adopted by the project from the evaluation onwards
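
The comparison just described can be sketched as follows; the package names and per-household outcome changes are invented for illustration.

```python
from statistics import mean

def pick_best_package(changes_by_package):
    """Return the package whose households show the largest average outcome change."""
    return max(changes_by_package, key=lambda pkg: mean(changes_by_package[pkg]))

# Hypothetical outcome changes per household under each package.
changes = {
    "package_A": [2.0, 1.5, 3.0, 2.5],
    "package_B": [1.0, 0.5, 1.5, 1.0],
}
best = pick_best_package(changes)
print(best)  # package_A
```

A real comparison would of course also test whether the difference in means is statistically significant before switching the whole project to one package.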

Application: features that should be tested early on
 Early testing of project features (say, 6 months to 1 year) can provide the team with the information needed to adjust the project early on, in the direction most likely to deliver success.
 Features might include:
   - alternative modes of delivery (e.g. school-based vs. vertical delivery),
   - alternative packages of outputs (e.g. awareness campaigns vs. legal services), or
   - different pricing schemes (e.g. alternative subsidy levels).

Example: Incorporate learning in an energy project in Ethiopia
 The project will distribute CFLs at subsidized prices to increase energy efficiency and reduce energy costs for the poor.
 The team wants to know how to deliver the greatest number of CFLs under budget constraints. What is the optimal subsidy value? What is the best delivery mechanism?
 During the first six months of CFL distribution, the electric company will experiment with alternative subsidy values (high, medium, low) and distribution mechanisms (local market, company distribution) in different localities. Localities will be randomly assigned to one or the other treatment.
 At the end of the first six months, the company will evaluate which model was most cost-efficient (use of CFLs per dollar spent) and market friendly, and implement that model for the next 4 years.

Develop identification strategy (to identify the impact of the project separately from changes due to other causes)
 Once the questions are defined, the lead researcher selects one or more comparison groups against which to measure results in the treatment group.
 The “rigor” with which the comparison group is selected will determine the reliability of the impact estimates.
 Rigor?
   - More: same observables and unobservables (experimental)
   - Less: same observables only (non-experimental)

Explore Existing Data
 Explore what data exist that might be relevant for use in the evaluation. Discuss with the agencies of the national statistical system and with universities to identify existing data sources and future data collection plans.
 Record data periodicity, quality, variables covered, sampling frame and sample size, for:
   - censuses
   - surveys (household, firm, facility, etc.)
   - administrative data
   - data from the project monitoring system

New Data
 Start identifying additional data collection needs.
   - Data for impact evaluation must be representative of the treatment and comparison groups
   - Questionnaires must include outcomes of interest (consumption, income, assets, etc.), questions about the program in question, and questions about other programs
   - The data might be at household, community, firm, facility, or farm level and might be combined with specialty data such as those from water or land quality tests.
 Investigate synergies with other projects to combine data collection efforts and/or explore existing data collection efforts on which the new data collection could piggyback
 Develop a data strategy for the impact evaluation, including:
   - the timing for data collection
   - the variables needed
   - the sample
   - plans to integrate data from other sources

Prepare for collecting data
 Identify the data collection agency
 The lead researcher will work with the data collection agency to design the sample and train enumerators
 The lead researcher will prepare the survey questionnaire or questionnaire module as needed
 Pre-testing of the survey instrument may take place at this stage to finalize instruments
 If financed with outside funds, the baseline can now go to the field. If financed by project funds, the baseline will go to the field just after effectiveness but before implementation starts
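
When the lead researcher designs the sample, a back-of-envelope power calculation indicates roughly how many households per group the survey needs. This sketch uses the standard two-group formula for detecting a difference in means at 5% two-sided significance and 80% power; the example effect size is an assumption.

```python
import math

def sample_size_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Households per group needed to detect a mean difference `delta` in an
    outcome with standard deviation `sigma` (5% significance, 80% power)."""
    n = 2 * ((z_alpha + z_beta) ** 2) * sigma ** 2 / delta ** 2
    # Round before taking the ceiling to guard against floating-point noise.
    return math.ceil(round(n, 8))

# Detecting a 0.2 standard-deviation effect (sigma = 1.0, delta = 0.2):
print(sample_size_per_group(delta=0.2, sigma=1.0))  # 392 per group
```

Clustered designs (e.g. randomizing localities rather than households, as in the Ethiopia example) need a further inflation for intra-cluster correlation, so this figure is a lower bound.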

Develop a Financial Plan
 Costs:
   - lead researcher and research team,
   - data collection,
   - supervision, and
   - dissemination
 Finances:
   - BB,
   - trust funds,
   - research grants,
   - project funds, or
   - other donor funds

Housekeeping
 Initiate an IE activity. The IE code in SAP is a way of formalizing evaluation activities. The IE code recognizes the evaluation as a separate AAA product.
   - Prepare the concept note
   - Identify peer reviewers: an impact evaluation specialist and a sector specialist
   - Carry out the review process
 Appraisal documents
   - Include in the project description plans to modify the project over time to incorporate results
   - Work the impact evaluation into the M&E section of the PAD and Annex 3
 Include the impact evaluation in the Quality Enhancement Review (TTL).

Negotiations to Completion

Ensure timely implementation
 Ensure timely procurement of evaluation services, especially contracting the data collection, and
 Supervise timely implementation of the evaluation, including:
   - data collection
   - data analysis
   - dissemination and feedback

Data collection agency/firm  Data collection agency or firm must have technical knowledge and sufficient logistical capacity relative to the scale of data collection required  The same agency or firm should be expected to do baseline and follow up data collection

Baseline data collection and analysis  Baseline data collection should be carried out before program implementation begins; optimally even before program is announced  Analysis of baseline data will provide program management with additional information that might help finalize program design
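
One common piece of that baseline analysis is a balance check: verifying that the treatment and comparison groups look similar before the program starts. A minimal sketch with invented data; a real check would cover many baseline variables, not one.

```python
from statistics import mean, stdev
import math

def balance_t_stat(treat, comp):
    """Welch two-sample t statistic for one baseline variable."""
    se = math.sqrt(stdev(treat) ** 2 / len(treat) +
                   stdev(comp) ** 2 / len(comp))
    return (mean(treat) - mean(comp)) / se

# Hypothetical baseline values (e.g. household consumption) in each group.
treatment = [10.1, 9.8, 10.3, 10.0, 9.9]
comparison = [10.0, 10.2, 9.7, 10.1, 10.0]
t = balance_t_stat(treatment, comparison)
# |t| well below ~2 suggests the groups are balanced on this variable.
```

Large baseline imbalances would prompt the researcher to revisit the comparison-group selection, or at least to control for the imbalanced variables at follow-up.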

Follow-up data collection and analysis
 The timing of follow-up data collection must reflect the learning strategy adopted
 Early data collection will help modify programs mid-course to maximize longer-term effectiveness
 Later data collection will confirm achievement of longer-term outcomes and justify continued flows of fiscal resources into the program

Dissemination  Implement plan for dissemination of evaluation results ensuring that the timing is aligned with government’s decision making cycle.  Ensure that results are used to inform project management and that available entry points are exploited to provide additional feedback to the government  Ensure that wider dissemination takes place only after the client has had a chance to preview and discuss the results  Nurture collaboration with government and local researchers throughout the process

Housekeeping  Involve local project implementation unit and PIU person responsible for monitoring and evaluation  Put in place arrangements to procure the impact evaluation work and fund it on time  Use early results to inform mid-term review  Use later results to inform the ICR, CAS and future operations

Concluding remarks
 Making evaluation work for you requires a change in the culture of project design and implementation, one that maximizes the use of learning to change course when necessary and improve the chances of success
 More than a tool, impact evaluation is an organizing analytical framework for doing this
Thank you