Introduction to Impact Evaluation: The Motivation. Emmanuel Skoufias, The World Bank, PRMPR. PREM Learning Week, April 21-22, 2008.



Outline of presentation
1. Role of IE in the Results Agenda
2. Impact evaluation: Why and When?
3. Evaluation vs. Monitoring
4. Necessary ingredients of a good Impact Evaluation

The Role of IE in the Results Agenda
- Demand for evidence of the results of development assistance is increasing.
- Among monitoring and evaluation techniques, impact evaluation provides an important tool to show the effect of interventions.
- Given the power of this tool, the Bank is supporting an increasing number of impact evaluations (figure 1).

Status of IE within the Bank--1
- Although the number of impact evaluations is growing overall, some Regions and Networks are more active than others.
- Most ongoing impact evaluations are in the social sectors (figure 2). This reflects not only the support provided by the HD Network, but also that these areas have a stronger evaluation tradition and that their projects are more amenable to impact evaluation techniques.

WB Lending and IE by Sector

Status of IE within the Bank--2
- The regional picture is also skewed. Africa is the leader with 47 ongoing evaluations, followed by SAR (27), LAC (26), and EAP (17). MENA and ECA have 2 each.

WB Lending and IE by Region

2. Impact Evaluation: Why and When?

Impact evaluation
- Ex-ante vs. ex-post evaluation
- Impact is the difference between outcomes with the program and without it.
- The goal of impact evaluation is to measure this difference in a way that attributes it to the program, and only the program.
- Challenges in evaluating SDN operations:
  - It is difficult to find a comparison group
  - Quasi-experimental methods are needed
  - Take advantage of sub-national variation

Why conduct an Impact Evaluation?
- Knowledge and learning: improve the design and effectiveness of the program
- Economic reasons: to make resource allocation decisions. Comparing program impacts allows governments to reallocate funds from less effective to more effective programs, and thus to increase social welfare.
- Social reasons: increases transparency and accountability; supports public sector reform and innovation
- Political reasons: credibility; a break with "bad" practices of the past

When Is It Time to Make Use of Evaluation?--1
- When you want to determine the roles of both design and implementation in project, program, or policy outcomes
- When resource and budget allocations are being made across projects, programs, or policies
- When a decision is being made whether or not to expand a pilot
- When regular results measurement suggests actual performance diverges sharply from planned performance

When Is It Time to Make Use of Evaluation?--2
- There is a long period with no evidence of improvement in the problem situation
- Similar projects, programs, or policies are reporting divergent outcomes
- There are conflicting political pressures on decision-making in ministries or parliament
- There is public outcry over a governance issue
- To identify issues around an emerging problem, e.g. children dropping out of school

Summary: An impact evaluation informs:
Strategy: whether we are doing the right things
- Rationale/justification
- Clear theory of change
Operation: whether we are doing things right
- Effectiveness in achieving expected outcomes
- Efficiency in optimizing resources
- Client satisfaction
Learning: whether there are better ways of doing it
- Alternatives
- Best practices
- Lessons learned

3. Evaluation vs. Monitoring

Definitions
(Results-Based) Monitoring: a continuous process of collecting and analyzing information to compare how well a project, program, or policy is performing against expected results.
(Results-Based) Evaluation: an assessment of a planned, ongoing, or completed intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intent is to incorporate lessons learned into the decision-making process.

Monitoring and Evaluation

Evaluation Addresses:
- "Why" questions: what caused the changes we are monitoring?
- "How" questions: what was the sequence or process that led to successful (or unsuccessful) outcomes?
- Compliance/accountability questions
- Process/implementation questions: did the promised activities actually take place, and as planned? Was the implementation process followed as anticipated, and with what consequences?

Six Types of Evaluation
- Impact evaluation
- Process/implementation evaluation
- Performance logic chain assessment
- Meta-evaluation
- Case study
- Pre-implementation assessment

Complementary Roles of Results-Based Monitoring and Evaluation
Monitoring:
- Clarifies program objectives
- Links activities and their resources to objectives
- Translates objectives into performance indicators and sets targets
- Routinely collects data on these indicators and compares actual results with targets
- Reports progress to managers and alerts them to problems
Evaluation:
- Analyzes why intended results were or were not achieved
- Assesses specific causal contributions of activities to results (impact evaluation)
- Examines the implementation process (operations evaluation)
- Explores unintended results (spillover effects)
- Provides lessons, highlights significant accomplishments or program potential, and offers recommendations for improvement

Summary--1
- Results-based monitoring and evaluation are generally viewed as distinct but complementary functions.
- Each provides a different type of performance information.
- Both are needed to better manage policy, program, and project implementation.

Summary--2
- Implementing results-based monitoring and evaluation systems can strengthen WB and public sector management.
- Implementing such systems requires commitment from leadership and staff alike.

4. Necessary ingredients of a good Impact Evaluation: A good counterfactual & robustness checks

What we need for an IE
- The difference in outcomes with the program versus without it, for the same unit of analysis (e.g. individual, community)
- Problem: each unit is observed in only one of the two states
- Hence we face a missing counterfactual, i.e. a missing-data problem

Thinking about the counterfactual
- Why not compare individuals before and after the program (the reflexive comparison)? Because the rest of the world moves on, and you cannot be sure what was caused by the program and what by everything else.
- We need a control/comparison group that allows us to attribute any change in the "treatment" group to the program (causality).

(Figure sequence: We observe an outcome indicator, and its value rises after the intervention. Having the "ideal" counterfactual allows us to estimate the true impact as the gap between the observed outcome and the counterfactual.)
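The missing-counterfactual idea can be sketched with a small simulation. This is a hypothetical illustration, not from the presentation: we generate both potential outcomes for every unit (which is impossible with real data, where only one state is observed) so the true impact is known by construction.

```python
import random

random.seed(0)

# Hypothetical simulation: each unit has two potential outcomes,
# y0 (without the program) and y1 (with it). In real data only one
# of the two is ever observed -- the missing counterfactual.
n = 10_000
true_effect = 5.0
y0 = [random.gauss(50, 10) for _ in range(n)]   # outcome without the program
y1 = [y + true_effect for y in y0]              # outcome with the program

# With both states visible, impact is simply the mean difference:
impact = sum(a - b for a, b in zip(y1, y0)) / n
print(round(impact, 2))  # 5.0 by construction
```

The whole evaluation problem is that `y0` is never observed for participants; every method on the following slides is a different way of approximating it.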

Comparison Group Issues
Two central problems:
- Programs are targeted: program areas will differ in observable and unobservable ways precisely because the program intended this.
- Individual participation is (usually) voluntary: participants will differ from non-participants in observable ways (such as age and education) and unobservable ways (such as ability, motivation, and drive).
Hence, comparing participants with an arbitrary group of non-participants can lead to heavily biased results.
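The bias from voluntary participation can be made concrete with a hypothetical simulation (the variable names and parameter values are illustrative assumptions, not from the presentation): an unobservable trait, "ability", raises both the chance of joining the program and the outcome itself, so the naive participant/non-participant comparison overstates the true effect.

```python
import random

random.seed(1)

# Hypothetical selection-bias sketch: ability drives both
# participation and outcomes, so it confounds the comparison.
n = 50_000
true_effect = 5.0
participants, non_participants = [], []
for _ in range(n):
    ability = random.gauss(0, 1)
    joins = random.random() < (0.8 if ability > 0 else 0.2)  # self-selection
    outcome = 50 + 10 * ability + random.gauss(0, 5)
    if joins:
        participants.append(outcome + true_effect)
    else:
        non_participants.append(outcome)

naive = (sum(participants) / len(participants)
         - sum(non_participants) / len(non_participants))
print(round(naive, 1))  # well above the true effect of 5.0
```

The naive estimate absorbs the ability gap between the two groups on top of the true effect, which is exactly why an arbitrary comparison group is not a valid counterfactual.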

Impact Evaluation methods
Methods differ in how they construct the counterfactual:
- Experimental methods (randomization)
- Quasi-experimental methods:
  - Propensity score matching (PSM)
  - Regression discontinuity design (RDD)
- Other econometric methods:
  - Before and after (reflexive comparisons)
  - Difference-in-differences (diff-in-diff)
  - Instrumental variables
  - Encouragement design
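As one illustration of how these methods construct a counterfactual, here is a hypothetical difference-in-differences sketch (parameter values are assumptions for the example): both groups share a common time trend, the treated group additionally receives the program effect, and differencing twice removes both the baseline gap between groups and the trend.

```python
import random

random.seed(2)

# Hypothetical diff-in-diff sketch under the common-trend assumption.
n = 20_000
trend, true_effect = 3.0, 5.0

def mean(xs):
    return sum(xs) / len(xs)

treat_pre  = [random.gauss(60, 8) for _ in range(n)]   # treated group, baseline
treat_post = [y + trend + true_effect for y in treat_pre]
ctrl_pre   = [random.gauss(50, 8) for _ in range(n)]   # control group, baseline
ctrl_post  = [y + trend for y in ctrl_pre]

# Change in treated group minus change in control group:
did = (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))
print(round(did, 2))  # recovers the true effect of 5.0
```

Note that a simple before/after comparison of the treated group alone would report trend + effect (8.0 here), which is the reflexive-comparison problem from the earlier slide; subtracting the control group's change strips the trend out.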

Thank you