Monitoring and Evaluation. 16 July 2009. Michael Samson, UNICEF/IDS Course on Social Protection.


Overview of monitoring and evaluation (M&E)
• Why M&E?
• What issues can M&E address?
• Key methodological options
• Good practices
– An 8-step implementation framework
– Key "good practice" issues
• Conclusions

What is monitoring?
• Monitoring: is the programme progressing as planned?
• Routine collection of administrative data
• Mostly input and output indicators, but sometimes outcome indicators
• Operational indicators
• Monitoring complements impact assessment (evaluation)
• Points to what should be researched further
• Administrative data can be used in impact evaluation analysis

What is evaluation (impact assessment)?
• Impact evaluation: what changes in outcomes (and of what size) can be attributed to the programme, and only the programme?
[Figure: an indicator plotted over time with and without the project; the gap between the "with project" and "without project" paths (e.g. 50% vs. 40%, from a 30% baseline) is the programme's impact]
SOURCE: Regalia 2007
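
Reading the figure in the usual way (the specific numbers are an illustrative interpretation, not stated in the text), the impact is the gap between the observed outcome and the counterfactual, not the gap relative to the baseline:

\[
\text{Impact} = Y^{\text{with}} - Y^{\text{without}} = 50\% - 40\% = 10\ \text{percentage points}
\]

A naive before-and-after comparison (50% minus the 30% baseline, i.e. 20 points) would also credit the programme with the change that would have happened anyway.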

Why M&E? There are three major motivations:
• To serve strategic objectives: are social protection instruments achieving the main policy goals?
• To serve operational objectives: how can one further improve implementation and delivery?
• To serve learning objectives: what can you learn from the programmes?
PLUS ONE MORE: to mobilise political will to sustain and expand the programme

Specifically…
• To analyse alternative designs and intervention schemes
• To learn what works and make the best use of limited budget resources (measure the effectiveness of alternatives)
• To improve the programme's design and operation
– Sequential learning: programmes are in constant evolution
– IE is of little use if it is an ex-post, retrospective, one-shot "judgment"
• To ensure the programme's sustainability
– Ensure adequate budget allocations through rigorous evidence
– Ride out changes in administration

Why M&E? A concrete test. M&E may be appropriate if the answer is "YES" to any of the following questions:
• Is the programme of strategic relevance for national public policy?
• Will the evaluation contribute to improving the implementation or development of the programme?
• Can the evaluation results influence the future design of the programme and of other programmes?

M&E activities can be designed to address a broad range of questions. Some examples include:
• Does the programme reach the intended beneficiaries?
• How does the programme affect the beneficiaries?
• Does the programme generate the desired outcomes?
• What is the impact on the rest of the population?
• Are there better ways to design the programme?
• Can the programme be managed more efficiently?
• Are the allocated resources being spent efficiently?
• Were the costs of setting up and running the programme justified?

Types of questions IE can address:
• Distribution of gains (and losses): do transfers increase per capita food expenditures (or improve school progression/health outcomes) more for the extremely poor vs. the poor (females vs. males, rural vs. urban)?
• Alternative designs: how would programme outcomes change under alternative designs?
– different size or design of the transfers
– different recipients (males vs. females)
– alternative delivery mechanisms for basic services in CCTs
• Cost-benefit analysis

Does the programme reach the intended beneficiaries? A framework for analysing success and error:
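
The framework announced above is not reproduced in the transcript; it is most likely the standard targeting matrix of inclusion and exclusion errors (a reconstruction of the standard framework, not the original slide):

                          Receives the transfer        Does not receive it
Intended beneficiary      Successful inclusion         Exclusion error (undercoverage)
Not intended              Inclusion error (leakage)    Successful exclusion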

Qualitative methods
• Participant observation involves field researchers spending an extended amount of time in residence with a programme community.
• Case studies involve detailed or broad studies of a specific intervention, with open-ended questioning and the recording of personal stories.
• Participatory learning and action involves a facilitator assisting the active involvement of those who have a stake in the programme.
• Logical framework analysis involves identifying inputs, outputs, outcomes and impacts; their causal relationships; indicators; and the assumptions or risks that may influence success and failure.

Quantitative methods
• Randomised experimental design
– The need for a control group
– The advantages
– The disadvantages
– The ethical dilemma
• Quasi-experimental design
– Alternatives to a control group: the credible comparison group
– Propensity scoring
– The advantages: practicality and ethics
– The disadvantages: "unobserved heterogeneity"

Complementing other evaluations
• Quantitative IEs are complemented by other evaluations
• Rigorous qualitative evaluations are critical to complement and deepen the understanding of quantitative IE results
– They require very specific (and rare) skills
– Qualitative results can be very useful for operational purposes
– They should be carried out on the same sample as the quantitative IE
• Operational and "process" evaluations
– Based on MIS data and field observations
– Should be carried out by social transfer specialists in close coordination with the programme executing agency
• Routine analysis of administrative data from the MIS

Types of design: Experimental design
• An experimental design is possible when programme placement is done in a random way.
• Randomisation guarantees that the treatment and control (comparison) groups have, on average, the same observable and unobservable characteristics before the treatment;
• therefore, differences in outcomes after the programme can be attributed to the programme.

Control groups
• Simply comparing eligible participants with eligible non-participants might lead to very biased estimates if programme participation is voluntary (as it usually is with social transfers)
– Eligible non-participants might differ from eligible participants in many observable and unobservable ways
– Reasons for non-participation might be correlated with outcomes
– We can control for observables, but we are left with unobservables
• A comparison (control) group needs to be built
• The "comparison" group needs to be as identical as possible to the "treatment" group receiving the programme, in both observable and unobservable dimensions (and should not be "contaminated")
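
Why the naive comparison is biased can be written as a standard decomposition (a textbook identity, not taken from the slides). With Y1 the outcome with the transfer, Y0 the outcome without it, and D = 1 for participants:

\[
\underbrace{E[Y_1 \mid D=1] - E[Y_0 \mid D=0]}_{\text{naive comparison}}
= \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{impact on participants}}
+ \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}}
\]

Randomisation makes the selection-bias term zero; with voluntary participation it generally is not.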

Constructing control and treatment groups
• The roll-out pace determines whether the evaluation will estimate short-, medium- or long-term impacts
[Diagram: eligible population → sample (randomisation, first-stage sampling: external validity) → treatment group and control group (second-stage random assignment: internal validity)]
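
A minimal sketch of the two-stage procedure in Python (the data frame, sample sizes and column names are illustrative assumptions, not part of the course materials):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Hypothetical list of eligible households (IDs and sizes are illustrative).
eligible = pd.DataFrame({"household_id": range(10_000)})

# Stage 1: draw a random evaluation sample from the eligible population
# (first-stage sampling: the sample represents the eligible population,
# which supports external validity).
sample = eligible.sample(n=2_000, random_state=42)

# Stage 2: randomly assign sampled households to treatment or control
# (second-stage assignment: groups are comparable in expectation,
# which supports internal validity).
sample["treatment"] = rng.integers(0, 2, size=len(sample))

print(sample["treatment"].value_counts())
```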

Problems with experimental design
• Experimental design can be unfeasible due to ethical and political considerations: how do you justify excluding a person or family who needs a social protection programme?
• Experimental design does not suit the case of universal policies, where the entire eligible population gets the programme.
• It is necessary to be very careful with the implementation of this design: contamination, drop-outs and cream-skimming may threaten the random nature of both groups (and the internal validity of the evaluation).

Counterfactual: what would have happened to the same household if it did not receive the transfer?
• The same household is never observed at the same point in time both with and without the programme…
• …so a before-and-after comparison is not a good counterfactual (especially for social transfer programmes)
[Figure: average impact on per capita food expenditures (C$), SSN Nicaragua; Treatment Phase I vs. Control Phase I/Treatment Phase II]
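
When treatment and control groups are both observed before and after the programme (as in the Nicaragua phase-in design referenced above), a standard way to construct the counterfactual is the difference-in-differences estimator (a general formula, not the slide's own notation):

\[
\widehat{\text{Impact}} = \big(\bar{Y}^{T}_{\text{after}} - \bar{Y}^{T}_{\text{before}}\big) - \big(\bar{Y}^{C}_{\text{after}} - \bar{Y}^{C}_{\text{before}}\big)
\]

The control group's before/after change nets out what would have happened anyway, which is exactly what a simple before-and-after comparison on the treatment group alone cannot do.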

Quasi-experimental design
• A quasi-experimental design is based on the attempt to build a comparison group that was not generated randomly.
• There are different quasi-experimental methods, with different assumptions and different results. The methods can be quite complex, which makes them difficult for a broader audience to understand.
• Advantages: in general it is cheaper and quicker to implement, and it can be done when the programme has already started (ex-post evaluation).
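
A minimal sketch of one common quasi-experimental method, propensity score matching, using scikit-learn (the simulated data, variable names and parameter values are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical survey of eligible households: covariates, participation, outcome.
n = 5_000
df = pd.DataFrame({
    "hh_size": rng.integers(1, 9, n),
    "baseline_expenditure": rng.normal(100, 20, n),
    "rural": rng.integers(0, 2, n),
})
df["participant"] = (rng.random(n) < 0.3).astype(int)
df["food_expenditure"] = (df["baseline_expenditure"]
                          + 10 * df["participant"]
                          + rng.normal(0, 10, n))

X = df[["hh_size", "baseline_expenditure", "rural"]]

# 1. Estimate the propensity score: P(participation | observables).
df["pscore"] = (LogisticRegression(max_iter=1000)
                .fit(X, df["participant"])
                .predict_proba(X)[:, 1])

# 2. Match each participant to the nearest non-participant on the score.
treated = df[df["participant"] == 1]
control = df[df["participant"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# 3. Estimated impact = mean outcome gap between matched pairs.
# This recovers the true impact only if selection is on observables,
# i.e. there is no "unobserved heterogeneity" driving participation.
impact = treated["food_expenditure"].mean() - matched["food_expenditure"].mean()
print(f"Estimated impact on per capita food expenditure: {impact:.1f}")
```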

Regression discontinuity
[Figures: RD at baseline and RD post-intervention; the outcome is plotted against the assignment score, with a discontinuity appearing at the eligibility cutoff after the intervention]
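
A minimal sketch of a sharp regression discontinuity estimate (simulated data; the cutoff, bandwidth and variable names are illustrative assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical data: households scored by a proxy-means test; those with a
# score below the cutoff receive the transfer (sharp assignment rule).
n = 4_000
score = rng.uniform(0, 100, n)
cutoff = 40
treated = (score < cutoff).astype(int)
outcome = 50 + 0.2 * score + 8 * treated + rng.normal(0, 5, n)
df = pd.DataFrame({"score": score, "treated": treated, "outcome": outcome})

# Local linear regression within a bandwidth of the cutoff; the coefficient
# on 'treated' is the estimated jump in the outcome at the cutoff.
bandwidth = 10
local = df[abs(df["score"] - cutoff) < bandwidth].copy()
local["centered"] = local["score"] - cutoff
model = smf.ols("outcome ~ treated + centered + treated:centered", data=local).fit()
print(model.params["treated"])  # estimated impact at the cutoff
```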

Issues in considering methodological options
• The absence of a universal framework
• Constraints in implementing M&E activities
• Two basic classes of methodologies: quantitative and qualitative
• The importance of a comprehensive (mixed) approach

How to implement an evaluation of a social transfer programme in 8 steps:
• STEP 1: Decide whether or not to evaluate the programme.
• STEP 2: Make clear the evaluation objectives.
• STEP 3: Identify a truly independent and qualified evaluation team.
• STEP 4: Fully design the evaluation.
• STEP 5: Mobilise the required data.
• STEP 6: Analyse the data.
• STEP 7: Report the results.
• STEP 8: Most importantly, reflect the results in improved programme delivery.

Knowledge transfer
• Adequate staffing and mix of skills in the social transfer executing agency
• Evaluation implemented in close collaboration with programme executors
• The impact evaluation team should ensure
– systematization of processes and data documentation
– adequate transfer of knowledge
• The social transfer executing agency could periodically convene an external committee of experts in charge of quality control of the impact evaluation process

Lessons of international "good practice"
• The importance of a thorough understanding of the administrative and institutional details of the programme.
• The importance of an in-depth understanding of the social and policy context.
• Be open-minded about your sources of data.
• Be careful about simply comparing outcomes for programme participants and non-participants.
• Effective evaluations must be adequately resourced.

Conclusions (M&E)
• M&E is important:
– to inform policy-makers about the strategic impact of social transfers,
– to improve the delivery of social transfer programmes,
– to provide an evidence base for better policy-making.
• M&E can address many issues, depending on the needs of policy-makers.
• Policy-makers face a range of methodological options, but no one model works best in every case.
• The 8-step framework provides a starting point for implementing an M&E process.