IDEV 624 – Monitoring and Evaluation Evaluating Program Outcomes Elke de Buhr, PhD Payson Center for International Development Tulane University

Process vs. Outcome/Impact Monitoring [Diagram: process monitoring contrasted with outcome/impact monitoring and evaluation, mapped against the logical framework matrix (LFM) and the USAID Results Framework]

A Public Health Questions Approach to HIV/AIDS M&E (UNAIDS 2008). The framework asks: Are we doing the right things? Are we doing them right? Are we doing them on a large enough scale? Each question maps onto the inputs → activities → outputs → outcomes & impacts chain:
Problem identification: What is the problem? (situation analysis & surveillance) What are the contributing factors? (determinants research)
Understanding potential responses: What interventions and resources are needed? (needs, resource, and response analysis & input monitoring) What interventions can work, in terms of efficacy and effectiveness? (efficacy & effectiveness studies, formative & summative evaluation, research synthesis)
Monitoring & evaluating national programs: Are we implementing the program as planned? (outputs monitoring) What are we doing, and are we doing it right? (process monitoring & evaluation, quality assessments)
Determining collective effectiveness: Are interventions working/making a difference? (outcome evaluation studies) Are collective efforts being implemented on a large enough scale to impact the epidemic, in terms of coverage and impact? (surveys & surveillance)

Strategic Planning for M&E: Setting Realistic Expectations. Levels of monitoring & evaluation effort by number of projects: all projects should have input/output monitoring; most should have process evaluation; some should have outcome monitoring/evaluation; few* should have impact monitoring/evaluation. (* Disease impact monitoring is synonymous with disease surveillance and should be part of all national-level efforts, but cannot be easily linked to specific projects.)

Monitoring Strategy: Process → Activities; Outcome/Impact → Goals and Objectives

Outcome Evaluation

Program vs. Outcome Monitoring. Program process monitoring (a form of process monitoring): the systematic and continual documentation of key aspects of program performance that assesses whether the program is operating as intended or according to some appropriate standard. Outcome monitoring (a form of impact evaluation): the continual measurement of the intended outcomes of the program, usually of the social conditions it is intended to improve.

What is an Outcome? Outcome: the state of the target population or the social conditions that a program is expected to have changed. 1. Outcomes are characteristics of the target population or social condition, not of the program. 2. Programs are expected to produce change, but an observed outcome does not by itself mean that program targets have actually changed. (Rossi/Lipsey/Freeman 2004)

What is your project’s outcome?

Outcome vs. Impact. Outcome level: the status of an outcome at some point in time. Outcome change: the difference between outcome levels at different points in time. Impact/program effect: the portion of an outcome change that can be attributed uniquely to the program, as opposed to the influence of other factors. (Rossi/Lipsey/Freeman 2004)
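The three concepts can be written compactly; the potential-outcomes notation below is an addition here, not from the slides:

```latex
\begin{align*}
\text{outcome level at time } t &:\quad Y_t \\
\text{outcome change} &:\quad \Delta Y = Y_{t_1} - Y_{t_0} \\
\text{impact (program effect)} &:\quad \tau = Y^{\text{with program}}_{t_1} - Y^{\text{without program}}_{t_1}
\end{align*}
```

The second term in the impact expression is the counterfactual, which is never observed directly; that is why estimating impact is the most demanding evaluation task, as the next slide notes.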

Outcome vs. Impact (cont.) Outcome level and change: valuable for monitoring program performance, but of limited use for determining program effects. Impact/program effect: the value added or net gain that would not have occurred without the program, and the only part of the outcome for which the program can honestly take credit; measuring it is the most demanding evaluation task, and it is time-consuming and expensive.

Outcome Variable. Outcome variable: a measurable characteristic or condition of a program's target population that could be affected by the actions of the program. Examples: amount of smoking, body weight, school readiness. (Rossi/Lipsey/Freeman 2004)

What are your project’s outcome variables?

Program Impact Theory. Useful for identifying and organizing program outcomes. Expresses the outcomes of social programs as part of a logic model that connects program services to proximal (immediate) outcomes, which are in turn expected to lead to distal (long-term) outcomes.

Program Impact Theory - Examples (figure from Rossi et al., p. 143; not reproduced in this transcript)

Logic Model: a visual representation of the expected sequence of steps going from program services to client outcomes

Logic Model - Example: a teen mother parenting program (figure from Rossi et al., p. 95; not reproduced in this transcript)
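Since the figure itself is not reproduced here, the sketch below captures the general shape of such a chain as a simple data structure. The stages follow the standard logic-model sequence; the example entries are generic illustrations invented for this sketch, not the contents of Rossi's figure.

```python
# A generic logic model as an ordered chain of stages (illustrative only).
logic_model = [
    ("Inputs", ["funding", "trained staff", "curriculum"]),
    ("Activities", ["weekly parenting classes", "home visits"]),
    ("Outputs", ["sessions delivered", "mothers attending"]),
    ("Proximal outcomes", ["improved parenting knowledge and skills"]),
    ("Distal outcomes", ["improved child health and development"]),
]

# Print the expected sequence from services to client outcomes.
for stage, examples in logic_model:
    print(f"{stage}: {', '.join(examples)}")
```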

Proximal vs. Distal Outcomes. Proximal (immediate) outcomes: usually the ones that the program has the greatest capability to affect; often the easiest to measure and to attribute to the program. Distal (longer-term) outcomes: frequently the ones of the greatest political and practical importance; often difficult to measure and to attribute to the program; usually influenced by many factors outside of the program's control.

What are your project’s proximal/distal outcomes?

Measuring Program Outcomes. Select the most important outcomes. Take feasibility into account (e.g. distal outcomes may be too difficult or expensive to measure); however, both proximal and distal outcomes can be the subject of an outcome evaluation. Multidimensional outcomes often require multiple measurements (→ composite measures).
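As an illustration of a composite measure, the hypothetical sketch below standardizes several sub-indicators of one multidimensional outcome and averages them into a single index per participant. The indicator names and values are invented for the example, and equal weighting is an assumption, not a rule from the course.

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize raw scores to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical sub-indicators of "school readiness", measured
# for the same five children.
vocabulary = [12, 18, 9, 22, 15]   # words named on a picture test
attention = [4, 6, 3, 7, 5]        # minutes on task
social = [2, 3, 1, 3, 2]           # rated cooperation, 0-3 scale

# Composite: average of the standardized sub-indicators per child.
standardized = [z_scores(v) for v in (vocabulary, attention, social)]
composite = [mean(child) for child in zip(*standardized)]
print([round(c, 2) for c in composite])
```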

Monitoring Program Outcomes. Outcome monitoring: the simplest approach to measuring program outcomes; similar to process monitoring, except that the regularly collected information relates to program outcomes rather than to process and performance. Requires indicators that are practical to collect routinely and informative with regard to program effectiveness.
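A minimal sketch of what routine outcome monitoring can look like in practice; the indicator, reporting periods, and target value below are all hypothetical.

```python
# Quarterly values of one outcome indicator, e.g. the share of
# participating households reporting food security (invented data).
observations = {
    "2015-Q1": 0.42,
    "2015-Q2": 0.47,
    "2015-Q3": 0.51,
    "2015-Q4": 0.58,
}
target = 0.60  # assumed end-of-year program objective

periods = sorted(observations)
baseline, latest = observations[periods[0]], observations[periods[-1]]
change = latest - baseline  # outcome change, not yet program impact

print(f"Outcome change since baseline: {change:+.2f}")
print(f"Target {'met' if latest >= target else 'not yet met'} "
      f"({latest:.2f} vs. {target:.2f})")
```

Note that this reports outcome levels and change only; attributing the change to the program is the impact question taken up below.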

Monitoring Strategies [Figure: timelines of alternative monitoring designs in the standard X/O notation, where X marks the intervention and O1, O2, ... mark successive observations over time; the designs range from a post-program-only series (e.g. X O1 O2) through before-after comparisons (e.g. O1 O2 X O3 O4) to longer observation series on both sides of the intervention.]

Selecting Outcome Indicators. Indicators need to be as responsive as possible to program effects: include only members of the target population who actually received services; do not include data on beneficiaries who dropped out of the program (→ a service utilization issue). The best outcome indicators, short of an impact evaluation, are variables that only the program can affect and that are central to the program's mission.

Selecting Outcome Indicators (cont.) Concerns when selecting outcome indicators: "Teaching to the test": program staff may concentrate on the critical outcome indicators in order to improve measured performance, which may distort program activities. "Corruptibility of indicators": monitoring data should be collected by an outside evaluator, or with careful processes in place that prevent distortion. (What is the role of participation here?)

Advantage of Outcome Monitoring. Provides useful and relatively inexpensive information about program effects, usually in a reasonable timeframe (compared to impact evaluation). → Mainly a technique for improving program administration, not for assessing the program's impact on the social conditions it intends to benefit. (Rossi/Lipsey/Freeman 2004)

Limitation of Outcome Monitoring. Requires indicators that identify change and link that change to the program. However, there are often many outside influences on a social condition (confounding factors). → Isolating program effects may require the special techniques of impact evaluation.
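One such technique is a difference-in-differences comparison. The sketch below uses invented numbers for a program group and a non-program comparison group; the parallel-trends assumption it relies on is flagged in the comments.

```python
# Difference-in-differences: a common way to isolate program effects
# when a comparison group is available. All numbers are hypothetical.

# Average outcome (e.g. household income, USD/month) before and after.
program_before, program_after = 210.0, 265.0
comparison_before, comparison_after = 205.0, 230.0

# The naive change in the program group mixes the program effect
# with outside influences (secular trends, other interventions).
naive_change = program_after - program_before             # +55.0

# The comparison group's change estimates those outside influences,
# assuming both groups would have followed parallel trends in the
# absence of the program (the key identifying assumption).
background_change = comparison_after - comparison_before  # +25.0

impact_estimate = naive_change - background_change        # +30.0
print(f"Naive outcome change: {naive_change:+.1f}")
print(f"Estimated program impact: {impact_estimate:+.1f}")
```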

Project Monitoring Plan. For each objective, the monitoring strategy is laid out as a table with the columns: What (indicators) | How (methods and tasks) | When | Who | Where
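An illustrative (invented) entry for such a plan: What: percentage of trained farmers applying the promoted practice; How: annual farm-visit checklist administered by field staff; When: each October; Who: M&E officer with extension workers; Where: all participating districts.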

Logframe: a matrix with the rows Goal, Purpose, Outputs, and Activities, and the columns OVIs (objectively verifiable indicators), MOVs (means of verification), and Assumptions/Risks; in the Activities row, inputs and milestones take the place of indicators. (TAPGR: Development Project Planning)

Discussion Questions

How could the outcome of your program be monitored? What are the critical outcome variables? What outcome monitoring strategy is feasible, taking into account the local implementation environment? What are the strengths of this methodology? What are its weaknesses? How would you judge the quality of the collected data?