CINDI PMER Workshop, 12-14 March 2007. By Simon Opolot. © 2007 Office of the Premier, KwaZulu-Natal Province.


Slide 1
CINDI PMER Workshop, 12-14 March 2007. By Simon Opolot. © 2007 Office of the Premier, KwaZulu-Natal Province, Private Bag X9037, Telecom House, 2nd Floor, Room Langalibalele Street, Pietermaritzburg 3200, South Africa. All rights reserved. March 13, 2007.

FEELING THE PULSE OF THE PROJECT FROM INCEPTION TOWARDS MEETING ITS OBJECTIVES/GOALS/COMPLETION – TOWARDS IMPACT!

Slide 2
THE RATIONALE BEHIND INDICATOR USE

Because an indicator can be a measurement, a number, a fact, an opinion or a perception that points to a specific condition or situation targeted by the intervention, and can measure changes in that condition or situation over time:
- Indicators are front-line instruments in monitoring and evaluating HIV/AIDS interventions and other development work.
- Indicators enable us to assess where we stand and where we are going with respect to values and goals, and to evaluate specific programs and determine their impact.
- Indicators provide a close look at the results of interventions.
- Using indicators enables us to feel the pulse of the intervention/project as it moves from inception towards meeting its objectives/goals/completion – towards impact!

Slide 3
USING INDICATORS IN A CHAIN, FROM INPUT THROUGH TO OUTCOME/IMPACT

Risk/Enabling indicators – Experience shows that at every stage of its cycle a project/intervention may be affected by a variety of risks or enabling features. By risk/enabling indicators we mean those factors external to a project that contribute to the project's success or failure. E.g. in interventions designed to mobilize and strengthen support for OVCs, an indicator of risk would be the attitude of local communities.

Input indicators (also called resource indicators) – relate to the resources devoted to the intervention/project, e.g. funding, human and non-human resources, infrastructure, institution-building, and other means by which the intervention/project is put into effect. These indicators play an important role in flagging potential problems and identifying their causes. However, input indicators alone will not reveal whether or not the intervention/project will be a success.

Slide 4
USING INDICATORS IN A CHAIN, FROM INPUT THROUGH TO OUTCOME/IMPACT (cont.)

Process indicators (also called "throughput" or "activity" indicators) – reflect the delivery of resources devoted to the intervention/project on an ongoing basis. As such, they are the best indicators of implementation and are used for project monitoring. However, while they reflect achievement of results, they should not displace measures of distal outcomes (impact). A process may succeed even while the outcome/impact fails, as the piece of cynical folk wisdom notes: "the operation was a success, but the patient died."

Output indicators – often used in project evaluations, but less useful than outcome indicators because they do not track distal results. Output indicators measure intermediate results concerning the products and services delivered when a program or project is completed, but not longer-term results. One of the most important tasks in the use of indicators is to carry out evaluation at the outcome as well as the output level.

Slide 5
USING INDICATORS IN A CHAIN, FROM INPUT THROUGH TO OUTCOME/IMPACT (cont.)

Outcome indicators – concern the effectiveness, often long-term, of the intervention/project as judged by the measurable change achieved in improving the quality of life of beneficiaries. They are also known as "impact" indicators. In most cases, the primary emphasis in using indicators should be on outcomes, because these best measure distal results.
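The indicator chain running from input through process and output to outcome can be sketched as a small data structure. This is a minimal illustration only; the indicator names, sample values and helper functions below are assumptions for the example, not part of the workshop material.

```python
from dataclasses import dataclass, field
from enum import Enum

# The five indicator types named in the slides, ordered along the
# results chain from input through to outcome/impact.
class IndicatorType(Enum):
    RISK_ENABLING = 0
    INPUT = 1
    PROCESS = 2
    OUTPUT = 3
    OUTCOME = 4   # also called "impact" indicators in the slides

@dataclass
class Indicator:
    name: str
    kind: IndicatorType
    readings: list = field(default_factory=list)  # values recorded over time

    def record(self, value):
        self.readings.append(value)

def latest_by_stage(indicators):
    """Report the most recent reading at each stage of the chain."""
    out = {}
    for ind in sorted(indicators, key=lambda i: i.kind.value):
        out[ind.kind.name] = ind.readings[-1] if ind.readings else None
    return out

# Illustrative (assumed) indicators for an OVC-support intervention:
funding = Indicator("funds disbursed (ZAR)", IndicatorType.INPUT)
training = Indicator("caregivers trained", IndicatorType.PROCESS)
funding.record(250_000)
training.record(40)
print(latest_by_stage([funding, training]))
# {'INPUT': 250000, 'PROCESS': 40}
```

Keeping readings as a list per indicator is what later allows the comparison over time against baseline data that slide 9 describes.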

Slide 6
SOME QUESTIONS TO ASK AND ANSWER

What are the various service areas your HIV/AIDS interventions are addressing? How were the indicators you are currently using developed? If they were developed in a non-participatory fashion, using expertise and knowledge from outside, chances are that those indicators have not been properly understood and the intervention/project does not have a pulse that can be felt. Externally developed indicators should be adopted and contextualized to the local environment in which the intervention operates.

Slide 7
IN ORDER TO FEEL THE PULSE, INDICATORS CHOSEN MUST MEET TWO TESTS

The Test of Reliability – the indicators used must be accurate and consistent. If multiple uses of the same instrument (e.g. interview, survey) yield the same or similar results, then your indicator is reliable.

The Test of Validity – the information that indicators provide must be close to the reality they are measuring. Ways of ensuring an indicator is valid are: 1) common sense; 2) whether the indicator reflects similar findings in different situations; and 3) whether different survey instruments yield or uncover the same indicators. In general, the validity of an indicator can be enhanced by triangulation, the use of multiple sources of information and data. It is in this context that quantitative and qualitative approaches can be fruitfully mixed.
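The two tests above can be given a rough numerical reading. The sketch below is one possible operationalisation, assuming made-up thresholds (10% variation for reliability, 15% disagreement for triangulation) and illustrative data; the workshop does not prescribe these figures.

```python
import statistics

def is_reliable(readings, max_cv=0.10):
    """Reliability test sketch: repeated uses of the same instrument
    should yield similar results. Here: coefficient of variation across
    repeated readings must stay under max_cv (an assumed 10% threshold)."""
    mean = statistics.mean(readings)
    if mean == 0:
        return all(r == 0 for r in readings)
    return statistics.stdev(readings) / mean <= max_cv

def triangulate(sources, tolerance=0.15):
    """Triangulation sketch: an indicator value is treated as corroborated
    when independent sources all fall within tolerance of the median."""
    values = list(sources.values())
    mid = statistics.median(values)
    return all(abs(v - mid) <= tolerance * mid for v in values)

survey_repeats = [62, 64, 61]       # the same survey run three times
print(is_reliable(survey_repeats))  # True: the readings agree closely

# Multiple (illustrative) sources of information on one indicator:
sources = {"survey": 62, "interviews": 58, "clinic records": 65}
print(triangulate(sources))         # True: the sources broadly agree
```

A real reliability study would use an established statistic (e.g. test-retest correlation) rather than this simple spread check, but the logic of "same instrument, similar result" is the same.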

Slide 8
INDICATORS AND YOUR INTERVENTION OBJECTIVES

Since indicators are tools for eliciting results (and for assessing impact), they are tied to the objectives with which the HIV/AIDS intervention begins:
- Objectives should be determined in relation to baseline studies, against which results can be measured.
- Good objectives serve as "anchors" throughout the project cycle in two important ways: they provide a statement of the principal contributions an intervention will make towards impacting a particular condition, and they act as a set of "information handles" for assessing progress during implementation.
- Good objectives must be realistic, operational and measurable. They must also be tied to a credible implementation plan that links courses of action and intermediate targets to the expected final OUTCOME/IMPACT.

Slide 9
INDICATORS, TIME-FRAMES AND SUSTAINABILITY

So you ask: which indicators should be used at which stage of the project cycle?
- Input indicators should be used at or close to the start of the project, at which point baseline data is collected.
- Process indicators should be used while the project is proceeding, for purposes of monitoring, and until near the end of donor involvement.
- Output indicators should be used near the end of donor involvement, and outcome indicators should be used after donor involvement is complete.
- Process, output and outcome indicators can then be compared against each other and against baseline data in order to determine how far objectives have been met.
- In some cases, the same indicator will be used to measure process, output and outcome (e.g. disaggregated enrolment figures, literacy rates or the local community's level of satisfaction with the project), the difference being that the indicator is used at different points in time.
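The timing rule above can be sketched as a simple mapping from project stage to indicator type, followed by the baseline comparison the slide describes. Stage names and enrolment figures below are illustrative assumptions, not workshop data.

```python
# Which indicator type to use at each point in the project cycle,
# per the timing rule on this slide (stage labels are assumed).
STAGE_TO_INDICATOR = {
    "start (baseline collected)": "input",
    "implementation": "process",
    "end of donor involvement": "output",
    "after donor involvement": "outcome",
}

def change_vs_baseline(baseline, readings):
    """Percent change of each later reading relative to the baseline,
    so process, output and outcome readings can be compared."""
    return {stage: round(100 * (value - baseline) / baseline, 1)
            for stage, value in readings.items()}

# e.g. the same disaggregated enrolment figure read at different
# points in time (numbers are made up for illustration):
baseline_enrolment = 400
later = {
    "implementation": 460,
    "end of donor involvement": 520,
    "after donor involvement": 540,
}
print(change_vs_baseline(baseline_enrolment, later))
# {'implementation': 15.0, 'end of donor involvement': 30.0,
#  'after donor involvement': 35.0}
```

This mirrors the slide's point that one indicator can serve as process, output and outcome measure: only the point in time at which it is read changes.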