Developing Program Indicators Measuring Results MEASURE Evaluation.


Developing Program Indicators Measuring Results MEASURE Evaluation

Objectives
By the end of this module, participants will:
 State the definition of a program indicator
 Develop program indicators
 Complete an Indicator Information Sheet

Indicator Basics
 What is an indicator?
 How to select key indicators for your own organization
 Program donor indicators
 Discussion of indicators by program area

Definition
An indicator is…
 A variable
 That measures
 One aspect of a program/project
An appropriate set of indicators includes at least one indicator per significant element of the program or project (input, output, outcome, impact).

An indicator  Is a variable… means the value of the indicator varies between a given, reference level, measured at the start of the intervention, and another value measured after the intervention has had time to produce its impact, when the indicator is again measured.

An indicator  Is a measurement… It measures the value of the change in units that are significant for the management of the program and comparable to past and future units and values

An indicator  One aspect of a program…  This can be an input, an output, or a general objective, but the corresponding indicator should be narrowly defined in order to determine the aspect measured as precisely as possible.  For a given project a complete and appropriate set of indicators must include at least one indicator per significant element of the intervention.

What is an Indicator?
 An indicator is selected to represent key or significant areas and to demonstrate whether conditions have changed, tracking trends over time (condom distribution, service statistics).
 An indicator is specific information that provides evidence of the achievement (or lack of achievement) of results and activities.

Why are indicators important?
 Indicators reduce a large amount of data to its simplest form (percent of clients who tested after receiving pre-test counseling, prevalence rate).
 When related to targets or goals, indicators can signal the need for corrective management action, help evaluate the effectiveness of management actions, and provide evidence of whether objectives are being achieved.

Concepts and Definitions
An indicator can be a:
 Number
 Ratio
 Percentage
 Average
 Rate
 Index (composite of indicators)
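The forms listed above all derive from the same raw counts. As a minimal sketch (all function names and figures below are illustrative examples, not program data or a MEASURE Evaluation tool):

```python
# Illustrative only: how common indicator forms derive from raw counts.

def percentage(numerator: int, denominator: int) -> float:
    """Percentage, e.g. % of counseled clients who were tested."""
    return 100.0 * numerator / denominator

def rate(events: int, population: int, per: int = 1000) -> float:
    """Rate per `per` population, e.g. cases per 1,000 people."""
    return events / population * per

def simple_index(components: list[float], weights: list[float]) -> float:
    """Weighted composite of several sub-indicators (an index)."""
    return sum(c * w for c, w in zip(components, weights)) / sum(weights)

# Hypothetical service statistics:
clients_counseled = 400   # a plain count is itself an indicator (a number)
clients_tested = 300
pct_tested = percentage(clients_tested, clients_counseled)  # 75.0
```

The same two counts can back several indicator forms; which form to report depends on what is meaningful for program management.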

Indicators are Not
 Just anything you can think of to measure; every measure is not an indicator (# of school desks).
 Objectives or targets; indicators report actual results.
 Biased, i.e., they do not specify a particular level of achievement; the words improved, increased, gained, etc., do not belong in an indicator.

Remember…
 No single indicator constitutes a comprehensive measure (# of people receiving ARVs – need/cost; # of OVC served – visits, inputs).
 Balance the number of indicators: not too many, but enough to capture key information. Do not develop an indicator for every possible issue – only for the most important elements of the program.
 A high-quality M&E system tracks all levels of data but does not use input or output indicators alone as evidence of results or to evaluate the effectiveness of the program.

Levels of Indicators
 Input
 Output
 Outcome
 Impact

Program phases and corresponding M&E activities:
Planning: Formative Evaluation (Planning and Assessment)
Implementation: Input/Output Monitoring; Process Evaluation (What & how well are we doing?)
Outcomes: Outcome Monitoring; Outcome Evaluation; Impact Monitoring; Impact Evaluation

4 Steps in Selecting Indicators
Step 1: Clarify the Results Statements
Identify what needs to be measured. Good indicators start with good results statements. Start with the overall objective or goal and work backwards.

4 Steps in Selecting Indicators
Step 2: Develop a List of Possible Indicators
Brainstorm indicators at each level of results. Use:
 Internal brainstorming (involvement)
 Consultation with references (experts, documents)
 Experience of other similar organizations

4 Steps in Selecting Indicators
Step 3: Assess Each Possible Indicator
1) Measurable (can be quantified and measured by some scale).
2) Practical (data can be collected on a timely basis and at reasonable cost).
3) Reliable (can be measured repeatedly with precision by different people).
4) Relevant – attributable to YOUR ORGANIZATION (the extent to which a result is caused by your activities).
5) Management Useful (project staff and audiences feel the information provided by the measure is critical to decision making).

4 Steps in Selecting Indicators
Step 3: Assess Each Possible Indicator (continued)
6) Direct (the indicator closely tracks the result it is intended to measure).
7) Sensitive (serves as an early warning of changing conditions).
8) Capable of Being Disaggregated (data can be broken down by gender, age, location, or other dimensions where appropriate).
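Assessing candidates against the eight criteria amounts to a screening checklist. A minimal sketch of that idea (the candidate names, the scores assigned to them, and the cutoff of six criteria are all hypothetical, not a MEASURE Evaluation rule):

```python
# Sketch: screen candidate indicators against the eight criteria above.
# Scores and the `must_meet` threshold are illustrative assumptions.

CRITERIA = ["measurable", "practical", "reliable", "relevant",
            "useful", "direct", "sensitive", "disaggregable"]

def screen(candidates: dict[str, set[str]], must_meet: int = 6) -> list[str]:
    """Keep candidates that meet at least `must_meet` of the criteria."""
    return [name for name, met in candidates.items()
            if len(met & set(CRITERIA)) >= must_meet]

candidates = {
    "# of VCT providers trained": set(CRITERIA),   # meets all 8 criteria
    "improved community awareness": {"relevant"},  # biased wording, hard to measure
}
shortlist = screen(candidates)  # only the first candidate survives
```

The second candidate also illustrates the earlier point that words like "improved" do not belong in an indicator: it fails "measurable" before the scoring even starts.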

Proxy Indicator
An indirect measure that is indicative of the desired result:
 # of condoms distributed
 VCT (post-test counseling)

4 Steps in Selecting Indicators
Step 4: Select the “Best” Indicators
 Based on your analysis, narrow the list to the final indicators that will be used in the monitoring system.
 They should be the optimum set that meets management needs at reasonable cost.
 Limit the number of indicators used to track each objective or result to a few (two or three).
 Remember your target audiences (information users).

Program Element | Result                                               | Indicator
Impact          | MTCT of HIV                                          |
Outcome         | Accessibility of HIV testing services to ANC clients |
Output          | ANC clients receiving HIV testing services           |
Activity        | Providing HIV testing services to ANC clients        |
Input           | HIV test kits                                        |

Program Element | Result                                       | Indicator
Impact          | HIV infection                                | HIV prevalence rate
Outcome         | Availability of quality VCT services         | % of population receiving quality VCT services
Output          | Trained VCT service providers                | # of VCT service providers trained
Activity        | Training of personnel providing VCT services | # of comprehensive VCT training courses conducted
Input           | Comprehensive VCT training curricula         | VCT training curricula developed (Y/N)

Identifying Data Sources: Input and Output
 Program report
 Service statistics
 Training evaluation
 Private sector data
 Government report

Identifying Data Sources: Outcome and Impact
 1998/2004 DHS Study
 2002/2004 Nelson Mandela/HSRC Behavior Survey
 2002 NDOH In-School Youth Study
 2002 RHRU Facility Based STI Survey
 Annual ANC Surveillance
 Annual STI Surveillance

Developing Indicator Information Sheets
 A protocol is an instruction sheet.
 Protocols capture the reason for selecting indicators, describe the indicator in precise terms, and identify the plans for data collection, analysis, reporting, and review.
 Protocols help ensure the reliability of indicators by giving different people the critical information they need to measure the indicator repeatedly with the same precision.
 Protocols provide the organization with the means to collect data consistently over time.
 Audit trail

Parts of Indicator Information Sheet – Identification
 Indicator Information Reference Sheet No.:
 Name of the Indicator:
 Result to which the Indicator Responds:
 Level of Indicator:

Parts of Indicator Information Sheet – Description
 Definition:
 Unit of Measurement:
 Disaggregated by:
 Justification and Management Utility:

Parts of Indicator Information Sheet – Plan for Data Acquisition
 Data Collection Method:
 Data Source:
 Frequency and Timing of Data Acquisition:
 Estimated Cost of Data Acquisition:
 Individual Responsible:
 Location of Data Storage:

Parts of Indicator Information Sheet – Data Quality Issues
 Known Data Limitations and Significance:
 Action Taken or Planned to Address this Limitation:
 Internal Data Quality Assessment:

Parts of Indicator Information Sheet – Plans for Data Analysis, Review and Reporting
 Data Analysis:
 Presentation of Data:
 Review of Data:
 Baselines:

Parts of Indicator Information Sheet – Performance Indicator Values
 Year:
 Target:
 Actual:
 Notes:
 Date When This Indicator Sheet Was Last Updated:
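Taken together, the sheet sections above define a fixed record for each indicator. One way to sketch that structure in code, with field names taken from the sections and all field values hypothetical examples:

```python
# Hedged sketch: an Indicator Information Sheet as a structured record.
# Fields mirror the sheet sections above; the sample values are invented.
from dataclasses import dataclass, field

@dataclass
class IndicatorSheet:
    reference_no: str               # Identification section
    name: str
    result: str
    level: str                      # input / output / outcome / impact
    definition: str                 # Description section
    unit: str
    disaggregated_by: list[str] = field(default_factory=list)
    data_source: str = ""           # Plan for data acquisition
    frequency: str = ""
    responsible: str = ""
    known_limitations: str = ""     # Data quality issues
    targets: dict[int, float] = field(default_factory=dict)  # year -> target
    actuals: dict[int, float] = field(default_factory=dict)  # year -> actual

sheet = IndicatorSheet(
    reference_no="EX-01",           # hypothetical reference number
    name="# of VCT service providers trained",
    result="Trained VCT service providers",
    level="output",
    definition="Count of providers completing the full VCT training course",
    unit="persons",
    disaggregated_by=["gender", "location"],
    targets={2024: 120},
    actuals={2024: 95},
)
```

Keeping targets and actuals keyed by year in the same record is what makes the sheet usable as an audit trail: the same definition and unit apply to every year's value.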

Program Donor Indicators

Program Level: Input → Output → Outcome → Impact

M&E Indicator Pyramid (bottom to top):
 Project Level Indicators (MTCT, VCT, TB/HIV, Care and Treatment, etc.)
 Country Level Indicators (SA NDOH, USG Mission)
 Multi-national Indicators (UNAIDS, O/GAC)

Purposes of M&E
 Program Improvement
 Reporting/Accountability
 Share Data with Partners

MEASURE Evaluation is funded by the U.S. Agency for International Development (USAID) through Cooperative Agreement GPO-A and is implemented by the Carolina Population Center at the University of North Carolina in partnership with Futures Group, John Snow, Inc., Macro International, and Tulane University. Visit us online at