28 February, 2011 University of Pretoria Introduction to M&E
Session Objectives Describe Health Information/M&E Systems Define Monitoring Define Evaluation Describe the purposes of M&E Describe the relationship between monitoring and evaluation Define program components Identify the different types of M&E
Learning Objectives By the end of this session you will be able to: Define key M&E terms Identify components of programs to evaluate Describe different purposes for M&E State why monitoring & evaluation are important in programming Describe characteristics of a good M&E system
Session Overview Key M&E terms “M” vs. “E” The purpose of M&E Types of evaluation
Defining Programs, Projects & Interventions Program: organized effort to respond to a broad social problem (typically organized at national level) Project: specific set of activities with linked objectives that contribute to the overall objective of a program (typically organized at sub-national level) Intervention: often used in the same way as “project”; sometimes a specific sub-set of program or project activities The term “program” can be used in many ways. For our purposes, programs are large, overarching efforts with many components and many levels. Programs are distinguished from “projects”, which are more discrete; a program may have projects within it. Focus is on NATIONAL-SCALE programs
Program Planning Involves Setting… Goals & Objectives based on intended Impact Intended Outcomes Intended Outputs or Deliverables Planned Activities (Processes) Inputs or Resources Inputs—Output and outcome/impact game
Inputs What you need to implement the program, e.g., the financial, human and material resources used to implement an HIV counseling and testing program: Trained personnel C&T protocols and guidelines Training materials HIV test kits and other supplies Money
Activities What the program does to accomplish its objectives (micro level) Training workshops on C&T for personnel and site managers Providing pre-test and post-test counseling to clients Curriculum development Recording & reporting Dissemination of IEC materials Supervision Services that the program/project provides to accomplish its objectives, such as outreach, materials distribution, counseling sessions, workshops, and training.
Outputs What the program delivers Condoms distributed Clients receiving pre-test counseling, HIV tests, post-test counseling Materials distributed People reached Intervention sessions completed
Outcomes The results of the program, or changes that occur both immediately and some time after activities are completed Changes in knowledge, behavior, attitudes and skills Quality of C&T improved Clients develop and adhere to a personalized risk-reduction and treatment strategy Effects of a program at the target population level Program results occur both immediately and some time after the activities are completed, such as changes in knowledge, attitudes, beliefs, skills, behaviors, access, policies, and environmental conditions.
Impact The wider effect of the program on long-term results HIV transmission rates decrease HIV incidence decreases Changes in HIV mortality and morbidity Decrease in deaths due to HIV-related TB Looking at the impact of our program answers this question: what is the effect of the program on the epidemiology of HIV? And, most important, are you able to link these effects directly to your program efforts? Show examples of impact; note that the epidemiology of HIV is affected by many factors, not just our program, and this is one way we can distinguish outcome from impact. All of these items may be affected by other health programs, interventions or phenomena that are not addressed by HIV programs. Impact: long-term results of one or more programs over time, such as changes in HIV infection, morbidity, and mortality.
Example: Components of an HIV program Programme level Population level Input Process Output Outcome Impact Capacity building BCC interventions Community mobilization Policy development Improved skills of providers Increased knowledge Mobilized communities Sound policies available Financial resources Staff Supplies Reduced risk behaviors Declining HIV/AIDS morbidity and mortality Economic growth AFTER THIS SLIDE INPUT-OUTPUT GAME IF TIME ALLOWS
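The component chain in the example above can be sketched as a simple ordered structure. This is an illustrative sketch only, in Python, with component names and examples taken from the slide; it is not a real M&E tool:

```python
# Illustrative sketch: the HIV-program logic model from the slide,
# represented as an ordered mapping from component to examples.
# All names come from the slide; nothing here is a real program.
logic_model = {
    "inputs": ["financial resources", "staff", "supplies"],
    "processes": ["capacity building", "BCC interventions",
                  "community mobilization", "policy development"],
    "outputs": ["improved provider skills", "increased knowledge",
                "mobilized communities", "sound policies available"],
    "outcomes": ["reduced risk behaviors"],
    "impact": ["declining HIV/AIDS morbidity and mortality",
               "economic growth"],
}

# The ordering matters: each component feeds the next
# (inputs -> processes -> outputs -> outcomes -> impact).
components = list(logic_model)
```

Since Python dictionaries preserve insertion order, the component chain can be read off directly with `list(logic_model)`.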
So how does this relate to M&E? Back to our definitions…
What Do We Mean by M&E? Set of procedures & analytical tools to examine how programs are conducted (inputs & activities) their level of performance (outputs) whether they achieved what they were intended to achieve (outcomes & impact) Types of evaluation monitoring (including process evaluation) evaluation (outcome and impact evaluation) [+ surveillance]
Monitoring vs Evaluation Monitoring: What are we doing? Tracking inputs and outputs to assess whether programs are performing according to plans (e.g., people trained, condoms distributed) Evaluation: What have we achieved? Assessment of impact of the programme on behaviour or health outcome (e.g., condom use at last risky sex, HIV prevalence)
What is Monitoring? A continuous, systematic process of collecting, analyzing and using information to track a program's progress toward its goals and objectives Provides regular feedback that measures change over time in any of the program components, such as costs, personnel and program implementation An unexpected change in monitoring data may trigger the need for a more formal evaluation of activities Process evaluation measures how well program activities are being performed. This information is sometimes collected on a routine basis, such as through staff reports, but may also be collected periodically in a larger-scale process evaluation effort. Source: Compendium of Indicators for Monitoring and Evaluation of National Tuberculosis Programs. Aug 2004. Stop TB Partnership, MEASURE Evaluation, CDC, USAID. p. 1.
Illustration of Program Monitoring Explain elements of the graph shown: the vertical axis can be any program indicator, the horizontal axis is the time over which a program runs, and each bar represents a periodic measurement of the indicator over the lifetime of the program. As this graph illustrates, monitoring requires data to construct indicators for your outcomes of interest at several points. At a minimum, the program must have all data necessary to calculate your indicator’s value before or near the start of the related intervention, and at or after the end of the intervention. Ideally, monitoring will measure the indicator at periodic intervals while the program is ongoing, both to track incremental program performance and to discover whether activities or other factors need adjustment during the intervention in order to improve the ultimate outcome. For instance, if recurrent stock-outs occur, either increasing supply levels or adding supply deliveries could lead to higher measured use due to the program in the final evaluation. Additional Background: Note that monitoring does not involve determining or attributing the cause of a change in the measured indicator. Even cumulative data can be used to monitor performance: the rate of change is not investigated; rather, notice is taken of the overall change in the measured level of a relevant outcome over a period of time. The methodological issues are therefore less complex than those that must be taken into consideration for evaluation. Program start Program end TIME
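The monitoring idea in this slide, taking periodic measurements and noticing unexpected changes, can be reduced to a few lines of code. A minimal sketch follows; all indicator values and the flagging threshold are invented for illustration:

```python
# Hypothetical quarterly values of a program indicator
# (e.g., % of clients receiving pre-test counseling). Invented data.
measurements = [40, 45, 52, 38, 60]

def flag_unexpected_changes(series, threshold=10):
    """Return the indices where the indicator moved more than
    `threshold` points between consecutive measurements -- the kind
    of unexpected change that may trigger a more formal evaluation."""
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > threshold]

flags = flag_unexpected_changes(measurements)  # flags the drop and the rebound
```

Note that this only detects change; as the slide says, monitoring does not attribute the cause of the change, it simply signals that a closer look is needed.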
Example of programme Monitoring Male STD Cases at Thai Government Clinics: 1987-1993 Start of HIV-Control Program Source: Hannenberg RS, Rojanapithayakorn W, Kunasol P, et al. Impact of Thailand's HIV-control programme as indicated by the decline of sexually transmitted diseases. Lancet 1994;344:243-45.
Key M&E Questions Is the program being implemented as planned? Are things moving in the right direction? Did the program achieve its objectives? Can results be attributed to program efforts? Which program activities were more (or less) important/effective? Did the target population benefit from the program? At what cost?
What is Evaluation? A systematic process, limited in time, of collecting, analyzing and using information to assess the effectiveness, relevance and impact of a program in achieving its goals Requires a study design; sometimes a control or comparison group; often measurement over time Often involves measuring changes in knowledge, attitudes, behaviors, skills, community norms, utilization of health services, and health status at the population level Provides feedback that helps programs analyze the consequences, outcomes and results of their actions Source: Compendium of Indicators for Monitoring and Evaluation of National Tuberculosis Programs. Aug 2004. Stop TB Partnership, MEASURE Evaluation, CDC, USAID. p. 1. Comprehensive evaluation is based on research and analysis covering the conceptualization and design of programmes, the monitoring of programme interventions, and the assessment of programme utility. Evaluation is a rigorous, scientifically based analysis of information about program activities, characteristics, and outcomes to determine the merit or worth of a specific program/project, and an analysis of monitoring data to improve programs/projects and inform decisions about future resource allocations.
What is Evaluation? (con’t) Rigorous research design is needed for: Establishing a causal link between program effort and desired outcomes Isolating program effects from other, non-program influences on the outcome of interest Not undertaken routinely; usually reserved for specific situations, such as determining the success of a project for scale-up or replication Source: Compendium of Indicators for Monitoring and Evaluation of National Tuberculosis Programs. Aug 2004. Stop TB Partnership, MEASURE Evaluation, CDC, USAID. p. 1.
Outcome Evaluation Graph: a program indicator (e.g., CPR) plotted over time, from Program Start to Program End, with a “with program” line and a “without program” line; the gap between them, labeled Attribution, is the “impact”. Note that the red “without program” line also rises: there is an increase even in locations without the program.
Impact Evaluation Graph: a program indicator (e.g., fertility or HIV prevalence) plotted over time, from Program Start to Program End, with a “with program” line and a “without program” line; the gap between them, labeled Attribution, is the “impact”. Note that the red “without program” line also rises: there is an increase even in locations without the program.
Illustration of Program Evaluation/Impact With program Change in program outcome Without program Program impact To measure program impact, an evaluation is typically conducted at the start of the program and again at the end, rather than at repeated intervals while the program is being implemented. At the same time, these baseline and follow-up measurements are made in areas without the program. Attributing changes in outcomes to a particular program/intervention requires one to rule out all other possible explanations: we need to control for all external or confounding factors that may account for the results. Therefore, extensive knowledge of sampling and statistical analysis is sometimes required for measuring program impact. If the study design does not involve a randomly assigned control group, the difference in outcome between areas with the program and areas without the program is an analytical exercise rather than a direct measurement. There are some similarities between monitoring and evaluation: both require knowledge of baseline and final values, often with an interim measurement during the project. However, evaluation differs crucially from monitoring in that the goal of evaluation is to determine how much of the change in outcome is due to the program or intervention. In other words, pure numbers cannot tell the evaluation tale; evaluation is fundamentally an analytical exercise to help decision makers understand when, how, and to what extent the program is responsible for a particular measured impact. However, relatively few programs go as far as establishing cause and effect between the program and the change. Program start Program end TIME
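The arithmetic behind this slide, comparing the change in program areas against the change in comparison areas, can be written out directly. A minimal sketch follows; all figures are invented for illustration:

```python
# Hypothetical baseline and endline values of an outcome indicator
# (e.g., contraceptive prevalence rate, in %). Invented figures.
with_program = {"baseline": 20, "endline": 45}
without_program = {"baseline": 20, "endline": 30}

# Change observed in program areas (includes non-program influences).
change_with = with_program["endline"] - with_program["baseline"]

# Change in comparison areas approximates the counterfactual trend.
change_without = without_program["endline"] - without_program["baseline"]

# Estimated program impact: the difference between the two changes
# (a simple difference-in-differences calculation).
impact = change_with - change_without  # 25 - 10 = 15 percentage points
```

The subtraction alone only estimates impact; as the slide notes, attributing the difference to the program still requires ruling out other explanations, ideally through a study design with a randomly assigned control group.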
Types of Monitoring and Evaluation Formative Evaluation Research Assessment and Planning (Concept and Design) Input/Output Monitoring Process Evaluation (Monitoring of Inputs and Outputs; Assessment of Service Quality) Outcome Monitoring Impact Monitoring Outcome Evaluation Impact Evaluation (Effectiveness Evaluation) Cost-Effectiveness Analysis (including sustainability issues) Questions Answered by the Different Types of Monitoring and Evaluation: Is an intervention needed? Who needs the intervention? How should the intervention be carried out? To what extent are the planned activities actually realized? How well are the services provided? What staffing/resources were used? What barriers did clients experience in accessing the intervention? Did the expected outcomes or impacts occur (e.g., expected knowledge gained; expected change in behavior occurred; change in HIV prevalence)? Did the intervention cause the expected outcomes? Should program priorities be changed or expanded? To what extent should resources be reallocated? NOTE: Terms in red indicate PEPFAR terminology. Further differences between outcome monitoring vs. outcome evaluation and impact monitoring vs. impact evaluation are defined in the next slide.
Comparison of Potentially Confusing Terms Using the PEPFAR Example Outcome Impact Monitoring Simple Tracking of: Results that occur both immediately and some time after the activities are completed, such as changes in knowledge, attitudes, beliefs, skills, behaviors, access, policies, and environmental conditions. Clients reduced risk of HIV infection Increased prevalence of immediate breastfeeding in hospitals Decrease in newborn hypothermia Increased use of modern methods of contraception Simple Tracking (over multi-year periods) of: Long-term results of one or more programs over time, such as changes in morbidity and mortality HIV prevalence TB incidence Maternal morbidity Infant mortality Evaluation Rigorous Scientific Analysis (including attribution of effects) of: Program outcomes (same definition as above) Note: Most people outside of the PEPFAR community consider this to be “Impact Evaluation” Program impacts (same definition as above)
Key M&E Questions Did the program achieve its objectives? Did the target population benefit from the program? At what cost? Can improved health outcomes be attributed to program efforts? Which program activities were more (or less) important/effective? What would have happened in the absence of the program? How can we know or measure this (the counterfactual)?
Why Monitor & Evaluate? To make decisions about project management and service delivery To ensure effective and efficient use of resources and provide accountability to donors To assess whether the project has achieved its objectives and has the desired effects To learn from our activities and provide information to design future projects Purposes: Program Improvement, Sharing with Partners, Reporting/Accountability
Purposes of Monitoring and Evaluation Determine whether a plan or program is on schedule with planned activities Assess whether a policy, plan or program has produced desired impacts Generate knowledge: Identify factors (individual, community, programmatic) that influence health outcomes Help inform policy, planning or program decisions: new services, resource allocation, corrections, etc. M&E is an essential process in providing effective and efficient services and ensuring that programs are relevant and successful. For example, it helps us make informed decisions about such questions as appropriate staffing and other necessary resources. M&E helps us know whether a program is being true to its stated goals and objectives. For instance, …. M&E helps us evaluate whether our programs are having their desired impact. If we want to know how a program is performing we might assess it against targets that have been set for specific indicators by the program or funding agency or government. For instance, we might assess if a breastfeeding program is reaching its goals in providing counseling to pregnant women during ANC and by the percentage of children under six months who are exclusively breastfed. However, for M&E to have this desired impact, M&E data and information must be used strategically by programs, service delivery organizations, policymakers and other stakeholders.
Reasons to Monitor & Evaluate - Different Needs for Different Stakeholders: Funding Agencies & Policy Makers What M&E Measures Evidence of achievement of program objectives Program outcome and impact Program cost-efficiency Data about the target population What M&E Results Identify Priorities for strategic planning Programs that qualify for donor assistance Best practices Impact of donor assistance What Decisions are Guided by M&E Results How much funding should be allocated to a program What types of programs should be funded Which program approaches should be presented as models New strategic objectives, activities or results packages Replication and scaling up of successful programs Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
Reasons to Monitor & Evaluate - Different Needs for Different Stakeholders: Communities & Youth What M&E Measures Youth behavior related to reproductive health Young peoples’ needs How program funds are being spent The process and impact of community participation What M&E Results Identify Actual and potential benefits of youth programs Need for new and better youth services Community resources that can be used to support ARH programs Need for local support for ARH issues and actions What Decisions are Guided by M&E Results Degree to which community members and youth should participate in and support the program How to better coordinate community actions to address ARH How many and what type of local resources should be allocated to ARH Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
Reasons to Monitor & Evaluate - Different Needs for Different Stakeholders: Program Managers & Staff What M&E Measures Quality of activities and/or services Why some sites are more successful Program coverage What M&E Results Identify Priorities for strategic planning Training and supervision needs How to improve reporting to funding agencies Feedback from clients Why the program is not accomplishing what it set out to do What Decisions are Guided by M&E Results Resource allocation Replication and scaling up of interventions Fund-raising Motivating staff Policy advocacy By developing consensus among stakeholders about what information should be collected, given your available resources, you can make an M&E effort more manageable. Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
Fundamental Steps to Carry out M&E Agree on the scope and objectives of the M&E plan with stakeholders Select indicators Collect information on the selected indicators systematically and consistently Analyze the information gathered Compare results with the program’s initial goals and objectives Share results with stakeholders Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
Defining the Scope of the M&E Effort Scope refers to the extent of the activity you will undertake in an M&E effort. Scope is determined by several factors (questions): What should be monitored and evaluated? When should health programs be monitored and evaluated? How much will M&E cost? Who should be involved in M&E? Who should carry out the evaluation? Where should M&E take place? Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
What Should be Monitored & Evaluated? M&E can measure each stage of your program development: design, system development and functioning, and implementation. After goals, objectives and activities are developed, decisions about M&E at each stage are needed. The M&E effort can measure each stage to determine how the program is working and its impact on the target population. Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
What Should be Monitored & Evaluated? Program design is measured by process evaluation: Assessing how well the program has been designed Documenting the successes/challenges with program design System development and functioning is measured by monitoring and process evaluation: Document the development of support systems and determine if they are actually operating once program implementation begins Assess the performance of support systems Measure how effective the preparatory activities are in readying program personnel for program implementation Implementation is measured by monitoring, process evaluation, and outcome/impact evaluation: Reveal how program implementation is occurring Determine whether the program is achieving its objectives by measuring the changes in outcomes in your target population The process of program design involves developing a strategy and systematic approach to address the target population’s needs, identifying the actions and activities required to implement the strategy, and identifying the resources needed to carry out the activities. System development involves the creation of management and support systems (MIS, financial management system, personnel system, commodities and logistics systems) to carry out the program. System functioning involves the ongoing performance of the systems used to operate the program and includes issues such as how decisions are made within the program, whether internal and external communications are functioning well, and whether training and supervision are ensuring quality performance. Implementation is the process of carrying out program activities with the target population and providing services to them. Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
When Should Health Programs be Monitored & Evaluated? Monitoring and process evaluation should occur throughout the life of a program Outcome and impact evaluations are usually done near the end of a program (with a baseline gathered earlier to measure change) Starting M&E at the beginning of a program is ideal Some activities can still be measured if M&E is started in the middle of a program Even fewer activities can be measured if M&E is started towards the end of the program Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
When Should Health Programs be Monitored & Evaluated? Stage of Program / Monitoring / Process / Outcome-Impact Early - Set up monitoring system; identify indicators and tools; plan for tracking the program, data analysis, and reporting. - Assess systems development and functioning. Provide early feedback. Assess if the program is responsive to the target. - Identify objectives and indicators. Take baseline measurements. Create an outcome or impact evaluation plan. Middle - Assess MIS and data. Modify the original system if it is inadequate. If the program is not performing, launch a process evaluation. - Conduct a more formal midterm process evaluation to assess quality of program performance. Determine coverage, or whether the program is reaching the target. - Take midterm measurements. Analyze short-term outcome measures, such as changes in knowledge, increase in use of programs and changes in contextual factors. Provide feedback to the program. Late - Analyze data from the tracking system to conclude whether you conducted the program as planned. Prepare and submit reports. - Analyze end-of-program measurements. Determine what was done to improve quality of program implementation. Make recommendations for program replication or expansion. - Take end-of-program measurements. Examine evidence of changes in outcomes. Depending on study design, conduct impact analysis to conclude whether outcomes are attributable to program activities. Report to donors and other stakeholders. Flow chart of an M&E effort started at the beginning of a program Source: A Guide to Monitoring & Evaluating Adolescent Reproductive Health Program. Chapter 3: Developing an ARH Monitoring & Evaluation Plan. Pages 39-70. FOCUS on Young Adults, Tool Series 5, June 2005.
Where are the Levers in a Working Health System? 6 building blocks of health systems strengthening: Service delivery Health workforce INFORMATION Medical products, vaccines and technologies Financing Leadership and governance WHO: Nellie Bristol www.globalhealthmagazine.com
Input-Output Exercise
Program components as they relate to Types of M&E Assessment and Planning Input/Output Monitoring Outcome Monitoring Impact Monitoring Process Evaluation Outcome Evaluation Impact Evaluation
Program Components as They Relate to M&E Planning → Inputs → Processes/Activities → Outputs → Outcomes → Impact Assessments Input/Output Monitoring Process Evaluation Outcome Monitoring & Evaluation Impact Monitoring & Evaluation
Monitoring & Evaluation Pipeline Inputs Outputs All Most Outcomes Impact Some Few Number of Projects Measuring each Component Long-term effects Short-term and intermediate effects Behavior change Attitude change Changes in STI trends Increase in social support Condom availability Trained staff Quality of services (e.g. STI, VCT, care) Knowledge of HIV transmission Resources Staff Funds Materials Facilities Supplies Training Levels of Monitoring & Evaluation Efforts
M&E Terminology Quiz Work in pairs/group and determine which type of M&E each example is describing.
Additional M&E Websites MEASURE Evaluation: www.cpc.unc.edu/measure Health Metrics Network: www.who.int/healthmetrics John Snow Inc.: www.jsi.com HIV Global Partners: www.globalhivmeinform.org
MEASURE Evaluation is funded by the U.S. Agency for International Development (USAID) through Cooperative Agreement GHA-A-00-08-00003-00 and is implemented by the Carolina Population Center at the University of North Carolina at Chapel Hill, in partnership with Futures Group International, John Snow, Inc., Macro International Inc., Management Sciences for Health, and Tulane University. The views expressed in this presentation do not necessarily reflect the views of USAID or the United States government.