

1 Secure Health Programme: Monitoring and Evaluation. December 12, 2013. Violet Murunga, African Institute for Development Policy (AFIDEP)

2 Overview
What M&E is
Why M&E is important
Unpacking M&E
– Frameworks
– Indicators
– Data sources
Progress reporting mechanism

3 Defining M&E
M&E is the process by which data are collected and analyzed to provide policy makers and other stakeholders with a better means of learning from past experience, improving service delivery, planning and allocating resources, and demonstrating results as part of accountability to key stakeholders.

4 M&E is an integral part of programme management: a continuous cycle of assessment & planning, implementation & monitoring, evaluation, and adaptation.

5 Results-based programming
Within the development community there is a strong focus on results, hence the growing interest in M&E.
DFID expects robust information from the Secure Health Programme and other BCURE programmes to facilitate learning about effective capacity-building models.

6 Underlying principles for M&E
Efficiency
Effectiveness
Relevance and appropriateness
Combined, these criteria enable a judgment about whether the outputs and outcomes of the project are worth the costs of the inputs.

7 Monitoring
Sometimes referred to as process evaluation because it focuses on the implementation process:
– How well has the program been implemented?
– How much does implementation vary from site to site?
– Did the program benefit the intended people? At what cost?

8 Monitoring programme interventions
An ongoing, continuous process.
Collection of data at multiple points throughout the program cycle, including at the beginning to provide a baseline.
Tracks progress of activities against time, resources and targets set, in order to ensure timely completion of the project.

9 Evaluation
Assessing how well the program activities have met expected objectives.
Assessing the extent to which changes in outcomes can be attributed to the program or intervention.
Impact evaluation measures the difference in the outcome of interest with and without the program or intervention.

10 Evaluating a programme intervention requires:
– data collection at the start of a programme (to provide a baseline) and again at the end, rather than at repeated intervals during program implementation;
– a control or comparison group, in order to measure whether the changes in outcomes can be attributed to the program;
– a well-planned study design.

11 When should M&E take place?
M&E is a continuous process that occurs throughout the life of a program. It should be planned at the design stage of a program, with the time, money and personnel that will be required calculated and allocated in advance.

12 When should M&E take place?
Monitoring should be conducted at every stage of the program, with data collected, analyzed and used on a continuous basis.
Evaluations are usually conducted at the end of programs. However, they should be planned for at the start because they rely on data collected throughout the program; baseline data are especially important.

13 M&E for capacity-building interventions
Capacity building (or capacity development) is a process that improves the ability of a person, group, organization, or system to meet objectives or to perform better.
Performance is a result or set of results that represent productivity and competence related to an established objective, goal or standard.
Capacity building is behavior change, and it depends on the context.

14 M&E for capacity-building interventions
Unlike other aspects of health-related M&E, capacity measurement is not supported by a comprehensive history of theory and practice. Thus, capacity measurement in the health sector is both new and experimental.

15 M&E for capacity-building interventions
There are intrinsic challenges to measuring capacity that are reflected in the concept and role of capacity itself. For example, capacity derives its relevance from the contribution it makes to performance. There are endless areas where performance is required in the health sector, and an equally wide range of possible capacity variables that influence performance.

16 M&E for capacity-building interventions
In addition, contextual factors (factors outside the control of most health sector actors) can have a strong influence on capacity or on the desired outcome of a capacity-building intervention. These and other characteristics of capacity and capacity building explain why there are no gold standards for capacity-building M&E.

17 Conceptual framework for capacity in the health sector
“Understanding capacity and performance of individuals and organizations demands careful consideration of their role in larger systems, and their relationships within those systems” (Morgan, 1997).

18 What is different about M&E for capacity-building interventions
Traditionally, M&E focuses more on measuring performance and less on the way performance is achieved or sustained.
Capacity-building M&E focuses fundamentally on processes (e.g., building alliances, mobilizing communities, decentralized planning, learning) and other qualitative aspects of individual or organizational change (e.g., motivation to perform) that contribute to better performance.
Consequently, M&E of capacity building often seeks to capture actions or results that are not easily measured.

19 What is different about M&E for capacity-building interventions
In capacity-building interventions, the process and result of capacity building become the “intermediate outcome” that is expected to lead eventually to improved and sustained performance. Exploring the links between changes in capacity and changes in performance is therefore key. However, doing so often involves considerable speculation about the capacity needed to achieve performance goals.

20 What is different about M&E for capacity-building interventions
One of the main gaps in the knowledge base that informs capacity measurement is the lack of a common understanding of the relationship between capacity and performance. Little is known about what elements or combinations of elements of capacity are critical to performance. Moreover, there is considerable variation in what constitutes “adequate” performance.

21 Implications for M&E in capacity-building interventions
Broadening the concept of capacity building beyond technical skills and resources, and thinking about capacity building in terms of multiple levels and influences, helps planners and evaluators to hypothesize about which aspects of capacity are critical to performance and to define entry points for targeting capacity-building interventions.

22 Implications for M&E in capacity-building interventions
There is a need for a clear understanding of the interaction among different aspects of capacity and how they work (or fail to work) together, particularly with respect to individual and organizational behavior. These types of variables may require additional interpretation to ensure a complete grasp of capacity and its role in improving performance.

23 Implications for M&E in capacity-building interventions
The extent of experience is so limited that, at this stage, capacity measurement is considered to be an art rather than a science. Evaluators must therefore approach M&E of capacity-building interventions with a willingness to test strategies and share what they have learned in order to build a body of theory and practice.

24 Despite the challenges, M&E for capacity building is important for understanding:
– the process of capacity change (how capacity building takes place);
– capacity as an intermediate step toward performance (what elements of capacity are needed to ensure adequate performance);
– capacity as an outcome (whether capacity building has improved capacity).

25 Some tools, methods and approaches
Performance indicators
The logical framework approach
Theory-based evaluation
Formal surveys
Rapid appraisal methods
Participatory methods
Public expenditure tracking surveys
Cost-benefit and cost-effectiveness analysis
Impact evaluation

26 Performance indicators
Measures of inputs, processes, outputs, outcomes, and impacts.
Should be supported with sound data collection (perhaps involving formal surveys), analysis and reporting, to enable managers to track progress, demonstrate results, and take corrective action to improve service delivery.
Defining indicators should be a participatory activity, to generate buy-in for using the results in decision making.

27 Types of performance indicators
Indicators of program inputs measure the specific resources that go into carrying out a project or program.
Indicators of outputs measure the immediate results obtained by the program.
Indicators of outcomes measure whether the outcome changed in the desired direction and whether this change signifies program “success” (impact evaluation).

28 Types of performance indicators
Indicators can be either quantitative or qualitative.
– Quantitative indicators are numeric and are presented as numbers or percentages.
– Qualitative indicators are descriptive observations, used to supplement or complement the numbers and percentages by adding richness of information about the context in which the program has been operating, e.g. “availability of institutional guidelines for sourcing, appraising, analyzing and using research”.

29 Use of performance indicators
Setting performance targets and assessing progress toward achieving them.
Identifying problems via an early-warning system, to allow corrective action to be taken.
Indicating whether an in-depth evaluation or review is needed.
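
The early-warning use above can be sketched in code: compare each indicator's current value against its target and flag shortfalls for corrective action. The indicator names, values and the 80% warning threshold below are illustrative assumptions, not figures from the Secure Health M&E plan:

```python
# Flag indicators falling short of their targets (a hypothetical early-warning check).
def flag_shortfalls(indicators, warning_ratio=0.8):
    """Return the names of indicators whose current value is below warning_ratio * target."""
    flagged = []
    for name, (value, target) in indicators.items():
        if target > 0 and value < warning_ratio * target:
            flagged.append(name)
    return flagged

indicators = {
    # name: (current value, target) -- invented numbers for illustration
    "staff trained": (120, 200),
    "guidelines adopted": (4, 5),
    "briefings delivered": (9, 10),
}

print(flag_shortfalls(indicators))  # only "staff trained" is below 80% of target
```

A routine check like this supports the first two uses on the slide: targets are explicit in the data, and shortfalls surface early enough for corrective action.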

30 Advantages of performance indicators
An effective means to measure progress toward objectives.
Facilitates benchmarking comparisons across organizational units and districts, and over time.

31 Disadvantages of performance indicators
Poorly defined indicators are not good measures of success.
There is a tendency to define too many indicators, or indicators without accessible data sources, making the system costly, impractical, and likely to be underutilized.
There is often a trade-off between picking the optimal or desired indicators and having to accept the indicators that can be measured using existing data.

32 Characteristics of a good indicator
A good indicator should:
– produce the same results when used repeatedly to measure the same condition or event;
– measure only the condition or event it is intended to measure;
– reflect changes in the state or condition over time;
– represent reasonable measurement costs;
– be defined in clear and unambiguous terms; and
– be consistent with international standards and other reporting requirements.

33 Common challenges in selecting indicators
An indicator that:
– the program activities cannot affect;
– is too vague;
– relies on unavailable data;
– does not accurately represent the desired outcome.

34 General guidelines for the selection of indicators
– Indicators requiring data that can realistically be collected with the resources available.
– At least one or two indicators (ideally, from different data sources) per key activity or result.
– At least one indicator for each core activity (e.g., training event, social marketing message, etc.).
– No more than 8-10 indicators per area of significant program focus.
– Use a mix of data collection sources whenever possible.

35 Logical framework approach
Helps to clarify the objectives of any project, program, or policy.
Aids in the identification of the expected causal links (the “program logic”) in the following results chain: inputs, processes, outputs (including coverage or “reach” across beneficiary groups), outcomes, and impact.

36 Logical framework approach
Leads to the identification of performance indicators at each stage in this chain, as well as risks which might impede the attainment of the objectives.
Also a vehicle for engaging partners in clarifying objectives and designing activities.
During implementation, the logframe serves as a useful tool to review progress and take corrective action.

37 Use of logical frameworks
Improving the quality of project and program designs, by requiring the specification of clear objectives, the use of performance indicators, and assessment of risks.
Summarizing the design of complex activities.
Assisting the preparation of detailed operational plans.
Providing an objective basis for activity review, monitoring, and evaluation.

38 Advantages of logical frameworks
Ensures that decision-makers ask fundamental questions and analyze assumptions and risks.
Engages stakeholders in the planning and monitoring process.
When used dynamically, it is an effective management tool to guide implementation, monitoring and evaluation.

39 Disadvantages of logical frameworks
If managed rigidly, it stifles creativity and innovation.
If not updated during implementation, it can be a static tool that does not reflect changing conditions.
Training and follow-up are often required.

40 Logframes

41 How comfortable are you in working with logical frameworks?
1. What’s a logical framework?
2. I’ve heard of them but they scare me a bit
3. I have used them a bit but could do with some help
4. I am pretty comfortable using logical frameworks
5. I have a personal logframe which I use to plan out my entire life

42 How do theories of change and logframes relate to each other?
Theories of change are a bit like hippies – they are free and creative… and logframes are their slightly nerdy cousin!
(Image credits: Gaudiramone and RochelleHartman, Flickr)

43 Logframe template
PROJECT NAME

IMPACT: The long-term change your outcome will be contributing towards
– Impact Indicator 1: What you will measure to see if progress is being made towards the impact

OUTCOME: The stuff you think will change as a result of the products and/or services you have delivered
– Outcome Indicator 1: What you will measure to check that the outcome is being achieved
– Milestone/target: The target you expect to reach by a specified date
– Assumptions: The things that need to be true for this outcome to feed into the impact

OUTPUT 1: Product or service delivered
– Output Indicator 1.1: What you will measure to check that the product/service has been delivered effectively
– Milestone/target: The target you expect to reach by a specified date
– Assumptions: The things that need to be true for this output to feed into the outcome

OUTPUT 2: Product or service delivered
– Output Indicator 2.1: What you will measure to check that the product/service has been delivered effectively
– Milestone/target: The target you expect to reach by a specified date
– Assumptions: The things that need to be true for this output to feed into the outcome

Results chain: Input → Output → Outcome → Impact

44 Tips for better logframes
1. Keep the language SIMPLE and CONCISE
2. Make sure the indicators are measurable – and where possible objective
3. The indicator should tell you what you will measure – the milestone and target should flow from this
4. Impacts should be high level and there is no need to include targets and milestones
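
The tips above can be checked mechanically if a logframe is held as a small data structure: every outcome and output needs an indicator and a target, while the impact level deliberately carries no target or milestone. This is an illustrative sketch; the field names and the example entries are assumptions, not a DFID-prescribed format:

```python
# A minimal logframe representation with checks mirroring the tips above.
def check_logframe(logframe):
    """Return a list of problems: missing indicators or targets at outcome/output level."""
    problems = []
    if not logframe.get("impact", {}).get("indicator"):
        problems.append("impact is missing an indicator")
    for level in ("outcome", "outputs"):
        rows = logframe.get(level, [])
        rows = rows if isinstance(rows, list) else [rows]
        for row in rows:
            if not row.get("indicator"):
                problems.append(f"{row.get('name', level)} is missing an indicator")
            if not row.get("target"):
                problems.append(f"{row.get('name', level)} is missing a target")
    return problems

logframe = {
    "impact": {"name": "Better health outcomes", "indicator": "Mortality trends"},
    "outcome": {"name": "Evidence-informed policy",
                "indicator": "Case studies of evidence use", "target": "10 by 2015"},
    "outputs": [
        {"name": "Training delivered", "indicator": "Pre/post test scores",
         "target": "+20 percentage points"},
        {"name": "Advocacy carried out", "indicator": ""},  # incomplete on purpose
    ],
}

for p in check_logframe(logframe):
    print(p)  # flags the incomplete "Advocacy carried out" output
```

Encoding the logframe this way keeps the review dynamic, consistent with the advice that a logframe should be a living management tool rather than a static table.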

45 Example logframe: Ben’s birthday
PROJECT NAME: Ben’s birthday

IMPACT: The team is more motivated and works better
– Impact Indicator 1: Score in team annual review

OUTCOME: Ben is happier
– Outcome Indicator 1: Ben’s self-assessment of happiness before and after the event
– Target: Ben is at least 1 point happier on a five-point rating scale
– Assumptions: That Ben’s happiness impacts on the team’s productivity

OUTPUT 1: Delicious cake produced
– Output Indicator 1.1: Rating of cake tastiness by team members
– Target: Cake is rated at least 4 out of 5
– Assumptions: That Ben likes cake

OUTPUT 2: Happy Birthday sung to Ben
– Output Indicator 2.1: Tuning of singing
– Target: At least 80% of notes are in tune
– Assumptions: Ben enjoys listening to singing

46 Example logframe: BCURE
PROJECT NAME: BCURE

IMPACT: Poverty reduction and improved quality of life
– Impact Indicator 1: % living in poverty and average wellbeing scores

OUTCOME: Policy makers are informed by research evidence when making decisions
– Outcome Indicator 1: Case studies of policy being informed by research evidence
– Target: At least 10 case studies
– Assumptions: Use of research evidence will lead to better development outcomes

OUTPUT 1: Effective training delivered
– Output Indicator 1.1: Scores of participants on pre- and post-training test
– Target: Increase in score of at least 20 percentage points
– Assumptions: That increased capacity to use evidence will increase usage of evidence

OUTPUT 2: Effective advocacy carried out
– Output Indicator 2.1: Attitudes of policy makers towards evidence as assessed by survey
– Target: Increase in positive attitude towards evidence
– Assumptions: That positive attitude toward evidence will increase usage of evidence

47 Theory-based evaluation
Has similarities to the logframe approach but allows a much more in-depth understanding of the workings of a program or activity – the “program theory” or “program logic.”
Does not assume simple linear cause-and-effect relationships.

48 Theory-based evaluation
By mapping out the determining or causal factors judged important for success, and how they might interact, it can then be decided which steps should be monitored as the program develops, to see how well they are in fact borne out. This allows the critical success factors to be identified. Where the data show that these factors have not been achieved, a reasonable conclusion is that the program is less likely to be successful in achieving its objectives.

49 Use of theory-based evaluation
Mapping the design of complex activities.
Improving planning and management.

50 Advantages of theory-based evaluation
Provides early feedback about what is or is not working, and why.
Allows early correction of problems as soon as they emerge.
Assists identification of unintended side-effects of the program.
Helps in prioritizing which issues to investigate in greater depth, perhaps using more focused data collection or more sophisticated M&E techniques.
Provides a basis to assess the likely impacts of programs.

51 Disadvantages of theory-based evaluation
Can easily become overly complex if the scale of activities is large or if an exhaustive list of factors and assumptions is assembled.
Stakeholders might disagree about which determining factors they judge important, which can be time-consuming to address.

52 Formal surveys
Formal surveys can be used to collect standardized information from a carefully selected sample of people or households. Surveys often collect comparable information for a relatively large number of people in particular target groups.

53 Uses of formal surveys
Providing baseline data against which the performance of the strategy, program, or project can be compared.
Comparing different groups at a given point in time.
Comparing changes over time in the same group.
Comparing actual conditions with the targets established in a program or project design.
Describing conditions in a particular community or group.
Providing a key input to a formal evaluation of the impact of a program or project.

54 Advantages of formal surveys
Findings from the sample of people interviewed can be applied to the wider target group or the population as a whole.
Quantitative estimates can be made of the size and distribution of impacts.
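
The second advantage, quantitative estimation from a sample, rests on standard sampling theory. A minimal sketch of a proportion estimate with a normal-approximation 95% confidence interval; the survey numbers are invented for illustration:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and normal-approximation confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    return p, (p - z * se, p + z * se)

# Hypothetical survey: 240 of 400 sampled policy makers report using evidence.
p, (lo, hi) = proportion_ci(240, 400)
print(f"estimate {p:.2f}, 95% CI ({lo:.3f}, {hi:.3f})")  # estimate 0.60, 95% CI (0.552, 0.648)
```

The interval width shrinks with the square root of the sample size, which is one reason formal surveys need relatively large samples to deliver precise population estimates.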

55 Disadvantages of formal surveys
With the exception of the Core Welfare Indicators Questionnaire (CWIQ), results are often not available for a long time.
The processing and analysis of data can be a major bottleneck for the larger surveys, even where computers are available.
Living Standards Measurement Study (LSMS) surveys and household surveys are expensive and time-consuming.
Many kinds of information are difficult to obtain through formal interviews.

56 Rapid appraisal methods
Rapid appraisal methods are quick, low-cost ways to gather the views and feedback of beneficiaries and other stakeholders, in order to respond to decision-makers’ needs for information.

57 Use of rapid appraisal methods
Providing rapid information for management decision-making, especially at the project or program level.
Providing qualitative understanding of complex socioeconomic changes, highly interactive social situations, or people’s values, motivations, and reactions.
Providing context and interpretation for quantitative data collected by more formal methods.

58 Advantages and disadvantages of rapid appraisal methods
Advantages:
– Low cost.
– Can be conducted quickly.
– Provides flexibility to explore new ideas.
Disadvantages:
– Findings usually relate to specific communities or localities, so it is difficult to generalize from them.
– Less valid, reliable, and credible than formal surveys.

59 Resource needs for rapid appraisal
Non-directive interviewing, e.g. key informant interviews.
Group facilitation, e.g. focus group discussions and community group interviews.
Field observation.
Structured or semi-structured interview or group discussion guides.
Note-taking.
Basic statistical skills.

60 Participatory methods
Participatory methods provide active involvement in decision-making for those with a stake in a project, program, or strategy, and generate a sense of ownership in the M&E results and recommendations.

61 Use of participatory methods
Learning about local conditions and local people’s perspectives and priorities to design more responsive and sustainable interventions.
Identifying and trouble-shooting problems during implementation.
Evaluating a project, program, or policy.

62 Advantages of participatory methods
Examines relevant issues by involving key players in the design process.
Establishes partnerships and local ownership of projects.
Enhances local learning, management capacity, and skills.
Provides timely, reliable information for management decision-making.

63 Disadvantages of participatory methods
Sometimes regarded as less objective.
Time-consuming if key stakeholders are involved in a meaningful way.
Potential for domination and misuse by some stakeholders to further their own interests.

64 Commonly used participatory tools
Stakeholder analysis is the starting point of most participatory work and social assessments. It is used to develop an understanding of the power relationships, influence, and interests of the various people involved in an activity, and to determine who should participate, and when.
Beneficiary assessment involves systematic consultation with project beneficiaries and other stakeholders to identify and design development initiatives, signal constraints to participation, and provide feedback to improve services and activities.
Participatory rural appraisal is a planning approach focused on sharing learning between local people, both urban and rural, and outsiders. It enables development managers and local people to assess and plan appropriate interventions collaboratively, often using visual techniques so that non-literate people can participate.
Participatory monitoring and evaluation involves stakeholders at different levels working together to identify problems, collect and analyze information, and generate recommendations.

65 Cost-benefit and cost-effectiveness analysis
Cost-benefit and cost-effectiveness analysis are tools for assessing whether or not the costs of an activity can be justified by the outcomes and impacts.
Cost-benefit analysis measures both inputs and outputs in monetary terms.
Cost-effectiveness analysis estimates inputs in monetary terms and outcomes in non-monetary quantitative terms (such as improvements in student reading scores).
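
The distinction can be shown with two small ratios: cost-benefit puts both sides in money, while cost-effectiveness prices one unit of a non-monetary outcome. The programme figures below are invented for illustration only:

```python
def benefit_cost_ratio(benefits, costs):
    """Cost-benefit analysis: both sides in monetary terms; > 1 means benefits exceed costs."""
    return benefits / costs

def cost_effectiveness_ratio(costs, outcome_units):
    """Cost-effectiveness analysis: monetary cost per unit of non-monetary outcome."""
    return costs / outcome_units

# Hypothetical reading programme: $50,000 cost, $80,000 in monetized benefits,
# and a 4-point average gain in student reading scores.
print(benefit_cost_ratio(80_000, 50_000))    # 1.6 -> benefits exceed costs
print(cost_effectiveness_ratio(50_000, 4))   # 12500.0 dollars per score point gained
```

In practice both analyses also discount future costs and benefits to present values; that step is omitted here to keep the contrast between the two ratios clear.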

66 Use of cost-benefit and cost-effectiveness analysis
Informing decisions about the most efficient allocation of resources.
Identifying projects that offer the highest rate of return on investment.

67 Advantages of cost-benefit and cost-effectiveness analysis
A good-quality approach for estimating the efficiency of programs and projects.
Makes explicit the economic assumptions that might otherwise remain implicit or overlooked at the design stage.
Useful for convincing policy-makers and funders that the benefits justify the activity.

68 Disadvantages of cost-benefit and cost-effectiveness analysis
Fairly technical, requiring adequate financial and human resources.
The requisite data for cost-benefit calculations may not be available, and projected results may be highly dependent on the assumptions made.
Results must be interpreted with care, particularly in projects where benefits are difficult to quantify.

69 Impact evaluation
Impact evaluation is the systematic identification of the effects – positive or negative, intended or not – on individual households, institutions, and the environment caused by a given development activity such as a program or project.
Impact evaluation helps us better understand the extent to which activities reached the intended target group and the magnitude of their effects on the target group.

70 Impact evaluation
Impact evaluations can range from:
– large-scale sample surveys in which project populations and control groups are compared before and after, and possibly at several points during, the program intervention;
– to small-scale rapid assessments and participatory appraisals, where estimates of impact are obtained by combining group interviews, key informants, case studies and available secondary data.

71 Use of impact evaluation
Measuring outcomes and impacts of an activity and distinguishing these from the influence of other, external factors.
Helping to clarify whether the costs of an activity are justified.
Informing decisions on whether to expand, modify or eliminate projects, programs or policies.
Drawing lessons for improving the design and management of future activities.
Comparing the effectiveness of alternative interventions.
Strengthening accountability for results.

72 Advantages of impact evaluation
Provides estimates of the magnitude of outcomes and impacts for different demographic groups or regions, or over time.
Provides answers to some of the most central development questions: to what extent are we making a difference? What are the results on the ground? How can we do better?
Systematic analysis and rigor can give managers and policy-makers added confidence in decision-making.

73 Disadvantages of impact evaluation
Some approaches are very expensive and time-consuming, although faster and more economical approaches are also used.
Reduced utility when decision-makers need information quickly.
Difficulties in identifying an appropriate counterfactual.

74 Resource requirements for impact evaluations
Costs can be as high as $200,000 to $900,000. Simpler and rapid impact evaluations can be conducted for significantly less than $100,000, and in some cases for as little as $10,000 - $20,000.
Strong technical skills in social science research design, management, analysis and reporting are needed.
Ideally, a balance of quantitative and qualitative research skills on the part of the evaluation team.
Rapid assessment evaluations can often be conducted in less than 6 months.

75 Impact evaluation designs
Randomized evaluation designs, involving the collection of information on project and control groups at two or more points in time, provide the most rigorous statistical analysis of project impacts and of the contribution of other factors. However, randomization is often ruled out by cost, time, methodological or ethical constraints. Thus most impact evaluations use less expensive and less rigorous evaluation designs.

76 Impact evaluation design approaches
1. Randomized pre-test post-test evaluation.
2. Quasi-experimental design with before-and-after comparisons of project and control populations.
3. Ex-post comparison of project and non-equivalent control group.
4. Rapid assessment ex-post impact evaluations.

77 Randomized pre-test post-test evaluation
Subjects are randomly assigned to project and control groups.
Questionnaires or other data collection instruments are applied to both groups before and after the project intervention.
Additional observations may also be made during project implementation.
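
A sketch of the analysis this design supports, with invented scores: randomization should make the groups comparable at baseline (which the pre-test lets us verify), after which the post-test difference estimates the programme effect:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical test scores for randomly assigned project and control groups.
project_pre, project_post = [40, 55, 50, 45], [65, 75, 70, 70]
control_pre, control_post = [42, 53, 48, 49], [50, 58, 55, 53]

# Randomization should make baselines similar; the pre-test lets us check this.
baseline_gap = mean(project_pre) - mean(control_pre)

# With comparable baselines, the post-test difference estimates the effect.
effect = mean(project_post) - mean(control_post)

print(baseline_gap)  # -0.5: groups started out close, as randomization intends
print(effect)        # 16.0: estimated programme effect on the outcome
```

A real analysis would add a significance test on the difference; this sketch shows only the point estimates.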

78 Quasi-experimental design
Where randomization is not possible, a control group is selected which matches the characteristics of the project group as closely as possible.
Sometimes the types of communities from which project participants were drawn will be selected.
Where projects are implemented in several phases, participants selected for subsequent phases can be used as the control for the first-phase project group.
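
Because a matched comparison group may still differ from the project group at baseline, analysts commonly estimate the effect as a difference-in-differences of the pre/post means, which nets out trends shared by both groups. This estimator is a common choice for before-and-after comparisons, not one prescribed by the slides, and the scores are invented:

```python
def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(t_pre, t_post, c_pre, c_post):
    """Project-group gain minus comparison-group gain: nets out shared trends."""
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))

# Hypothetical outcome scores for a matched (non-randomized) comparison.
effect = diff_in_diff([40, 50, 45, 49], [60, 72, 66, 70],   # project pre, post
                      [44, 52, 47, 49], [50, 60, 54, 56])   # comparison pre, post
print(effect)  # 14.0: the project gained 21 points, the comparison group 7
```

The estimate is only credible if the two groups would have followed parallel trends without the project, which is why careful matching matters in this design.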

79 Ex-post comparison of project and non-equivalent control group
Data are collected on project beneficiaries, and a non-equivalent control group is selected as for the quasi-experimental design.
Data are only collected after the project has been implemented.
Multivariate analysis is often used to statistically control for differences in the attributes of the two groups.
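
The multivariate adjustment mentioned above is typically an ordinary-least-squares regression of the outcome on a project-participation dummy plus background attributes. A sketch with invented data, using NumPy's least-squares solver; the attribute (schooling) and all numbers are illustrative assumptions:

```python
import numpy as np

# Ex-post data (hypothetical): outcome, a project-participation dummy, and one
# background attribute (years of schooling) on which the groups differ.
outcome   = np.array([68.0, 72, 70, 75, 55, 60, 64, 58])
treated   = np.array([1.0, 1, 1, 1, 0, 0, 0, 0])
schooling = np.array([10.0, 12, 11, 13, 9, 11, 12, 10])

# OLS: outcome = b0 + b1*treated + b2*schooling. The coefficient b1 is the
# project effect after statistically controlling for schooling differences.
X = np.column_stack([np.ones_like(outcome), treated, schooling])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(round(coef[1], 2))  # adjusted effect: 9.4, versus a raw group gap of 12.0
```

The adjustment shrinks the raw difference because part of it is explained by the groups' unequal schooling; with a non-equivalent control group, unmeasured differences can still bias the estimate.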

80 Rapid assessment ex-post impact evaluations
Participatory methods can be used to allow groups to identify changes resulting from the project, who has benefited and who has not, and what the project’s strengths and weaknesses were.
Triangulation is used to compare the group information with the opinions of key informants and information available from secondary sources.
Case studies on individuals or groups may be produced to provide a more in-depth understanding of the processes of change.

81 Data collection plan
The systems used for data collection, processing, analysis and reporting should be specified. The strength of these systems determines the validity of the information obtained. Potential errors in data collection, or in the data themselves, must be carefully considered when determining the usefulness of data sources.

82 Elements of a data collection plan
The timing and frequency of collection.
The person or agency responsible for the collection.
The information needed for the indicators.
Any additional information that will be obtained from the source.

83 Dissemination and use
How the information gathered will be stored, disseminated and used should be defined at the planning stage of the project and described in the M&E plan. This includes identifying the users of the information and the dissemination channels through which it will be communicated effectively to policy makers and other interested stakeholders.

84 Next steps
Develop a draft logical framework to serve as the basis for the development of indicators; the logical framework evolves from our theory of change.
The logical framework will then be used to develop the M&E plan, including the data collection plan and tools.

85 Thank you!


