Presentation on theme: "Importance of Monitoring and Evaluation. Lecture Overview  Monitoring and Evaluation  How to Build an M&E System."— Presentation transcript:

1 Importance of Monitoring and Evaluation

2 Lecture Overview  Monitoring and Evaluation  How to Build an M&E System

3 MONITORING & EVALUATION What is M, what is E, why and how to monitor

4 What is Monitoring?  An ongoing process that generates information to inform decisions about the program while it is being implemented.  Routine collection and analysis of information to track progress against set plans and check compliance with established standards  Helps identify trends and patterns, adapt strategies, and inform decisions  Key words: Continuous – ongoing, frequent in nature; Collecting and analyzing information – to measure progress towards goals; Comparing results – assessing the performance of a program/project
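As a rough illustration, the collect-and-compare loop described above can be reduced to a few lines of code. This is a minimal sketch; the indicator names, plan figures, and the 90% threshold are all assumptions invented for the example, not part of the lecture.

```python
# Hypothetical monitoring loop: routinely collect indicator values and
# compare them against set plans. All names and numbers are illustrative.
planned = {"children_enrolled": 500, "volunteers_trained": 40}   # this quarter's plan
observed = {"children_enrolled": 430, "volunteers_trained": 42}  # routinely collected data

for indicator, target in planned.items():
    actual = observed.get(indicator, 0)
    pct = 100 * actual / target
    status = "on track" if pct >= 90 else "flag for corrective action"
    print(f"{indicator}: {actual}/{target} ({pct:.0f}%) - {status}")
```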

5 Why is Monitoring Important?  Provides evidence of how much has or has NOT been achieved (Quantitative: numbers, percentages; Qualitative: narrative or observation)  Enables examination of trends  Highlights problems  Gives early warning signs  Prompts corrective actions  Evaluates the effectiveness of management actions  Determines achievement of results

6 What is Evaluation?  Evaluation is an assessment of an intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intention is to provide credible and useful information, enabling lessons learned to be incorporated into decision-making processes.  Key words: Assessment – of the value of an event or action; Relevance; Efficiency; Effectiveness; Impact; Sustainability; Lessons learned

7 What is Evaluation? [Diagram: Evaluation encompasses Program Evaluation, which encompasses Impact Evaluation]

8 What is M and what is E? Monitoring: measures progress towards goals, but doesn't tell us the extent to which results are achieved or what the impact is; continuous and frequent; has to take place during the intervention. Evaluation: measures whether progress towards the goal is caused by the intervention - causality; infrequent and time-bound; can evaluate an ongoing or completed intervention.

9 [Diagram: Monitoring and Evaluation shown side by side; within Evaluation sit Program Evaluation and Impact Evaluation]

10 Components of Program Evaluation  Needs assessment: What are the characteristics of the target population? What are the risks and opportunities? What programs are most suitable?  Program theory assessment: What is the logical chain connecting our program to the desired results?  Monitoring and process evaluation: Is the program being rolled out as planned? Is there high uptake among clients? What do they think of it?  Impact evaluation: What was the impact of the program, and what was its magnitude?  Cost-effectiveness: Given the magnitude of impact and cost, how efficient is the program? Are your questions connected to decision-making?
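The cost-effectiveness question above boils down to a small calculation: divide cost by impact. A minimal sketch follows; every figure in it is invented for illustration.

```python
# Hypothetical cost-effectiveness calculation; all figures are invented.
program_cost = 120_000.0   # total program cost (USD)
children_reached = 3_000
impact_per_child = 0.15    # e.g., learning gain in test-score standard deviations

cost_per_child = program_cost / children_reached        # $40.00
cost_per_sd_gained = cost_per_child / impact_per_child  # $266.67
print(f"Cost per child reached: ${cost_per_child:.2f}")
print(f"Cost per SD of learning gained: ${cost_per_sd_gained:.2f}")
```

Comparing this ratio across competing programs is what lets a given budget do more with better evidence.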

11 [Diagram repeated: Evaluation encompasses Program Evaluation, which encompasses Impact Evaluation]

12 Who is this Evaluation For?  Academics  Donors and their constituents  Politicians / policymakers  Technocrats  Implementers  Proponents and skeptics  Beneficiaries

13 How can Impact Evaluation Help Us?  Answers the following questions: What works best, why, and when? How can we scale up what works?  There is surprisingly little hard evidence on what works  With better evidence, we can do more with a given budget  If people knew money was going to programmes that worked, it could help increase the pot for anti-poverty programmes

14 Programs and their Evaluations: Where do we Start? Intervention:  Start with a problem  Verify that the problem actually exists  Generate a theory of why the problem exists  Design the program  Think about whether the solution is cost effective. Program Evaluation:  Start with a question  Verify the question hasn't been answered  State a hypothesis  Design the evaluation  Determine whether the value of the answer is worth the cost of the evaluation

15 Life Cycle of a Program [Diagram: Theory of Change / Needs assessment → Designing the program to implement → Background preparation, logistics, and roll-out (e.g., distributing reading materials and training volunteers) → Baseline evaluation → Monitoring implementation (reading materials delivered; volunteers trained; target children reached; classes run and volunteers show up; attendance in classes; entire district covered) → Process evaluation (refresher training of teachers; tracking the target children and convincing parents to send their child; incentives for volunteers to run classes daily and efficiently; efforts made for children to attend regularly; improving coverage) → Endline evaluation → Progress towards target → Planning for continuous improvement → Reporting findings (impact and process-evaluation findings) → Using the findings to improve the program model and delivery]

16 Program Theory – a Snapshot [Diagram of the results chain: Inputs → Activities → Outputs (Implementation) → Outcomes → Impacts (Results)]

17 HOW TO BUILD AN M&E SYSTEM With a focus on measuring both implementation and results

18 Methods of Monitoring  First-hand information  Citizen reporting  Surveys  Formal reports by project/programme staff: project status report; project schedule chart; project financial status report  Informal reports  Graphic presentations

19 Monitoring: Questions  Implementation: Is the intervention implemented as designed? Does the program perform?  Plans, targets, and inputs: Are intervention money, staff, and other inputs available and put to use as planned? Are inputs used effectively?  Outputs: Are the services being delivered as planned? Is the intervention reaching the right population and target numbers?  Outcomes: Is the target population satisfied with the services? Are they utilizing the services? What is the intensity of the treatment?

20 Implementing Monitoring  Develop a monitoring plan: How should implementation be carried out? What is going to be changed? Are the staff's incentives aligned with the project? Can they be incentivized to follow the implementation protocol? How will you train staff? How will they interact with beneficiaries or other stakeholders? What supplies or tools can you give your staff to make following the implementation design easier? What can you do to monitor (field visits, tracking forms, administrative data, etc.)? What intensity of monitoring (frequency, resources required, …)? A sketch of such a plan follows below.
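As one way to make such a plan concrete, here is a hedged sketch of a monitoring-plan record. The fields and example activities are assumptions chosen for illustration, not prescribed by the lecture.

```python
# Hypothetical structure for a monitoring plan: what to track, how, how often,
# and who is responsible. All entries are illustrative.
from dataclasses import dataclass

@dataclass
class MonitoringActivity:
    what: str         # what is being monitored
    method: str       # field visit, tracking form, administrative data, ...
    frequency: str    # intensity of monitoring
    responsible: str  # who carries it out

plan = [
    MonitoringActivity("volunteer attendance", "tracking form", "daily", "field supervisor"),
    MonitoringActivity("adherence to implementation protocol", "field visit", "monthly", "program manager"),
    MonitoringActivity("supplies distributed", "administrative data", "quarterly", "M&E officer"),
]
for a in plan:
    print(f"{a.what}: via {a.method}, {a.frequency} ({a.responsible})")
```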

21 Ten Steps to a Results-based Monitoring and Evaluation System: (1) Conducting a readiness and needs assessment; (2) Agreeing on outcomes to monitor and evaluate; (3) Selecting key indicators to monitor outcomes; (4) Gathering baseline data on indicators; (5) Planning for improvement – selecting realistic targets; (6) Monitoring for results; (7) Using evaluation information; (8) Reporting findings; (9) Using findings; (10) Sustaining the M&E system within the organization

22 Step 1: Conducting a needs and readiness assessment  What current systems exist?  What is the need for monitoring and evaluation?  Who will benefit from this system?  At what levels will the data be used?  Do we have the organizational willingness and capacity to establish the M&E system?  Who has the skills to design and build the M&E system? Who will manage it?  What are the barriers to implementing the M&E system on the ground (e.g., a resource crunch)?  How will you overcome these barriers?  Will there be pilot programs that can be evaluated within the M&E system?  DO WE GO AHEAD?

23 Step 2: Agreeing on outcomes to monitor and evaluate  What are we trying to achieve? What is the vision that our M&E system will help us achieve?  Are there national or sectoral goals (e.g., a commitment to achieving the MDGs)?  Is there political or donor-driven interest in particular goals?  In other words, what are our outcomes? Improving coverage, learning outcomes, and so on – broader than focusing merely on inputs and activities

24 Step 3: Selecting key indicators to monitor outcomes  Identify WHAT needs to be measured so that we know we have achieved our results  Avoid broad-based results; assess indicators for feasibility, time, cost, and relevance  Indicator development is a core activity in building an M&E system and drives all subsequent data collection, analysis, and reporting  Arriving at indicators will take some time  Identify plans for data collection, analysis, and reporting  PILOT! PILOT! PILOT!
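Since the indicator drives all later collection, analysis, and reporting, it helps to record those plans alongside the indicator itself. A minimal sketch; the field names and the example indicator are assumptions for illustration.

```python
# Hypothetical indicator record that carries its own data plan. The baseline
# and target fields stay empty until steps 4 and 5 fill them in.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    name: str
    outcome: str                      # the agreed outcome it measures (step 2)
    data_source: str                  # where the data will come from
    frequency: str                    # how often it is collected
    baseline: Optional[float] = None  # filled in at step 4
    target: Optional[float] = None    # filled in at step 5

reading = Indicator(
    name="% of grade-3 children reading at grade level",
    outcome="improved learning outcomes",
    data_source="annual reading assessment",
    frequency="yearly",
)
print(reading)
```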

25 Step 4: Gathering baseline data on indicators  Where are we today?  What is the performance of the indicators today?  Sources of baseline information: primary or secondary data  Data types: qualitative or quantitative  Data collection instruments

26 Step 5: Planning for improvement – selecting realistic targets  Targets – quantifiable levels of the indicators  Set sequential, feasible, and measurable targets  If we reach our sequential set of targets, then we will reach our outcomes!  Time-bound – universal enrolment by 2015 (outcome – better economic opportunities), every child immunized by 2013 (outcome – reduction in infant mortality), etc.  Available funding and resources must be taken into account  Target 1 → Target 2 → Target 3 → Outcomes
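This sequential, time-bound logic can be checked mechanically each period. A minimal sketch; the enrolment-by-2015 outcome comes from the slide, but the interim targets and observed values are invented.

```python
# Hypothetical check of sequential, time-bound targets against observed data.
targets = [
    ("enrolment_rate", 2013, 0.80),  # invented interim target
    ("enrolment_rate", 2014, 0.90),  # invented interim target
    ("enrolment_rate", 2015, 1.00),  # universal enrolment by 2015
]
observed = {2013: 0.82, 2014: 0.87}  # invented monitoring data so far

for indicator, year, target in targets:
    actual = observed.get(year)
    if actual is None:
        print(f"{year} {indicator}: no data yet (target {target:.0%})")
    else:
        verdict = "met" if actual >= target else "missed"
        print(f"{year} {indicator}: {actual:.0%} vs target {target:.0%} - {verdict}")
```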

27 Step 6: Monitoring for implementation and results  Implementation monitoring (inputs, activities, outputs): provision of materials; training of volunteers; usage of material; number of volunteers teaching  Results monitoring (outcomes, impacts): change in the percentage of children who cannot read; change in teacher attendance
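Results monitoring, in other words, is change measured against the baseline. A hedged sketch using the slide's two results indicators, with invented numbers:

```python
# Hypothetical results monitoring: compare current indicator values against
# the baseline gathered at step 4. Baseline and current values are invented.
baseline = {"pct_children_cannot_read": 48.0, "teacher_attendance_pct": 61.0}
current  = {"pct_children_cannot_read": 39.0, "teacher_attendance_pct": 74.0}

for indicator in baseline:
    change = current[indicator] - baseline[indicator]
    print(f"{indicator}: {baseline[indicator]} -> {current[indicator]} "
          f"(change {change:+.1f} points)")
```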

28 Step 7: Using evaluation information  Monitoring does not provide information on attribution and causality. Information from evaluation can be useful to:  Help determine whether the right things are being done  Help select among competing strategies by comparing results – are there better ways of doing things?  Help build consensus on scale-up  Investigate why something did not work – scope for in-depth analysis  Evaluate the costs relative to benefits and help allocate limited resources

29 Steps 8–10: Reporting findings, using results, and sustaining the M&E system  Reporting findings: what findings are reported to whom, in what format, and at what intervals. A good M&E system should provide an early warning system to detect problems or inconsistencies, as well as being a vehicle for demonstrating the value of an intervention – so do not hide poor results.  Using results: recognize both internal and external uses of your results  Sustaining the M&E system: some ways of doing this are generating demand, assigning responsibilities, increasing capacity, and gathering trustworthy data.
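The early-warning role mentioned above can be automated in its simplest form: flag any indicator far behind target rather than burying it. A sketch with made-up indicators and an arbitrary 50% alert threshold:

```python
# Hypothetical early-warning check for an M&E report. Indicators, values,
# and the alert threshold are all assumptions for illustration.
results = {
    "children_reached":  (430, 500),  # (actual, target)
    "classes_running":   (18, 40),
    "volunteers_active": (39, 40),
}
for indicator, (actual, target) in results.items():
    ratio = actual / target
    if ratio < 0.5:
        print(f"EARLY WARNING: {indicator} at {ratio:.0%} of target")
    else:
        print(f"{indicator}: {ratio:.0%} of target")
```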

30 THANK YOU

