
Importance of Monitoring and Evaluation

Lecture Overview
 Monitoring and Evaluation
 How to Build an M&E System

MONITORING & EVALUATION
What is M, what is E, and why and how to monitor

What is Monitoring
 An ongoing process that generates information to inform decisions about the program while it is being implemented.
 The routine collection and analysis of information to track progress against set plans and check compliance with established standards.
 Helps identify trends and patterns, adapt strategies, and inform decisions.
 Key words:
Continuous – ongoing, frequent in nature
Collecting and analyzing information – to measure progress towards goals
Comparing results – assessing the performance of a program/project (a minimal plan-versus-actual sketch follows)
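
The "comparing results" idea above can be made concrete in a few lines of code. This is a minimal sketch, not part of the original slides; the indicator names and all figures are hypothetical.

```python
# Routine monitoring sketch: compare collected figures against planned
# values to measure progress. All numbers are illustrative only.

planned = {"children_enrolled": 500, "volunteers_trained": 40}
actual = {"children_enrolled": 430, "volunteers_trained": 38}

for indicator, target in planned.items():
    achieved = actual[indicator]
    pct = 100 * achieved / target
    print(f"{indicator}: {achieved}/{target} ({pct:.0f}% of plan)")
```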

Why is Monitoring Important?
 Evidence of how much has or has NOT been achieved
Quantitative: numbers, percentages
Qualitative: narrative or observation
 Examination of trends
 Highlighting problems
 Early warning signs
 Corrective actions
 Evaluating the effectiveness of management action
 Determining the achievement of results

What is Evaluation
 Evaluation is an assessment of an intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intention is to provide information that is credible and useful, enabling lessons learned to be incorporated into decision-making processes.
 Key words:
Assessment – of the value of an event or action
Relevance
Efficiency
Effectiveness
Impact
Sustainability
Lessons learned

What is Evaluation?
[Diagram: nested scope – Evaluation ⊃ Program Evaluation ⊃ Impact Evaluation]

What is M and what is E?
Monitoring
 Measures progress towards goals, but doesn't tell us the extent to which results are achieved, or the impact
 Continuous, frequent
 Has to take place during the intervention
Evaluation
 Measures whether progress towards the goal is caused by the intervention – causality
 Infrequent, time-bound
 Can evaluate an ongoing or a completed intervention

[Diagram: Monitoring shown alongside the nested hierarchy Evaluation ⊃ Program Evaluation ⊃ Impact Evaluation, together making up Monitoring and Evaluation]

Components of Program Evaluation
 Needs assessment: What are the characteristics of the target population? What are the risks and opportunities? What programs are most suitable?
 Program theory assessment: What is the logical chain connecting our program to the desired results?
 Monitoring and process evaluation: Is the program being rolled out as planned? Is there high uptake among clients? What do they think of it?
 Impact evaluation: What was the impact and the magnitude of the program?
 Cost effectiveness: Given the magnitude of impact and cost, how efficient is the program? (a toy comparison follows)
Are your questions connected to decision-making?
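
To illustrate the cost-effectiveness component, here is a toy comparison of two hypothetical programs pursuing the same outcome. The program names and all figures are invented for illustration, not taken from the slides.

```python
# Hypothetical cost-effectiveness comparison: cost per unit of outcome
# achieved, for two programs with the same goal. Figures are invented.

programs = {
    "reading_camps": {"cost": 120_000, "children_now_reading": 2_400},
    "teacher_training": {"cost": 200_000, "children_now_reading": 3_100},
}

for name, p in programs.items():
    cost_per_child = p["cost"] / p["children_now_reading"]
    print(f"{name}: ${cost_per_child:.2f} per child who learned to read")
```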

[Diagram: the same hierarchy, with Program Evaluation highlighted]

Who is this Evaluation For?
 Academics
 Donors and their constituents
 Politicians / policymakers
 Technocrats
 Implementers
 Proponents, skeptics
 Beneficiaries

How can Impact Evaluation Help Us?
 Answers the following questions:
What works best, why, and when?
How can we scale up what works?
 There is surprisingly little hard evidence on what works
 With better evidence, we can do more with a given budget
 If people knew money was going to programmes that worked, it could help increase the pot for anti-poverty programmes

Programs and their Evaluations: Where do we Start?
Intervention
 Start with a problem
 Verify that the problem actually exists
 Generate a theory of why the problem exists
 Design the program
 Think about whether the solution is cost effective
Program Evaluation
 Start with a question
 Verify the question hasn't been answered
 State a hypothesis
 Design the evaluation
 Determine whether the value of the answer is worth the cost of the evaluation

Life Cycle of a Program
[Diagram: stages running from theory of change / needs assessment, through designing the program, background preparation, logistics and roll-out, monitoring of implementation, process evaluation, and baseline-to-endline evaluation of progress towards targets, to reporting findings and using them to improve the program model and delivery. Illustrative activities include distributing reading materials, training volunteers and running refresher training, running classes, tracking target children and convincing parents to send them, incentivizing volunteers to run classes daily, encouraging regular attendance, and improving coverage across the district.]

Program Theory – a Snapshot
Implementation: Inputs → Activities → Outputs
Results: Outcomes → Impacts
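
The results chain above can be captured in a simple data structure, which is handy when tagging indicators by level. This is a sketch, not from the slides; the entries are hypothetical examples borrowed from the reading-program illustration used elsewhere in the deck.

```python
# A results chain as a plain data structure. The split mirrors the
# slide: inputs/activities/outputs = implementation;
# outcomes/impacts = results. Entries are hypothetical.

logframe = {
    "inputs": ["reading materials", "volunteer stipends"],
    "activities": ["distribute materials", "train volunteers"],
    "outputs": ["materials delivered", "volunteers trained"],
    "outcomes": ["children attend classes regularly"],
    "impacts": ["more children can read at grade level"],
}

IMPLEMENTATION = ("inputs", "activities", "outputs")
RESULTS = ("outcomes", "impacts")

for level in IMPLEMENTATION:
    print("implementation:", level, "->", logframe[level])
for level in RESULTS:
    print("results:", level, "->", logframe[level])
```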

HOW TO BUILD AN M&E SYSTEM
With a focus on measuring both implementation and results

Methods of Monitoring
 First-hand information
 Citizens' reporting
 Surveys
 Formal reports by project/programme staff:
Project status report
Project schedule chart
Project financial status report
 Informal reports
 Graphic presentations

Monitoring: Questions
 Is the intervention implemented as designed? Does the program perform?
 Are intervention money, staff, and other inputs available and put to use as planned? Are inputs used effectively?
 Are the services being delivered as planned?
 Is the intervention reaching the right population and target numbers?
 Is the target population satisfied with the services? Are they utilizing the services?
 What is the intensity of the treatment?
(Dimensions monitored: implementation, plans and targets, inputs, outputs, outcomes)

Implementing Monitoring
 Develop a monitoring plan:
How should implementation be carried out? What is going to be changed?
Are the staff's incentives aligned with the project? Can they be incentivized to follow the implementation protocol?
How will you train staff? How will they interact with beneficiaries or other stakeholders?
What supplies or tools can you give your staff to make following the implementation design easier?
What can you do to monitor? (Field visits, tracking forms, administrative data, etc. – a tracking-form sketch follows)
What intensity of monitoring (frequency, resources required, …)?
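
As a sketch of one monitoring tool named above, a tracking form can be represented as a record per field visit, with incomplete records flagged for follow-up. The field names and visit data here are hypothetical, not from the slides.

```python
# Tracking-form sketch: one record per field visit; flag records with
# missing required fields for follow-up. Fields/data are hypothetical.

REQUIRED = ("site", "date", "volunteer_present", "children_attending")

visits = [
    {"site": "school_a", "date": "2013-05-02",
     "volunteer_present": True, "children_attending": 27},
    {"site": "school_b", "date": "2013-05-03",
     "volunteer_present": None, "children_attending": None},
]

for v in visits:
    missing = [f for f in REQUIRED if v.get(f) is None]
    status = "complete" if not missing else f"follow up: missing {missing}"
    print(v["site"], "-", status)
```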

Ten Steps to a Results-based Monitoring and Evaluation System
1. Conducting a readiness and needs assessment
2. Agreeing on outcomes to monitor and evaluate
3. Selecting key indicators to monitor outcomes
4. Gathering baseline data on indicators
5. Planning for improvement: selecting realistic targets
6. Monitoring for results
7. Using evaluation information
8. Reporting findings
9. Using findings
10. Sustaining the M&E system within the organization

Conducting a needs and readiness assessment
 What systems currently exist?
 What is the need for monitoring and evaluation?
 Who will benefit from this system?
 At what levels will the data be used?
 Do we have the organizational willingness and capacity to establish the M&E system?
 Who has the skills to design and build the M&E system? Who will manage it?
 What are the barriers to implementing the M&E system on the ground (e.g., a resource crunch)?
 How will you overcome these barriers?
 Will there be pilot programs that can be evaluated within the M&E system?
– DO WE GO AHEAD?

Agreeing on outcomes (to monitor and evaluate)
 What are we trying to achieve? What is the vision that our M&E system will help us achieve?
 Are there national or sectoral goals (e.g., a commitment to achieving the MDGs)?
 Is there political or donor-driven interest in the goals?
 In other words, what are our outcomes? Improving coverage, learning outcomes… broader than focusing on merely inputs and activities

Selecting key indicators to monitor outcomes
 Identify WHAT needs to get measured so that we know we have achieved our results
 Avoid broad-based results; assess based on feasibility, time, cost, and relevance
 Indicator development is a core activity in building an M&E system and drives all subsequent data collection, analysis, and reporting
 Arriving at indicators will take some time
 Identify plans for data collection, analysis, and reporting (an example indicator record follows)
PILOT! PILOT! PILOT!
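
One way to make an indicator concrete enough to drive data collection, analysis, and reporting is to record it with its measurement plan. This is a sketch; every value below is a hypothetical placeholder, not from the slides.

```python
# An indicator definition with its data-collection plan attached.
# All values are hypothetical placeholders.

indicator = {
    "name": "share of grade 3 children reading at grade level",
    "unit": "percent",
    "data_source": "annual learning assessment (primary data)",
    "collection_frequency": "yearly",
    "responsible": "district M&E officer",
    "reported_to": "programme steering committee",
}

for field, value in indicator.items():
    print(f"{field:>22}: {value}")
```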

Gathering baseline data on indicators
 Where are we today?
 What is the performance of the indicators today? (a tiny example follows)
 Sources of baseline information: primary or secondary data
 Data types: qualitative or quantitative
 Data collection instruments
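
"Where are we today?" can be answered by summarizing baseline survey data for the indicator. A tiny sketch, with entirely hypothetical survey responses:

```python
# Establishing a quantitative baseline from hypothetical survey data.
survey = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]  # 1 = child reads at grade level

baseline = 100 * sum(survey) / len(survey)
print(f"baseline: {baseline:.0f}% of sampled children read at grade level")
```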

Planning for improvement: selecting realistic targets
 Targets – quantifiable levels of the indicators
 Sequential, feasible, and measurable targets
 If we reach our sequential set of targets, then we will reach our outcomes! (Target 1 → Target 2 → Target 3 → Outcomes)
 Time-bound – universal enrolment by 2015 (outcome – better economic opportunities), every child immunized by 2013 (outcome – reduction in infant mortality), etc. (yearly milestones are sketched below)
 Take available funding and resources into account
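
Sequential, time-bound targets can be derived by interpolating yearly milestones between the baseline and the final target. The 2015 universal-enrolment goal comes from the slide; the baseline year and value are assumed for illustration.

```python
# Yearly milestone targets between a baseline and a final target.
# Baseline value/year are assumed; the 2015 goal is from the slide.

baseline_year, baseline_value = 2011, 72.0  # % net enrolment (assumed)
target_year, target_value = 2015, 100.0     # universal enrolment by 2015

step = (target_value - baseline_value) / (target_year - baseline_year)

for i, year in enumerate(range(baseline_year, target_year + 1)):
    print(f"{year}: target {baseline_value + i * step:.0f}% enrolment")
```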

Monitoring for implementation and results
Implementation monitoring (inputs, activities, outputs): provision of materials; training of volunteers; usage of materials; number of volunteers teaching
Results monitoring (outcomes, impacts): change in the percentage of children who cannot read; change in teacher attendance (computed in the sketch below)
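
Results monitoring tracks change over time in the outcome indicator, as in the slide's reading example. Both figures below are hypothetical.

```python
# Change in the results indicator from baseline to the latest round.
baseline_cannot_read = 40.0  # % of children who cannot read, baseline
current_cannot_read = 31.0   # % at the latest monitoring round

change = current_cannot_read - baseline_cannot_read
print(f"change: {change:+.1f} percentage points "
      f"({baseline_cannot_read}% -> {current_cannot_read}%)")
```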

Using Evaluation Information
Monitoring does not provide information on attribution and causality. Information from evaluation can be useful to:
 Help determine whether the right things are being done
 Help select among competing strategies by comparing results – are there better ways of doing things?
 Help build consensus on scale-up
 Investigate why something did not work – scope for in-depth analysis
 Evaluate the costs relative to the benefits and help allocate limited resources

Reporting findings, using results, and sustaining the M&E system
 Reporting findings: what findings are reported to whom, in what format, and at what intervals. A good M&E system should provide early warning of problems or inconsistencies, as well as being a vehicle for demonstrating the value of an intervention – so do not hide poor results (see the early-warning sketch below).
 Using results: recognize both internal and external uses of your results.
 Sustaining the M&E system: some ways of doing this are generating demand, assigning responsibilities, building capacity, and gathering trustworthy data.
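
A minimal sketch of the early-warning idea: flag any indicator whose achievement falls below a tolerance band around its target. The threshold, indicator names, and figures are all hypothetical.

```python
# Early-warning sketch: flag indicators that fall below a tolerance
# band around their targets. Threshold and figures are hypothetical.

TOLERANCE = 0.9  # flag anything below 90% of target

progress = {
    "children_enrolled": (430, 500),  # (actual, target)
    "volunteers_trained": (38, 40),
    "classes_held": (610, 800),
}

for indicator, (actual, target) in progress.items():
    if actual < TOLERANCE * target:
        print(f"WARNING {indicator}: {actual}/{target} - off track")
    else:
        print(f"ok      {indicator}: {actual}/{target}")
```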

THANK YOU