Monitoring and Evaluation of Postharvest Training Projects

Presentation transcript:

Monitoring and Evaluation of Postharvest Training Projects
Dr. Lisa Kitinoja, The Postharvest Education Foundation

WHY do we want to do M&E?
Why do we want to monitor and evaluate postharvest projects and programs?
- Accountability purposes (may be required by donors)
- Making program improvements
- For future project planning and new proposal development

KEY FOCUS
- What changes have occurred in the participant population since the beginning of the program?
- To what extent are these changes attributable to the program?

Attribution of Change
Typically we want to measure changes that can be attributed to the program or project, so we need a BASELINE measurement of INDICATORS to characterize the current situation.
- Changes can be positive or negative, intended or unintended.
- A theory of action or logic model (using if-then logic) can help us attribute any measured changes to the program's activities.

Types of OBJECTIVES: make them S.M.A.R.T. (Specific, Measurable, Achievable, Relevant, Time-bound)
- OUTPUTS are under the direct control of the project/program.
- OUTCOMES are short-term or medium-term effects.
- IMPACTS are long-term effects which may take years to develop.

Bennett's Hierarchy of Evidence
One example of a logical framework where inputs and activities create outputs that can lead to outcomes and impacts. Having INDICATORS for each level will help to establish a plausible link to explain any measured changes.
1. Inputs
2. Activities
3. Participation (OUTPUTS)
4. Reactions of participants (short-term OUTCOMES)
5. Changes in participant Knowledge/Attitudes/Skills/Aspirations (short-term OUTCOMES)
6. Changes in participant Behaviors or Adoption of new Practices (medium-term OUTCOMES)
7. End results (long-term IMPACTS)
Objectives can be written in reference to any of the 7 levels of Bennett's Hierarchy. A complex objective may include indicators that are related to two or more levels.
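To make the seven levels concrete, here is a minimal sketch (not from the original slides; the level names follow the hierarchy above, and the indicators attached to each level are hypothetical examples) of how an M&E plan might map indicators to each level:

```python
# A minimal sketch of Bennett's seven levels as a data structure,
# with hypothetical example indicators attached to each level.
BENNETTS_HIERARCHY = [
    ("Inputs", ["budget spent", "staff time"]),
    ("Activities", ["# of training sessions held"]),
    ("Participation (outputs)", ["# of people trained"]),
    ("Reactions (short-term outcomes)", ["% of participants rating training useful"]),
    ("KASA changes (short-term outcomes)", ["% increase in knowledge test scores"]),
    ("Practice changes (medium-term outcomes)", ["# of ZECCs constructed"]),
    ("End results (long-term impacts)", ["% reduction in postharvest losses"]),
]

for level, (name, indicators) in enumerate(BENNETTS_HIERARCHY, start=1):
    print(f"Level {level}: {name}")
    for indicator in indicators:
        print(f"  indicator: {indicator}")
```

Writing the plan down in this form makes it easy to check that every level between activities and end results has at least one measurable indicator.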

Counting the Numbers of People Trained (ACTIVITY LEVEL)
We can count the numbers of people who participate in our training programs:
- # of men
- # of women
- # of youths
- # of groups (cooperatives or associations)
- # of leaders
- # of students
- # of extension workers
- etc.
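As an illustration only (the attendance records below are invented), counts like these can be tallied directly from a training attendance register:

```python
# Illustration only: tallying training participants by category
# from a hypothetical attendance register.
from collections import Counter

# Each record is (name, category); the data here is made up.
attendance = [
    ("A. Mwangi", "extension worker"),
    ("B. Okoro", "woman farmer"),
    ("C. Tadesse", "youth"),
    ("D. Banda", "woman farmer"),
]

counts = Counter(category for _, category in attendance)
for category, n in counts.most_common():
    print(f"# of {category}s trained: {n}")
```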

Results for our Hort CRSP ToT program (2010-12)
- 36 young professionals trained as postharvest specialists (19 women, 17 men)
- 35% overall increase in knowledge
- 45% overall increase in skills
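The slides do not show how these percentages were computed; one common approach, sketched below with invented scores, is the percent change in mean score between a pre-training and a post-training test:

```python
# Illustration only (invented scores): percent change in mean test score
# between a pre-test and a post-test for the same participants.
pre_scores = [52, 60, 45, 58, 50]   # baseline knowledge test scores
post_scores = [70, 78, 65, 80, 68]  # scores after training

pre_mean = sum(pre_scores) / len(pre_scores)
post_mean = sum(post_scores) / len(post_scores)
pct_increase = 100 * (post_mean - pre_mean) / pre_mean
print(f"Overall increase in knowledge: {pct_increase:.0f}%")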

Counting Indicators of the Adoption of Improved Postharvest Practices (PRACTICE CHANGE LEVEL)
- # of ZECCs (zero energy cool chambers) constructed
- # of solar driers constructed or purchased
- # of new processed products being produced
- # of people using shade
- # of people using storage facilities
- # of different types (qualities, sizes, packaged produce, fresh cut, etc.) of fresh produce being eaten at home or sold
- # of people selling new products in the market
- etc.

Number of ZECCs Constructed in Sub-Saharan Africa (SSA)
This was a good choice of indicator, since there were zero ZECCs in SSA when the project started in 2010.
- 21 new ZECCs have been constructed and are in use as of 2014
- 7 in and near Arusha, 4 in Ghana, 3 in Ethiopia, 2 in Nigeria, 2 in Zambia, 3 in Kenya

Evaluation Design: What is the overall plan for making comparisons?
- Experimental (requires random selection and assignment and large sample sizes, so it is usually not possible to achieve)
- Quasi-experimental (may be possible if you have a lot of time and funding)
- Comparison to baseline (if the indicators are expected to change over time)
- Comparison to a control group (if the program participants are expected to show changes that are greater than, or different from, those of a similar group that did not participate)
Baseline = pre-intervention measurement.
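As an illustration of combining the last two designs (all numbers below are invented), the change from baseline in the participant group can be compared against the change in a control group over the same period, a simple difference-in-differences:

```python
# Illustration only (invented numbers): comparing change from baseline in a
# participant group against change in a non-participant control group --
# a simple difference-in-differences estimate.
participants = {"baseline": 40.0, "endline": 65.0}  # e.g. % adopting a practice
control = {"baseline": 38.0, "endline": 44.0}

change_participants = participants["endline"] - participants["baseline"]  # 25.0
change_control = control["endline"] - control["baseline"]                 # 6.0

# Change plausibly attributable to the program:
attributable_change = change_participants - change_control
print(f"Difference-in-differences estimate: {attributable_change:.1f} percentage points")
```

Subtracting the control group's change helps rule out trends that would have happened anyway, which is exactly the attribution question raised under KEY FOCUS.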

Types of Data and Typical Data Collection Methods
- Quantitative data (statistics, counts, numbers, percentages, costs, etc.) can be collected by making direct measurements, conducting formal surveys, or analyzing secondary databases.
- Qualitative data (on perceptions, beliefs, ideas, aspirations, categories, behaviors, etc.) can be collected via observations, interviews, rapid rural appraisals, commodity systems assessment methodology (CSAM), or focus groups.

Impact Evaluation
Impacts or end results often take years to develop, and may occur after the program has been completed.

Evaluation Reports
Include stakeholders in reviews of preliminary and final reports:
- M&E findings
- Recommendations
- What questions has the M&E process not been able to answer?
- What additional research is needed?
- What are the major lessons from the assessment?
Including STAKEHOLDERS in the M&E planning and implementation process will improve the chances that the evaluation results are utilized for decision making and future planning.

KEY FOCUS (revisited)
- What changes have occurred in the participant population since the beginning of the program?
- To what extent are these changes attributable to the program?