Measuring Progress: Strategies for Monitoring and Evaluation Rebecca Stoltzfus

“Feedback is the breakfast of champions.” Dannon Nutrition Leadership Institute, 1999

“Any program worth implementing is worth evaluating.” sometime in my graduate studies

“Experienced evaluators design their evaluations to address the specific questions of concern to decision-makers.” Habicht J-P, Victora CG & Vaughan JP. Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. Int J Epidemiol 1999; 28: 10-18.

Starting Questions:
WHO? – Who will the evaluation inform?
WHY? – What questions will it answer?
HOW? – What evaluation design will provide the answers, with sufficient confidence and at lowest cost?

Who are the decision-makers?
– Community organizations and members
– Program implementers
– Policy-makers
– Donor agencies
– Researchers
Different decision-makers need different information, in different forms, to guide their decisions.

Why? What questions do they need to have answered? Four types of data:
– Provision
– Utilization
– Coverage
– Impact

Questions of Provision
Are the services and supplies available?
– No. of clinics offering iron pills
Are they accessible?
– Proportion of population < 10 km from clinic
Is their quality adequate?
– Proportion of staff with key knowledge about iron
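
Each provision indicator reduces to a count or proportion over facility or population records. A minimal sketch in Python, with entirely hypothetical data and field names:

clinics = [
    {"offers_iron": True,  "staff_know_iron": 4, "staff_total": 5},
    {"offers_iron": False, "staff_know_iron": 1, "staff_total": 3},
    {"offers_iron": True,  "staff_know_iron": 2, "staff_total": 4},
]
population_within_10km = 820   # assumed result of a mapping exercise
population_total = 1000

n_offering = sum(c["offers_iron"] for c in clinics)
staff_knowledgeable = sum(c["staff_know_iron"] for c in clinics)
staff_total = sum(c["staff_total"] for c in clinics)

print(f"Clinics offering iron pills: {n_offering} of {len(clinics)}")
print(f"Population within 10 km of a clinic: {population_within_10km / population_total:.0%}")
print(f"Staff with key knowledge about iron: {staff_knowledgeable / staff_total:.0%}")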

Questions of Utilization
Are the services being used?
– Are clinics being attended?
– Are improved seeds being purchased?
– Are people getting microcredit?

Questions of Coverage
Is the target population getting the intervention?
– Proportion of pregnant women who attend MCH clinic at least twice in pregnancy
– Proportion of children sleeping under a bednet
– Proportion of poor accessing credit
Coverage is significantly more difficult to assess than utilization because it requires a population-based denominator.
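
The distinction can be made concrete in a few lines of Python; all numbers below are invented purely to illustrate the arithmetic. Utilization can be counted at the clinic, but coverage needs an estimate of the whole target population, typically from a census projection or a household survey:

# Utilization: counted at the service point; no population denominator needed.
anc_visits_recorded = 1450            # antenatal visits logged in clinic registers

# Coverage: requires a population-based denominator.
pregnant_women_in_area = 2000         # estimated target population
women_with_2plus_visits = 1150        # from a population-based survey

coverage = women_with_2plus_visits / pregnant_women_in_area
print(f"ANC visits recorded (utilization): {anc_visits_recorded}")
print(f"Coverage of two or more ANC visits: {coverage:.0%}")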

Questions of Impact
Have health outcomes improved?
– Maternal mortality, perinatal mortality, child development, child mortality
Have crop yields increased?
Have household incomes increased?

Three Basic Designs
Monitoring (Adequacy) – Measuring target indicators over time
Plausibility Evaluation – Building a reasonable argument for causality, without a randomized trial
Probability Evaluation – Establishing cause and effect by randomly allocating program and non-program areas
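
Adequacy monitoring is the simplest of the three: it asks whether target indicators are moving as expected, without any comparison group. A minimal Python sketch, with an invented target and invented yearly observations:

# Hypothetical adequacy monitoring: track an indicator against its programme target.
target = 0.80                                  # e.g. target coverage for iron supplementation
observed = {2019: 0.52, 2020: 0.61, 2021: 0.68, 2022: 0.74}

for year in sorted(observed):
    value = observed[year]
    status = "meets target" if value >= target else "below target"
    print(f"{year}: {value:.0%} ({status})")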

Strength of Evidence
Monitoring (Adequacy) – Plausibility Evaluation – Probability Evaluation – Impact Evaluation

Strength of Evidence
Monitoring (Adequacy) → Plausibility Evaluation → Probability Evaluation
Increasing Confidence – Increasing Cost

Why Randomize?
In impact evaluations we want to know:
– Are those who received the intervention better off? (easy)
– Is that benefit attributable to the intervention? (hard)
To obtain causal attribution, we want to know what the fate of those who received the intervention would have been without it (the counterfactual).
Many options (two of them are sketched below):
– Crossover designs with self as control
– Pre-post with non-randomized control; difference of differences
– Propensity score matching
– Constructing a counterfactual from other survey data
– Instrumental variables or other econometric methods
– Randomized control; considered most rigorous
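
A hedged Python sketch of two items from this list: random allocation of programme areas (the probability design) and a pre-post comparison with a non-randomized control, i.e. difference of differences. The district names and all numbers are invented for illustration only:

import random

# Probability design: randomly allocate hypothetical areas to programme or comparison.
areas = [f"district_{i}" for i in range(1, 11)]
random.seed(42)                      # fixed seed so the allocation is reproducible
random.shuffle(areas)
programme_areas, comparison_areas = areas[:5], areas[5:]
print("Programme areas:", programme_areas)

# Pre-post with a non-randomized control: the comparison group stands in for
# the counterfactual trend the programme group would have followed.
prog_before, prog_after = 0.40, 0.62
comp_before, comp_after = 0.42, 0.50
did = (prog_after - prog_before) - (comp_after - comp_before)
print(f"Difference-in-differences estimate of impact: {did:.2f}")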

Randomization does not justify “closurization.” “For want of a good name for it, I have chosen a terrible one.... You randomize and then you close your eyes.”

The Axes of WHAT and HOW – Where does your work sit?

                Monitoring    Plausibility    Probability
Provision
Utilization
Coverage
Impact