PPA 502 – Program Evaluation Lecture 3b – Outcome Monitoring

Introduction
- The routine and periodic monitoring of outcomes is an important development in the evolution of performance monitoring systems.
- Outcome monitoring requires the routine measurement and reporting of important indicators of outcome-oriented results.

What Is Outcome Monitoring?
- Outcome monitoring is the regular (periodic, frequent) reporting of program results in ways that stakeholders can use to understand and judge those results.
- The indicators measured should have some validity, some meaning that is closely tied to performance expectations.
- The ways in which they are reported should also have utility; that is, they must be easily interpreted and focus attention on the key points.
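As a concrete illustration (not part of the original lecture), here is a minimal sketch of what routine, periodic reporting of an outcome indicator can look like: raw program records are rolled up each reporting period into a single, easily interpreted figure. The program, indicator, and field names are hypothetical.

```python
from datetime import date

# Hypothetical exit records from a job-training program; in practice these
# would come from the program's routine data system.
records = [
    {"exit_date": date(2024, 3, 14), "employed_at_6_months": True},
    {"exit_date": date(2024, 3, 22), "employed_at_6_months": False},
    {"exit_date": date(2024, 4, 5),  "employed_at_6_months": True},
]

def placement_rate(records, year, month):
    """Outcome indicator: share of clients exiting in a given month who
    were employed six months after exit."""
    cohort = [r for r in records
              if r["exit_date"].year == year and r["exit_date"].month == month]
    if not cohort:
        return None
    return sum(r["employed_at_6_months"] for r in cohort) / len(cohort)

# Routine, periodic reporting: the same indicator, recomputed each period.
for month in (3, 4):
    rate = placement_rate(records, 2024, month)
    print(f"2024-{month:02d}: six-month placement rate = {rate:.0%}")
```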

Other Forms of Monitoring
- Program monitoring: site visits by experts for compliance-focused reviews of program operations, designed to remedy procedural deficiencies.
- Outcome monitoring, by contrast, is outcome-focused or results-oriented.
  – Built into the routines of data reporting within program operations.
  – Provides frequent and public feedback on performance.
- Outcome monitoring is also not impact assessment, which measures whether and in what ways the program produced the outcomes.

Why Do Outcome Monitoring?
- The accountability mandate.
  – Modern demands for accountability require proof.
  – Examples: local government (North Carolina), human services (Florida).
  – The Government Performance and Results Act (GPRA).
  – The U.S. GAO.

Why Do Outcome Monitoring?
- Directed performance improvements.
  – A tool for making more efficient use of resources.
  – The essence of continuous quality improvement: focused diagnosis of barriers to better performance, followed by the design of alternatives to remove or circumvent those barriers, implementation of trials to test the alternatives, and finally expansion of successful efforts to raise performance levels while shrinking variability in performance.
  – Florida example.

Why Do Outcome Monitoring?
- Commitment to continuous performance improvement.
  – Provides a comparative snapshot of performance for all those responsible for outcomes.
  – Stimulates competition and unleashes creativity.
- More efficient use of support resources.
  – Performance assessment focuses diagnostic skills on specific, underperforming elements of the program.
  – Increases efficiencies in the conduct of program evaluations: supplies raw data for evaluation and focuses the evaluator's attention on the programs most relevant to stakeholders.

Why Do Outcome Monitoring?
- Growing confidence in organizational performance.
  – No system creates a PR nirvana; critics will always find ammunition.
  – But a good outcome monitoring system can limit the damage by underscoring ongoing improvement efforts.
  – Internally, outcome monitoring provides perspective to officials burdened with program details.

Design Issues for Outcome Monitoring
- What to measure?
  – Measures must be appropriate.
  – Measures must sufficiently cover the range of intended outcomes.
  – Stakeholders should be involved in identifying the outcome measures.
- How many measures?
  – A small number of highly relevant measures for upper management.
  – A more comprehensive set of measures to supplement the key indicators.

Design Issues for Outcome Monitoring
- How (and how often) should performance be measured?
  – Automated measures allow more frequent assessment than labor-intensive data collection and reporting systems.
  – Some measures cannot be determined from automated systems; they should be built into program operations, but without putting too many burdens on program staff. Options include sampling (precision depends on sample size; see the sketch below), contract requirements, and mobilization of outside groups.
  – Final answer: whatever it takes.
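A minimal sketch of the sampling trade-off noted above, using the standard formula for the sample size needed to estimate a proportion-type outcome indicator within a given margin of error; the indicator named in the docstring is hypothetical.

```python
import math

def sample_size_for_proportion(margin_of_error, expected_rate=0.5, z=1.96):
    """Clients to sample so that a proportion indicator (e.g., the share of
    clients employed six months after program exit) is estimated within
    +/- margin_of_error at roughly 95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * expected_rate * (1 - expected_rate) / margin_of_error ** 2)

# Tighter margins of error require disproportionately larger samples, which is
# why measurement precision must be balanced against the burden on program staff.
for e in (0.10, 0.05, 0.03):
    print(f"+/- {e:.0%} margin -> sample at least {sample_size_for_proportion(e)} clients")
```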

Design Issues for Outcome Monitoring
- How to present the results?
  – Presentation varies by message, sender, and receiver.
  – Different levels of aggregation and different emphases.
  – Graphics.
  – Data should be comparative (a minimal comparative report is sketched below).
  – Review presentation standards periodically.
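To illustrate the comparative-presentation point, a minimal sketch (not from the lecture) that reports one outcome indicator by site, against the prior period and a target; all site names, periods, and values are hypothetical.

```python
# Hypothetical data: a job-placement-rate indicator reported quarterly by site.
results = {
    "Site A": {"prior": 0.61, "current": 0.66},
    "Site B": {"prior": 0.48, "current": 0.47},
    "Site C": {"prior": 0.55, "current": 0.63},
}
TARGET = 0.60  # illustrative performance target

print(f"{'Site':<8}{'Prior':>8}{'Current':>9}{'Change':>8}  vs. target")
for site, r in results.items():
    change = r["current"] - r["prior"]
    status = "meets" if r["current"] >= TARGET else "below"
    print(f"{site:<8}{r['prior']:>8.0%}{r['current']:>9.0%}{change:>+8.0%}  {status}")
```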

Pitfalls in Outcome Monitoring
- Unrealistic expectations.
  – Outcome monitoring is not a panacea.
  – Data collection is not easy.
  – The size and scope of the effort are often underestimated.
- Avoiding a clear focus on outcomes.
  – It is easier to measure inputs, processes, and outputs than outcomes, and some outcomes may not be measurable directly.
  – Persistence, good communication, and group facilitation skills can overcome resistance.

Pitfalls in Outcome Monitoring
- Irrelevance.
  – Measures far removed from program reality.
  – Changes in policy priorities without the requisite changes in performance measures.
- Unwarranted conclusions.
  – Program targeting rather than performance improvement.