Planning for Performance Measurement: Reviewing the Logic Models and Program Areas

Presenters: Marcia Cohen, DSG, Inc., and Heidi Hsia, OJJDP

What Is Performance Measurement?
Performance measurement is a system of tracking progress in accomplishing specific goals, objectives, and outcomes. It is directly related to program goals and objectives, it measures progress quantitatively, and it is not exhaustive: it provides a "temperature" reading that may not tell you everything you want to know but gives a quick and reliable gauge of selected results.

Both performance measurement and traditional program evaluation are necessary, and they share some common elements, but they serve different purposes, involve different processes, and can be conducted at different times in the life of a program. Performance measurement is a narrower form of tracking progress toward goals, objectives, and outcomes than program evaluation: it monitors a few vital signs related to program performance objectives, outputs, and outcomes. Program evaluation examines programs comprehensively, using systematic, objective, and unbiased procedures in accordance with social science research methods and research design; performance measurement looks at a few indicators, is usually done annually, and is usually done by program staff.

Why do performance measurement? To improve services, strengthen accountability, enhance decision-making, and support strategic planning.
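For instance, here is a minimal sketch in Python (hypothetical measure names and counts, not an official OJJDP reporting format) of what tracking a few quantitative vital signs each year might look like:

from dataclasses import dataclass

@dataclass
class Measure:
    # One "vital sign" tracked against a program objective.
    name: str         # e.g., "Percent of participants completing the program"
    numerator: int    # successes counted this reporting year
    denominator: int  # all relevant cases this reporting year

    @property
    def rate(self) -> float:
        # Express the measure quantitatively, as a percentage.
        return 100.0 * self.numerator / self.denominator if self.denominator else 0.0

# A quick annual "temperature" reading of a few selected indicators,
# deliberately not exhaustive (hypothetical counts).
vital_signs = [
    Measure("Percent of participants completing the program", 84, 100),
    Measure("Percent of participants rearrested this year", 12, 100),
]
for m in vital_signs:
    print(f"{m.name}: {m.rate:.1f}%")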

Measurement vs. Evaluation
How does performance measurement differ from program evaluation? Evaluation is a formal process for collecting, analyzing, and interpreting information about a program's implementation and effectiveness. It uses procedures that are systematic, objective, and unbiased. Viewed as a hierarchy, from broadest to narrowest scope:

Impact evaluation: assesses the overall or net effects, intended or unintended, of the program as a whole,* generally using a rigorous research design.
Outcome evaluation: investigates whether the program causes demonstrable effects on specifically defined target outcomes.* It measures both the immediate and long-term effectiveness of program services, answering questions about the results the program is having on participants, the community, and society.
Process evaluation: investigates the process of delivering the program, focusing primarily on inputs, activities, and outputs.* It measures and documents the service delivery process that leads to immediate results and outcomes, answering questions about how well a program is being run and whether it is being carried out as planned.
Performance measurement (program monitoring): sits beneath these three. It makes no rigorous effort to prove that observed results were caused by the program itself rather than by external events.

Evaluation requires adherence to a research design; performance measurement and process evaluation do not. However, the effort that goes into both performance measurement and process evaluation, that is, defining goals, objectives, and measures, can be used to lay the groundwork for an outcome evaluation. (The Title V newsletter has more information on the difference between evaluation and performance measurement.)

* Evaluation definitions excerpted from: Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. <http://trochim.human.cornell.edu/kb/index.htm> (version current as of Aug. 2, 2000).

Outputs Versus Outcomes
Outputs are products of program implementation and activities. Outcomes are benefits or changes that result from the program. Let's review the definitions related to performance measurement. There are two types of performance indicators:

Output indicators measure the products of a program's implementation or activities. They are generally measured in terms of the volume of work accomplished, such as the amount of service delivered, staff hired, systems developed, sessions conducted, materials developed, or policies, procedures, and legislation created. Examples include the number of juveniles served, the number of hours of service provided to participants, the number of staff trained, the number of detention beds added, the number of materials distributed, the number of reports written, and the number of site visits conducted.
Outcome indicators measure the benefits or changes for individuals, the juvenile justice system, or the community that result from the program. Outcomes are easiest to remember by the acronym BASK: they may relate to behavior, attitudes, skills, or knowledge. They may also relate to values, conditions, or other attributes. Examples are changes in the academic performance of program participants, changes in the recidivism rate of program participants, changes in client satisfaction level, or changes in the conditions of confinement in detention.

There are two levels of outcomes:

Short-term outcomes are the first benefits or changes participants or the system experience and are the ones most closely related to and influenced by the program's outputs. They should occur during the program or by the program's end. For direct service programs, they generally include changes in recipients' awareness, knowledge, and attitudes. For programs designed to change the juvenile justice system, they include changes to the juvenile justice system that occur during or by the end of the program.
Long-term outcomes link a program's initial outcomes to the longer-term outcomes it desires for participants, recipients, the system, or the community. Often they are changes in practice, policy, decision-making, or behavior that result from participants' or service recipients' new awareness, knowledge, attitudes, or skills, or from short-term changes in the juvenile justice system. Some occur within 6 months to 1 year after the program ends, such as changes in arrest rates, reductions in truancy, or reductions in substance use. Others are meaningful changes, often in condition or status, or in the overall problem behavior that gave rise to the program or intervention in the first place; these are the most removed benefits the program can expect to influence and usually occur more than 1 year after completion. Long-term outcomes should relate back to the program's goals, such as reducing delinquency.
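To make the output/outcome distinction concrete, here is a small Python sketch (hypothetical indicator names and values): output indicators record volumes of work accomplished, while outcome indicators record changes, tagged with a BASK domain and a term:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    name: str
    kind: str                          # "output" (volume of work) or "outcome" (benefit/change)
    value: float
    bask_domain: Optional[str] = None  # outcomes only: behavior, attitudes, skills, knowledge
    term: Optional[str] = None         # outcomes only: "short" or "long"

# Hypothetical indicators for a direct service program.
indicators = [
    Indicator("Number of juveniles served", "output", 250),
    Indicator("Hours of service provided to participants", "output", 1200),
    Indicator("Change in participants' knowledge of program content (test points)",
              "outcome", 15.0, bask_domain="knowledge", term="short"),
    Indicator("Change in participants' recidivism rate (percentage points)",
              "outcome", -4.0, bask_domain="behavior", term="long"),
]

outputs = [i for i in indicators if i.kind == "output"]
outcomes = [i for i in indicators if i.kind == "outcome"]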

Outputs Versus Outcomes (cont'd.)
To summarize, with regard to the Formula Grants and Title V performance measures: outputs are at the micro level and reflect program-level activity, especially among subgrantees; outcomes are at the macro level and, when aggregated, will reflect Federal outcomes. Both outputs and outcomes have been aggregated according to type of objective, such as increasing organizational capacity, improving planning and development, improving program efficiency, reducing delinquency, or increasing accountability. (A sketch of this aggregation appears after the list below.)

A good performance measurement system keeps the following principles in mind. Measures should be:
Results-oriented: focused primarily on desired outcomes, less on outputs.
Important: concentrated on significant matters.
Reliable: accurate, consistent information over time.
Useful: valuable to both policy and program decision-makers, providing continuous feedback on performance to staff and managers.
Quantitative: expressed in terms of rates or percentages.
Realistic: set so that they can actually be calculated.
Cost-effective: sufficiently valuable to justify the cost of collecting the data.
Easy to interpret: usable and understandable without an advanced degree in statistics.
Comparable: usable for benchmarking against other organizations, internally and externally.
Credible: users have confidence in the validity of the data.
Source: Fairfax County (VA), Manual for Performance Measurement, 2002.
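As noted above, here is a small Python sketch (hypothetical subgrantee reports and objective types) of rolling program-level figures up by type of objective, so that the aggregate reflects Federal-level outcomes:

from collections import defaultdict

# Hypothetical subgrantee reports: (subgrantee, objective type, measure, value).
reports = [
    ("Subgrantee A", "reduce delinquency", "youth completing program", 40),
    ("Subgrantee B", "reduce delinquency", "youth completing program", 55),
    ("Subgrantee A", "increase accountability", "youth completing restitution", 22),
]

# Micro-level (program) figures, aggregated by objective type to the macro level.
totals = defaultdict(int)
for _, objective, measure, value in reports:
    totals[(objective, measure)] += value

for (objective, measure), value in sorted(totals.items()):
    print(f"{objective} / {measure}: {value}")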

Outcome Measure Definitions
Short-term: occurs during the program or by the end of the program.
Long-term: occurs 6 months to 1 year after program completion.
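Following these definitions, here is a small Python sketch (a hypothetical helper, using an approximate month length) that classifies an observed change by when it is measured relative to the program's end:

from datetime import date

def outcome_term(measured_on: date, program_end: date) -> str:
    # Classify an observed change by timing, per the definitions above.
    if measured_on <= program_end:
        return "short-term"  # during the program or by the program's end
    months_after = (measured_on - program_end).days / 30.44  # approximate months
    if 6 <= months_after <= 12:
        return "long-term"   # 6 months to 1 year after program completion
    return "outside the windows defined above"

# Example: a change measured about 9 months after a program ending June 30, 2006.
print(outcome_term(date(2007, 3, 30), program_end=date(2006, 6, 30)))  # long-term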

Logic Model: Template
A logic model is a graphic representation that clearly lays out the logical relationships between the problem to be addressed, program activities, outputs, and outcomes (short term and long term). It is a description of how the program theoretically works to achieve benefits for participants. The template reads left to right: the Problem leads to Activities, Activities produce Outputs, and Outputs lead to Outcomes, short term and then long term.
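As an illustration, here is a minimal Python sketch (with a hypothetical example program) of the template's structure, linking the problem, activities, outputs, and short- and long-term outcomes:

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    # The template's chain: problem -> activities -> outputs -> outcomes.
    problem: str
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    short_term_outcomes: list = field(default_factory=list)
    long_term_outcomes: list = field(default_factory=list)

# Hypothetical logic model for a truancy-reduction program.
model = LogicModel(
    problem="High truancy among middle school youth",
    activities=["Weekly mentoring sessions", "Parent outreach"],
    outputs=["Number of youth mentored", "Number of sessions held"],
    short_term_outcomes=["Improved attitudes toward school"],
    long_term_outcomes=["Reduced truancy"],
)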

Overall Formula Grants and Title V Programs Logic Model
The first logic model we produced was for the entire OJJDP Formula Grants and Title V grants programs; a copy is in your packets. In the diagram, each indicator is keyed as either a system-level or a program-level indicator.

Problem: juvenile delinquency.
Goals: to improve juvenile justice systems by increasing compliance with the Core Requirements and increasing the availability and types of prevention and intervention programs.
Objectives: to support both State and local prevention and intervention efforts and juvenile justice system improvements.
Activities, categorized into 34 program areas and grouped as follows:
JJ System Improvement Program Areas: 19, 23, 31, 33
Core Requirements Program Areas: 6, 8, 10, 17, 28
Title V Program Areas (keeping at-risk youth from offending and keeping first-time, nonserious offenders out of the JJ system): 3, 4, 9, 10, 11, 12, 13, 15, 16, 18, 20, 21, 22, 25, 26, 27, 32, 34
Formula Grants prevention and intervention Program Areas: 1, 2, 3, 4, 5, 7, 9, 10, 11, 12, 13, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 29, 30, 32, 33, 34
Outputs: increased system capacity, improved monitoring of compliance with the Core Requirements, improved program quality, improved planning and development, and improved program efficiency.
Short-term outcomes: improved policies, procedures, operations, staffing, and service delivery; improved youth outcomes.
Long-term outcomes: improved juvenile justice systems, increased compliance with the Core Requirements, reduced recidivism, increased accountability, and reduced delinquency.

These outcome measures are illustrative, not exhaustive. For the detailed outcomes expected from each program area, refer to the individual program area logic models.

Title V Program Areas
Title V programs are designed to keep at-risk youth from offending and to keep first-time, nonserious offenders out of the JJ system. Title V has 18 Prevention and Early Intervention Program Areas: 3, 4, 9, 10, 11, 12, 13, 15, 16, 18, 20, 21, 22, 25, 26, 27, 32, 34.

Formula Grants Program Areas
There are three Formula Grants Program Area categories:
Prevention and Intervention Program Areas: 1, 2, 3, 4, 5, 7, 9, 10, 11, 12, 13, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 29, 30, 32, 33, 34
JJ System Improvement Program Areas: 19, 23, 31, 33
Core Requirements Program Areas: 6, 8, 10, 17, 28