
Evaluating PEI Programs: Focus on Outcomes
CalMHSA PEI TTACB Work Group
Wednesday, May 14th & Thursday, May 15th
Facilitated by RAND and SRI

Introductions
Please tell us:
–Name
–County
–Role or Position
–Involvement with PEI evaluation: designer or participant in evaluations, consumer of evaluation results, or interested in learning more

Today’s work group is part of RAND’s CalMHSA TTACB project
Collaboration between CA counties, community-based service providers, CalMHSA, SRI, and RAND
Provides support for PEI implementation throughout CA
Today we will focus on planning and conducting outcome evaluations for different types of PEI programs

Agenda
Welcome and introductions
Building outcome evaluation approaches
Breakout session #1; report back and discussion
Lunch
Breakout session #2; report back and discussion
Analysis and use of data
Engaging partners in evaluation
Wrap up and evaluation

Today’s workshop builds upon our previous work
Within the GTO framework, today’s focus is on steps 7-9
Delivering PEI Programs

Why is evaluating outcomes important?
Results can be used for:
–Accountability
–Program or service quality improvement
–PEI planning and advocacy
Ultimately it is what clients and payers care about most:
–Is this program meeting its objectives?
–Why or why not?

We will focus on how to evaluate implementation and client outcomes
The logic model follows PEI funding through four questions: What is developed? What is delivered and to whom? What changes are expected? Are there public health benefits?
–What is developed: more and better prevention and early intervention; new and enhanced community resources
–What is delivered, and to whom (implementation outcomes): program activities
–What changes are expected (client outcomes): changed knowledge, behaviors and attitudes; improved resilience and emotional well-being
–Public health benefits: improvements in long-term outcomes (suicide, incarceration, homelessness, school drop out, foster care, unemployment, differences across groups)

For each type of PEI program we will go through a series of questions
What are the key implementation outcomes?
What are the key client outcomes and benchmarks?
What are the key evaluation questions for each outcome?
What are the potential data sources?
What methods of data collection and analysis will be used?
How will the results be interpreted and used?

We will review outcome evaluation strategies for these types of PEI programs
System Change Efforts
Outreach and Public Awareness Campaigns
Gatekeeper Education and Training
Screening and Referral
Counseling and Support
Clinical Services for Early Intervention

Example: Screening for Needs and Referral to Service Programs
Mobile screening programs
SBIRT
Promotoras
Health fairs
Screening that occurs in probation, social services, health and education settings

Recall that logic models visually represent pathways from program activities to results
Example: screening and referral programs
–PEI funding (Where is it going?): screening and referral programs
–Activities (What is it doing?): identifying individuals at risk for mental illness; increased identification and referral of at-risk individuals
–Results (Does it make a difference?): increased access to additional services; increased help-seeking; increased assessment of need; increased use of treatment, counseling and support services if needed
–Public health benefits: reduced suicide and mental-health related prolonged suffering, incarceration, homelessness, school drop out, out-of-home removal, unemployment, and differences across groups

Logic Models Guide Evaluation Efforts (example: screening and referral programs)
Program questions:
–What is the goal of the program?
–What is the target population for the program?
–What benchmarks have been set for the program to meet?
–How will the results be used?
Evaluation questions for implementation outcomes:
–How many people are being screened?
–Who is being reached by the program? How closely do participants match the intended audience?
–How many and which patients were referred for additional services?
Evaluation questions for client outcomes:
–What proportion of patients access and engage in appropriate services as a result of the screening and referral program?
–What proportion of participants experience reduced symptoms / improved recovery?
–Do improvements in outcomes meet your county benchmark goals?

How Elements in the Logic Model Can Be Evaluated (screening and referral programs)
How will it be evaluated?
–Program implementation: counts and characteristics of individuals screened, identified and referred; characteristics of the target population
–Client outcomes: counts and characteristics of individuals completing the referral; surveys, focus groups or individual interviews about ease of the referral process

What Tools and Resources Are Needed (screening and referral programs)
What tools/resources will be needed?
–Counts of screeners and referrals; administrative data on the target population; a tracking system
–Access to participants after the screening and referral process; counts of individuals completing the referral

Analysis Approach (screening and referral programs)
How will it be analyzed?
–Counts and descriptive statistics of individuals screened and referred for services
–Compare characteristics of those screened and referred to the target population
–Proportion and descriptive statistics of those completing the referral
–Proportion and descriptive statistics of those with reduced symptoms

Interpreting Results (screening and referral programs)
How will you tell if the program is meeting its objectives?
–Examine how well screened individuals match the target population
–Examine whether the proportion of the target population screened and referred met benchmarks
–Examine whether the proportion of those completing the referral met your benchmarks

Where do objectives for program performance come from?
Objectives for program performance allow you to measure results against goals
Objectives can come from:
–Research results or evidence-based program standards
–Performance of comparable programs
–Program performance in previous years
–Consensus of stakeholder groups and program teams about what is meaningful improvement

Even without objective benchmarks, results can be used to evaluate and improve performance
Follow the logic model from existing resources and PEI funding through to results:
–What are the program goals and objectives?
–What new program activities were put in place? (Where is it going?)
–What do the program activities do? (What is it doing?)
–Does it make a difference? Did outcomes improve?

Preparation for Breakout Groups

Small Groups
Morning sessions:
–Outreach and Public Awareness Campaigns
–System Change Efforts
–Gatekeeper Education and Training Efforts
Afternoon sessions:
–Screening and Referral
–Counseling and Support
–Early Intervention Clinical Services

Small Group Exercise: Overview
1. Review example program(s) to evaluate
2. Review priority outcomes and key evaluation questions
–Program implementation outcomes
–Short-term or client outcomes
3. Review and discuss sample measures
4. Discuss evaluation approach and methods
5. Report back on challenges, resources, and approaches identified

What Are Priority Evaluation Questions?
Program implementation questions:
–Are we serving our target audience / people in need?
–Are we providing high-quality services?
Short-term or client outcome questions:
–Are we making a difference across different levels of need and with different populations? Are these differences meaningful?
–How do our results compare to our benchmarks?

What Makes a Good Measure? Criteria to consider:
–Brevity / level of burden to respondents
–Ease of administration and scoring
–Meaningfulness of items
–Cost
–Sensitivity to change
–Appropriateness for multiple cultural, language, and ethnic groups
–Reliability and validity
–Potential use across programs

What is a Feasible Evaluation Approach?
What method(s) will you use to collect data?
–Survey, direct assessment (with what measures?)
–Interview, focus group
–Records extract, data export
Who will be your respondent? Data collector?
When will you collect data?
–Single point
–Pre/post/follow-up, and duration of interval(s)
How will you store/manage data?
–Hardware, software, and capacity issues

Resources
Notebooks with sample measures:
–Each small group can review copies related to specific program types
Thumb drives with sample measures (take home):
–Include sample measures for all 6 program types
–Presented as a “thank you” for completing the evaluation form at the end of the workshop

Breakout Session #1
Group 1 – Outreach
Group 2 – Gatekeeper
Group 3 – System Group A
Group 4 – System Group B

Breakout Session #2
Group 5 – Screening
Group 6 – Clinical
Group 7 – Counseling Group A
Group 8 – Counseling Group B

Analysis and Use of Data

How do you turn raw data into useful information?
–Decide on purpose of evaluation
–Analysis
–Interpretation
–Dialogue

Where do evaluation data come from?
Program descriptions and materials
Staff training and other kinds of training materials
Administrative data about enrollment and participation
Observations of program activities
Survey data from participants: demographics, opinions about the program, standardized assessments of outcomes
Qualitative data: focus groups with staff and participants, individual reports and stories

How will the evaluation results be used?
Decide how the information will be used
–Accountability
–Program or service quality improvement
–PEI planning and advocacy
Decide who will be using the information
–Clinicians and service delivery staff
–Program planners and managers
–Community leaders and stakeholders
–Legal and financial accountability auditors
Purposes and users guide analysis and reporting
–What was learned?
–Were program goals and objectives met?
–With whom will results be shared?
–Given these results, what are the next steps?
–How can you improve on what you are doing?

Analysis and Use of Data
How do you turn raw data into useful information?
–Decide on purpose of evaluation
–Analysis
–Interpretation
–Dialogue

Questions Drive Analysis Approaches
Where is it going?
–What do the program activities do? What new program activities were put in place?
–Is there evidence that the services should work? Is the content relevant and appropriate for target populations?
–How much does the program cost? What resources does it require to implement? How feasible is it, given the available resources?
–What is the intended capacity of the program? Is the program sustainable?
What is it doing?
–How many people are reached by the program? What proportion of providers are delivering the service?
–Is the program acceptable to stakeholders (clients and providers)? Is the program appropriate or compatible with patient and provider expectations and skills?
–Is the intervention being delivered with fidelity?
–What proportion of the target population is being reached? Is the program reaching clients equitably?
Does it make a difference? Did outcomes improve?
–Did services improve participants’ knowledge, behaviors or attitudes?
–Did services improve participants’ resilience and emotional well-being?
–Were social connections and family well-being helped? Did the program lead to a stronger community?
–Did participants access and use more community resources or mental health treatment?

Collecting and Presenting Cost Data
How much does the program cost?
–Identify costs for a single program: costs clearly associated with that program, plus an allocation of costs shared with other programs
–Identify elements of cost: program development or “set up” costs, training, costs associated with ongoing activities, cost per participant
–Identify the time period covered: match time periods for costs with other data
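To make the cost-per-participant idea concrete, here is a minimal sketch in Python. The figures and the simple allocation rule (splitting shared costs in proportion to each program's direct costs) are hypothetical, for illustration only; counties would substitute their own cost categories, allocation method, and unduplicated participant counts.

```python
# Hypothetical cost-per-participant calculation for one reporting year.
# All figures are illustrative, not real PEI program data.

direct_costs = {            # costs clearly associated with each program
    "screening_program": 40_000,
    "counseling_program": 100_000,
}
shared_costs = 30_000       # e.g., shared facilities, administration, training
setup_costs = {"screening_program": 5_000, "counseling_program": 12_000}
participants = {"screening_program": 800, "counseling_program": 250}  # unduplicated

total_direct = sum(direct_costs.values())

for program, direct in direct_costs.items():
    # Allocate shared costs in proportion to direct costs (one simple rule of many).
    allocated_shared = shared_costs * direct / total_direct
    total_cost = direct + allocated_shared + setup_costs[program]
    per_participant = total_cost / participants[program]
    print(f"{program}: total ${total_cost:,.0f}, "
          f"${per_participant:,.0f} per participant")
```

Whatever allocation rule is chosen, the key point from the slide holds: the time period for the costs should match the time period for the participation data being reported.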

How many people are reached by the program?
What does it mean to be “reached” by the program?
–Single-touch programs: sign-in counts, numbers of materials distributed, website visits and downloads
–Multi-session programs: number who enrolled or started the program; number who completed the number of sessions required in relation to program goals; number who completed all sessions
–Variable-session programs: what defines meaningful participation?

Duplicated vs. Unduplicated Counts
Duplicated count:
–A program participant may be counted more than once in a grant year
–Might occur if a client received multiple services in the same reporting period, within one program or across multiple programs
Unduplicated count:
–One person/client is counted only once, no matter how many different services the client receives during the funding period – could be within or across programs
–Favored in reporting guidelines
–Requires a reporting system accessible across programs that tracks individuals by ID number
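A minimal sketch of the difference, assuming service records are kept in a pandas DataFrame with a client ID shared across programs; the column names and records here are hypothetical:

```python
import pandas as pd

# Hypothetical service records for one funding period; client_id is shared across programs.
records = pd.DataFrame({
    "client_id": ["A01", "A01", "B02", "B02", "C03"],
    "program":   ["screening", "counseling", "screening", "screening", "counseling"],
    "service_date": pd.to_datetime(
        ["2014-01-15", "2014-02-03", "2014-01-20", "2014-03-11", "2014-02-27"]),
})

duplicated_count = len(records)                       # every service contact counted
unduplicated_count = records["client_id"].nunique()   # each person counted once
unduplicated_by_program = records.groupby("program")["client_id"].nunique()

print("Duplicated count:", duplicated_count)          # 5
print("Unduplicated count:", unduplicated_count)      # 3
print(unduplicated_by_program)
```

The unduplicated count only works if the same ID follows the client across programs, which is why the slide stresses a reporting system accessible across programs.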

Capturing the Statistics of Participation
Simple counts:
–Number of people who enrolled or attended within a defined period of time
Distributions and cross-tabulations:
–% completed 1 session; % completed 2-5 sessions; % completed the full program (6 sessions); adds to 100% of total participation
Mean and standard deviation:
–Mean (average) number of sessions completed and variation around the mean
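The same participation statistics can be produced with a few lines of pandas. The session counts and ethnicity categories below are made up for illustration, and the binning mirrors the 1 / 2-5 / 6-session breakdown on the slide:

```python
import pandas as pd

# Hypothetical participation data: sessions completed per enrolled client.
df = pd.DataFrame({
    "client_id": ["A01", "B02", "C03", "D04", "E05", "F06"],
    "ethnicity": ["Hispanic", "White", "Black", "Hispanic", "Other", "White"],
    "sessions_completed": [1, 6, 3, 6, 2, 5],
})

# Simple count of participants in the period.
print("Enrolled:", len(df))

# Distribution across completion categories (sums to 100%).
bins = pd.cut(df["sessions_completed"], bins=[0, 1, 5, 6],
              labels=["1 session", "2-5 sessions", "full program (6)"])
print(bins.value_counts(normalize=True).mul(100).round(1))

# Mean and standard deviation of sessions completed.
print(df["sessions_completed"].agg(["mean", "std"]).round(2))

# Cross-tabulation of completion category by ethnicity (as in the charts that follow).
print(pd.crosstab(df["ethnicity"], bins))
```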

Enrollments by Month – Counts by Ethnicity

Program Participation – Counts Cross Tabbed by Ethnicity

Capturing Views of the Program
Capture data from:
–Program participant (and drop-out) opinions
–Community or stakeholder views
Capture data about whether the program is:
–Meeting needs of the targeted group
–Culturally appropriate for the targeted group
–Well delivered
Data sources:
–Qualitative: focus groups and individual interviews, reported as ranges and descriptions
–Surveys: analyzed as statistical data on means and distributions

Does Program Meet Needs?

Characteristics of Participants
What are the key characteristics of interest?
–Age
–Sex
–Race/ethnicity
–Financial need
–Level of risk for negative mental health outcomes
–Special groups – LGBTQ, military, etc.
What characteristics were targeted in stakeholder goal setting and program design?
What proportion of the participants are among the groups targeted as beneficiaries of this program?
What proportion of the targeted beneficiaries were reached by the program?

Statistics of Describing Participants
Distributions by age and sex (pie chart)
Proportions of participants who endorse (histogram):
–Racial/ethnic categories
–Affiliation with special groups
Considerations in reporting statistics of participants:
–Mandated reporting may require specific categories
–Using the same categories across programs allows standardized reporting and comparisons
–Proportion of population served requires combining program participation and program target information (see the sketch below)
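A small illustration of that last point: combining unduplicated participant counts with an estimate of the target population gives both the composition of who was served and the proportion of each targeted group that was reached. The numbers are invented; in practice the target-population estimates would come from administrative or census-type data.

```python
# Hypothetical unduplicated participant counts and target-population estimates.
participants = {"Hispanic": 120, "White": 90, "Black": 60, "Other": 30}
target_population = {"Hispanic": 2400, "White": 3000, "Black": 900, "Other": 700}

total_served = sum(participants.values())
for group, n in participants.items():
    share_of_participants = n / total_served           # composition of those served
    share_of_target = n / target_population[group]     # penetration of the target group
    print(f"{group}: {share_of_participants:.0%} of participants, "
          f"{share_of_target:.1%} of the target population reached")
```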

Capturing and Reporting Program Outcomes
Did services improve outcomes? By how much did services improve outcomes?
How did results vary by key subgroups?
–In terms of levels of program participation
–By sex, age, race/ethnicity, mental health risk, and for special groups
Did participants’ outcomes reach targeted levels or expected benchmarks?
Can outcomes be assessed at the individual level or at the group level?

Before-and-After Design
Requires measures of key outcomes before the program and at the time of program completion
–Focus is only on program participants
–When does the program “begin” and “end”?
–Should programs collect data at regular intervals to avoid excluding those who do not complete?
Allows assessment of program outcomes compared with expected goals or benchmarks – is this program doing as well as expected?
Allows assessment of improvement, but assumes that observed change is attributable to the program and not other causes
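A before-and-after analysis can be as simple as a paired comparison of entry and exit scores for the participants who have both. The sketch below assumes a distress-type measure where lower scores are better, uses made-up scores, and reports the mean change plus a paired t-test; the t-test speaks only to statistical significance, and the attribution caveat on the slide still applies.

```python
from scipy import stats

# Hypothetical pre/post scores for the same participants (lower = less distress).
pre  = [16, 14, 20, 11, 18, 15, 13, 19]
post = [12, 13, 15, 10, 14, 15, 9, 16]

changes = [b - a for a, b in zip(pre, post)]
mean_change = sum(changes) / len(changes)

# Paired t-test: is the average change larger than chance alone would suggest?
t_stat, p_value = stats.ttest_rel(pre, post)

print(f"Mean change: {mean_change:.1f} points (negative = improvement)")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```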

More Sophisticated Approaches to Outcomes
Randomized trials:
–“Gold standard” for program impact assessment
–May be challenging to implement
–Impact was already assessed for evidence-based programs
Statistical approaches to inferring program impact:
–Difference-in-differences
–Propensity scoring
–Comparison with previous or projected trends
–These attempt to rule out alternative explanations for observed effects
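As one example of these statistical approaches, a basic difference-in-differences estimate compares the before-to-after change in the group that received the program with the change in a comparison group over the same period. The group means below are invented purely to show the arithmetic; a real analysis would also need standard errors and a check that the two groups were on similar trends before the program.

```python
# Hypothetical mean outcome scores (e.g., average distress) by group and period.
program_before, program_after = 15.0, 11.0        # participants
comparison_before, comparison_after = 14.5, 13.5  # similar non-participants

change_program = program_after - program_before          # -4.0
change_comparison = comparison_after - comparison_before # -1.0

# Difference-in-differences: change beyond what the comparison group experienced.
did_estimate = change_program - change_comparison
print(f"Program change: {change_program:+.1f}")
print(f"Comparison change: {change_comparison:+.1f}")
print(f"Difference-in-differences estimate: {did_estimate:+.1f} points")
```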

Information to Provide in Reporting Outcomes
Why the outcome is relevant / important, and to which audiences
Information about the indicator / measure used
How and when data were collected
Number of respondents and the extent to which they represent the number of participants served
Whether baseline and follow-up data are for the same participants
Practical and statistical significance of the change observed
Comparison or benchmark county, state, or national data if available
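One common way to speak to practical significance alongside a p-value is a standardized effect size. The sketch below computes a Cohen's d-style effect for paired pre/post scores, using the standard deviation of the baseline scores as the denominator (one of several reasonable conventions); the data are made up and mirror the earlier before-and-after example.

```python
import statistics

# Hypothetical entry and exit scores for the same participants (lower = better).
pre  = [16, 14, 20, 11, 18, 15, 13, 19]
post = [12, 13, 15, 10, 14, 15, 9, 16]

mean_change = statistics.mean(b - a for a, b in zip(pre, post))
baseline_sd = statistics.stdev(pre)

# Standardized effect size: change expressed in baseline standard-deviation units.
effect_size = mean_change / baseline_sd
print(f"Mean change: {mean_change:.1f} points")
print(f"Effect size (Cohen's d): {effect_size:.2f}")
```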

Analysis and Use of Data
How do you turn raw data into useful information?
–Decide on purpose of evaluation
–Analysis
–Interpretation
–Dialogue

Interpretation of Evaluation Results
Evaluation data can be used to answer key questions:
–Are we serving our target audience / people in need?
–Are we making a difference at different levels of need and with different populations?
–Where are the gaps?
–Where do programs need to be enhanced or redirected?
–Does the effect justify the cost of the services?

By putting results together in a “story,” the logic model can be used to make sense of observed results
Follow the chain from existing resources and PEI funding through to results:
–What are the program goals and objectives?
–What new program activities were put in place? (Where is it going?)
–What do the program activities do? (What is it doing?)
–Does it make a difference? Did outcomes improve?

Example Using a Common Measure: K6 Adult Psychological Distress
–Relevance/importance: identifying instances of serious psychological distress and potential reductions in distress over time
–Six items asking how often the respondent felt nervous, hopeless, restless or fidgety, so depressed that nothing could cheer them up, that everything was an effort, and worthless
–Collected at program entry and exit
–Score ranges from 0-24; 13+ is high risk and indicates serious psychological distress; less than 10 is lower risk
–Age: 18 and above
–County and state comparison data available from the CHIS (California Health Interview Survey)
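A small sketch of how K6 scores might be grouped into the risk categories used in the charts that follow. The item responses and the middle band (treating 10-12 as “some risk”) are assumptions for illustration; the slide only states that 13+ is high risk and below 10 is lower risk, so counties should apply whatever cutpoints their reporting requires.

```python
# Hypothetical K6 scoring: six items, each rated 0 ("none of the time") to 4 ("all of the time").
def k6_score(item_responses):
    """Sum of six item responses; total ranges from 0 to 24."""
    assert len(item_responses) == 6
    return sum(item_responses)

def risk_group(score):
    """Assumed grouping: 13+ high risk, 10-12 some risk, below 10 low risk."""
    if score >= 13:
        return "High Risk"
    elif score >= 10:
        return "Some Risk"
    return "Low Risk"

# Example: entry and exit responses for one (made-up) participant.
entry = [3, 2, 3, 2, 3, 2]   # score 15 -> High Risk
exit_ = [1, 1, 2, 1, 2, 1]   # score 8  -> Low Risk
for label, responses in [("entry", entry), ("exit", exit_)]:
    score = k6_score(responses)
    print(f"{label}: K6 = {score}, {risk_group(score)}")
```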

Divide Scores Into Key Risk Groups (chart: High Risk / Some Risk / Low Risk, for Programs 1-4 and Overall)

Are programs enrolling targeted groups? (chart: High Risk / Some Risk / Low Risk, for Programs 1-4 and Overall)

Do program outcomes change? (chart: High Risk / Some Risk / Low Risk, for Programs 1-4 and Overall)

Program participation and retention/reporting (chart: # people and # reporting by risk group, for Programs 1-4 and Overall)

Outcomes in Relation to Program Cost (chart: # people, # reporting, and risk groups for Programs 1-4 and Overall, with program costs of $40K, $25K, $100K, and $190K)

Looking at Outcomes Within a Program by Race/Ethnicity of Participants (chart: # people and # reporting by risk group, for Black, White, Hispanic, Other, and Overall)

Analysis and Use of Data
How do you turn raw data into useful information?
–Analysis
–Presentation
–Interpretation
–Dialogue

Communicating About Evaluation Outcomes
“Knowledge is power”: making data easy to understand builds interest and confidence
Stakeholders who see the bigger picture are more able to make trade-offs among different interests
Decision makers feel the process is transparent and goals are clear
Funders believe that sensible management decisions are being made
Poor performers are encouraged to improve
Well-performing programs receive additional investment

Partnering for Program Evaluation

Who are the stakeholders in evaluation?
County PEI administration
County behavioral/mental health agency
County partner agencies – social services, health, justice
Oversight groups – MH commissions, Board of Supervisors, business and community groups
Consumers of services and MH advocates
Service providers – county and contracted

Service providers are a key partner
Evaluation requires their cooperation
–Commitment to reaching benchmarks
–Resources for data collection
–Possibly access to their staff and participants
Evaluation results offer opportunities for improving quality of service
–Comparison over time
–Improvement initiatives
–Incorporation into the clinical process
–Telling their stories
–Becoming more accountable to skeptics, funders and leadership

Evaluation poses several challenges for service providers
Skills and experience may be lacking
Evaluation may not be a priority or seem to be useful
Data collection and other participation costs time and money
Collecting data takes away from clinical care time
Evaluation increases pressure to perform
–Risk of lost funding

Engaging service providers in evaluation
Include them in evaluation design and planning
Ask them to specify goals and objectives in statements of work
Mandate participation in contracts
–Including resources
Offer training and support for measures administration and data utilization

Group Discussion: Increasing opportunities and decreasing challenges

Opportunities                        Challenges
Monitor performance over time        Lack of evaluation skills
Make data-informed improvement       Costs
Incorporate into clinical process    Less clinical care time
Tell story of program benefits       Lack of utility
Increased accountability             Risks

What are the benefits of successful partnering?
Better inform designs
–Set priorities among program types
–Establish benchmarks
Facilitate data collection and analysis
–Improves quality
Leverage resources
–Engage existing expertise
–Reduce duplication
Motivate utilization of results
Routinize and sustain evaluation over time