1
Developing Indicators to Assess Program Outcomes Kathryn E. H. Race Race & Associates, Ltd. Panel Presentation at American Evaluation Association Meeting Toronto, Ontario October 26, 2005 race_associates@msn.com
2
Two Broad Perspectives
1. Measuring the Model
2. Measuring Program Outcomes
3
Measuring the Program Model
• What metrics can be used to assess program strategies or other model components?
• How can these metrics be used in outcomes assessment?
4
Measuring Program Strategies or Components
• Examples: use of rubric(s), triangulated data from program staff/target audience
5
At the Program Level
• How does the program vary when implemented across sites?
• What strategies were implemented at full strength? (dosing at the program level; see the sketch below)
• Aids in defining core strategies: how much variation is acceptable?
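The slides leave the metric open; as one minimal sketch (not from the presentation), fidelity rubric scores could be summarized per site to show which strategies were delivered at full strength and how much implementation varies. The site names, strategy labels, and 0-3 rubric scale are invented for illustration.

```python
# Hypothetical fidelity rubric scores (0-3) for two core strategies, rated per site.
site_scores = {
    "Site A": {"mentoring": 3, "family_workshops": 2},
    "Site B": {"mentoring": 1, "family_workshops": 2},
    "Site C": {"mentoring": 3, "family_workshops": 3},
}

FULL_STRENGTH = 3  # assumed rubric score meaning a strategy was delivered as designed

for site, scores in site_scores.items():
    mean_score = sum(scores.values()) / len(scores)           # overall fidelity at this site
    at_full = [s for s, v in scores.items() if v == FULL_STRENGTH]
    print(f"{site}: mean fidelity {mean_score:.1f}, full-strength strategies: {at_full}")
```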
6
At the Participant Level
• Use attendance data to measure strength of intervention (dosing at the participant level; see the sketch below)
• Serves to establish a basic minimum for participation
• Theoretical: establishes a threshold for change to occur
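As a hedged illustration of participant-level dosing (not part of the original slides), attendance records can be converted to a dose and checked against an assumed minimum threshold for change. The 12-session program, 70% cutoff, and participant IDs are hypothetical.

```python
# Hypothetical attendance records: sessions attended out of 12 offered.
SESSIONS_OFFERED = 12
MIN_DOSE = 0.70  # assumed minimum exposure needed for change to occur

attendance = {"P01": 11, "P02": 6, "P03": 9, "P04": 12}

for participant, attended in attendance.items():
    dose = attended / SESSIONS_OFFERED              # strength of exposure for this participant
    status = "meets threshold" if dose >= MIN_DOSE else "below threshold"
    print(f"{participant}: dose {dose:.0%} ({status})")
```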
7
Measuring Program Outcomes: Where Do You Look for Indicators?
• Established instruments
• Measures specifically designed and tailored to the particular program
• Suggestions from the literature
• Or some combination of these
8
Potential Indicators (Rossi, Lipsey & Freeman, 2004):
• Observations
• Existing records
• Responses to interviews
• Surveys
• Standardized tests
• Physical measurements
9
Potential Indicators, in Simplest Form:
• Knowledge
• Attitude/affect
• Behavior (or behavior potential)
10
What Properties Should an Indicator Have?
• Reliability
• Validity
• Sensitivity
• Relevance
• Adequacy
• Multiple measures
11
Reliability: Consistency of Measurement
• Internal reliability (Cronbach's alpha; see the sketch below)
• Test-retest
• Inter-rater
• Inter-incident
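The slide names Cronbach's alpha without showing it. A minimal sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), applied to an invented respondents-by-items matrix:

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each respondent's total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents answering 4 survey items on a 1-5 scale.
responses = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(responses), 3))  # values near 0.7 or above suggest acceptable consistency
```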
12
Reliability: Relevant Questions for an Existing Measure
• Do published reliability estimates generalize to the population in question?
• Is the measure being used as intended?
13
Reliability: Relevant Questions for a New Measure
• Can its reliability be established or measured?
14
Validity: Measuring What Is Intended
• For an existing measure: is there any evidence of its validity?
• General patterns of behavior: similar findings from similar measures (convergent validity)
15
Validity (con't.)
• Dissimilar findings when measures are expected to tap different concepts or phenomena (see the correlation sketch below)
• General patterns of behavior or stable results
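As a rough illustration of these two patterns (the data and construct names are simulated, not from the presentation), correlations between measures can be inspected: measures of the same concept should correlate strongly, while measures of different concepts should correlate weakly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical scores: two measures of the same construct plus one unrelated measure.
knowledge_test = rng.normal(size=n)
knowledge_interview = 0.8 * knowledge_test + rng.normal(scale=0.6, size=n)
unrelated_attitude = rng.normal(size=n)

# Convergent pattern: similar measures correlate strongly.
print(np.corrcoef(knowledge_test, knowledge_interview)[0, 1])
# Discriminant pattern: measures of different concepts correlate weakly.
print(np.corrcoef(knowledge_test, unrelated_attitude)[0, 1])
```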
16
Sensitivity: Ability to Detect a Change
• Is the measure able to detect a change if a real change occurs? (see the power sketch below)
• Is the magnitude of the expected change measurable?
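One common way to reason about whether a measure and sample can detect an expected change is a statistical power calculation. The slide does not prescribe this, so the sketch below assumes a two-group comparison, a medium expected effect (Cohen's d = 0.5), a 5% significance level, and a target of 80% power.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed values for illustration only; none come from the presentation.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # participants needed per group to detect a change of that size
```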
17
Relevance: Indicators Relevant to the Program
• Engage stakeholders
• Establish indicators beforehand to avoid or reduce counterarguments if results are not in the expected direction
18
Adequacy
• Are we measuring what is important, what is a priority?
• What are we measuring only partially, and what are we measuring more completely?
• What are the measurement gaps?
• Not just the easy "stuff"
19
All These Properties Plus ….. Use of Multiple Measures
20
• No one measure will be enough
• Many measures, diverse in concept or drawing from different audiences
• Can compensate for under-representing outcomes
• Builds credibility
21
In Outcomes Analysis
Results from indicators are presented in context:
• Of the program
• The level of intervention
• Known mediating factors
• Moderating factors in the environment
22
In Outcomes Analysis (con't.)
Use program model assessment in outcomes analysis:
• At the participant level
• At the program level
23
In Outcomes Analysis (con't.)
Helps gauge the strength of the program intervention, using measured variations in program delivery.
24
In Outcomes Analysis (con't.)
Helps gauge the strength of participant exposure to the program.
25
In Outcomes Analysis (con't.)
Links between the strength of the intervention (from program model assessment) and the magnitude of program outcomes can become compelling evidence for understanding potential causal relationships between the program and its outcomes.
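As a hedged sketch of such a link (the dosage and gain-score data are invented), a simple dose-response regression relates strength of exposure to the size of the outcome; a positive, reliable slope is one piece of supporting evidence, not proof of causation.

```python
from scipy.stats import linregress

# Hypothetical data: per-participant dosage (sessions attended) and outcome gain scores.
sessions_attended = [2, 4, 5, 7, 8, 10, 11, 12]
outcome_gain = [0.1, 0.3, 0.2, 0.6, 0.5, 0.9, 0.8, 1.1]

# Fit outcome gain as a linear function of dose and report the strength of the association.
fit = linregress(sessions_attended, outcome_gain)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")
```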
26
Take-away Points
• Measures of program strategies/strength of intervention can play an important role in outcomes assessment.
27
Take-away Points (con't.)
• Look to multiple sources to find or create indicators
• Assess the properties of the indicators in use
• Seek to use multiple indicators
28
Take-away Points (con't.)
• Present results in context
• Seek to understand what is measured …
• And what is not measured
29
Take-away Points (con't.)
And …
• Don't underestimate the value of a little luck.
30
Developing Indicators to Assess Program Outcomes Kathryn E. H. Race Race & Associates, Ltd. Panel Presentation at American Evaluation Association Meeting Toronto, Ontario October 26, 2005 race_associates@msn.com