Developing Indicators to Assess Program Outcomes Kathryn E. H. Race Race & Associates, Ltd. Panel Presentation at American Evaluation Association Meeting.


Developing Indicators to Assess Program Outcomes
Kathryn E. H. Race, Race & Associates, Ltd.
Panel Presentation at the American Evaluation Association Meeting, Toronto, Ontario, October 26, 2005

Two Broad Perspectives
1. Measuring the Model
2. Measuring Program Outcomes

Measuring the Program Model
- What metrics can be used to assess program strategies or other model components?
- How can these metrics be used in outcomes assessment?

Measuring Program Strategies or Components
- Examples: use of rubric(s), triangulated data from program staff/target audience

At the Program Level
- How does the program vary when implemented across sites?
- Which strategies were implemented at full strength? (dosing at the program level)
- Helps define core strategies: how much variation is acceptable? (see the sketch below)
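As one way to make this concrete, the sketch below (not from the presentation; the site names, strategies, and the 0-2 rubric scale are hypothetical) rolls rubric ratings up into a per-site fidelity index and a simple cross-site measure of variation.

```python
# A minimal sketch of summarizing implementation fidelity across sites.
# Site names, the four core strategies, and the 0-2 rating scale are
# hypothetical placeholders, not from the presentation.
import statistics

# Rubric ratings per core strategy: 0 = not implemented, 1 = partial, 2 = full
site_ratings = {
    "Site A": [2, 2, 1, 2],
    "Site B": [1, 0, 1, 2],
    "Site C": [2, 2, 2, 2],
}

max_score = 2 * len(next(iter(site_ratings.values())))

# Fidelity index: share of the maximum possible rubric score achieved
fidelity = {site: sum(r) / max_score for site, r in site_ratings.items()}
for site, score in fidelity.items():
    print(f"{site}: fidelity index = {score:.2f}")

# Cross-site variation: how much does implementation strength differ?
print(f"Across-site SD = {statistics.stdev(fidelity.values()):.2f}")
```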

At the Participant Level
- Use attendance data to measure strength of intervention (dosing at the participant level); a sketch follows below
- Serves to establish a basic minimum for participation
- Theoretically, establishes the threshold at which change can be expected to occur
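A minimal sketch of this idea, using made-up attendance counts and a hypothetical minimum-dose threshold of half the sessions offered:

```python
# A minimal sketch (hypothetical data) of turning attendance records into a
# participant-level dose measure with a minimum-exposure threshold.
sessions_offered = 12
min_dose = 0.5  # hypothetical threshold: attend at least half the sessions

attendance = {"P01": 11, "P02": 4, "P03": 8}

for pid, attended in attendance.items():
    dose = attended / sessions_offered
    status = "meets threshold" if dose >= min_dose else "below threshold"
    print(f"{pid}: dose = {dose:.2f} ({status})")
```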

Measuring Program Outcomes: Indicators, Where Do You Look?
- Established instruments
- Specifically designed measures tailored to the particular program
- Suggestions from the literature
- Or some combination of these

Potential Indicators (Rossi, Lipsey & Freeman, 2004):
- Observations
- Existing records
- Responses to interviews
- Surveys
- Standardized tests
- Physical measurement

Potential Indicators, in Simplest Form:
- Knowledge
- Attitude/affect
- Behavior (or behavior potential)

What Properties Should an Indicator Have?
- Reliability
- Validity
- Sensitivity
- Relevance
- Adequacy
- Multiple measures

Reliability: Consistency of Measurement
- Internal reliability (Cronbach’s alpha); see the sketch below
- Test-retest
- Inter-rater
- Inter-incident
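A minimal sketch of the first two estimates named above, using made-up item-level data (rows are respondents, columns are scale items); it is an illustration of the formulas, not an analysis from the presentation.

```python
# Cronbach's alpha and a test-retest correlation on hypothetical data.
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 2],
    [4, 4, 5, 5],
], dtype=float)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")

# Test-retest: correlate total scores from two administrations (pretend retest)
time1 = items.sum(axis=1)
time2 = time1 + np.array([0, 1, -1, 0, 1])
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest r = {r:.2f}")
```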

Reliability: Relevant Questions for an Existing Measure
- Do its reliability estimates generalize to the population in question?
- Will the measure be used as intended?

Reliability: Relevant Questions for a New Measure
- Can its reliability be established or measured?

Validity: Measuring What’s Intended
- For an existing measure: is there any evidence of its validity?
- General patterns of behavior: similar findings from similar measures (convergent validity)

Validity (con’t.)
- Dissimilar findings when measures are expected to capture different concepts or phenomena (discriminant validity); see the sketch below
- General patterns of behavior or stable results
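A minimal sketch of both checks with hypothetical scores: a new indicator should correlate strongly with an established measure of the same construct and only weakly with a measure of an unrelated one.

```python
# Convergent and discriminant validity via simple correlations (made-up data).
import numpy as np

new_measure   = np.array([10, 14, 9, 16, 12, 18, 11, 15])
similar_scale = np.array([22, 30, 20, 33, 26, 36, 23, 31])   # same construct
unrelated     = np.array([ 5,  2,  6,  3,  7,  1,  4,  6])   # different construct

r_conv = np.corrcoef(new_measure, similar_scale)[0, 1]
r_disc = np.corrcoef(new_measure, unrelated)[0, 1]
print(f"Convergent r = {r_conv:.2f}  (expect high)")
print(f"Discriminant r = {r_disc:.2f}  (expect near zero)")
```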

Sensitivity: Ability to Detect Change
- Can the measure detect a change, if there is a real change?
- Is the magnitude of the expected change measurable? (see the sketch below)
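One common way to check this, sketched below with hypothetical numbers, is to compare the minimum detectable effect of the design against the change the program is plausibly expected to produce; this power-analysis framing is an illustration, not something specified in the presentation.

```python
# Minimum detectable effect (in SD units) for a two-group comparison,
# given sample size, significance level, and power. Numbers are hypothetical.
from scipy.stats import norm

n_per_group = 60
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_power = norm.ppf(power)
mde = (z_alpha + z_power) * (2 / n_per_group) ** 0.5
print(f"Minimum detectable effect ~ {mde:.2f} SD")
# If the program is expected to move the indicator by, say, 0.25 SD,
# an MDE of about 0.51 SD means this measure/design is not sensitive enough.
```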

Relevance: Indicators Relevant to the Program
- Engage stakeholders
- Establish indicators beforehand to avoid or reduce counterarguments if results are not in the expected direction

Adequacy
- Are we measuring what’s important and of priority?
- What are we measuring only partially, and what are we measuring more completely?
- What are the measurement gaps?
- Not just the easy “stuff”

All These Properties Plus … Use of Multiple Measures

- No one measure will be enough
- Many measures, diverse in concept or drawing from different audiences
- Can compensate for under-representing outcomes
- Builds credibility

In Outcomes Analysis
Results from indicators are presented in context:
- Of the program
- Level of intervention
- Known mediating factors
- Moderating factors in the environment (a regression sketch follows this list)
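As an illustration only (simulated data, hypothetical variable names), the sketch below adjusts an outcome for a baseline covariate and tests whether an environmental factor moderates the effect of participant-level dose.

```python
# A minimal sketch of presenting an outcome in context: covariate adjustment
# plus a dose-by-environment interaction term. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "dose": rng.uniform(0, 1, n),            # participant-level exposure
    "site_support": rng.integers(0, 2, n),   # moderating factor in the environment (0/1)
    "baseline": rng.normal(50, 10, n),       # adjustment covariate
})
df["outcome"] = (55 + 8 * df["dose"] + 4 * df["dose"] * df["site_support"]
                 + 0.3 * df["baseline"] + rng.normal(0, 5, n))

model = smf.ols("outcome ~ dose * site_support + baseline", data=df).fit()
print(model.params)  # interaction term estimates how the dose effect varies with context
```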

In Outcomes Analysis (con’t.)
Use program model assessment in outcomes analysis:
- At the participant level
- At the program level

In Outcomes Analysis (con’t.)
At the program level, it helps gauge the strength of the program intervention through measured variations in program delivery.

In Outcomes Analysis (con’t.)
At the participant level, it helps gauge the strength of participant exposure to the program.

In Outcomes Analysis (con’t.)
Links between strength of intervention (from program model assessment) and the magnitude of program outcomes can become compelling evidence for understanding potential causal relationships between the program and its outcomes.
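A minimal dose-response sketch of this idea, using hypothetical site-level numbers: fidelity indices from the model assessment paired with mean pre-post gains on an outcome indicator.

```python
# Linking strength of intervention to magnitude of outcomes (hypothetical data).
import numpy as np

fidelity = np.array([0.95, 0.60, 0.85, 0.50, 0.75, 0.90])   # from model assessment
gain     = np.array([ 6.1,  2.3,  5.0,  1.8,  3.9,  5.8])   # mean pre-post change

r = np.corrcoef(fidelity, gain)[0, 1]
slope, intercept = np.polyfit(fidelity, gain, 1)
print(f"Fidelity-outcome correlation r = {r:.2f}")
print(f"Estimated gain per 0.1 increase in fidelity ~ {0.1 * slope:.2f}")
```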

Take-away Points
- Measures of program strategies / strength of intervention can play an important role in outcomes assessment.

Take-away Points (con’t.)
- Look to multiple sources to find/create indicators
- Assess properties of indicators in use
- Seek to use multiple indicators

Take-away Points (con’t.)
- Present results in context
- Seek to understand what is measured …
- And what is not measured

Take-away Points (con’t.)
And …
- Don’t underestimate the value of a little luck.
