Slide 1
Conceptualizing Intervention Fidelity: Implications for Measurement, Design, and Analysis
Implementation: What to Consider at Different Stages in the Research Process
Panel presentation for the Institute for Education Sciences Annual Grantee Meeting
September 7, 2011
Chris S. Hulleman, Ph.D.
Slide 2
Implementation vs. Implementation Fidelity

The implementation assessment continuum runs from descriptive to a priori:
- Descriptive: What happened as the intervention was implemented?
- A priori model: How much, and with what quality, were the core intervention components implemented?

Fidelity: How faithful was the implemented intervention (t_Tx) to the intended intervention (T_Tx)?
Infidelity: T_Tx − t_Tx
Most assessments include both descriptive and a priori elements.
Slide 3
Linking Fidelity to Causal Models

Rubin's Causal Model:
- The true causal effect of X for unit i is (Y_i^Tx − Y_i^C)
- An RCT is the best approximation
- Mean(Tx) − Mean(C) = average causal effect

Fidelity assessment:
- Examines the difference between the implemented causal components in Tx and C
- This difference is the achieved relative strength (ARS) of the intervention, an index of fidelity
- Theoretical relative strength = T_Tx − T_C
- Achieved relative strength = t_Tx − t_C (a numeric sketch follows)
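To make the relative-strength arithmetic concrete, here is a minimal Python sketch. The numeric values are hypothetical (they echo the worked example on the final slide) and serve only to illustrate the definitions above.

```python
# Hypothetical sketch of the relative-strength arithmetic.
# T_tx / T_c: intended strength of the causal components in treatment and control.
# t_tx / t_c: strength actually achieved during implementation.
T_tx, T_c = 0.40, 0.15   # intended (hypothetical units)
t_tx, t_c = 0.30, 0.15   # achieved (hypothetical units)

theoretical_relative_strength = T_tx - T_c   # what the design promised
achieved_relative_strength = t_tx - t_c      # what implementation delivered
infidelity = T_tx - t_tx                     # degradation in the treatment arm

print(f"Theoretical relative strength: {theoretical_relative_strength:.2f}")  # 0.25
print(f"Achieved relative strength:    {achieved_relative_strength:.2f}")     # 0.15
print(f"Infidelity (Tx arm):           {infidelity:.2f}")                     # 0.10
```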
Slide 4
What Implementation Assessment Typically Captures

1. Essential or core components (activities, processes, structures)
2. Necessary, but not unique, activities, processes, and structures (those supporting the essential components of Tx)
3. Best practices
4. Ordinary features of the setting (shared with the control group)

Intervention fidelity assessment concerns the treatment-specific categories, the components that define and support Tx, rather than features shared with the control group.
Slide 5
Why Is This Important?

Construct validity:
- Which is the cause: (T_Tx − T_C) or (t_Tx − t_C)?
- Degradation can come from poor implementation, contamination, or similarity between Tx and C

External validity:
- Generalization is about t_Tx − t_C
- Implications for future specification of Tx
- Distinguishes program failure from implementation failure

Statistical conclusion validity:
- Variability in implementation increases error, and so reduces effect size and power
Slide 6
Why Is This Important? Reading First Implementation Results

Component            Sub-component                         RF      Non-RF   ARS
Reading Instruction  Daily (min.)                          105.0   87.0     0.63
                     Daily in 5 components (min.)          59.0    50.8     0.35
                     Daily with high-quality practice      18.1    16.2     0.11
                                                           Overall average  0.35

Effect-size impact of Reading First on reading outcomes = .05
Adapted from Gamse et al. (2008) and Moss et al. (2008). A sketch of the ARS computation follows.
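The ARS column appears to be on a standardized effect-size metric. Below is a minimal sketch assuming ARS is a standardized mean difference (Cohen's-d style, pooled SD) of the fidelity measure across conditions. The RF and non-RF means come from the table's first row; the standard deviations and sample sizes are invented for illustration, chosen so the result lands near the table's 0.63.

```python
import math

def achieved_relative_strength(mean_tx, mean_c, sd_tx, sd_c, n_tx, n_c):
    """Standardized mean difference of a fidelity measure between the
    treatment and control groups (an effect-size-style ARS index)."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_c - 1) * sd_c**2)
                          / (n_tx + n_c - 2))
    return (mean_tx - mean_c) / pooled_sd

# Daily minutes of reading instruction: means from the slide;
# SDs and sample sizes are hypothetical.
ars = achieved_relative_strength(105.0, 87.0, sd_tx=30.0, sd_c=27.0,
                                 n_tx=120, n_c=120)
print(f"ARS for daily minutes: {ars:.2f}")  # ~0.63
```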
Slide 7
5-Step Process (Cordray, 2007)

1. Specify the intervention model (conceptual)
2. Develop fidelity indices (measurement)
3. Determine reliability and validity (measurement)
4. Combine indices (analytical)
5. Link fidelity to outcomes (analytical; sketched below)
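As a minimal sketch of step 5 (linking fidelity to outcomes), the snippet below regresses a simulated outcome on a composite fidelity index using ordinary least squares. The data, the true slope, and the single-level model are all assumptions for illustration; a real analysis would also deal with condition, covariates, and clustering.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
fidelity = rng.uniform(0.0, 1.0, n)             # composite fidelity index in [0, 1]
outcome = 0.5 * fidelity + rng.normal(0, 1, n)  # simulated outcome, true slope 0.5

# OLS via least squares: outcome = b0 + b1 * fidelity + error
X = np.column_stack([np.ones(n), fidelity])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Estimated fidelity-outcome slope: {beta[1]:.2f}")
```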
Slide 8
Some Challenges

Intervention models:
- Unclear interventions
- Scripted vs. unscripted interventions
- Intervention components vs. best practices

Measurement:
- Novel constructs: standardize methods and reporting (i.e., ARS) but not measures (which are Tx-specific)
- Measure in both Tx and C
- Aggregation (or not) within and across levels

Analyses:
- Weighting of components (toy example below)
- Psychometric properties? Functional form?
- Analytic frameworks: descriptive vs. causal (e.g., ITT) vs. explanatory (e.g., LATE) (see Howard's talk next!)

Future implementation:
- Zone of tolerable adaptation
- Systematically test the impact of fidelity to core components
- Tx strength (e.g., ARS): how big is big enough?
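The weighting challenge can be made concrete with a toy composite. The sketch below combines the per-component ARS values from the Reading First slide into a single fidelity score; the weights are purely hypothetical, since the slide's point is that there is no standard way to choose them.

```python
# Per-component ARS values taken from the Reading First slide.
components = {
    "daily_minutes": 0.63,
    "five_components": 0.35,
    "high_quality_practice": 0.11,
}
# Hypothetical weights (summing to 1); equal weighting is a common default
# when theory does not rank the components.
weights = {
    "daily_minutes": 0.25,
    "five_components": 0.35,
    "high_quality_practice": 0.40,
}
composite = sum(weights[name] * ars for name, ars in components.items())
print(f"Weighted composite fidelity: {composite:.2f}")  # 0.32
```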
Slide 9
Treatment Strength (ARS): How Big Is Big Enough?

                      Effect size
Study                 Fidelity (ARS)   Outcome
Motivation – Lab      1.88             0.83
Motivation – Field    0.80             0.33
Reading First*        0.35             0.05

*Averaged over 1st, 2nd, and 3rd grades (Gamse et al., 2008).
Slide 10
Thank You!

And special thanks to my collaborators:
Catherine Darrow, Ph.D.
Amy Cassata-Widera, Ph.D.
David S. Cordray
Michael Nelson
Evan Sommer
Anne Garrison
Charles Munter
Slide 11
Chris Hulleman is an assistant professor at James Madison University with joint appointments in Graduate Psychology and the Center for Assessment and Research Studies. He also co-directs the Motivation Research Institute at James Madison. He received his PhD in social/personality psychology from the University of Wisconsin-Madison in 2007, and then spent two years as an Institute of Education Sciences Research Fellow at Vanderbilt University's Peabody College of Education. In 2009, he won the Pintrich Outstanding Dissertation Award from Division 15 (Educational Psychology) of the American Psychological Association. He teaches courses in graduate statistics and research methods, and serves as the assessment liaison for the Division of Student Affairs. His research focuses on motivation in academic, sport, work, and family settings. His methodological interests include developing guidelines for translating laboratory research into the field and developing indices of intervention fidelity. As a Research Affiliate of the National Center on Performance Incentives, Chris is involved in several randomized field experiments of teacher pay-for-performance programs in K-12 settings. His scholarship has been published in journals such as Science, Psychological Bulletin, Journal of Research on Educational Effectiveness, Journal of Educational Psychology, and Phi Delta Kappan.

Department of Graduate Psychology, James Madison University
hullemcs@jmu.edu
Slide 12
[Figure: treatment strength (left axis, 0.00 to 0.45) mapped against the outcome scale (right axis, 50 to 100), with markers at T_Tx, t_Tx, T_C, and t_C. Expected relative strength = T_Tx − T_C = 0.40 − 0.15 = 0.25; achieved relative strength = t_Tx − t_C = 0.15; infidelity (T_Tx vs. t_Tx) = 85 − 70 = 15 points on the outcome scale.]