
Conceptualizing Intervention Fidelity: Implications for Measurement, Design, and Analysis
Implementation Research Methods Meeting, September 20-21, 2010
Chris S. Hulleman, Ph.D.

Implementation vs. Implementation Fidelity
Implementation Assessment Continuum:
- Descriptive: What happened as the intervention was implemented?
- A priori model: How much, and with what quality, were the core intervention components implemented?
Fidelity: How faithful was the implemented intervention (t_Tx) to the intended intervention (T_Tx)?
Infidelity: T_Tx - t_Tx
Most assessments include both.
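The slide's notation can be stated compactly (a restatement, using the slide's T/t convention):

```latex
% Intended vs. implemented intervention strength
\begin{align*}
  T_{Tx} &: \text{intended (theoretical) strength of the treatment condition} \\
  t_{Tx} &: \text{implemented (achieved) strength of the treatment condition} \\
  \text{Fidelity}   &: \text{correspondence of } t_{Tx} \text{ to } T_{Tx} \\
  \text{Infidelity} &= T_{Tx} - t_{Tx}
\end{align*}
```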

Linking Fidelity to Causal Models
Rubin's Causal Model:
- True causal effect of X for unit i is Y_i(Tx) - Y_i(C)
- An RCT is the best approximation
- The Tx - C mean difference estimates the average causal effect
Fidelity Assessment:
- Examines the difference between implemented causal components in the Tx and C conditions
- This difference is the achieved relative strength (ARS) of the intervention
- Theoretical relative strength = T_Tx - T_C
- Achieved relative strength = t_Tx - t_C (an index of fidelity)
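In equation form (a sketch in potential-outcomes notation, parallel to the slide):

```latex
\begin{align*}
  \tau_i &= Y_i(Tx) - Y_i(C) && \text{unit-level causal effect} \\
  \bar{\tau} &= \bar{Y}_{Tx} - \bar{Y}_{C} && \text{average causal effect (RCT estimate)} \\
  T_{Tx} - T_{C} &= \text{theoretical relative strength} \\
  t_{Tx} - t_{C} &= \text{achieved relative strength (ARS)}
\end{align*}
```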

Implementation assessment typically captures:
(1) Essential or core components (activities, processes, structures)
(2) Necessary, but not unique, activities, processes, and structures (supporting the essential components of Tx)
(3) Best practices
(4) Ordinary features of the setting (shared with the control group)
Intervention fidelity assessment concentrates on the top of this list, the essential or core components, rather than on features shared with the control group.

Why is this Important?
Construct Validity:
- Which is the cause: (T_Tx - T_C) or (t_Tx - t_C)?
- Degradation due to poor implementation, contamination, or similarity between Tx and C
External Validity:
- Generalization is about t_Tx - t_C
- Implications for future specification of Tx
- Program failure vs. implementation failure
Statistical Conclusion Validity:
- Variability in implementation increases error, and reduces effect size and power
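A minimal simulation sketch of the last point, that variable implementation shrinks the observed effect (all numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # units per arm

def simulated_effect_size(fidelity_sd, true_effect=0.5, n_reps=2000):
    """Average observed standardized effect when each treatment unit
    receives a dose scaled by its (noisy) fidelity."""
    d_values = []
    for _ in range(n_reps):
        # Fidelity varies across treatment units around full implementation (1.0).
        fidelity = np.clip(rng.normal(1.0, fidelity_sd, n), 0, 1)
        y_t = true_effect * fidelity + rng.normal(0, 1, n)
        y_c = rng.normal(0, 1, n)
        pooled_sd = np.sqrt((y_t.var(ddof=1) + y_c.var(ddof=1)) / 2)
        d_values.append((y_t.mean() - y_c.mean()) / pooled_sd)
    return np.mean(d_values)

for sd in [0.0, 0.2, 0.4]:
    print(f"fidelity SD = {sd:.1f} -> mean observed d = {simulated_effect_size(sd):.3f}")
```

As the spread in fidelity grows, the mean observed d falls below the true effect of 0.5, which is exactly the statistical-conclusion-validity threat the slide names.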

Why is this important? Reading First implementation results
Table: achieved relative strength (ARS) of reading instruction sub-components (daily minutes of reading instruction, daily minutes in the 5 core components, daily instruction with high-quality practice), comparing performance levels in Reading First (RF) vs. non-RF schools. Overall average ARS = 0.35. Adapted from Gamse et al. (2008) and Moss et al. (2008).
Effect size impact of Reading First on reading outcomes = .05.
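ARS here is a standardized difference in implementation between conditions. A sketch of one common way to compute it (the pooled-SD standardization is an assumption, and the data are invented):

```python
import numpy as np

def achieved_relative_strength(fidelity_tx, fidelity_c):
    """Standardized mean difference in a fidelity measure between
    treatment and control groups: (t_Tx - t_C) / pooled SD."""
    fidelity_tx = np.asarray(fidelity_tx, dtype=float)
    fidelity_c = np.asarray(fidelity_c, dtype=float)
    n_t, n_c = len(fidelity_tx), len(fidelity_c)
    pooled_var = ((n_t - 1) * fidelity_tx.var(ddof=1)
                  + (n_c - 1) * fidelity_c.var(ddof=1)) / (n_t + n_c - 2)
    return (fidelity_tx.mean() - fidelity_c.mean()) / np.sqrt(pooled_var)

# Invented example: daily minutes of reading instruction, RF vs. non-RF schools.
rf = [92, 88, 95, 90, 85, 91]
non_rf = [80, 84, 78, 82, 79, 83]
print(f"ARS = {achieved_relative_strength(rf, non_rf):.2f}")
```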

5-Step Process (Cordray, 2007)
1. Specify the intervention model
2. Develop fidelity indices
3. Determine reliability and validity
4. Combine indices
5. Link fidelity to outcomes
The steps group into three phases: conceptual (step 1), measurement (steps 2-4), and analytical (step 5).
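Step 4 (combining indices) admits many choices; a minimal sketch of one option, z-scoring each index and taking a weighted mean (equal weighting and the example scores are assumptions, and the talk flags weighting as an open issue):

```python
import numpy as np

def combine_fidelity_indices(indices, weights=None):
    """Combine several fidelity indices into one composite per unit:
    z-score each index across units, then take a (weighted) mean."""
    indices = np.asarray(indices, dtype=float)  # shape: (n_units, n_indices)
    z = (indices - indices.mean(axis=0)) / indices.std(axis=0, ddof=1)
    if weights is None:
        weights = np.full(indices.shape[1], 1 / indices.shape[1])
    return z @ np.asarray(weights, dtype=float)

# Invented example: 5 classrooms scored on exposure (min), adherence, and quality.
scores = np.array([
    [30, 0.90, 4.0],
    [25, 0.70, 3.5],
    [35, 0.95, 4.5],
    [20, 0.60, 3.0],
    [28, 0.80, 4.2],
])
print(combine_fidelity_indices(scores).round(2))
```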

Some Challenges
Intervention Models:
- Unclear interventions
- Scripted vs. unscripted
- Intervention components vs. best practices
Measurement:
- Novel constructs: standardize methods and reporting (i.e., ARS) but not measures (Tx-specific)
- Measure in both Tx and C
- Aggregation (or not) within and across levels
Analyses:
- Weighting of components: psychometric properties? functional form?
- Analytic frameworks: descriptive vs. causal (e.g., ITT) vs. explanatory (e.g., LATE); see the sketch after this list, and see Howard's talk next!
Future Implementation:
- Zone of tolerable adaptation
- Systematically test impact of fidelity to core components
- Tx strength (e.g., ARS): How big is big enough?
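To make the causal (ITT) vs. explanatory (LATE) distinction concrete, a toy sketch with invented data: the ITT contrast is diluted by imperfect implementation, while the Wald (instrumental-variables) estimate rescales it by the difference in actual treatment receipt:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

assigned = rng.integers(0, 2, n)          # random assignment (Z)
# Imperfect implementation: 70% of assigned units actually receive Tx,
# and 10% of controls get it anyway (contamination). Rates are invented.
received = np.where(assigned == 1,
                    rng.random(n) < 0.7,
                    rng.random(n) < 0.1).astype(int)
y = 0.5 * received + rng.normal(0, 1, n)  # true effect of receipt = 0.5

itt = y[assigned == 1].mean() - y[assigned == 0].mean()
uptake = received[assigned == 1].mean() - received[assigned == 0].mean()
late = itt / uptake                       # Wald / IV estimator

print(f"ITT  = {itt:.2f}")   # diluted by imperfect implementation
print(f"LATE = {late:.2f}")  # ~0.5: effect for compliers
```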

Treatment Strength (ARS): How Big is Big Enough?
Table: fidelity ARS vs. outcome effect size for three studies: Motivation (lab), Motivation (field), and Reading First*.
*Averaged over 1st, 2nd, and 3rd grades (Gamse et al., 2008).

Thank You!
And special thanks to my collaborators: Catherine Darrow, Ph.D., Amy Cassata-Widera, Ph.D., David S. Cordray, Michael Nelson, Evan Sommer, Anne Garrison, and Charles Munter.

Extras and Notes

Figure: treatment strength continuum (x-axis: treatment strength; y-axis: outcome) marking the intended conditions (T_C, T_Tx) and the implemented conditions (t_C, t_Tx).
- "Infidelity" in the treatment arm: T_Tx - t_Tx (here 85 - 70 = 15)
- Expected relative strength = T_Tx - T_C
- Achieved relative strength = 0.15

Linking Fidelity to Outcomes

Concerns and Questions
- Best practices vs. model-specific fidelity (i.e., be wary of measures that find 100% fidelity!)
- Fidelity as achieved relative strength (ARS): How much ARS is enough?
- How to include ARS/fidelity in an analytic framework:
  - Weighting of core components
  - Combining (or not) of indices
  - Analytic framework: descriptive, causal (ITT), explanatory (LATE, TOT, instrumental variables)
- ARS within multi-level models: combining indices within and across levels
- Zone of tolerable adaptation:
  - Researcher vs. teacher perspective (to a teacher, strict fidelity can read as "I'm just like everyone else" when the job calls for creativity)
  - Fidelity to process vs. structure
- Fidelity as a moderator/mediator (see the sketch below):
  - When does fidelity = mediation, and when does it not?
  - Tolerable adaptation vs. moderation
- Developmental studies:
  - Develop valid and reliable indices
  - Determine which components matter
  - Implementation drivers
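One of the analytic questions above, fidelity as a moderator, can be made concrete with a toy regression (invented data; plain OLS via numpy, with a treatment-by-fidelity interaction as the moderator term):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

tx = rng.integers(0, 2, n).astype(float)                      # assignment
fidelity = np.where(tx == 1, rng.uniform(0.4, 1.0, n), 0.0)   # fidelity only in Tx
# Outcome: the treatment works only insofar as it is implemented.
y = 0.8 * fidelity + rng.normal(0, 1, n)

# Model: y = b0 + b1*tx + b2*(tx * fidelity)
X = np.column_stack([np.ones(n), tx, tx * fidelity])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"b(tx)          = {beta[1]:.2f}")  # near 0: assignment alone does little
print(f"b(tx*fidelity) = {beta[2]:.2f}")  # near 0.8: effect scales with fidelity
```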