Progressing Toward a Shared Set of Methods and Standards for Developing and Using Measures of Implementation Fidelity
Discussant Comments Prepared by Carol O'Donnell, Institute of Education Sciences
Society for Research on Educational Effectiveness Annual Meeting, March 5, 2010
Introduction
Thank you to the authors for inviting me to participate in this symposium and for providing their papers in advance. I commend the authors on their innovative work and on guiding the field through this new and largely unexplored territory. The remarks that follow are my own and do not necessarily represent the views of the US Department of Education or the Institute of Education Sciences.
Papers' Common Themes: What problems exist?
Michael's "black box": to determine the effectiveness of an intervention, we need to define both the treatment and its counterfactual. If "it" works, what is "it"?
Developers often fail to identify the "critical components" of an intervention.
Researchers often fail to measure whether components were delivered as intended, whether those components had a greater influence on the dependent variable than the comparison condition did, and how implementation differed between the treatment and comparison groups.
Common Themes: What problems exist?
Most fidelity studies are correlational, not causal.
Analyses show that the effect sizes obtained when interventions are implemented under routine classroom conditions are smaller than those obtained in laboratory trials. Meta-analyses comparing efficacy studies (ideal conditions) with effectiveness studies (routine conditions) find the same pattern (Lipsey, 1999; Petrosino & Soydan, 2005; Weisz, Weiss, & Donenberg, 1992).
Common Themes: Summary of Problems
A. Lack of an attempt to define the intervention a priori (using a logic or change model).
B. Lack of consensus on how to measure fidelity.
C. Lack of attempts to use fidelity to analyze outcomes, especially when fidelity is multidimensional.
D. Lack of consistency on when and how to assess and report fidelity.
Solutions to Problems A-D (corresponding roughly to the authors' five-step procedure for fidelity assessment)
Solution A: Define the intervention a priori
Developers need to make the critical components explicit to users, distinguishing structural components from process components (see the sketch below).
The papers draw a nice distinction between a change model and a logic model, and between a descriptive model of implementation (what transpired as the intervention was put in place) and an a priori model with explicit expectations about how program components will be implemented. Michael framed this as "intervention as implemented" vs. "intervention as designed."
Anne noted the challenges that arise when an intervention is unscripted.
It is not clear how we know which components are critical; exploratory work should precede development.
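As a purely illustrative aside, one way to make critical components explicit a priori is to record each component with its kind (structure vs. process) and an observable indicator. The class and component names below are hypothetical, not any developer's actual specification:

```python
# Hypothetical sketch: recording an intervention's critical components
# a priori, separating structural components (what is delivered) from
# process components (how it is delivered). All names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    name: str
    kind: str          # "structure" or "process"
    indicator: str     # observable evidence that the component occurred

@dataclass
class InterventionModel:
    name: str
    components: List[Component] = field(default_factory=list)

    def critical(self, kind: str) -> List[Component]:
        """Return the critical components of one kind."""
        return [c for c in self.components if c.kind == kind]

model = InterventionModel("Inquiry Science Curriculum", [
    Component("daily 45-minute lesson", "structure", "lesson log"),
    Component("teacher poses open questions", "process", "observation rubric"),
])
print([c.name for c in model.critical("process")])
```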
Solution B: Measure fidelity
Catherine: it is important to distinguish "fidelity to process" from "global class quality" (context variables such as good classroom management). "Global class quality" variables are fidelity variables only if they are promoted by the intervention; otherwise, they are a construct separate from fidelity. Such variables may nonetheless alter the effects of fidelity on outcomes (a moderation or mediation model), so they are important to measure; a sketch of the moderation idea follows below.
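Purely as an illustration, here is a minimal sketch of testing that moderation question with a simple linear interaction model; the variable names (fidelity, class_quality, outcome) and the simulated data are my own assumptions, not drawn from any of the papers:

```python
# Hypothetical sketch of the moderation idea: does "global class
# quality" change the effect of fidelity on outcomes?
# Requires pandas and statsmodels; all data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "fidelity": rng.uniform(0, 1, n),        # % of components implemented
    "class_quality": rng.uniform(0, 1, n),   # separate context construct
})
# Simulated outcome: fidelity matters more in well-managed classrooms.
df["outcome"] = (0.5 * df.fidelity + 0.3 * df.class_quality
                 + 0.8 * df.fidelity * df.class_quality
                 + rng.normal(0, 0.5, n))

# The fidelity:class_quality interaction term tests moderation.
fit = smf.ols("outcome ~ fidelity * class_quality", data=df).fit()
print(fit.summary().tables[1])
```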
Solution B: Measure fidelity (continued)
When possible, include fidelity measures in meta-analyses of efficacy studies.
For sets of studies with broad overarching goals (such as PCER), consider whether a global fidelity model and measure is feasible despite the multiple interventions involved.
Solution B: Measure fidelity
I commend the authors who argue for a standard for fidelity measures in education program evaluation. What can we do to promote standardization?
–IES requires researchers to conceptualize fidelity during development, assess fidelity during efficacy and scale-up evaluations, and use fidelity data in analyses (the RFA now cites Hulleman & Cordray, 2009).
–Teacher education programs and professional development should help teachers become good consumers of curriculum materials and understand the role fidelity plays in a curriculum's selection, implementation, and evaluation.
–Psychometricians should be involved in the development of fidelity measures, examining their validity and reliability (see the reliability sketch below).
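To illustrate one such psychometric check, here is a minimal sketch, assuming a multi-item 0/1 fidelity checklist scored across observed lessons; the cronbach_alpha helper and the simulated data are hypothetical, not any author's instrument:

```python
# Hypothetical sketch: Cronbach's alpha as an internal-consistency
# check on a multi-item fidelity checklist. Rows are observed lessons,
# columns are checklist items scored 0/1; data are simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_observations, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
base = rng.integers(0, 2, size=(30, 1))            # shared lesson tendency
checklist = np.clip(base + rng.integers(-1, 2, size=(30, 8)), 0, 1)
print(f"alpha = {cronbach_alpha(checklist):.2f}")
```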
Solution C: Analyze fidelity data
Michael mentioned the limitations of the intent-to-treat experimental model for explaining effects.
Analysis of fidelity data varies greatly:
–Descriptive: frequency or percentage of fidelity
–Associative: simple correlations
–Predictive: explaining variance in outcomes
–Causal: random assignment of teachers to high- and low-fidelity conditions (rare)
–Impact: fidelity as a third variable (moderator or mediator)
–Adjusting outcomes: achieved relative strength vs. comparing effect sizes under fidelity and infidelity
Analyses of fidelity data are often disparate; what does it all mean? We need more complete fidelity assessment to better understand construct validity and generalizability. (The first three levels are sketched in code below.)
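As a hedged illustration of the first three analysis levels (descriptive, associative, predictive), here is a minimal Python sketch on simulated classroom-level data; all variable names and values are invented for the example:

```python
# Hypothetical sketch of three analysis levels applied to simulated
# classroom-level fidelity and achievement data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 60
df = pd.DataFrame({"fidelity": rng.uniform(0.4, 1.0, n)})
df["achievement"] = 50 + 20 * df.fidelity + rng.normal(0, 5, n)

# Descriptive: how much of the intervention was delivered?
print("mean fidelity:", round(df.fidelity.mean(), 2))
# Associative: simple correlation with outcomes.
print("r =", round(df.fidelity.corr(df.achievement), 2))
# Predictive: variance in outcomes explained by fidelity.
fit = smf.ols("achievement ~ fidelity", data=df).fit()
print("R^2 =", round(fit.rsquared, 2))
```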
Solution C: Analyze fidelity data
Achieved fidelity vs. achieved relative strength.
Chris: an intervention may fail to show effects because it does not differ from the control condition on its core components. (A sketch of an achieved-relative-strength computation follows below.)
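In the spirit of Hulleman and Cordray (2009), one common way to operationalize achieved relative strength is as a standardized mean difference on the same fidelity index between treatment and control groups; the sketch below assumes that operationalization, and the data are simulated:

```python
# Hypothetical sketch: achieved relative strength as the standardized
# difference between treatment and control on one fidelity index. An
# intervention that barely differs from its counterfactual on core
# components yields a small value even if treatment fidelity is high.
import numpy as np

def achieved_relative_strength(tx: np.ndarray, ctrl: np.ndarray) -> float:
    """Standardized mean difference in fidelity, treatment vs. control."""
    n1, n2 = len(tx), len(ctrl)
    pooled_sd = np.sqrt(((n1 - 1) * tx.var(ddof=1)
                         + (n2 - 1) * ctrl.var(ddof=1)) / (n1 + n2 - 2))
    return (tx.mean() - ctrl.mean()) / pooled_sd

rng = np.random.default_rng(3)
tx = rng.normal(0.85, 0.10, 25)    # fidelity index in treatment classes
ctrl = rng.normal(0.70, 0.10, 25)  # same components observed in control
print(f"ARS = {achieved_relative_strength(tx, ctrl):.2f}")
```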
Solution D: Standardize when and how to assess fidelity
Monitor fidelity during efficacy studies to gain confidence that outcomes are due to the intervention (internal validity) and, as Michael pointed out, to distinguish implementation failure from program failure.
Determine whether results obtained under a specific organizational structure replicate under other structures (Chris's operational model).
Anne discussed fidelity's impact on ongoing program development: fidelity results should inform revisions. Also decide which components of the intervention can be modified to better suit specific conditions without affecting its efficacy.
Solution D: Standardize when and how
Understand the implementation conditions, tools, and processes needed to reproduce positive effects under routine practice on a large scale (if fidelity is a moving target, the generalizability of scale-up research may be imperiled).
Decide what training is required to give those who will deliver the intervention the knowledge and skills to deliver it as intended. Decide whether training is part of "it".
Questions to Consider
Questions to Consider: Define the intervention
What role does the counterfactual play in the conception of the change and logic models? Is the structure-and-process framework similar to the change model (constructs/process) vs. logic model (structure) distinction?
Should fidelity measures for the comparison group (process only?) differ from fidelity measures for the treatment group (structure and process)?
Positive infidelity: was this a result of "global class quality"? Is it really "infidelity" (failing to implement a critical component), or was it simply supplementing the curriculum (which has been shown to enhance outcomes, as long as fidelity is high)?
Questions to Consider: Measure fidelity
Is one fidelity measure enough? Multiple measures enrich fidelity data but complicate the model conceptually and structurally when the measures are integrated, and they may inflate the standard errors of parameter estimates; there is a need for parsimony.
Are scores on items additive? Subjects receiving the same total fidelity score may have very different implementation profiles (and ordinal scores are not interval); see the small example below.
Can fidelity measures be universal, or must they be program-specific? Standardize methods, not measures.
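A tiny, purely hypothetical example of the additivity problem: the component names and scores below are invented, but they show how one summed score can mask two very different profiles:

```python
# Hypothetical sketch: two teachers with identical total fidelity
# scores but different implementation profiles, which a single summed
# score cannot distinguish.
items = ["pacing", "questioning", "materials", "assessment"]
teacher_a = {"pacing": 2, "questioning": 2, "materials": 0, "assessment": 0}
teacher_b = {"pacing": 1, "questioning": 1, "materials": 1, "assessment": 1}

assert sum(teacher_a.values()) == sum(teacher_b.values()) == 4
# Same score, different profiles: A skipped two critical components
# entirely, B partially implemented all four.
for name, profile in [("A", teacher_a), ("B", teacher_b)]:
    print(name, [profile[i] for i in items])
```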
Questions to Consider: Analyze fidelity data
What are the dangers of removing low-fidelity implementers from the sample, or of a median split into high- and low-fidelity users (which discards continuous information)?
If an intervention is implemented at the classroom level, what is the unit of analysis? Think about participant responsiveness at each level (student fidelity, teacher fidelity, school fidelity). Catherine pointed out that child fidelity is often ignored.
What role does HLM play? (A multilevel-model sketch follows below.)
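To make the nesting question concrete, here is a minimal sketch of a two-level random-intercept model (students within classrooms) using statsmodels; the data and variable names are simulated assumptions, not results from any of the papers:

```python
# Hypothetical sketch: students (level 1) nested in classrooms
# (level 2), with fidelity measured at the classroom level.
# Random-intercept model via statsmodels MixedLM; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_classes, n_students = 30, 20
classroom = np.repeat(np.arange(n_classes), n_students)
fidelity = rng.uniform(0.4, 1.0, n_classes)[classroom]  # class-level IV
class_effect = rng.normal(0, 2, n_classes)[classroom]   # random intercept
score = 50 + 15 * fidelity + class_effect + rng.normal(0, 5, len(classroom))
df = pd.DataFrame({"classroom": classroom, "fidelity": fidelity,
                   "score": score})

fit = smf.mixedlm("score ~ fidelity", df, groups=df["classroom"]).fit()
print(fit.summary())
```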
Questions to Consider: Standardize when and how
How do researchers encourage fidelity when creativity, variability, and local adaptation are encouraged? How do we distinguish between fidelity and adaptation?
Are the intervention effects that were established under careful monitoring financially feasible to reproduce?
Bottom Line: Can the intervention be implemented with adequate fidelity under conditions of routine practice and yield positive results?
Source: O'Donnell, 2008a
Thank you.