1
Developing, Measuring, and Improving Program Fidelity: Achieving Positive Outcomes Through High-Fidelity Implementation
SPDG National Conference, Washington, DC, March 5, 2012
Allison Metz, PhD, Associate Director, NIRN
Frank Porter Graham Child Development Institute, University of North Carolina
2
Program Fidelity: 6 Questions
1. What is it?
2. Why is it important?
3. When are we ready to assess fidelity?
4. How do we measure fidelity?
5. How can we produce high-fidelity use of interventions in practice?
6. How can we use fidelity data for program improvement?
3
“PROGRAM FIDELITY”
“The degree to which the program or practice is implemented ‘as intended’ by the program developers and researchers.”
“Fidelity measures detect the presence and strength of an intervention in practice.”
4
What is fidelity? (Question 1)
Three components:
- Context: structural aspects that encompass the framework for service delivery
- Compliance: the extent to which the practitioner uses the core program components
- Competence: process aspects that encompass the level of skill shown by the practitioner and the “way in which the service is delivered”
5
Why is fidelity important? (Question 2)
- Interpret outcomes: is this an implementation challenge or an intervention challenge?
- Detect variations in implementation
- Replicate consistently
- Ensure compliance and competence
- Develop and refine interventions in the context of practice
- Identify the “active ingredients” of a program
6
Why is fidelity important? (Question 2)
Effective Interventions (the “WHAT”) × Effective Implementation (the “HOW”) = Positive Outcomes for Children
7
Implementation Science
How intervention effectiveness and implementation effectiveness combine (Institute of Medicine, 2000; 2001; 2009; New Freedom Commission on Mental Health, 2003; National Commission on Excellence in Education, 1983; Department of Health and Human Services, 1999):
- Effective intervention, effective implementation: actual benefits
- Effective intervention, NOT effective implementation: inconsistent; not sustainable; poor outcomes
- NOT effective intervention, effective implementation: unpredictable or poor outcomes
- NOT effective intervention, NOT effective implementation: poor outcomes; sometimes harmful
From Mark Lipsey’s 2009 meta-analytic overview of the primary factors that characterize effective juvenile offender interventions: “... in some analyses, the quality with which the intervention is implemented has been as strongly related to recidivism effects as the type of program, so much so that a well-implemented intervention of an inherently less efficacious type can outperform a more efficacious one that is poorly implemented.”
8
When are we ready to assess fidelity? (Question 3)
Operationalize (verb): to define a concept or variable so that it can be measured or expressed quantitatively (Webster’s New Millennium Dictionary of English, Lexico Publishing Group).
The “it” must be operationalized whether it is:
- an evidence-based practice or program
- a best practice initiative or new framework
- a systems change initiative or element
9
How developed is your WHAT?
If implementing an established approach:
- Does this approach involve the implementation of an evidence-based program or practice that has been effectively implemented in other locations?
- Does this approach involve purveyor or other “expert” support?
- How well defined are the critical components of the approach?
If implementing an emerging approach:
- Does this approach involve the implementation of an evidence-informed approach that hasn’t been implemented often, or ever?
- To what extent is the approach still being developed or fine-tuned?
- How clearly defined are the critical components of the approach?
10
Developing Practice Profiles
Each critical component is a heading. For each critical component, identify (see the sketch below):
- “gold standard” practice (“expected”)
- developmental variations in practice
- ineffective and undesirable practices
Adapted from work of the Heartland Area Education Agency 11, Iowa.
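The practice profile structure above lends itself to a simple data representation. Below is a minimal sketch in Python; the `PracticeProfile` class, the component name, and all example practice statements are hypothetical illustrations, not taken from the Heartland AEA 11 materials.

```python
from dataclasses import dataclass, field

@dataclass
class PracticeProfile:
    """One critical component (the heading) with its practice variations."""
    component: str
    expected: list = field(default_factory=list)       # "gold standard" practice
    developmental: list = field(default_factory=list)  # acceptable variations while learning
    ineffective: list = field(default_factory=list)    # ineffective/undesirable practices

# Hypothetical example for a parent-partnering component:
partnering = PracticeProfile(
    component="Encourage parent involvement in educational decision-making",
    expected=["Parent/teacher meetings jointly set goals for child progress"],
    developmental=["Teacher sets goals, then reviews them with the parent"],
    ineffective=["Goals are set and filed without any parent contact"],
)
```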
11
Developing Practice Profiles
[Example practice profile omitted. Adapted from work of the Heartland Area Education Agency 11, Iowa.]
12
How do we measure fidelity? (Question 4)
Establish fidelity criteria if not yet developed:
1. Identify critical components, operationalize them, and determine indicators.
   a. Describe data sources.
   b. Make indicators as objective as possible (e.g., anchor points for rating scales).
2. Collect data to measure these indicators, “preferably through a multi-method, multi-informant approach” (Mowbray et al., 2003).
3. Examine the measures in terms of reliability and validity.
A sketch of steps 1 and 2 in code follows.
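As one way to make the first two steps concrete, here is a minimal sketch of operationalized indicators with anchored rating points and named data sources, plus one simple scoring choice. All component names, anchors, ratings, and the 0-100 rescaling are hypothetical assumptions, not part of the original presentation.

```python
# Anchor points make ratings as objective as possible (step 1b).
ANCHORS = {0: "not observed", 1: "partially in place", 2: "fully in place"}

# Each indicator names its data source (step 1a); mixing sources supports a
# multi-method, multi-informant approach (Mowbray et al., 2003).
indicators = [
    {"component": "partnering",
     "indicator": "parent/teacher goal-setting meeting held",
     "source": "observation", "rating": 2},
    {"component": "partnering",
     "indicator": "parent reports feeling included",
     "source": "parent survey", "rating": 1},
]

def fidelity_score(items):
    """Mean rating rescaled to 0-100; one scoring choice among many."""
    max_rating = max(ANCHORS)
    return 100 * sum(i["rating"] for i in items) / (max_rating * len(items))

print(fidelity_score(indicators))  # 75.0
```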
13
How do we measure fidelity? (Question 4)
Staff performance assessments serve as a mechanism to begin to identify “process” aspects of fidelity for newly operationalized programs.
Contextual (or structural) aspects of fidelity are “in service to” adherence and competence (a checklist sketch follows):
- length, intensity, and duration of service (dosage)
- roles and qualifications of staff
- training and coaching procedures
- case protocols and procedures
- administrative policies
- data collection requirements
- inclusion/exclusion criteria for the target population
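Because contextual aspects are largely yes/no structural facts, they can be checked mechanically. The sketch below, with hypothetical field names and thresholds, illustrates a structural fidelity checklist applied to a single service record.

```python
# Hypothetical structural checks mirroring the aspects listed above.
structural_checks = {
    "dosage_met": lambda r: r["sessions_delivered"] >= r["sessions_planned"],
    "staff_qualified": lambda r: r["staff_credential"] in {"certified", "licensed"},
    "coaching_in_place": lambda r: r["coaching_hours_per_month"] >= 2,
}

def failed_context_checks(record):
    """Return the names of the structural checks this record fails."""
    return [name for name, check in structural_checks.items() if not check(record)]

record = {"sessions_delivered": 10, "sessions_planned": 12,
          "staff_credential": "certified", "coaching_hours_per_month": 3}
print(failed_context_checks(record))  # ['dosage_met']
```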
14
Performance Assessment
- Start with the Expected/Proficient column.
- Develop an indicator for each Expected/Proficient activity.
- Identify “evidence” that this activity has taken place.
- Identify “evidence” that this activity has taken place with high quality.
- Identify potential data source(s).
15
Fidelity Criteria: Parent Involvement and Leadership Practice Profile (Partnering)
Expected/Proficient activity: Encourage and include parent involvement in educational decision-making
- Indicator that activity is happening (adherence): parent/teacher meetings take place to develop goals and plans for child progress
  - Potential data sources: observation; documentation
- Indicator that activity is happening well (competence): parent feels included and respected
  - Potential data source: Parent Partnering Survey
16
How do we measure fidelity? (Question 4)
If fidelity criteria are already developed:
1. Understand the reliability and validity of the instruments.
   a. Are we measuring what we thought we were?
   b. Is fidelity predictive of outcomes?
   c. Does the fidelity assessment discriminate between programs?
2. Work with program developers or purveyors to understand the detailed protocols for data collection.
   a. Who collects the data (expert raters, teachers)?
   b. How often are data collected?
   c. How are data scored and analyzed?
3. Understand the issues (reliability, feasibility, cost) in collecting different kinds of fidelity data.
   a. Process data vs. structural data
A sketch of one reliability check follows.
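One common reliability check for step 1 is agreement between two raters who score the same observations. The sketch below implements Cohen's kappa from scratch; the ratings are hypothetical, and a real protocol (step 2) would come from the program developers or purveyors.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = [2, 1, 2, 0, 2, 1, 2, 2]  # expert rater's item ratings (hypothetical)
b = [2, 1, 1, 0, 2, 1, 2, 0]  # second rater's ratings of the same items
print(round(cohens_kappa(a, b), 2))  # 0.61
```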
17
How do we measure fidelity? (Question 4)
If adapting an approach:
- How well “developed” is the program or practice being adapted? (Winter & Szulanski, 2001)
- Have core program components been identified?
- Do adaptations change function or form?
- How will adaptation affect fidelity criteria and assessments?
18
How do we measure fidelity? (Question 4)
Steps to measuring fidelity (new or established criteria):
1. Assure fidelity assessors are available, understand the program or innovation, and are well versed in the education setting.
2. Develop a schedule for conducting fidelity assessments.
3. Assure adequate preparation for the teachers/practitioners being assessed.
4. Report results of the fidelity assessment promptly.
5. Enter results into a decision-support data system (see the sketch below).
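For step 5, the "decision-support data system" can be anything from a spreadsheet to a database. As a minimal sketch, the snippet below uses SQLite with a hypothetical schema; it illustrates the idea, not the system any particular program uses.

```python
import sqlite3

con = sqlite3.connect("fidelity.db")  # hypothetical local data store
con.execute("""CREATE TABLE IF NOT EXISTS fidelity_assessment (
    practitioner TEXT, assessed_on TEXT, component TEXT, score REAL)""")
con.execute("INSERT INTO fidelity_assessment VALUES (?, ?, ?, ?)",
            ("teacher_01", "2012-03-05", "partnering", 75.0))
con.commit()

# Prompt reporting (step 4): each practitioner's latest date and mean score.
for row in con.execute("""SELECT practitioner, MAX(assessed_on), AVG(score)
                          FROM fidelity_assessment GROUP BY practitioner"""):
    print(row)
```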
19
How can we produce high-fidelity implementation in practice? (Question 5)
- Build, improve, and sustain practitioner competency.
- Create hospitable organizational and systems environments.
- Use appropriate leadership strategies.
20
“IMPLEMENTATION DRIVERS”
Common features of successful supports that help make full and effective use of a wide variety of innovations.
21
[Implementation Drivers diagram, © Fixsen & Blase, 2008. The drivers are integrated and compensatory, supporting effective education strategies and improved outcomes for children and youth:
- Competency Drivers: Selection, Training, Coaching, Performance Assessment (Fidelity)
- Organization Drivers: Decision Support Data System, Facilitative Administration, Systems Intervention
- Leadership: Technical and Adaptive]
22
Produce high-fidelity implementation? (Question 5)
Fidelity is an implementation outcome:
- Implementation Drivers influence how well or how poorly a program is implemented.
- The full and integrated use of the Implementation Drivers supports practitioners in consistent, high-fidelity implementation of a program.
- Staff performance assessments are designed to assess the use and outcomes of the skills required for high-fidelity implementation of a new program or practice.
23
Produce high-fidelity implementation? (Question 5)
Competency Drivers:
- Demonstrate knowledge, skills, and abilities.
- Practice to criteria.
- Coach for competence and confidence.
Organizational Drivers:
- Use data to assess fidelity and improve program operations.
- Administer policies and procedures that support high-fidelity implementation.
- Implement needed systems interventions.
Leadership Drivers:
- Use appropriate leadership strategies to identify and solve challenges to effective implementation.
24
Use fidelity data for program improvement? (Question 6)
Program Review: a process to create a sustainable improvement cycle for the program.
- Process and outcome data: measures, data sources, data collection plan
- Detection systems for barriers: roles and responsibilities
- Communication protocols: accountable, moving information up and down the system
Questions to ask (a sketch of the last one follows):
- What formal and informal data have we reviewed?
- What are the data telling us?
- What barriers have we encountered?
- Would improving the functioning of any Implementation Driver help address the barrier?
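To illustrate the last question, here is a hedged sketch that flags components whose latest fidelity score falls below a threshold and suggests which Implementation Driver to examine first. The scores, threshold, and component-to-driver mapping are all hypothetical assumptions.

```python
# Hypothetical fidelity score history per component (0-100 scale).
scores = {"partnering": [80, 75, 60], "goal_setting": [90, 88, 92]}
THRESHOLD = 70

# One plausible (assumed) mapping from a weak component to a driver.
driver_hints = {"partnering": "Coaching", "goal_setting": "Training"}

for component, history in scores.items():
    latest = history[-1]
    if latest < THRESHOLD:
        print(f"{component}: latest score {latest} is below {THRESHOLD}; "
              f"review the {driver_hints[component]} driver first")
```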
25
Summary: Program Fidelity
- Fidelity has multiple facets and is critical to achieving outcomes.
- Fully operationalized programs are prerequisites for developing fidelity criteria.
- Valid and reliable fidelity data need to be collected carefully, with guidance from program developers or purveyors.
- Fidelity is an implementation outcome; effective use of the Implementation Drivers can increase our chances of high-fidelity implementation.
- Fidelity data can and should be used for program improvement.
26
Resources: Program Fidelity
Examples of fidelity instruments:
- Teaching Pyramid Observation Tool for Preschool Classrooms (TPOT), Research Edition, Mary Louise Hemmeter and Lise Fox
- The PBIS fidelity measure (the SET), described at http://www.pbis.org/pbis_resource_detail_page.aspx?Type=4&PBIS_ResourceID=222
Articles:
- Sanetti, L., & Kratochwill, T. (2009). Toward developing a science of treatment integrity: Introduction to the special series. School Psychology Review, 38(4), 445-459.
- Mowbray, C.T., Holter, M.C., Teague, G.B., & Bybee, D. (2003). Fidelity criteria: Development, measurement and validation. American Journal of Evaluation, 24(3), 315-340.
- Hall, G.E., & Hord, S.M. (2011). Implementing change: Patterns, principles and potholes (3rd ed.). Boston: Allyn and Bacon.
27
Stay Connected!
nirn.fpg.unc.edu
www.scalingup.org
www.implementationconference.org
Allison.metz@unc.edu
nirn@unc.edu