1
Use of Fidelity Measurement in Statewide Technical Assistance Activities
Gary Bond Festschrift Conference
Indianapolis, IN
September 24, 2009
Angela L. Rollins, Ph.D.
ACT Center of Indiana
IUPUI Psychology Department
2
Overview
Background on fidelity use in general
Use of fidelity in technical assistance (TA) efforts across various EBPs
Some new directions
Further questions
3
Uses of Fidelity: Research
Internal validity: examine model adherence/drift
Facilitating communication in the literature
Synthesizing a body of research
Identifying critical ingredients of program models
Defining model adaptations
4
Uses of Fidelity: TA
Benchmarks to add perspective:
Over time
Comparison to other programs
Comparison to norms
Coaching:
Written report of strengths, weaknesses, and recommendations for improvement
Teaching moments within the assessment day itself
5
Uses of Fidelity: TA
Defining program standards/critical ingredients
Monitoring adherence to program standards (a tricky new area)
Springboard for other TA activities:
Data gathering (Indiana)
Outcomes data reporting (Kansas)
6
Indiana ACT Fidelity Approach
Reduced from 2 assessors to 1
Reduced from semi-annual to annual visits after the first year
Even with these reductions, as the number of teams grows and the number of consultants remains constant, fidelity visits take a bigger piece of the overall consulting pie
7
Indiana ACT Fidelity Approach
Testing phone fidelity measures (see McGrew talk tomorrow)
Level of burden has declined somewhat over time, even for onsite assessment:
Team leaders are more familiar with requests
Agencies create reports over time (trust?)
Assessors are more adept as well, completing assessments faster than in initial years
Still need ongoing support: reduce rater drift, give feedback on scoring decisions
8
Multiple Uses of the Visit
Data gathering:
Admission and discharge data for tracking
Tracking raw data used to compute fidelity scores:
raw turnover rates
actual FTE and caseload size
actual frequency and intensity of services
Teaching moments: in most cases, the most extensive contact with the site for the year
9
Multiple Uses of the Visit
Report the team's scores over time in each report (examine annual progress)
Additional monitoring of state standards compliance:
Additional information added to the report where the DACTS and state standards differ
Noncompliance reported to the team initially, and to the state if not remediated within ~90 days
Research on real-world data, but problems with restriction in range (state standards dictate high fidelity)
10
Indiana DACTS Data: mean (SD)

Time point  n    Human Res   Organization  Services    DACTS Total
Baseline    34   4.05 (.50)  4.14 (.56)    3.33 (.60)  3.81 (.43)
12 mos      33   4.28 (.34)  4.56 (.42)    3.70 (.52)  4.14 (.34)
24 mos      30   4.24 (.27)  4.68 (.23)    3.80 (.44)  4.20 (.26)
36 mos      27   4.28 (.30)  4.69 (.19)    3.80 (.35)  4.21 (.20)
48 mos      19   4.24 (.34)  4.65 (.18)    3.97 (.29)  4.25 (.20)
60 mos      13   4.33 (.34)  4.63 (.26)    3.82 (.58)  4.22 (.34)
72 mos      13   4.24 (.34)  4.67 (.11)    3.73 (.29)  4.17 (.17)
84 mos      2    4.41 (.32)  4.86 (.00)    3.95 (.21)  4.36 (.20)
11
Fidelity Monitoring: IMR
Semi-annual visits
A "baseline" after 3-6 months is more meaningful for program development
Emphasis on qualitative feedback rather than a numeric score
Data collection via observation of IMR sessions is vital
Immediate feedback at the end of the visit is important for the QI function of fidelity
In progress: fidelity via audiotaped sessions, based on Mueser's work
12
Across EBPs
The role of fidelity may vary in emphasis across EBPs
Example: Does fidelity measurement play as large a role in IMR implementation as it does for ACT?
Clinical competency needs are pronounced for IMR and IDDT (see Mueser talk tomorrow)
Program structural needs for ACT are very pronounced
13
Future Directions
Increasing calls to decrease the burden of fidelity measurement (e.g., "We want to use one FTE or less for monitoring our whole state")
Phone/remote, less intensive version (see McGrew)
Lower frequency of assessments
Screening approach:
key items
follow-up with a full visit if flagged during the screen
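The screening approach above can be sketched as a simple decision rule: score a small set of key fidelity items remotely, and trigger a full onsite visit only if a team is flagged. This is a minimal illustrative sketch, not part of the talk; the item names and the cutoff of 4 on the 1-5 scale are assumptions chosen for the example.

```python
# Hypothetical screening rule: flag a team for a full onsite fidelity
# visit if any key item falls below a cutoff. Item names and the cutoff
# are illustrative assumptions, not values given in the presentation.

KEY_ITEMS = ["caseload_size", "team_approach", "frequency_of_contact"]
CUTOFF = 4  # fidelity items are rated 1 (low) to 5 (high)

def needs_full_visit(scores: dict) -> bool:
    """Return True if any key item scores below the screening cutoff."""
    return any(scores[item] < CUTOFF for item in KEY_ITEMS)

# A team with one weak key item gets flagged for a full visit:
team_scores = {"caseload_size": 5, "team_approach": 3, "frequency_of_contact": 4}
print(needs_full_visit(team_scores))  # True: team_approach is below the cutoff
```

The appeal of a rule like this is that only flagged teams incur the cost of a full assessment, which is how a screening approach reduces the overall monitoring burden.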
14
Future Directions
Ohio is developing The Evaluation Database ("TED"):
Online data entry by the consultant after the visit
Perhaps during the onsite visit with a wireless card
More automated consensus and report functions
Tracks when consultants enter data and when reports go out: QA for the TA
Raw data elements as well, for fuller data gathering
IDDT (and SE?) so far; may collaborate to tailor it to ACT
Seeking funding to tie in consumer outcomes as well
Contact: debra.hrouda@case.edu
15
Questions to Ponder
Are statewide fidelity scores a good measure of the TA center's effectiveness?
Would outside fidelity assessors (outside the state/entity) produce similar results?
Beyond basic requirements for certification, what motivates teams to score high on a fidelity scale?
Can we marry research and practice when practice yields a restriction in range on fidelity scores?
16
Thank You
Charles Boyle
Michelle Salyers
Lia Hicks
Deb Hrouda