SSIP Evaluation Workshop 2

Presentation transcript:

SSIP Evaluation Workshop 2.0: Taking the Online Series to the Next Level
Evaluating Infrastructure Breakout
Improving Data, Improving Outcomes Pre-Conference
August 14, 2018

State Groupings for Breakout Sessions
Salon F (Practices): GA, MA, LA; CO, UT, AR; CT, PA, ID-B; HI, ID-C; IL, WY
Salon E (Infrastructure): CT, IL, CO; GA, FL

Expected Outcomes
Participants will increase awareness of:
- Existing tools to measure infrastructure outcomes
- Considerations for selecting or adapting a tool to measure results of infrastructure improvements
- Using multiple methods to evaluate infrastructure outcomes
- How one state adjusted its evaluation plan to measure infrastructure improvements, including selecting tools

Evaluating Infrastructure Improvements
Evaluate progress: How is implementation going?
- Not simply describing the activities that were implemented, but relating them to the initial analysis
- Reporting on benchmarks or other indicators of system change
Evaluate outcomes: What changes are we seeing? What's the impact of those changes?
- How will the infrastructure support local Early Intervention Programs in implementing EBPs?
- How will the infrastructure support scaling up and/or sustainability?
Progress toward the SiMR is the ultimate goal, but we all know that is going to take some time. So in addition, you're looking at progress in implementing the SSIP. This helps answer the questions: What is descriptively and operationally different about your system at the end of the SSIP cycle? What will things look like when you've changed/improved your infrastructure? What do you want to know about the change?

"To measure an outcome is to measure the end result, not the work involved in getting there".

Definitions: Outputs and Outcomes
Outputs: Direct, observable evidence that an activity has been completed as planned.
Outcomes: Statement of the benefit or change you expect as a result of the completed activities.
Outcomes can vary based on two dimensions:
- When you would expect the outcomes to occur, i.e., short-term, intermediate, or long-term (impact); and
- The level at which you are defining your outcome, e.g., state level, local/program level, practitioner, child/family.
Focus primarily on the difference between outputs and outcomes. Infrastructure improvement can happen at the state and local levels. For more information, see key terms and definitions in Evaluating Infrastructure Improvements Session 1 Pre-Work: https://dasycenter.org/wp-content/uploads/2018/01/Infrastructure_Session1_Pre-Work_011718_Final.docx

Example: Finance
Activity: Develop and implement a plan to improve the EI finance system to access additional Medicaid funds.
Output: Finance plan
Outcome: ???? What do you want your system to look like as a result of developing and implementing the finance plan to increase access to additional Medicaid funds?
Performance indicator: ??? How will you know you achieved the outcome?
Describe that a state has an issue with not accessing all available Medicaid funds for EI. As a result, this state is implementing an activity to develop and implement a plan to improve the EI finance system to access additional Medicaid funds. They have identified an output: having a finance plan developed.
Ask the participants: What might be an outcome of this activity? The question the group should consider in coming up with a potential outcome is: What do you want your system to look like as a result of developing and implementing the finance plan to increase access to Medicaid?
Once the group has identified a potential outcome, ask them to identify a potential performance indicator. The question they should consider in developing the performance indicator is: How will you know you achieved the outcome?
If the group falters, here is a possible short-term outcome and performance indicators:
Outcome: The number of EI providers that are approved as Medicaid providers increases.
Performance indicators:
- 80% of EI providers will initiate steps to become Medicaid providers by February 2018
- 70% of all EI providers will be approved as Medicaid providers by December 2018
Here is a possible intermediate outcome and performance indicators:
Outcome: The finance system will be enhanced to support and fund the EI system.
- 90% of the QIs in ECTA System Framework Finance subcomponents 1, 2, and 4 will have more than 50% of elements fully implemented by February 2019
- The total funding available to state EI will increase by $$$$$ by March 2019
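To make the performance-indicator idea concrete, here is a minimal Python sketch (not part of the workshop materials) that checks hypothetical provider counts against the example short-term targets above; the counts and the helper names (percent, indicator_met) are illustrative assumptions, only the 80%/70% targets come from the example.

```python
# Illustrative only: hypothetical counts checked against the example
# performance indicators above (80% initiated, 70% approved).

def percent(part: int, whole: int) -> float:
    """Return part as a percentage of whole."""
    return 100.0 * part / whole

def indicator_met(observed_pct: float, target_pct: float) -> str:
    """Report whether an observed percentage meets its target."""
    return "met" if observed_pct >= target_pct else "not met"

total_providers = 120        # hypothetical number of EI providers
initiated_steps = 101        # hypothetical: began Medicaid enrollment steps
approved_providers = 78      # hypothetical: approved as Medicaid providers

initiated_pct = percent(initiated_steps, total_providers)
approved_pct = percent(approved_providers, total_providers)

print(f"Initiated: {initiated_pct:.0f}% (target 80%) -> {indicator_met(initiated_pct, 80)}")
print(f"Approved:  {approved_pct:.0f}% (target 70%) -> {indicator_met(approved_pct, 70)}")
```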

Determining Data Collection Approach
- Start by considering existing tools relevant to your infrastructure improvement (e.g., ECTA System Framework, model developer tools, other frameworks). For the ECTA System Framework: Is there a component that aligns? If so, is there a subcomponent or quality indicator that aligns?
- Does the tool measure what you want it to measure? If not, can it be adapted?
- Will it measure improvements over time?
- What data do you already have (e.g., fiscal, personnel, accountability data) that can be used with the tool, or will you need to collect new data?
- What additional data could you collect to better understand infrastructure improvement (e.g., qualitative data)?
In determining their data collection approach, states need to take into account their capacity to do this work. Also, a tool might align well with what you want to measure but not align with the outcome or performance indicator in your evaluation plan. In that case, you may want to use the tool and revise your outcome and performance indicator so they align with each other.

Existing Tools for Evaluating Infrastructure
- ECTA System Framework
- State or Local Child Outcomes Measurement Framework
- Benchmarks of Quality for Home-Visiting Programs
- Model developer infrastructure tools
See Evaluating Infrastructure Improvements Session 2 Pre-Work: https://dasycenter.org/wp-content/uploads/2018/01/Infrastructure_Session2_Pre-Work_013118_FINAL.docx

ECTA System Framework: Quality Indicators / Elements of Quality
This is an example of one Quality Indicator and several of the Elements of Quality under the In-service PD and TA subcomponent of the Personnel/Workforce component of the System Framework. This is a screenshot of the Self-Assessment. (Explain that multiple stakeholders, through consensus, establish a rating for each Element of Quality based upon the available evidence for the Element. A rating for the Quality Indicator is automatically calculated from the ratings for each of the Elements. Note: I don't think it is necessary to describe the rating scales, e.g., 1-4 for the Elements and 1-7 for the Quality Indicators, unless there are questions; this is on the next slide.) The Personnel/Workforce component includes other subcomponents as well: state personnel standards, preservice personnel development, and recruitment and retention.

Measuring Improvement: Using Framework Self-Assessment Tools
Measure change over time, from Time 1 to Time 2:
- Compare QI ratings, e.g., Time 1 = 3, Time 2 = 5
- Compare percent of elements fully implemented, e.g., Time 1 = 20%, Time 2 = 50%
Compare to a standard:
- QI rating = 6 (at least 50% of elements are fully implemented and the rest are partially implemented)
- At least 50% of the elements are fully implemented
Quality Indicator rating scale, 1 to 7: none to all elements fully implemented.
Benchmark: A standard against which a program's results and progress can be compared. A benchmark is a similar measure for a similar group against which progress can be gauged.
A QI rating of 7 would be the highest (all elements are fully implemented); which of these two standards sets a higher bar? In addition to quantitative descriptors in the SSIP report, highlight examples of accomplishments achieved.
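As a rough illustration of the Time 1 vs. Time 2 comparison, here is a small Python sketch using hypothetical element ratings for one Quality Indicator. It assumes a 1-4 element scale where 4 means fully implemented, and it does not reproduce the ECTA Self-Assessment's own QI scoring rules; the data and names are invented for the example.

```python
# Illustrative only: hypothetical element ratings for one Quality Indicator
# at two time points. Assumes 4 = fully implemented on a 1-4 element scale.
# This does not replicate the ECTA Self-Assessment's automatic QI calculation.

FULLY_IMPLEMENTED = 4

def pct_fully_implemented(element_ratings):
    """Percent of elements rated as fully implemented."""
    full = sum(1 for r in element_ratings if r == FULLY_IMPLEMENTED)
    return 100.0 * full / len(element_ratings)

time1 = [2, 1, 3, 2, 4]   # hypothetical element ratings at Time 1
time2 = [4, 3, 4, 4, 4]   # hypothetical element ratings at Time 2

p1 = pct_fully_implemented(time1)
p2 = pct_fully_implemented(time2)

print(f"Fully implemented: Time 1 = {p1:.0f}%, Time 2 = {p2:.0f}%, change = {p2 - p1:+.0f} points")
print("Standard met (at least 50% fully implemented at Time 2):", p2 >= 50)
```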

Considerations for Tool Selection or Adaptation
- Is the tool aligned with the infrastructure improvements you are implementing? If not, could it be adapted? Is it measuring what you want to measure?
- Is it practical to administer? (Number of items, time required)
- Can it be implemented consistently across those using the tool? (Clarity of instructions and items)
- Does the tool allow for enough variation to measure different degrees of progress?
- Does the tool provide useful information (e.g., data to determine if modifications to improvement activities are needed)?
Measurement – does the tool measure your infrastructure improvements? Variation – you need enough variation in the scoring scale (1-4, 1-5, etc.) to be able to measure different degrees of progress. Useful information – supports continuous improvement.

Decision Points for Adapting a Tool
- Design of the tool
- Phrasing of items – single concept
- Phrasing of items – clarity
- Selecting the response options
- Pilot testing the measure
- Method for rating
- Recorded sessions (if applicable)
- Randomization process (if applicable)
- Raters
- Training for raters
(Feely et al., 2018)
Design of the tool – format and content.
Phrasing of items – single concept – no double-barreled questions; break them into multiple questions.
Phrasing of items – clarity – avoid jargon; items should be operationalized and clear.
Selecting the response options – yes/no, scale, other.
Pilot testing the measure – will you pilot the measure and revise it to ensure it can be used consistently (reliably)?
Method for rating – live observation, video recording, self-assessment, other.
Recorded sessions (if applicable) – will any or all events be recorded?
Randomization process (if applicable) – will you randomize who is observed or which observations get rated? (Not applicable for infrastructure.)
Raters – who are they, how many, and whom do they rate?
Training for raters – what training/practice will be provided for raters on the tool and process? Will they meet interrater reliability before rating?

Considerations for Using the Tool
- Who participates (e.g., stakeholder groups, local programs, state staff)?
- How will information be collected (e.g., data system, checklist, self-rating scale, behavioral observation, interviews)? Online or hard copy?
- Will data need to be collected from comparison groups? If so, will it be through pre- and post-collections?
- When will data collection happen?
- Is it easy to administer? Is training needed?

State X Example: Infrastructure Evaluation Challenges
- Implementing a variety of improvement activities related to:
  - In-service PD system
  - Local program infrastructure to support implementation of EBPs
  - Child outcomes measurement system
- Only measuring progress of infrastructure improvement through outputs (i.e., not measuring outcomes of infrastructure improvements)
- Uncertain about available tools to measure infrastructure improvements and how to select or adapt them
- Limited state and local program staff time to adapt/develop tools and collect data

State X: In-service PD Improvement Activities
Enhancing their in-service PD system by developing:
- provider competencies
- training materials
- procedures to sustain coaching with new providers
State X had identified issues with their in-service professional development system and, as a result, implemented improvement activities such as developing provider competencies, developing training materials, and developing procedures to sustain coaching of new providers. Since State X had not developed outcomes for measuring the results of these infrastructure improvements in their evaluation plan, their next step was to develop the outcome, evaluation questions, and performance indicators, and to determine what tool they would use to measure improvements.

State X Outcome Evaluation of In-service PD
Outcome Type: State system-level, intermediate
Outcome: A sustainable statewide system is in place to support high-quality personnel development and technical assistance.
Evaluation Question(s): a. Has the statewide system for in-service personnel development and technical assistance improved (incremental progress)? b. Does the state have a quality system for in-service personnel development and technical assistance?
How will we know (Performance Indicator): a. The QI rating for Indicator PN7 in the in-service personnel development subcomponent will be 5 in 2018. b. The QI rating for Indicator PN7 in the in-service personnel development subcomponent will be 6 or 7 in 2019.
Measurement/Data Collection Method: System Framework Self-Assessment on in-service personnel development and technical assistance (Personnel/Workforce, subcomponent 4 – PN7)
Timeline/Measurement Intervals: a. 3/18; b. Post measure 3/19
Analysis Description: a. Compare the automatically calculated QI self-assessment score for PN7 to a rating of 5 in 3/18. b. Compare the automatically calculated QI self-assessment score for PN7 to a rating of 6 or 7 in 3/19.
With TA support, State X developed an outcome for their in-service PD system, along with several evaluation questions and performance indicators. After exploring what existing tools were available, what data they had, and their capacity, State X determined they would use one Quality Indicator (PN7) and its Elements of Quality from the Personnel/Workforce component of the System Framework to measure the results of their in-service PD improvement efforts. The state initially thought they would need to conduct focus groups to collect outcome data, but they acknowledged time limitations on the part of state staff as well as other key stakeholders for collecting this kind of data. The state realized they had available evidence (e.g., existing data) to support their work on the Elements of Quality for PN7 and knew they had the capacity to complete this portion of the Framework Self-Assessment. You will see they are measuring the results of their in-service PD improvement activities against one standard in 2018 and a higher standard in 2019 (i.e., PN7 will have a rating of 5 in 2018 and 6 or 7 in 2019).
Ask: If the state wanted to go deeper into what was and was not working with their in-service PD system, what methods could the state use to supplement the existing data from the Framework Self-Assessment? Answer: One possibility would be conducting focus groups.

State X: Local Infrastructure Improvement
Improvement Activity: Supporting demonstration sites in establishing the necessary personnel infrastructure to implement Coaching in Natural Learning Environments EBPs (Shelden and Rush)
Outcome: EI demonstration sites will have the team structure necessary to implement the EBP (Coaching in Natural Learning Environments)
Tool: Checklist for Implementing a Primary Coach Approach to Teaming (Shelden & Rush)
State X was also implementing activities to support each local demonstration site in establishing the personnel infrastructure necessary to implement EBPs related to Coaching in Natural Learning Environments (Shelden and Rush). State X wanted to measure the results of these infrastructure improvements at the local level. They considered what they wanted their local demonstration sites to look like as a result of implementing these improvement activities and established the outcome: EI demonstration sites will have the team structure necessary to implement the EBP (Coaching in Natural Learning Environments). State X explored what existing tools were available, especially the infrastructure tools developed by Rush and Shelden, and discussed whether the tools would measure their outcome. They discussed the possibility of adapting an existing tool or developing a new one. Regardless of whether they used an existing tool, an adapted tool, or a newly developed tool, they realized they would have to collect new data from the local implementation sites since they did not have this data available. As a result, they looked at the ease of use of the existing tools. State X decided to use the Checklist for Implementing a Primary Coach Approach to Teaming developed by Rush and Shelden to measure local personnel infrastructure improvements.

State X: Improving the Child Outcomes Measurement System
Improvement Activities: Improving the child outcomes measurement system (e.g., developing new COS resources to support consistent COS ratings, developing family materials on the COS process, developing processes for EI programs' ongoing use of COS data, revising COS training materials)
Outcome: The state has an improved system for child outcomes measurement
Tool: State Child Outcomes Measurement System Framework Self-Assessment [Data Collection, Analysis, and Using Data]
Another infrastructure area in which State X was investing significant time was improving their child outcomes measurement system. They were developing new ECO resources to support consistent COS ratings, developing family materials on the ECO process, establishing processes to support EI programs' ongoing use of ECO data for program improvement, and revising ECO training materials. These activities crossed multiple infrastructure components, including Data System and Personnel/Workforce. Similar to their previous process, the state discussed what they wanted their system to look like when these improvement activities were implemented. They initially discussed having multiple outcomes but considered their limited capacity to measure multiple outcomes. As a result, they landed on one outcome, especially after exploring which existing tools would measure what they hoped to achieve with these infrastructure improvements. They had previously completed portions of the State Child Outcomes Measurement System Framework Self-Assessment prior to implementation of these improvement activities, so they already had Time 1 data available. State X determined they had the capacity to complete sections of the Framework Self-Assessment annually to measure ongoing progress with their COS infrastructure.

Questions

State Work Time

How We Will Work Together
- Today is a conversation
- Ask questions
- Tell us what you want to work on
- Tell us how we can support you going forward
We asked you about your priorities for today. These priorities may have changed. We hope that you will be able to engage in conversations with other states.

Optional Worksheets for State Work Time
- Evaluation Plan Worksheet
- Selecting an Infrastructure Tool Worksheet
- Decision Points for Adapting a Tool Worksheet

Key Resources
- Definitions: Evaluating Infrastructure Improvements Session 1 Pre-Work: https://dasycenter.org/wp-content/uploads/2018/01/Infrastructure_Session1_Pre-Work_011718_Final.docx
- Tools for evaluating infrastructure improvements: Evaluating Infrastructure Improvements Session 2 Pre-Work: https://dasycenter.org/wp-content/uploads/2018/01/Infrastructure_Session2_Pre-Work_013118_FINAL.docx
- Questions to refine evaluation, including data collection: Refining Your Evaluation: Data Pathway – From Source to Use: https://dasycenter.org/refining-your-evaluation-data-pathway-from-source-to-use/

Contact Information
Christina Kasprzak, ECTA: Christina.Kasprzak@unc.edu
Ardith Ferguson, NCSI: afergus@wested.org
Sherry Franklin, ECTA: Sherry.Franklin@unc.edu