1
Implementation Science: Measuring Initiative Effectiveness Beyond State Test Scores
Presented by: Kim Gibbons, Ph.D., Center for Applied Research and Educational Improvement; Katie Pekel, Ed.D., University of Minnesota Principal in Residence
2
Agenda
Why Program Evaluation?
The Importance of Fidelity
Evaluation Steps
Practical Examples
3
Why evaluate what we do?
What is the return on investment?
Are we closing the gap fast enough?
What works in your context?
Are we leveraging our resources effectively?
Is what we are doing better or worse than what we have done previously?
4
Why Program Evaluation?
Brings together school improvement and data-based decision-making. A process for systematically determining the effectiveness of a school program and how it can be improved. Answers two questions: How well are we doing? How can we improve?
5
How would your budget decisions change with the answers to these questions?
Students of teachers receiving support from instructional coaches gained 4 months more learning than students of teachers who did not receive coaching. Students of teachers who received professional development in math fared no better than students of teachers who did not receive the professional development. Elementary buildings implementing an MTSS (multi-tiered system of supports) framework saw a 50% reduction in SLD (specific learning disability) prevalence over 3 years.
6
Program Evaluation vs Research
Program Evaluation: makes value judgments about a program.
Research: findings are presented in terms of impact (looking for statistically significant findings); not a value judgment; the "gold standard" includes a treatment and a control group.
See Handout: Glossary of Terms_Evaluation vs Research
8
Program Evaluation: How Often Does it Happen?
CAREI Statewide Needs Assessment: 51% of administrators in MN rated their capacity to evaluate policies and programs as poor. High-quality program evaluation is infrequent due to lack of time (78%), inadequate staffing/expertise (63%), and cost (53%).
You don't need to be an expert in program evaluation to ask: How are we doing? How can we improve?
9
What are the barriers to conducting program evaluation? https://www
10
20-80 Rule It’s better to do an “average” effort at evaluation than to do no evaluation at all. Many evaluation techniques are easy to execute, can make use of existing data, and can be performed on a scale that is practical. The challenge is to conduct an evaluation that yields useful data.
11
What is one thing you would love to evaluate in your building?
12
Five Elements of Fidelity
This slide provides a more in-depth look at the components of fidelity often discussed in the literature (Dane & Schneider, 1998; Gresham et al., 1993; O'Donnell, 2008). While the elements of fidelity and their organization may vary, this provides one way to think about fidelity. The five elements are adherence, exposure/duration, quality of delivery, program specificity, and student engagement.
Adherence: how well we stick to the plan/curriculum/assessment—do we implement it as intended, based on research?
Exposure/duration: how often a student receives an intervention and how long the intervention lasts. Does the exposure/duration being used with a student match the recommendation of the author/publisher of the curriculum?
Quality of delivery: how well the intervention, assessment, or instruction is delivered. For example, do you use good teaching practices?
Program specificity: how well the intervention is defined and differentiated from other interventions. Clearly defined interventions/assessments allow teachers to adhere to the program more easily.
Student engagement: how engaged and involved students are in the intervention or activity. Following a prescribed program alone is not enough.
13
Getting Started: Planning an Evaluation
Three Questions: What are we looking for? How will we look for it? How will we use the data? See Handout: Program Evaluation Overview
14
Evaluation Types
Formative (process): conducted internally by staff working within the program.
Summative (outcome): conducted by those outside the program, in response to questions prompted from outside.
U of M resource for summative evaluations: CAREI (Center for Applied Research and Educational Improvement).
See Handout: Program Evaluation Overview
15
Evaluation Steps Designing Your Evaluation Plan
1. Determine the purpose of the evaluation
2. Establish an evaluation team
3. Determine the stakeholders and audience
4. Craft an evaluation plan
5. Summarize and communicate results
See Handout: Evaluation Plan Template
16
#1 Determine the Purpose
Common purposes: program improvement, needs assessment, program continuation.
Applied Example: Systematically determine the quality/effectiveness of a push-in special education service delivery model in Grade 3 reading.
17
#2 Establish an Evaluation Team
It is important to involve stakeholders who can contribute to the evaluation plan, and helpful to have one coordinator of the evaluation.
Applied Example: general education teachers, special education teachers, school psychologist, building principal, and elementary literacy collaborative planner.
18
#3 Determine Stakeholders and Audience
Helps the team with evaluation planning and with communication of results.
Applied Example: teachers, parents, district office, education district.
19
#4 Craft an Evaluation Plan
A clear plan for not only what you are evaluating, but also the steps you will take to gather and analyze information, share the information, and plan for improvements.
Evaluation Plan Components: create evaluation questions, determine data sources, collect data, analyze and interpret data (a simple plan-structure sketch follows below).
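Purely as an organizational aid, and not part of the original handout, here is a minimal Python sketch of how the four plan components could be captured in one structure so a team can see at a glance what is still missing. Every field value in the example is a hypothetical placeholder for the Grade 3 push-in scenario.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    """Lightweight container mirroring the four plan components on this slide."""
    questions: list[str] = field(default_factory=list)                # 3-5 answerable questions
    data_sources: dict[str, list[str]] = field(default_factory=dict)  # question -> data sources
    collection_notes: str = ""                                        # who collects, when, how long
    analysis_notes: str = ""                                          # how results will be interpreted

# Hypothetical example for the Grade 3 push-in reading model:
plan = EvaluationPlan(
    questions=["How much growth occurs for students with disabilities in general education reading?"],
    data_sources={"growth": ["FAST aReading", "CBM progress monitoring"]},
    collection_notes="Fall/winter/spring benchmark windows; literacy planner coordinates.",
    analysis_notes="Compare growth of SWDs to typical peers; triangulate with engagement observations.",
)
print(plan.questions[0])
```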
20
Evaluation Plan Components: Evaluation Questions
Three to five questions that can be answered by the data gathered and analyzed.
Applied Example: How much growth is occurring for students with disabilities who receive their reading instruction in the general education setting? What is the level of engagement for students with disabilities during reading instruction? What is the impact on outcomes for all students?
21
Evaluation Plan Components: Data Sources
Qualitative data are measures that may be represented by a name, symbol, or code. Examples: observations, focus groups, interviews, and open-ended survey questions.
Quantitative data are measures expressed as numbers or numeric variables (how many, how much, how often). Examples: test scores, attendance records, scaled-response surveys, college applications completed.
22
Data Sources Applied Example:
NWEA MAP Test
General outcome measures for IEP goal monitoring
Fidelity of implementation checks
23
Data Sources: Surveys
Audience/respondents: students, parents, staff, community
Platform: student management system, Naviance, Survey Monkey
Response scale: Likert or forced choice; keep the scale the same throughout, e.g., Strongly Disagree / Disagree / Somewhat Disagree / Somewhat Agree / Agree / Strongly Agree (a small tallying sketch follows below)
See Handout: Survey Writing
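As an illustration only (nothing like this appears in the handout), here is a minimal Python sketch of how responses on the six-point scale above could be tallied once a survey platform's export is in hand; the file layout and the column name are assumptions.

```python
from collections import Counter
import csv

# Six-point scale from the slide, kept in a fixed order so reports read consistently.
SCALE = [
    "Strongly Disagree", "Disagree", "Somewhat Disagree",
    "Somewhat Agree", "Agree", "Strongly Agree",
]

def summarize_likert(csv_path: str, column: str) -> dict:
    """Return the percent of respondents choosing each scale point for one survey item."""
    with open(csv_path, newline="") as f:
        responses = [row[column] for row in csv.DictReader(f) if row.get(column) in SCALE]
    counts = Counter(responses)
    total = len(responses) or 1  # avoid division by zero on an empty export
    return {label: round(100 * counts[label] / total, 1) for label in SCALE}

# Example use with a hypothetical export file and item name:
# print(summarize_likert("staff_survey.csv", "Q3_coaching_support"))
```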
24
Surveys (cont.)
Survey introduction: purpose and use; length of time to complete (pilot the survey to determine this).
Survey construction: group like question topics on one screen; questions should be clear; responses on horizontal radio buttons; demographics at the end; completion screen.
25
Data Sources - Focus Groups
Selecting focus group participants: size of group (4-6); a cross section of the school population; prep students for good responses.
Conducting a focus group: use a protocol (sample provided); two people (facilitator and recorder), plus record digitally if possible; 30-45 minutes; you may want to prepare probing questions for each question.
Analyzing focus group data: code by grouping like responses; numbers can be used to express coded responses; pull individual quotes that capture or illustrate a point (see the coding sketch below).
See Handout: Focus Group Instructions
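A minimal sketch, not from the handout, of what "using numbers to express coded responses" could look like once a recorder's notes have been tagged with codes; the code labels and note format are invented for illustration.

```python
from collections import Counter

# Hypothetical recorder notes: each response has been tagged with one or more codes
# during the grouping step (codes and quotes are illustrative, not real data).
coded_responses = [
    {"codes": ["engagement"], "quote": "I like working with the reading groups."},
    {"codes": ["engagement", "pacing"], "quote": "Sometimes we move on before I'm done."},
    {"codes": ["pacing"], "quote": "The lessons feel rushed on Fridays."},
]

def tally_codes(responses):
    """Count how many responses were tagged with each code."""
    counts = Counter()
    for response in responses:
        counts.update(set(response["codes"]))  # count each code at most once per response
    return counts

for code, count in tally_codes(coded_responses).most_common():
    print(f"{code}: {count} of {len(coded_responses)} responses")
```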
26
Evaluation Plan Components: Collect Data
The evaluation team will determine how to collect the data and who is responsible for its collection. The team will need to consider: who they need to collect data from, the best times to collect the data, and how long collection will take.
27
Collect Data Applied Example:
Functional Assessment System for Teachers (FAST): Adaptive Reading and CBMs
Observations of student engagement
28
Evaluation Plan Components: Analyze and Interpret Data
Data should be analyzed and interpreted into clear, easy-to-understand responses to the evaluation questions. It is important to remember: multiple data sources will often inform each question, and data triangulation strengthens the judgments of the evaluation.
29
Analyze and Interpret Applied Example:
15 SPED students received special education "push-in" services during core reading time across three different Grade 3 classrooms.
FAST Growth: 14/15 (93%) made typical or more growth
Section 1: 5/5 (100%) made typical or more growth
Section 2: 4/4 (100%) made typical or more growth
Section 3: 5/6 (83%) made typical or more growth
Engagement for SWDs was comparable to typical peers (TIES, Edspring). The arithmetic behind the growth rates is sketched below.
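To make that arithmetic explicit, here is a small Python sketch; the section counts come straight from the slide, but the function and variable names are invented for illustration.

```python
def growth_rate(made_growth: int, total: int) -> int:
    """Percent of students making typical-or-more growth, rounded to a whole percent."""
    return round(100 * made_growth / total)

# Section counts as reported on the slide: (students making typical-or-more growth, total students).
sections = {"Section 1": (5, 5), "Section 2": (4, 4), "Section 3": (5, 6)}

for name, (made, total) in sections.items():
    print(f"{name}: {made}/{total} = {growth_rate(made, total)}%")

made_all = sum(made for made, _ in sections.values())     # 14
total_all = sum(total for _, total in sections.values())  # 15
print(f"All sections: {made_all}/{total_all} = {growth_rate(made_all, total_all)}%")  # 93%
```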
30
Analyze and Interpret Applied Example:
Grade 3 across all teachers on aReading:
Increased the percent at or above proficient from 72% (128 students) to 81% (149 students)
Decreased the percent of high-risk students from 19% (32 students) to 9% (13 students)
The data support that all students can grow even when SPED students with reading needs are included in the core.
31
#5 Summarize and Communicate Results
Full Evaluation Report: program description and context; evaluation purpose; evaluation questions; data sources, including the instruments in an appendix; data analysis; conclusions.
Executive Summary Report: program context and evaluation purpose; findings by question.
*Sample CAREI evaluation reports can be found at
32
Communicating Results: SCRED Examples
Applied Example: 3-Tier Study example reports (elementary school)
33
Summing it All Up 20% of effort generates 80% of results!
In this era of accountability for results, we need to know whether we are leveraging our resources effectively and whether what we are doing is better or worse than what we have done previously. 20% of effort generates 80% of results! Fidelity matters. Use a systematic process.
In public education, we love to take on new initiatives; people talk about initiative overload. You've just passed the beginning of school, and everybody goes to in-service. Going to in-service is like making New Year's resolutions: everybody resolves what they're going to do to improve during the school year. However, what we know about New Year's resolutions is that only about 8% of people have actually followed through six months later. You have to change that statistic as it relates to you and results-driven accountability. What's the most immediate past experience that you, your school, or your district has had in trying to improve results? If things haven't worked well in the past, people tend to conclude that the initiative just wasn't for them, but in many instances, what we have found in analyzing improvement programs is that people never bought in. They never saw it as something that they could do, should do, or would do. Think about that past experience.
34
Your Questions? Thank you!