
0 Demonstrating Growth for Low-Achieving Students With Disabilities
Assessment Toolbox: Demonstrating Growth for Low-Achieving Students With Disabilities
Laura Kuchle, American Institutes for Research
Lou Danielson, American Institutes for Research
Pakethia Harris, American Institutes for Research
Kristin Ruedel, American Institutes for Research
Lynn Fuchs, Vanderbilt University

1 Session Overview Need for Ongoing, Sensitive Assessment to Inform Intervention Why Progress Monitoring? Selecting Assessments Aggregating for Program Evaluation Monitoring Implementation Discussion

2 Drivers for This Work
National Center for Systemic Improvement (NCSI): Supporting states with planning and implementing their state systemic improvement plans, part of OSEP’s Results Driven Accountability
National Center on Intensive Intervention (NCII): Improving intervention for students with severe and persistent learning and behavioral difficulties

3 Who Is in the Room?

4 Need for Ongoing, Sensitive Assessment to Inform Intervention

5 Why Do We Need Data-Driven Intensive Intervention?
Students with disabilities have…
Lower academic achievement
Higher dropout rates
Higher arrest rates
For more information: 2015 NAEP; Sanford et al., 2011; Planty et al., 2008; Aud et al., 2012

6 Who Needs Intensive Intervention?
Students with disabilities who are not making adequate progress in their current instructional program
Students who present with very low academic achievement and/or high-intensity or high-frequency behavior problems (typically those with disabilities)
Students in a tiered intervention system who have not responded to secondary intervention programs delivered with fidelity

7 Types of Assessment
Type | When? | Why?
Summative | After | Assessment of Learning
Diagnostic | Before | Identify skill strengths and weaknesses
Formative | During | Assessment for Learning

8 Why Summative Assessments Are Not Enough
Low sensitivity to improvement for the target population
Assessment is too infrequent to inform timely intervention adaptations
If relying on state tests, note that most do not begin until third grade

9 Why Progress Monitoring?

10 What Is Progress Monitoring?
Progress monitoring is a standardized method of formative assessment that tells us how well students are responding to instruction. Progress monitoring tools have the following characteristics:
Brief assessments
Repeated measures that capture student learning
Measures of age-appropriate outcomes
Reliable, valid, and evidence based

11 Progress Monitoring Informs Student-Level Intervention Decisions
Data allow us to…
Estimate rates of improvement across time.
Compare the efficacy of different forms of instruction.
Identify students who are not demonstrating adequate progress.
Determine when an instructional change is needed.
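To make the first point concrete, a student's rate of improvement is usually estimated as the slope of a line fit to the repeated measures. Below is a minimal sketch of that calculation using an ordinary least-squares fit; the scores and the goal-line slope are hypothetical, and most progress monitoring tools compute this slope for you.

```python
# Minimal sketch: estimating a rate of improvement (slope) from weekly
# progress monitoring scores with an ordinary least-squares fit.
# The scores and goal below are hypothetical, not from the presentation.

def weekly_slope(scores):
    """Least-squares slope (score change per week), assuming one score per week starting at week 0."""
    n = len(scores)
    weeks = range(n)
    mean_week = sum(weeks) / n
    mean_score = sum(scores) / n
    numerator = sum((w - mean_week) * (s - mean_score) for w, s in zip(weeks, scores))
    denominator = sum((w - mean_week) ** 2 for w in weeks)
    return numerator / denominator

# Hypothetical oral reading fluency scores (words correct per minute), one per week.
scores = [42, 44, 43, 47, 48, 50, 51, 53]
slope = weekly_slope(scores)
goal_slope = 2.0  # assumed goal line: 2 words per minute per week

print(f"Observed slope: {slope:.2f} wpm/week")
if slope < goal_slope:
    print("Progress is below the goal line; consider an instructional change.")
```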

12 Approaches to Progress Monitoring
What is the difference?

13 Benefits of Monitoring Progress With General Outcome Measures
Number of Assessments/15 Weeks | Effect Size (SD) | Percentile Gain
1 | .34 | 13.5
5 | .53 | 20
10 | .60 | 22.5
15 | .66 | 24.5
20 | .71 | 26
25 | .78 | 28.5
30 | .82 | 29
Source: Bangert-Drowns, Kulik, and Kulik (1991); similar results found by Fuchs and Fuchs (1986).
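For readers who wonder how the two columns relate, the percentile gains are consistent with the usual conversion of an effect size to a percentile shift through the standard normal distribution. This is an assumption on our part (the presentation does not state how the gains were derived); a quick sketch:

```python
# Rough check: convert each effect size (in SD units) to a percentile gain,
# assuming a student starting at the 50th percentile and normally distributed scores.
from statistics import NormalDist

effect_sizes = [0.34, 0.53, 0.60, 0.66, 0.71, 0.78, 0.82]
for d in effect_sizes:
    gain = (NormalDist().cdf(d) - 0.5) * 100
    print(f"d = {d:.2f} -> gain of about {gain:.1f} percentile points")
```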

14 Common Reading Measures
Measure | Recommended Grades
Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), Phoneme Segmentation Fluency (PSF) | K
Nonsense Word Fluency (NWF) | Late K–1
Word Identification Fluency (WIF) | 1
Passage Reading Fluency (PRF), also called Oral Reading Fluency (ORF) | Late 1–4
Maze or Maze Fluency | 4+

15 Common Mathematics Measures
Domain | Measures | Grades
Early numeracy | Oral Counting, Next Number, Number Identification, Quantity Discrimination, Missing Number | K–1
Computation | M-CBM, Math Computation, Number Facts | 1–8
Concepts and applications | Math Concepts and Applications, Concepts, Concepts/Applications | 2–8

16 Selecting Assessments

17 Review of Screening Tools
Center on Response to Intervention

18 Review of Progress Monitoring Tools
National Center on Intensive Intervention

19 Considerations When Selecting or Evaluating a Tool
Skills to be measured (age and grade appropriate)
Cost and training requirements
Administration and scoring time
Data management
Technical rigor (consider population)

20 Dimensions of Technical Rigor
Reliability
Validity
Evidence of being sensitive to change
Alternate/parallel forms: different versions of the assessment that are of comparable difficulty

21 Should We Ever Assess Off-Level? Consider the Purpose of the Assessment
Screening to identify students at risk for poor learning outcomes should always occur at grade level and should do the following:
Determine students’ response to grade-level core instruction.
Assess performance relative to grade-level expectations.
Provide schoolwide data regarding the percentage of students in each grade level who are at or below benchmarks.

22 Should We Ever Assess Off-Level? Consider the Purpose of the Assessment (2)
Progress monitoring should be done at grade level when possible, but it must also match the student’s instructional level:
If a student’s performance is well below grade-level expectations, grade-level probes are unlikely to be sensitive to growth.
Off-level assessment may be warranted in these cases.

23 More Information From NCII
Academic Progress Monitoring: …ng-academic-progress-monitoring-individualized-instructional-planning-dbi-training
Academic Diagnostic Assessment: …ormal-academic-diagnostic-assessment-using-data-guide-intensive-instruction-dbi-training

24 Aggregating for Program Evaluation

25 Thinking Beyond the Student Level
Aggregate progress monitoring data to inform programmatic decisions. Sample question: Have our system changes (e.g., professional development or new service delivery framework) resulted in improved student performance?

26 How Can Progress Monitoring Data Inform Systems-Level Decisions?
Aggregate progress monitoring results for program evaluation:
Timely and sensitive outcome measure
Already available as an important part of improvement efforts targeting students with intensive needs
Information about both outcomes and implementation

27 Potential Challenges
Whose data do we need? Sample considerations
Tracking a sample across years
How can we aggregate across different progress monitoring tools and measures?

28 Selecting a Sample
For smaller populations, can you analyze all available progress monitoring data?
If selecting program pilot sites, consider the feasibility of program implementation and collecting or accessing data.
If sampling only a portion of program participants, consider the representativeness of your sample.

29 Tracking a Sample Across Multiple School Years
Longitudinally examine performance for a group of students.
What about students who enter or exit the program or target group?
Consider tracking progress for a broader sample of students who are at risk.
NCSI Brief: Advantages of Assessing SiMR Progress by Tracking a Sample Over Time (…advantages-of-assessing-simr-progress-by-tracking-a-sample-over-time/)

30 Aggregating Across Different Measures
Within a state or district, schools may use different progress monitoring tools.
Within a school, students receive different measures according to grade and time of year.
Within a grade, students’ progress may need to be monitored with different measures, according to:
Skill targeted by intervention
Current level of performance

31 One Possible Approach: Percentage of Expected Slope
How does the student’s slope or rate of improvement compare to the “expected” slope for students at the same instructional level?
Consult norms provided by the tool developer: the NCII tool chart includes the reliability and predictive validity of the slope.
Find the mean or median slope for the appropriate grade and time of year, for the measure used for progress monitoring.
When available, find the norm for the student’s initial level of performance.

32 Calculating the Percentage for One Student
Student’s slope divided by expected slope
Example:
Julia’s slope = 1 word per minute (wpm) per week
Expected slope = 2 wpm/week
Julia’s percentage of expected slope = 1 ÷ 2 = 0.5 = 50%
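A minimal sketch of this calculation, using only the numbers from the slide's example (in practice the expected slope would come from the tool developer's norms for the measure, grade, and time of year):

```python
# Percentage of expected slope for one student (numbers from the slide's example).

def percent_of_expected_slope(student_slope, expected_slope):
    """Return the student's slope as a percentage of the expected slope."""
    return student_slope / expected_slope * 100

julia = percent_of_expected_slope(student_slope=1.0, expected_slope=2.0)  # wpm/week
print(f"Julia: {julia:.0f}% of expected slope")  # 50%
```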

33 Aggregating Across Students
Find the mean percentage across all students.
Example:
Jim = 50%
Mary = 200%
Ted = 150%
Linda = 100%
Mean = 125%

34 Aggregation Example—Multiple Measures and Grades
Student | Measure | Student’s Slope | Expected Slope | Percentage
Jim | Third-grade reading fluency | 0.5 | 1 | 50%
Mary | First-grade math computation | 1.5 | 0.75 | 200%
Ted | Sixth-grade math concepts and applications | 0.3 | 0.2 | 150%
Linda | First-grade nonsense word fluency | (not shown) | (not shown) | 100%
Mean percentage across all students = 125%
Literacy students = 75%
Math students = 175%
First-grade students = 150%
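A sketch of how this aggregation could be scripted: the slopes for Jim, Mary, and Ted come from the table above, while Linda's slopes are not shown there, so the 1.0/1.0 pair below is only a placeholder that reproduces her 100%.

```python
# Aggregate percentage-of-expected-slope across students monitored with
# different measures, then summarize overall and by subgroup.
from statistics import mean

students = [
    {"name": "Jim",   "subject": "literacy", "grade": 3, "slope": 0.5, "expected": 1.0},
    {"name": "Mary",  "subject": "math",     "grade": 1, "slope": 1.5, "expected": 0.75},
    {"name": "Ted",   "subject": "math",     "grade": 6, "slope": 0.3, "expected": 0.2},
    {"name": "Linda", "subject": "literacy", "grade": 1, "slope": 1.0, "expected": 1.0},  # placeholder slopes
]

# Each student's slope as a percentage of the expected slope for their measure.
for s in students:
    s["percent"] = s["slope"] / s["expected"] * 100

print(f"All students: {mean(s['percent'] for s in students):.0f}%")  # 125%
print(f"Literacy: {mean(s['percent'] for s in students if s['subject'] == 'literacy'):.0f}%")  # 75%
print(f"Math: {mean(s['percent'] for s in students if s['subject'] == 'math'):.0f}%")  # 175%
print(f"Grade 1: {mean(s['percent'] for s in students if s['grade'] == 1):.0f}%")  # 150%
```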

35 Using Mean Percentage for Program Evaluation
Compare the program mean to the baseline or comparison program or group.
Examples:
Comparison groups: School A (new program) = 125%; School B (business as usual) = 80%
Pre/post for one group: Baseline (old program) = 90%; New program = 125%

36 Monitoring Implementation

37 Importance of Evaluating Implementation
Implementation will be challenging and will occur over time.
Early and ongoing (formative) evaluation of implementation will help to:
Document early successes.
Identify solutions that foster expected progress.

38 NCSI Implementation Evaluation Matrix
Available under “SSIP Phase III” resources at:

39 Evaluating District and School Frameworks
If you are using a specific framework for assessment and intervention, you may find tools to help you assess the use of that framework or process. For example:
RTI Essential Components Integrity Rubric and Worksheet (…components-rti-integrity-rubric-and-worksheet)
Schoolwide PBIS evaluation tools

40 NCII Tools for Measuring Data-Based Individualization (DBI) Implementation
DBI Implementation Rubric and Interview (…implementation-rubric-and-interview)
Student-Level DBI Implementation Checklists (…data-based-individualization-implementation-checklists)
DBI Implementation Log: Daily and Weekly Intervention Review (…individualization-implementation-log-daily-and-weekly-intervention-review)
Intensive Intervention Implementation Review Log (…intervention-implementation-review-log)

41 Intervention Fidelity
If you are using one or more specific interventions or programs, see if the publisher:
Identifies intervention components considered essential for strong implementation and for achieving intended student outcomes
Provides a tool for assessing fidelity (e.g., a checklist or rubric)

42 Questions and Discussion
The learning continues! Give us your name and address if you would like to know more about this issue.
Connect with NCSI and NCII for additional resources and future webinars.

43 References
Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C.-L. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85, 89–99.
Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53(3), 199–208.

44 THANK YOU! For more information, please contact Kristin Ruedel, the NCSI Data Use Lead, at

45 THANK YOU! http://ncsi.wested.org | @TheNCSI

