
1 Best Practices in Data-Based Decision Making Within an RTI Model Gary L. Cates, Ph.D. Illinois State University GaryCates.net Ben Ditkowsky, Ph.D. Lincolnwood School District 74 MeasuredEffects.Com

2 Acknowledgments This presentation accompanies Cates, Blum, & Swerdlik (2011), Effective RTI Training and Practices: Helping School and District Teams Improve Academic Performance and Social Behavior. Champaign, IL: Research Press.

3

4 Universal Core Curriculum → Universal Screening Measures → Identification of At-Risk Students → Standard Educational Diagnostic Tool → Tier II Standard Protocol Instruction → Progress Monitoring → Individualized Diagnostic Assessment → Tier III Individualized Instruction → Progress Monitoring → Special Education Entitlement → Progress Monitoring

5 Response to Intervention Is Data-Based Decision Making A comprehensive system of student support for academics and behavior Has a prevention focus Matches instructional needs with scientifically based interventions/instruction for all students Emphasizes data-based decision making across a multi-tiered framework

6 Tier III: Individualized Instruction Tier II: Small-Group Standard Protocol Instruction Tier I: Core Universal Curriculum

7 Data-Based Decision Making with Universal Screening Measures

8 Presentation Activity 1 What have you heard about universal screening measures? What are your biggest concerns?

9 3 Purposes of Universal Screening  Predict which students are at risk for not meeting AYP (or long-term educational goals)  Monitor progress of all students over time  Reduce the need to do more in-depth diagnostic assessment with all students  Needed for reading, writing, math, and behavior

10 Rationale for Using Universal Screening Measures  It is analogous to medical check-ups (but three times a year, not once)  Determine whether all students are meeting milestones (i.e., benchmarks) that predict adequate growth  Provide intervention/support if they are not

11 Characteristics of Universal Screening Measures  Brief to administer  Allow for multiple administrations  Simple to score and interpret  Predict fairly well which students are at risk of not meeting AYP

12 Presentation Activity 2 What universal screening measures do you have in place currently for: – Reading? – Writing? – Math? – Behavior? How do these fit with the characteristics of USMs outlined on the previous slide?

13 Examples of Universal Screening Measures for Academic Performance (USM-A) Curriculum-Based Measurement

14 Data-Based Decision Making with USM-A

15 Student Identification: Percentile Rank Approach Dual discrepancy to determine a change in intensity (i.e., tier) of service Cut Scores – Consider percentiles – District-derived cut scores are based on screening instruments’ ability to predict state scores Rate of Improvement – Average gain made per day/per week?

16 [Figure: norms based on a sampling of students vs. all students included]

17 Student Identification: Dual-Discrepancy Approach Rate of improvement: average gain made per day/per week, compared to peers (or a cut score) over time

18 [Figure: norms based on a sampling of students vs. all students included]

19 Dual Discrepancy Discrepant from peers (or an empirically supported cut score) at data collection point 1 (e.g., fall benchmark) Discrepancy continues or becomes larger at point 2 (e.g., winter benchmark) – This is referred to as a student's rate of improvement (ROI); see the sketch below
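The dual-discrepancy check can be scripted directly from benchmark data. Below is a minimal sketch, assuming two benchmark scores per student and a known peer rate of improvement; the cut score, number of weeks, and reading-fluency values are hypothetical, not numbers from this presentation.

```python
# Sketch: flag a student as dually discrepant when he or she is below the
# benchmark cut score (discrepant in level) AND gaining more slowly than
# peers (discrepant in growth).

def rate_of_improvement(score_start, score_end, weeks):
    """Average gain per week between two benchmark administrations."""
    return (score_end - score_start) / weeks

def is_dually_discrepant(student_start, student_end,
                         cut_end, peer_roi, weeks=18):
    below_cut = student_end < cut_end                  # discrepant in level
    student_roi = rate_of_improvement(student_start, student_end, weeks)
    slow_growth = student_roi < peer_roi               # discrepant in growth
    return below_cut and slow_growth

# Hypothetical oral-reading-fluency scores (words correct per minute):
# fall = 35, winter = 48, winter cut score = 72, peer ROI = 1.2 wcpm/week
print(is_dually_discrepant(35, 48, cut_end=72, peer_roi=1.2))  # True
```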

20

21 Resources as a Consideration Example: comparing to a percentile rank or a national cut score without considering resources You want to minimize: – False positives – False negatives This can be facilitated with an educational diagnostic tool

22 Correlations Direction (positive or negative) Magnitude/strength (0 to 1) If you want to know how much overlap (i.e., shared variance) there is between the two measures, square the correlation: if r = .70, then about 49% overlap (see the sketch below)
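Squaring a correlation to estimate shared variance is a one-line computation. A quick sketch with invented screening and state-test scores (Python 3.10+ for statistics.correlation):

```python
from statistics import correlation  # Pearson's r; Python 3.10+

screener = [34, 48, 52, 61, 70, 75]          # hypothetical screening scores
state_test = [210, 225, 228, 240, 255, 251]  # hypothetical state-test scores

r = correlation(screener, state_test)
print(f"r = {r:.2f}, shared variance = {r**2:.0%}")
# e.g., r = .70 would give about 49% shared variance
```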

23

24 A Word About Correlations A correlation tells us about the strength of a relationship A correlation does not tell… – …the direction of causality (whether A causes B or B causes A) – …whether the relationship is causal at all, or whether a third variable C causes both A and B Strong correlations do not always equate to accurate prediction for specific populations

25 Presentation Activity 3 How are you currently making data-based decisions using the universal screening measures you have? Do you need to make some adjustments to your decision-making process? If you answered yes to the question above, what might those adjustments be?

26 Data-Based Decision Making with USM-B

27 Some Preliminary Points Social behavior screening is just as important as academic screening We will focus on procedures (common sense is needed: if a child displays severe behavior, then bypass the system we will discuss today) We will focus on PBIS and SSBD – These programs are examples of basic principles – You do not need to purchase these exact programs

28 Screening: Office Discipline Referrals and Teacher Nomination Confirmation: Rating Scales

29 Office Discipline Referrals Good as a stand-alone screening tool for externalizing behavior problems Also good for analyzing schoolwide data – Discussed later

30 Teacher Nomination Teachers are generally good judges Nominate three students as externalizers Nominate three students as internalizers Trust your instincts and make a decision – There will be a more sophisticated process to confirm your choices

31 Confirming Teacher Nominations with Other Data Teacher, Parent, and Student Rating Scales – BASC – CBCL (Achenbach)

32 Example: Systematic Screening for Behavior Disorders (SSBD) Critical Events Inventory: – 33 severe behaviors (e.g., physical assault, stealing) in checklist format – Room for other behaviors not listed Adaptive Scale: Assesses socially appropriate functional skills (e.g., following teacher directions) Maladaptive Scale: Assesses risk for developing antisocial behavior (e.g., testing teacher limits)

33 Data-Based Decision Making Using Universal Screening Measures for Behavior Computer software available Web-based programs also available See handout (Microsoft Excel Template)
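The presentation points to software and an Excel template; the same schoolwide summaries can also be scripted. A sketch in pandas, assuming an ODR log with one row per referral; all column names, dates, and the school-day calendar are hypothetical:

```python
import pandas as pd

# Hypothetical office-discipline-referral (ODR) log: one row per referral
odr = pd.DataFrame({
    "date": pd.to_datetime(["2023-09-05", "2023-09-12", "2023-10-03",
                            "2023-10-03", "2023-10-17", "2023-11-02"]),
    "student": ["A", "B", "A", "C", "A", "B"],
    "behavior": ["disruption", "defiance", "disruption",
                 "aggression", "disruption", "defiance"],
    "location": ["classroom", "playground", "classroom",
                 "cafeteria", "classroom", "playground"],
})

school_days = pd.Series({9: 20, 10: 22, 11: 18})  # assumed days per month

# Average referrals per school day, by month (the first graph that follows)
print(odr.groupby(odr["date"].dt.month).size() / school_days)

# Counts by behavior, location, and student (the remaining graphs)
for col in ["behavior", "location", "student"]:
    print(odr[col].value_counts())
```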

34 Average Referrals Per Day Per Month

35 ODR Data by Behavior

36 ODR Data by Location

37 ODR Data by Time of Day

38 ODR Data by Student

39 Review of Important Points: Academic Performance USMs are used for screening and progress monitoring It is important to adhere to the characteristics when choosing a USM USM-As are typically similar to curriculum-based measurement procedures There are many ways to choose appropriate cut scores, but it is critical that available resources be considered

40 Review of Important Points: Behavior Social behavior is an important area for screening The number of office discipline referrals is a strong measure for schoolwide data analysis and externalizing behavior Both internalizing and externalizing behaviors should be screened using teacher nominations Follow up with rating scales Use computer technology to facilitate the data-based decision-making process

41 Data-Based Decision Making with Diagnostic Tools for Academic Performance and Social Behavior

42 Presentation Activity 1 What have you heard about diagnostic tools? What are your biggest concerns?

43 3 Purposes of Diagnostic Tools  Follow up with any student identified on the USM as potentially needing additional support  Identify a specific skill or subset of skills for which students need additional instructional support  Assist in linking students with skill deficits to empirically supported intervention

44 Rationale for Using Diagnostic Tools  Rule out any concerns flagged by a universal screening measure  Find an appropriate diagnosis  Identify an effective treatment

45 Characteristics of Diagnostic Tools  Might be administered in a one-to-one format  Require more time to administer than a USM  Generally contain a larger sample of items than a USM  Generally have a wider variety of items than a USM

46 Presentation Activity 2 What diagnostic tools (DT) do you have in place currently for: – Reading? – Writing? – Math? – Behavior? How do these fit with the characteristics of DTs outlined on the previous slide?

47 Examples of Diagnostic Tools for Academic Skills (DT-A) at Tier III and Special Education Curriculum-Based Evaluation

48

49 Curriculum-Based Evaluation 1. Answer this: What does the student need in addition to what is already being provided (i.e., intensification of service)? 2. Conduct an analysis of student responding – Record review: work samples – Observation: independent work time – Interview: ask the student why he or she struggles 3. Develop a hypothesis based on the above 4. Formulate a “test” of this hypothesis

50 Data-Based Decision Making with DT-A

51 Example of CBE: Tammy Fourth-grade student Did not make adequate progress with the Tier II standard protocol intervention in winter School psychologist administered an individual probe (i.e., diagnostic tool) and observed Tammy’s completion of this probe An analysis of responding yielded a diagnosis of the problem This diagnosis of the problem informs intervention selection

52 1. What seems to be the problem? 2. What should the intervention target? 3. Describe something a teacher could do to target this problem. 4. Do you have to buy an expensive program just for Tammy?

53 Revisiting the 3 Purposes of Diagnostic Tools: Tammy  Follow up with any student identified on the USM as potentially needing additional support  Identify a specific skill or subset of skills for which students need additional instructional support  Assist in linking students with skill deficits to empirically supported intervention

54 Revisiting the Characteristics of Diagnostic Tools: Tammy  Might be administered in a one-to-one format  Require more time to administer than a USM  Generally contain a larger sample of items than a USM  Generally have a wider variety of items than a USM

55 Presentation Activity 3 How are you currently making data-based decisions using the diagnostic tools you have? Do you need to make some adjustments to your decision-making process? If you answered yes to the question above, what might those adjustments be?

56 Data-Based Decision Making with Diagnostic Tools for Social Behavior (DT-B)

57 Screening: Teacher Nomination and Office Discipline Referrals Confirmation: Rating Scales Descriptive Functional Assessment: Interviews, Record Review, Observations Experimental Functional Analysis: FBA plus manipulation of the environment to note its effects

58 Office Discipline Referrals Good as a stand-alone screening tool for externalizing behavior problems Also good for analyzing schoolwide data – Discussed later See example teacher nomination form – Chapter 2 of book and on CD

59 Teacher Nomination Teachers are generally good judges Nominate three students as externalizers Nominate three students as internalizers Trust your instincts and make a decision – There will be a more sophisticated process to confirm your choices See example teacher nomination form – Chapter 2 of book and on CD

60 Confirming Teacher Nominations with Other Data Teacher, Parent, and Student Rating Scales – BASC – CBCL (Achenbach)

61 Example: Systematic Screening for Behavior Disorders (SSBD) Critical Events Inventory: – 33 severe behaviors (e.g., physical assault, stealing) in checklist format – Room for other behaviors not listed Adaptive Scale: Assesses socially appropriate functional skills (e.g., following teacher directions) Maladaptive Scale: Assesses risk for developing antisocial behavior (e.g., testing teacher limits)

62 Functional Assessment and/or Experimental Functional Analysis Set of procedures that requires extensive training Functional Assessment: Results in a testable hypothesis about reason for behaviors (e.g., social attention, escape, tangible reinforcement, sensory reinforcement) Functional Analysis: Results in empirical support for the tested hypothesis

63 Functional Assessment: Remember to RIOT Record review – ODRs, antecedent-behavior-consequence (A-B-C) logs, teacher narratives Interview – Teacher, child, parent, key personnel Observation – A-B-C logs, frequency counts – Classroom observations Test (not done at this stage): this is what the experimental functional analysis is all about

64 Data-Based Decision Making Using DT-B: Antecedent-Behavior-Consequence Logs

65 1. What patterns do you see here? 2. What is the likely function of behavior?

66 Data-Based Decision Making Using DT-B: Frequency Counts

67 1. What day does the behavior most often occur? What day is it least likely to occur? 2. What time of day does the behavior most often occur? Least often? 3. When should someone come to visit if they wanted to witness the behavior? Note: It is just as important to look at when the behavior occurs as it is to look at when it doesn’t.
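Answering these questions from frequency-count data is a simple tally. A sketch with invented occurrences:

```python
from collections import Counter

# Hypothetical frequency-count log: (weekday, time block) per occurrence
events = [("Mon", "9:00"), ("Mon", "9:00"), ("Tue", "13:00"),
          ("Mon", "13:00"), ("Wed", "9:00"), ("Mon", "9:00")]

by_day = Counter(day for day, _ in events)
by_time = Counter(time for _, time in events)

print("Most frequent day:", by_day.most_common(1))    # when to visit
print("Most frequent time:", by_time.most_common(1))
# Days and times with zero occurrences are equally informative.
```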

68 Data-Based Decision Making Using DT-B: Direct Behavioral Observations

69 1. What can you get from this? 2. Are all of these behaviors severe enough to warrant individualized intervention?

70 Experimental Functional Analysis Experimentally testing a hypothesis about why a behavior occurs: – Social attention – Escape – Tangible reinforcement – Sensory reinforcement Requires expertise, cooperation, and time Strongest empirically supported method available today for identifying cause(s) of behavior

71 Example of Experimental Functional Analysis: Talking Out in Class
Potential Function – Test Condition
Tangible reinforcement – Contingent access to reinforcement
Attention – Contingent reprimand
Escape – Contingent break upon talking out after a demand
Sensory stimulation – Leave isolated in room
Control condition – Free time with attention and no demands

72 What is the primary function of behavior?

73 Review of Important Points Three Purposes for Diagnostic Tools – As a follow-up to USM – To identify a specific skill that needs additional support – To assist in linking students to intervention Four Characteristics of Diagnostic Tools – Might be administered in a one-to-one format – Require more time to administer than a USM – Generally contain a larger sample of items than a USM – Generally have a wider variety of items than a USM

74 Review of Important Points DT-A procedures may differ at Tiers II and III DT-B procedures may differ at Tiers II and III DT data are not the only data to consider when developing an intervention

75 Progress Monitoring Evaluating Intervention Effects

76 Purpose and Rationale Determine student responsiveness to intervention at any tier Ensure that students are receiving an appropriate level and type of instructional support Identify problems early if performance “slips” are observed

77 Characteristics of Progress Monitoring Tools Similar to USM: – Brief to administer – Allow for multiple administrations and repeated measurement of student performance – Simple to score and interpret Can often be administered to groups of students

78 Progress Monitoring Tools for Academics (PMT-A) Curriculum-Based Measurement (CBM) – Reading: DIBELS, AIMSweb, easyCBM – Math: AIMSweb, easyCBM Progress should be presented on a graph to all stakeholders (parent/guardian, student, teacher, principal)

79 Progress Monitoring Tools for Behavior (PMT-B) Completion of forms – Review data collection forms on topics related to diagnostic testing Collection of observation data Progress should be presented on a graph to all stakeholders (parent/guardian, student, teacher, principal) These graphed data should be similar to baseline/diagnostic data

80 Frequency of Progress Monitoring: A Tiered Approach Tier I – Three times per year at grade level Tier II – Once per week on grade-level probe – Once per week on intervention effects Tier III – Once per week at grade level – Nearly daily monitoring of intervention effects Special Education – Once per week at grade level – Nearly daily monitoring of intervention effects

81 Data-Based Decision Making with Progress Monitoring Tools Evaluating Intervention Effectiveness

82 Rate of Improvement Relative to Peers Perform a gap analysis between the target student(s) and same-grade peers The goal of the intervention is to decrease the gap The minimal desired outcome is to maintain the gap (i.e., keep the student from falling farther behind) At least two measurements are needed (see the sketch below)
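A minimal gap-analysis sketch under assumed benchmark scores: compute the student-to-peer gap at each of two measurement points and classify it as closing, maintained, or widening.

```python
def gap_analysis(student, peers):
    """student, peers: (score at time 1, score at time 2) for each group."""
    gap_t1 = peers[0] - student[0]
    gap_t2 = peers[1] - student[1]
    if gap_t2 < gap_t1:
        return gap_t1, gap_t2, "closing (goal of the intervention)"
    if gap_t2 == gap_t1:
        return gap_t1, gap_t2, "maintained (minimal desired outcome)"
    return gap_t1, gap_t2, "widening (more potent intervention needed)"

# Hypothetical fall and winter scores
print(gap_analysis(student=(35, 50), peers=(60, 72)))
# (25, 22, 'closing (goal of the intervention)')
```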

83

84 Gap Analysis The gap was maintained (as shown on the previous slide) We would prefer to see the gap decrease (as shown on the next slide) We need a more potent intervention – More time – A different intervention

85

86 Rate of Improvement Relative to Criterion Focus on decreasing the gap between a student's current performance and a specific criterion – Example: a cut score that predicts the student will meet AYP This criterion may be higher than average peer performance in low-functioning schools This criterion may be lower than average peer performance in high-functioning schools

87

88 Evaluating Intervention Outcomes Comparing Slopes

89 How long must an intervention be implemented before calling it quits? Whatever the manual says 10-15 data points The quarter system? Do not stop an intervention until a pre-specified date based on one of the above has been reached! – Stopping early violates the treatment integrity of the scientifically based/empirically supported intervention being implemented

90 Slope Rules (“Changing Interventions”) Change means a new or substantially intensified intervention Do not make any changes without first comparing the slope (rate of improvement, ROI) of the target student(s) to the slope of the average peer or the criterion Three possible slope decision rules …

91 Slope Comparison Decision Rule #1 If the slope of the trend line is flatter than the slope of the aim/goal line (as shown on next slide), then a change should be made – Intensify the intervention or – Start a new intervention based on assessment data

92

93 Slope Comparison Decision Rule #2 If the slope of the trend line is steeper than the slope of the aim/goal line (as shown on next slide), then a change in intensity can be made – Decrease the frequency of the current intervention per week, or – Decrease the duration of the current intervention per week, or – Fade out the intervention, but do not stop it altogether!

94

95 Slope Comparison Decision Rule #3 If the slope of the trend line is similar to the slope of the aim/goal line (as shown on next slide), then a change should be made – Intensify the intervention, or – Start a new intervention based on assessment data The intervention did not close the gap (and was therefore ineffective) The student was unresponsive to the intervention
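The three slope rules reduce to fitting a trend line to the progress-monitoring scores and comparing its slope to the aim line's. A sketch (Python 3.10+ for statistics.linear_regression); the tolerance for calling two slopes "similar" is an assumed threshold a team would set, not a value from this presentation:

```python
from statistics import linear_regression  # Python 3.10+

def slope_decision(weeks, scores, aim_slope, tolerance=0.15):
    """Apply the three slope-comparison rules to weekly scores."""
    trend_slope, _intercept = linear_regression(weeks, scores)
    if abs(trend_slope - aim_slope) <= tolerance * aim_slope:
        return "Rule 3: similar slope -> gap not closing; intensify or change"
    if trend_slope < aim_slope:
        return "Rule 1: flatter than aim line -> intensify or change"
    return "Rule 2: steeper than aim line -> fade intensity; do not stop"

weeks = [1, 2, 3, 4, 5, 6]
scores = [30, 31, 33, 33, 35, 36]  # hypothetical weekly CBM scores
print(slope_decision(weeks, scores, aim_slope=1.5))  # Rule 1 here
```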

96

97 Monitoring Progress Along the Way Three-Point Decision Rules: Adjustments

98 Three-Point Decision Rules (Adjusting) Adjust does not mean change – Adjust: accommodation (slight change to the current intervention) – Change: modification (new intervention) Do not make any adjustments without having three consecutive data points above or below the goal/aim line. Three possible three-point decision rules …

99 Three Data-Point Decision Rule #1 If you have three data points below the aim/goal line (as shown on next slide), then you can do something different – Accommodations only – An accommodation must be left in place for three consecutive data points (above or below the line) before removing or adding additional accommodations

100

101 Three Data-Point Decision Rule #2 If you have three data points above the aim/goal line (as shown on next slide), then you can do something different – Accommodations only – An accommodation must be left in place for three consecutive data points (above or below the line) before removing or adding other accommodations – Keep in mind the goal is to facilitate growth. If you are above the line, you might consider doing nothing because you are on track to meet the criterion

102

103 Three Data-Point Decision Rule #3 If you do not have three consecutive data points above or below the aim/goal line (as shown on next slide), then do nothing different – Continue the intervention according to protocol – Changing something here would violate intervention integrity
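The three-point check compares each of the last three scores to the aim line's expected value for the same week. A sketch with invented numbers:

```python
def three_point_rule(scores, aimline):
    """scores, aimline: the last three observed vs. expected values."""
    if all(s < a for s, a in zip(scores, aimline)):
        return "Rule 1: three below -> an accommodation may be made"
    if all(s > a for s, a in zip(scores, aimline)):
        return "Rule 2: three above -> adjust, or do nothing (on track)"
    return "Rule 3: neither -> change nothing; continue per protocol"

# Hypothetical last three weekly scores vs. aim-line expectations
print(three_point_rule(scores=[40, 41, 43], aimline=[44, 45, 47]))
# Rule 1: three below -> an accommodation may be made
```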

104

105 HAWK Report (Helping A Winning Kid)
Date _________ Teacher _________ Student _________
Rating scale: 0 = No, 1 = Good, 2 = Excellent
Expectations: Be Safe (keep hands, feet, and objects to self) – Be Respectful (use kind words and actions; follow directions) – Be Your Personal Best (working in class)
Rating periods, each scored 0 1 2 with teacher initials: Class, Recess, Class, Lunch, Class, Recess, Class
Total Points = _____ Points Possible = 50 Today _____% Goal _____%
Parent’s signature _________ Comments:

106 Monitoring Behavior with a Check-In/Check-Out System

107 Analyzing Data from a Check-In/Check-Out System
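Analyzing check-in/check-out data is mostly percentage bookkeeping. A sketch using the HAWK card's 50 possible points; the daily totals and the 80% goal are assumed examples:

```python
# Hypothetical daily check-in/check-out (CICO) point totals for one student
daily_points = {"Mon": 42, "Tue": 38, "Wed": 45, "Thu": 40, "Fri": 47}
POINTS_POSSIBLE = 50   # matches the HAWK report card
GOAL = 0.80            # assumed goal percentage

for day, pts in daily_points.items():
    pct = pts / POINTS_POSSIBLE
    print(f"{day}: {pct:.0%} ({'met goal' if pct >= GOAL else 'below goal'})")
```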

108 Evaluating the RTI Model Both formative and summative evaluation should be conducted – Annually for formative evaluation – Every three to five years for summative evaluation Process variables – Self-assessment – External assessment – Administrative feedback – Parent satisfaction Outcome Variables – High-stakes test scores, attendance, ODR – Percentage of students receiving services at each tier – Disaggregated data are important to AYP

109 Review of Important Points Progress monitoring is an essential component of RTI – It is how you evaluate the effectiveness of the intervention and determine a student's response to intervention Rate of improvement (ROI) – Can be computed relative to peers or to a specific criterion Data-based decision making – Three data points are required before deciding whether to adjust an intervention (i.e., make a small accommodation) – At least 10 to 15 data points are often suggested as a minimum for decisions about making larger modifications

110 Review of Important Points Daily Behavior Report Cards – Typically used at Tier II – It is ideal to have the daily report card contain items that reflect established schoolwide expectations. Program Evaluation – Evaluated by team and by external observer – Evaluate process variables and outcome variables – Feedback should be provided to teams Parent/Guardian Involvement and Satisfaction – Often can be gathered in a questionnaire at the end of problem-solving team meetings and/or parent-teacher conferences

111 Questions? Ben Ditkowsky Ben@MeasuredEffects.com http://measuredeffects.com Gary Cates garycates@garycates.net http://www.garycates.net

112 Questions

