A Closer Look at Computer Adaptive Tests (CAT) and Curriculum-Based Measurement (CBM): Making RTI Progress Monitoring More Manageable and Effective
RTI Self-Assessment at School Level
RTI and Assessment Components: Universal Screening, Progress Monitoring
RTI and Curriculum-Based Measurement (CBM)
RTI and Computer Adaptive Testing (CAT)
Some case examples from CAT
Complete self-assessment at school level
Report out group readiness
Next steps to implementation?
RTI aligns with the school improvement process. RTI is:
A dramatic redesign of general and special education
A comprehensive service delivery system that requires significant changes in how a school serves all students
(NASDSE, 2006)
1,390 respondents (K-12 administrators) to survey (margin of error 3-4% at 95% confidence interval)
94% of districts are in some stage of implementing RTI, up from 60% in 2008 and 44% in 2007
Only 24% of districts have reached full implementation
Primary implementation is at the elementary level, with reading leading the way
www.spectrumk12.com
www.spectrumk12.com
Tier 1: Benchmark and school-wide interventions for students on target and all students
Tier 2: Strategic and targeted interventions for students at risk
Tier 3: Intensive interventions with intensive progress monitoring
Supporting components: preparation and training, determination of eligibility, administrative supports, parental involvement, universal screening, data analysis, collaboration with the RtI process
Universal Screening
Progress Monitoring
Key elements of scientifically based core programs include explicit and systematic instruction in the following:
Phonological Awareness
Phonics
Fluency
Vocabulary
Comprehension
(National Reading Panel, 2000)
Key Ideas and Details
Craft and Structure
Integration of Knowledge and Ideas
Range of Reading and Text Complexity
Content Standards:
Numbers and Operations
Measurement
Geometry
Algebraic Concepts
Data Analysis and Probability
Process Standards:
Problem Solving
Reasoning and Proof
Communication
Connections
Representations
(NCTM: National Council of Teachers of Mathematics)
The Five Strands of Mathematical Proficiency:
Conceptual Understanding
Procedural Fluency
Strategic Competence
Adaptive Reasoning
Productive Disposition
(NCTM: National Council of Teachers of Mathematics)
Operations and Algebraic Thinking
Numbers and Operations in Base Ten
Numbers and Operations – Fractions
Measurement and Data
Geometry
Mathematical Practices
Wisconsin Balanced Assessment Recommendations within RTI
Formative Assessment
A planned process
Used to adjust ongoing teaching and learning to improve students' achievement of intended instructional outcomes
Classroom-based, formal and informal measures
Diagnostic: ascertains, prior to and during instruction, each student's strengths, weaknesses, knowledge, and skills to inform instruction
Benchmark Assessment
Provides feedback to both the teacher and the student about how the student is progressing toward demonstrating proficiency on grade-level standards.
Summative Assessment
Seeks to make an overall judgment of progress made at the end of a defined period of instruction
Often used for grading, accountability, and/or research and evaluation
What is Universal Screening?
Administered to all students at all levels, K-12
Universal screening is a process that includes assessments, but also record review and historical information
A brief measure
Used primarily to determine who might be at risk
Some screeners can do more
Universal screening data are typically collected in the fall, winter, and spring. Key questions:
How is the group doing as a whole?
Who individually needs intervention beyond core instruction?
Some screeners can also give information about how to focus instruction.
National RTI Center Tools Chart
Two types of measures:
Curriculum-Based Measurement: benchmark, summative
Computer Adaptive Tests: benchmark, formative, summative
CBM is designed as an INDEX of the overall outcomes of academic skills in a domain. CBM is a General Outcomes Measure: it tells you HOW a student is doing OVERALL, not specifically what skills they have and don't have (it is not formative or diagnostic).
A General Outcomes Measure of a company's success: what is the one number that tells the CEO and stockholders how the company is doing?
The medical profession measures height, weight, temperature, and/or blood pressure. Companies report earnings per share. Wall Street measures the Dow Jones Industrial Average. The general outcomes approach for reading measures Oral Reading Fluency.
Standardized format for presentation
Material chosen is controlled for grade-level difficulty
Material presented as brief, timed probes
Rate of performance used as the metric
Results provide an index of student progress in instructional materials over time
Indexes growth toward long-term objectives
Measures are not designed to be formative or diagnostic
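The rate metric for reading probes is conventionally words correct per minute (wcpm) from a brief timed probe. A minimal sketch of that scoring arithmetic (the function name is my own, not from any CBM tool):

```python
def wcpm(words_attempted, errors, seconds=60):
    """Words correct per minute from a timed oral reading probe.

    Standard CBM-style scoring: words read minus errors, scaled to a
    one-minute rate. A typical R-CBM probe runs exactly 60 seconds.
    """
    return (words_attempted - errors) * 60 / seconds
```

When three probes are given, the median of the three wcpm scores is typically taken as the student's score for that occasion.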
CBM can be used in a formative way through error analysis, but that was not its design.
Overall Reading Performance = Oral Reading Fluency (primary measure)
Early Literacy Measures = Phonics/Alphabetic Principles
Math = Computational objectives
Math = Concepts/applications of mathematics
AIMSweb as an example:
Early Literacy: Phoneme Segmentation Fluency, Initial Sound Fluency, Nonsense Word Fluency, Letter Identification Fluency
Reading: Oral Reading Fluency, Maze, Retell Fluency
M-COMP = Computation Skills: assesses many skills across the grade, samples the skills expected to be acquired, grade-based assessment, reflects performance across time
M-CAP = Concepts/Applications Skills
Grade 3 MCOMP Example
Grade 5 MCOMP Example
Example of MCAP – Grade 3
Example of MCAP – Grade 5
Administration times:
M-COMP (group administered): all grades, 8 min
M-CAP (group administered): grades 2-6, 8 min; grades 7-8, 10 min
Reading measures:
R-CBM (individually administered): 1 min each × 3
Maze (individual or group administered): 1 min each × 3
Retell Fluency (individually administered): 1 min each × 3
Instructional Recommendations
Link to Lexile Level and Instructional Level Book Recommendations (Gr 3, Lawnton – Scores & Percentiles)
Prediction to state test also available
Links to Common Core also reported
At each grade, one identifies the distribution of students at each level of risk, as defined by the user
Data are used by the data team to identify students in need of supplemental instruction
Data reflect change in GROUPS over time
Show data for school (use R-CBM)
Have groups interpret the outcomes
Use data from CD as example: extract grade 2 and 3 data, winter only
Have the groups identify goals for winter, then show the winter-to-spring data and have groups draw conclusions about the data
Change over time is interpreted differently for reading and math
Change from the end of one year to the start of the next (summer decline?)
Implications for instruction?
Within- and across-grade growth is evident for reading (R-CBM) but not math
Across-grade growth in reading shows stepwise improvements after the "summer decline"
In math, within-year change over the year can be very small
Across-grade growth in math is not possible to determine from math CBM; each grade is not necessarily higher scoring than the previous grade
Interpretation within grade rather than across grade is stronger
Why? Due to the nature of within-grade measures: math measures are more specific skills probes than general outcome measures
Based on the IRT (Item Response Theory) method of test construction
Adjusts the items administered based on student responses and the difficulty of items
Tests have huge item banks
Items are not timed; scoring is based on accuracy of response
Careful calibration pinpoints skills acquired and skills in need of teaching in a skill sequence
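To make the adaptive mechanism concrete, here is a toy sketch of a CAT loop under the Rasch (1PL) IRT model: each item is chosen to match the current ability estimate, and the estimate is nudged after each response. This illustrates the general idea only; the names and the fixed-step update are my own simplifications, not the algorithm of any particular product (operational CATs use maximum-likelihood or Bayesian ability estimation).

```python
import math

def rasch_p(theta, b):
    """Rasch (1PL) model: probability that a student with ability theta
    answers an item of difficulty b correctly. Item information peaks
    when theta is near b, which motivates the selection rule below."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, used):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate -- where the item is most informative."""
    remaining = [i for i in range(len(difficulties)) if i not in used]
    return min(remaining, key=lambda i: abs(difficulties[i] - theta))

def run_cat(difficulties, answer, n_items=8, step=0.6):
    """Administer n_items adaptively: nudge theta up after a correct
    response, down after an error, with a shrinking step size."""
    theta, used = 0.0, set()
    for _ in range(n_items):
        i = next_item(theta, difficulties, used)
        used.add(i)
        theta += step if answer(i) else -step
        step *= 0.85  # smaller adjustments as the estimate stabilizes
    return theta
```

For example, a deterministic simulated student who answers correctly whenever the item difficulty is at or below 1.0 ends up with an ability estimate near 1.0 after eight items, even though the loop started from 0.0.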
Entirely computer administered
15-25 minutes per administration
Skills focused within domains
Not all students take the same items; it depends on which items are answered correctly and incorrectly
The Scaled Score is the KEY metric
Provides a student's relative standing to peers on a national distribution
Provides the student's goals for growth
Provides an indication of a group's performance (grade, school, district) relative to what is expected nationally
Example for today: STAR Assessment (Enterprise) from Renaissance Learning
Other similar measures exist (see NCRTI charts): Study Island, SRI, MAP
STAR Early Literacy (pre-K – 3)
STAR Reading (Gr 1 – 12)
STAR Math (Gr 1 – 12)
A metric that places the student on a distribution from K through grade 12 (weight analogy)
STAR Scaled Score ranges: Early Literacy (PreK – 3), 300 – 900; Reading (K-12), 0 to 1400; Math (1 – 12), 0 to 1400
Note the important difference in interpretation compared to CBM (AIMSweb) measures across grades and time
Show use of STAR as universal screening in math (Exercise #2)
Use Lehighton data as example across the year
Have audience draw conclusions from the data
Gr 2 – 3 data, fall: draw conclusions about outcomes
STAR Math Fall Screening Report
STAR Math Winter Screening Report
Students in need of tiered instruction are monitored on a frequent basis
Frequency of monitoring can vary, but once every two weeks is recommended at minimum
Monitor students toward grade-level goals
Reading: R-CBM (Oral Reading Fluency), after mid-year grade 1
Math: M-COMP & M-CAP (starting second grade)
Same measures used for progress monitoring
Goals set for expected rate of change over the year
Measures are used to determine outcomes of interventions
General Outcomes Measures for overall progress
Short-term measurement might also be needed for skill development
All measures have error, so change in performance over time must be interpreted by considering that error
If the change from one point to the next is within error, no big deal
If the change from one point to the next is larger than error, check whether the change is "real" or "accidental":
An easier or harder passage than the one before
The student was physically ill
The student just clicked away on the computer
CBM ORF SEM = 10 wcpm (range 5-15)
Christ, T. J., & Silberglitt, B. (2007). School Psychology Review, 36(1), 130-146.
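One way to apply the error rule: treat a score change as "real" only when it exceeds the confidence band implied by the standard error of measurement (SEM). A hedged sketch (the function name and 95% default are mine; the 10 wcpm SEM figure is the CBM ORF value cited by Christ & Silberglitt, 2007):

```python
import math

def change_exceeds_error(score_a, score_b, sem=10.0, z=1.96):
    """Return True when the difference between two scores is larger
    than the measurement error band, i.e., unlikely to be noise.

    The standard error of a DIFFERENCE between two scores that each
    carry the same SEM is sem * sqrt(2); z=1.96 gives ~95% confidence.
    """
    critical = z * sem * math.sqrt(2)
    return abs(score_b - score_a) > critical
```

With the default SEM of 10 wcpm, a gain from 62 to 70 wcpm (8 wcpm) stays within the error band, while a gain from 62 to 95 wcpm (33 wcpm) exceeds it and warrants a closer look at the possible causes listed above.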
Use of Ordinary Least Squares (OLS) regression is the only valid trend estimator
Number of weeks of monitoring is key and the best predictor of outcomes
Recommendation is 10-14 weeks with a good passage set
Increasing the density of data collection (i.e., more probes in a shorter amount of time) does not improve prediction
Use more data per assessment (i.e., the median of 3 passages) rather than a single passage
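The OLS trend estimate recommended above amounts to fitting a straight line through the weekly scores; the slope is the growth rate, e.g., words correct per minute gained per week. A self-contained sketch (names are my own):

```python
def ols_slope(weeks, scores):
    """Ordinary least squares slope of scores against weeks: the
    estimated rate of growth (e.g., wcpm gained per week)."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den
```

Per the recommendations above, each weekly data point would itself be the median of three passages, and 10-14 weeks of points should be collected before trusting the slope.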
Measures are generally short and efficient (1 minute for reading, individually administered; 8 minutes for math, which can be group administered)
Reading is a General Outcome Measure that cuts across reading skills, with strong correlations to state assessments
Math measures of both computation and concepts offer a rich array of assessments across domains of skills
Measures remain sensitive to growth within grades across the year
Measures are not designed to be formative (diagnostic), though some math measures can be (Yearly Progress Pro)
Additional assessment is needed for purposes of formative assessment and instructional linkages
Math measures do not always show the same growth patterns across grades
Math measures cannot be easily used across grades
Links to state and Common Core standards are not always clear; measures are designed to be broad growth indicators, not specific skills assessments
The same measure can be used as a progress monitoring device
Frequency can be as often as once per week
The scaled score metric is reflected in the data
All CAT measures offer instructional links
Tied to skill sequences and development
Can be used to assist teachers in identifying instructional targets
Example report from STAR Reading (Enterprise)
Example reports from STAR Math Learning Progressions
Diagnostic Report – STAR Ex
Instructional Planning Report – Emily
Progress Monitoring Report – Emily
Annual Report – Emily
The question of growth is critical
How much did the student grow this year?
How does the growth made by this student compare to other students who started at the same point?
Student Growth Percentiles: an innovative metric
Tells you whether the GROWTH made by the student was as much as, more than, or less than expected
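The intuition behind a Student Growth Percentile can be sketched as the percentile rank of a student's gain among peers who started at the same score. This toy version is my own simplification; operational SGPs are computed with quantile regression over large datasets.

```python
def growth_percentile(student_gain, peer_gains):
    """Percentile rank of one student's score gain within a group of
    peers who began at the same starting score (midrank convention
    for ties)."""
    below = sum(1 for g in peer_gains if g < student_gain)
    ties = sum(1 for g in peer_gains if g == student_gain)
    return round(100 * (below + 0.5 * ties) / len(peer_gains))
```

A result near 50 means typical growth for students who started where this student did; well above 50 means more growth than expected, well below 50 means less.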
Grade 3 Screening Report
Two students, both receiving intervention:
TB (On Watch)
MR (Needs Intervention)
TB – Diagnostic Report (note the inclusion of scores directing you to specific levels and texts for reading)
TB – Instructional Planning Report
TB – Progress Monitoring Report
TB – PSSA Estimate
TB – Common Core Estimate
TB – Core Progress Learning Progression: Author's Craft, Grade 3 (demo from logged-in RL website)
MR – Instructional Planning Report
MR – Progress Monitoring Report
MR – PSSA Estimate
MR – Common Core Estimate
MR – Core Progress Learning Progression: Author's Craft, Grade 3 (demo from logged-in RL website)
Measures are efficient: they are administered by computer (15-20 minutes) and can be given to large groups at the same time
Reading & Math serve as General Outcome Measures (looking at scaled scores and movement toward goals)
Reading & Math serve as indicators of instructional foci, with direct links to skills in need of instruction
Reading & Math measures assess the domains consistent with Common Core and state standards, with strong correlations to state assessments
Reading & Math measures remain sensitive to growth within AND across grades across the year
Measures can show more bounce in the data when students are not carefully monitored while taking the tests on computers (pay attention to SEM rules)
Measures are not direct measures of fluency
Measures may be somewhat limited in sensitivity to small increments of growth over short periods of time (i.e., 4-6 weeks)
Use of STAR (or any CAT) requires a full understanding of the nature of CAT
CBM and CAT are both options for universal screening and progress monitoring
Both provide summative and benchmark assessment
CBM is not designed for formative analysis
CAT adds dimensions of formative assessment and links to instructional planning
Lots of options: both CBM and CAT
Dr. Edward S. Shapiro ed.shapiro@lehigh.edu