Partnership for Accessible Reading Assessment (PARA)
Principal Investigators: Martha Thurlow & Deborah Dillon

Introduction
PARA is one of the National Accessible Reading Assessment Projects (NARAP). NARAP goals:
1. Develop a definition of reading proficiency.
2. Research the assessment of reading proficiency.
3. Develop research-based principles and guidelines for making large-scale assessments more accessible for students who have disabilities that affect reading.
4. Develop and field-trial a prototype reading assessment.
Focus of PARA: all disabilities that affect reading, particularly learning disabilities, speech or language impairments, mental retardation, and deafness or hard of hearing.

Assumptions & Research Questions
Assumptions: We do not yet know everything that goes into an accessible reading assessment. Preliminary research must inform the design of accessible assessments, and both preliminary and experimental research will inform the development of principles and guidelines for future assessments.
Research Questions:
1. What characteristics of current assessment practices hinder accessibility?
2. What characteristics of students require more accessible assessments?
3. What characteristics would an accessible assessment have?

Acknowledgments
PARA is a collaboration between the University of Minnesota's National Center on Educational Outcomes and the Department of Curriculum & Instruction (Literacy Program); CRESST; the University of California, Davis; and Westat. University of Minnesota researchers: O'Brien, Galda, Moen, Liu, Scharber, Kelly, Lekwa, Scullin, Kato, and Cuthbert. University of California, Davis researcher: Abedi. CRESST researchers: Herman, Kao, and Leon. This research was funded by a grant from the U.S. Department of Education, Institute of Education Sciences (H324F040002). Opinions expressed are those of the project and not of the funding agency.

Motivation Study
Purpose: To examine whether improving the motivational characteristics of a large-scale reading assessment increases its accessibility for students with disabilities and, through their increased engagement, yields a more valid assessment of these students' reading proficiency.
Research Questions:
1. Does the option of choice in the selection of reading comprehension passages produce significantly higher measured reading comprehension for all students?
2. Is there a significant difference in reading scores between students with disabilities and general education students on large-scale reading assessments?
3. Is there a significant difference in student performance by text type (literary vs. expository) on large-scale reading assessments?
4. Is there an interaction effect between choice, type of text, and type of student?
5. Is there a correlation between students' general motivation to read and their performance on a large-scale reading assessment?
Design: The dependent measure is comprehension performance; the factors are choice condition (choice/no choice), disability status (youth with disabilities/youth without disabilities), and text type (literary fiction/informational expository). The design is a split-plot with two between-subjects factors (A = passage choice and B = disability status), one within-subjects factor (C = text type), one blocking variable (S = subject), and one covariate at the between-subjects level (X = motivation as assessed by the MRQ, the Motivation to Read Questionnaire); A, B, C, and X are fixed effects, and S is a random effect.
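As an illustration only (the poster does not specify its analysis software or model code), a split-plot design of this kind could be expressed as a linear mixed model: choice, disability status, text type, and the MRQ motivation covariate as fixed effects, with a per-student random intercept serving as the blocking variable. The sketch below is a minimal example assuming a hypothetical long-format file para_scores.csv with columns score, choice, disability, text_type, motivation, and student_id.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student x text type.
# Assumed columns: score, choice, disability, text_type, motivation, student_id.
df = pd.read_csv("para_scores.csv")

# Mixed model mirroring the split-plot design described above:
# fixed effects for passage choice (A), disability status (B), text type (C),
# their interactions, and the motivation covariate (X);
# a random intercept per student (S) plays the role of the blocking variable.
model = smf.mixedlm(
    "score ~ choice * disability * text_type + motivation",
    data=df,
    groups=df["student_id"],
)
result = model.fit()
print(result.summary())

The project may well have analyzed the data with a split-plot ANCOVA rather than a mixed model; the two are equivalent ways of stating the same design, and the formula above simply makes the fixed and random terms explicit.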
Differential Item Functioning (DIF) & Differential Distractor Functioning (DDF) Analysis Study
Analysis 1 Purpose: To examine differences between students with and without disabilities in grades 3 and 9 in their responses to items and distractors, to see whether items functioned differently and whether there was a differential pattern of selecting distractors.
[Figure: Response curves for an item showing DIF and DDF. D = correct response; A, B, C = distractors. Black curves (A0, B0, ...) = students without disabilities; red curves (A1, B1, ...) = students with learning disabilities. Vertical axis = probability of choosing a response; horizontal axis = ability. A plotting sketch appears at the end of this summary.]
Analysis 1 Findings: Several items showed DIF and DDF, with more for grade 9 students than for grade 3 students, and more for items at the end of the test.
Analysis 1 Issues: The findings are limited by the following: the test was a norm-referenced test (NRT); there was no access to information on disability category or accommodations; and there was concern about the high omission rate.
Analysis 2 Purpose: To examine DIF, DDF, and differential missing response functioning (DMRF) for students with speech, learning, and emotional disabilities in grades 3 and 5 on a state criterion-referenced test of reading.
Analysis 2 Findings: Several items showed DIF and DDF simultaneously. The three disability categories (SP, LD, and EBD) showed different DIF/DDF and DMRF. Whether item location has a systematic effect on DIF/DDF and DMRF remains unclear.

Segmenting Study
Purpose: To examine the effects of segmenting reading passages on the performance of students with disabilities, and to compare this effect with the effect on the performance of students without disabilities.
Design: Grade 8 students with and without disabilities are randomly assigned to either Version A (standard) or Version B (chunked/segmented) of a three-passage multiple-choice test. All students are given background questions, feedback questions related to fatigue and mood, and a student motivation scale.
Preliminary Findings: Segmenting substantially improved the quality of the assessment by improving the reliability of measurement; however, segmenting produced no significant improvement in the performance of students with disabilities.

Student Characteristics Study
Purpose: (1) To identify students whose reading skills are not accurately measured by state reading assessments, as judged by teachers and verified by brief interviews and examinations; and (2) to check the prevalence of less accurately measured students (LAMS) with various characteristics.
Possible sources of measurement inaccuracy:
1. Fluency limitations obscure comprehension skills.
2. Comprehension limitations obscure other reading skills.
3. The student has strengths outside of what most reading tests cover.
4. The student responds poorly to standardized testing conditions.
Subjects: 280 students who are fluent in English, 140 fourth graders and 140 eighth graders, including targeted samples of students representing the range of disability groups that are the focus of the PARA grant work.
Preliminary data are based on survey responses obtained from 13 teachers. Teachers identified 47 students (LAMS); some students were classified under multiple categories.
Preliminary Results:
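As a purely illustrative sketch (not from the poster), empirical response curves of the kind described in the DIF/DDF figure above can be approximated by binning students on an ability proxy such as total test score and plotting, for each group, the proportion choosing each response option. The file name item_responses.csv, the column names, and the use of deciles below are all assumptions.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data for a single item: one row per student.
# Assumed columns: total_score (ability proxy), response ('A'-'D'),
# group (0 = students without disabilities, 1 = students with LD).
df = pd.read_csv("item_responses.csv")

# Bin students into ability deciles based on total score.
df["ability_bin"] = pd.qcut(df["total_score"], q=10, labels=False, duplicates="drop")

fig, ax = plt.subplots()
for group, color in [(0, "black"), (1, "red")]:
    sub = df[df["group"] == group]
    # Proportion of students in each ability bin choosing each option.
    props = (
        sub.groupby("ability_bin")["response"]
        .value_counts(normalize=True)
        .unstack(fill_value=0)
    )
    for option in ["A", "B", "C", "D"]:
        if option in props.columns:
            # Labels follow the figure's convention: A0, B0, ... and A1, B1, ...
            ax.plot(props.index, props[option], color=color, label=f"{option}{group}")

ax.set_xlabel("Ability (total-score decile)")
ax.set_ylabel("Probability of choosing response")
ax.legend(ncol=2, fontsize="small")
plt.show()

Curves that diverge between the black and red groups on the correct option suggest DIF; divergence on a distractor suggests DDF. The project's actual analyses may have used model-based DIF/DDF methods rather than this descriptive plot.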