
1 Test Performance Can Tell Us About Problem Solving: Implications for Building Equitable Assessments
Discussion Comments, June 22, 2016. Presented at: 2016 National Conference on Student Assessment, Philadelphia, PA. HumRRO Presenter: Lauress Wise

2 Assembling Data on Response Processes
Response process models tell us how students get to an answer, not just whether the answer is correct.
Digitally Based Assessment (DBA) can generate a large volume of ancillary data, recording mouse clicks and movements and logging keystrokes.
There are two general approaches to analyzing and using such data:
- Blind empirical approaches (e.g., data mining)
- Theory-driven approaches
The presentations in this session describe very successful analyses of ancillary DBA data, proceeding from good theories of what might be important.

3 Response Time Information
Van der Linden¹ has developed models for the analysis of response times to:
- Measure speed, as separate from (but correlated with) ability
  - Particularly valuable for constructs such as math fluency
  - Correlation with the target construct can be used to increase precision
- Sometimes, however, speed is a nuisance factor
  - As in the example of keyboarding-skill differences when assessing Grade 4 writing skill, although it might still be useful to assess processing speed separately
¹ Van der Linden, W. J., & Fox, J.-P. (in press). Joint hierarchical modeling of responses and response time. In W. J. van der Linden (Ed.), Handbook of Item Response Theory, Volume One: Models. Boca Raton, FL: Chapman & Hall/CRC Statistics in the Social and Behavioral Sciences.
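As a concrete reference point, the hierarchical framework cited above pairs an IRT model for the responses with a lognormal model for the response times. One common form of the response-time part (a sketch of the standard notation, not taken from these slides) is:

```latex
\ln T_{ij} = \beta_i - \tau_j + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim N\!\left(0, \alpha_i^{-2}\right)
```

where T_ij is person j's time on item i, τ_j is the person's speed, β_i is the item's time intensity, and α_i is a time-discrimination parameter. A second, population level of the model then allows speed τ_j and ability θ_j to correlate, which is what lets response times sharpen ability estimates when the two are related.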

4 Modeling Writing Processes
Based on a theory of strong writers: fluency, natural pauses, editing and revision.
Builds on detailed work described in Zhang and Deane (downloadable from the internet).
Seeks to understand gender differences in essay scores in terms of indicators of strong writing processes:
- Females wrote longer essays and got higher scores
- Each of the four process indicators was correlated with essay scores in the expected direction (negative for dysfluency)
- Gender differences in the first three indicators (dysfluency, macro editing, and local editing) were significant and consistent with the score differences
- The gender difference in planning was marginal and in the opposite direction
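To make the idea of process indicators concrete, here is a minimal sketch of how pause- and editing-based indicators could be computed from a keystroke log. The event format, thresholds, and indicator definitions are illustrative assumptions, not the actual indicators from the Zhang and Deane work.

```python
# Hypothetical sketch: deriving simple writing-process indicators from a
# keystroke log. Event fields, thresholds, and formulas are assumptions
# for illustration only.
from dataclasses import dataclass

@dataclass
class KeyEvent:
    time: float      # seconds since the start of the essay session
    action: str      # "insert" or "delete"
    position: int    # cursor position in the text at the event

def writing_indicators(events, pause_threshold=2.0, jump_threshold=20):
    """Return crude per-essay process indicators from a keystroke log."""
    dysfluency = 0    # long pauses during text production
    local_edits = 0   # deletions near the current point of inscription
    macro_edits = 0   # deletions after a large jump elsewhere in the text
    last = None
    for ev in events:
        if last is not None and ev.time - last.time > pause_threshold:
            dysfluency += 1
        if ev.action == "delete":
            if last is not None and abs(ev.position - last.position) > jump_threshold:
                macro_edits += 1
            else:
                local_edits += 1
        last = ev
    return {"dysfluency": dysfluency,
            "local_edits": local_edits,
            "macro_edits": macro_edits}
```

Counts like these would then be aggregated per essay and correlated with rubric scores, which is the general shape of the analysis the slide describes.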

5 Modeling Writing Processes (Continued)
How do the indicators relate to the scoring-rubric levels?
- The Dysfluency indicator differentiated all score levels, equally for both gender groups
- The Macro-Editing indicator differentiated rubric scores 3, 4, and 5, with bigger gender gaps at levels 2 and 4
- The Local-Editing indicator differentiated levels 1 and 2 and levels 4 and 5 for males, but only levels 3 and 4 for females; otherwise it had a relatively flat relationship to the overall score
- The Planning indicator differentiated levels 1 and 2 for both groups, and levels 3 and 4 for males only

6 Modeling Inquiry Processes
Some key findings:
- Students who tried out multiple options took longer and gave better responses; the more options tried, the better
- No real gender differences
- No real SES (free-lunch) differences when ability is controlled
- Asian students explored more options, even controlling for ability
Implications for instruction:
- Encourage students to explore and evaluate alternative options
- Understand why there are some group differences in this indicator
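An exploration indicator of the kind described above can be sketched very simply: count the distinct options a student tried in the task log and measure the time spent. The log format and field names below are assumptions for illustration, not the actual instrumentation behind these findings.

```python
# Hypothetical sketch: summarizing exploration behavior from a task log.
# Each action is a (timestamp_in_seconds, option_id) pair; the format is
# assumed for illustration.
def exploration_summary(actions):
    """Return the number of distinct options tried and total time on task."""
    if not actions:
        return {"options_tried": 0, "time_on_task": 0.0}
    options = {opt for _, opt in actions}
    times = [t for t, _ in actions]
    return {"options_tried": len(options),
            "time_on_task": max(times) - min(times)}
```

Indicators like these are what allow the group comparisons above: the option count and time on task can be regressed on response quality while controlling for ability.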

7 Next Steps
Further analyses to understand the indicators:
- Relationship to specific rubric levels
- Relationships to related measures (e.g., total score, reading score)
- Relationships to instructional components
Cognitive labs to understand and document the meaning of the process indicators.
Implications for instruction:
- For individuals
- For disadvantaged groups
