Closing the loop: Providing test developers with performance level descriptors so standard setters can do their job
Amanda A. Wolkowitz, Alpine Testing Solutions; James C. Impara, Psychometric Inquiries; Chad W. Buckendahl, Psychometric Consultant
What do standard setters do & how do they do it?
Standard setters recommend cut scores.
An early step in the process is defining what the "borderline" examinee is expected to be able to do in terms of the test content. Specifically, they examine the performance level descriptors (PLDs) and define the borderline examinee at each performance level.
Using modifications of the Angoff or Bookmark methods, they review test items and judge the difficulty of each item for examinees who are at the borderline of one or more performance categories.
Typically they estimate item difficulty one or more times (rounds), sometimes with item or other data provided after their first round of item difficulty estimation.
How are tests developed?
Item writers, typically content "experts," draft items that are responsive to the test specifications (or test blueprint).
The test blueprint may include a description of the various performance levels, but most often it does not.
The test blueprint virtually never provides a description of the "borderline" examinee at each performance level.
Example
Suppose you are a teacher writing an end-of-term test that has 12 questions, and you will give the grades A, B, or C to your students. Typically, you write your 12 questions and grade based on some arbitrary score scale. Suppose that instead of using the arbitrary score scale you define the skills and knowledge of students at each mark: you declare what a C student should know, what a B student should know, and what an A student should know.
Example – Continued
Moreover, knowing that each definition represents a range of skills, you also define the skills and knowledge of a borderline student at each level. Now you write four questions associated with each performance level: some questions/items at the borderline and others above it. Grading is easy: if a student answers four or fewer questions correctly, he/she gets a C; a score between 5 and 8 earns a B; and 9 or more earns an A. Does it matter which questions the student answers correctly? Maybe, but that's another paper.
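The grading rule in the example amounts to a simple cut-score lookup. A minimal sketch (the function name and structure are our own, not part of the original example):

```python
def assign_grade(raw_score: int) -> str:
    """Map a raw score on the 12-question test to a grade.

    Cut scores follow the example: 0-4 -> C, 5-8 -> B, 9-12 -> A.
    """
    if not 0 <= raw_score <= 12:
        raise ValueError("score must be between 0 and 12")
    if raw_score <= 4:
        return "C"
    if raw_score <= 8:
        return "B"
    return "A"

print(assign_grade(4))  # C
print(assign_grade(7))  # B
print(assign_grade(9))  # A
```

The point of the example is that the cut scores (4 and 8 here) follow directly from how the questions were written against the performance levels, rather than from an arbitrary scale.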
How does the example fit?
In the example, the teacher is both the test developer and the standard setter. In large-scale assessments, these tasks are done by different groups. If the test developers don't know the PLDs, they may make the job of the standard setting panelists very difficult. Let's see what happened.
The study design
This was not a designed study but an ad hoc one: we developed the research question and then looked for existing data that would provide some answers. Thus, there are some limitations. The first data collection was in 2009 and the second in 2013, both in the same southeastern state in the USA and both related to the same assessment.
The study design – 2
2009:
Performance level descriptors (PLDs) defined initially; borderline performance described for each PLD.
Standard setting done for alternative assessments (for students with severe cognitive disabilities) in:
English Language Arts (ELA), grades 4–8
Mathematics, grades 3–8
All tests had 15 items scored dichotomously (0 or 2 points for each item).
Four performance levels were defined, thus three cut scores.
There were separate panels for each content area.
The study design – 3
2013: standard setting was the same as in 2009 except:
PLDs developed in 2009 were examined and refined.
The original PLDs were known to the test developers and drove the development process.
Scoring was modified from dichotomous to three-point scoring for each item – partial credit was permitted, so item scores were 0, 1, or 2.
Slightly fewer panelists (17–20 per grade span in 2009; 14–15 in 2013).
Study design – 4
A final difference between the two standard setting activities was the method used.
2009 used the modified Angoff method described by Impara & Plake (1997), often characterized as the Yes/No method.
2013 used the extended Angoff method described by Hambleton & Plake (1995) and Plake & Hambleton (2001).
The reason for this difference was the change from dichotomous scoring to partial credit (three-point scoring).
Both methods rely on item-level judgments.
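Under both variants, a panelist's recommended cut score is the sum of item-level judgments. A minimal sketch of the two aggregation rules, assuming the deck's scoring (0/2 dichotomous items in 2009; 0/1/2 partial credit in 2013); the function names and the example judgments are our own:

```python
def yes_no_cut(yes_judgments, item_value=2):
    """Modified (Yes/No) Angoff: the panelist marks each item Yes (1) if the
    borderline examinee would answer it correctly. The cut score is the
    count of Yes judgments times the item's point value (items scored 0 or 2)."""
    return sum(yes_judgments) * item_value

def extended_cut(expected_scores):
    """Extended Angoff: the panelist estimates the expected item score
    (0, 1, or 2 with partial credit) for the borderline examinee.
    The cut score is the sum of those expected scores."""
    return sum(expected_scores)

# A 15-item test: a panelist says Yes to 8 items under Yes/No scoring...
print(yes_no_cut([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]))  # 16
# ...or expects a mix of full and partial credit under three-point scoring.
print(extended_cut([2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0]))  # 12
```

Both rules yield a cut score on the 0–30 point scale of the tests described above.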
PLDs
There were four performance levels: Achievement Level 1 (limited command), Achievement Level 2 (partial command), Achievement Level 3 (solid command), and Achievement Level 4 (superior command).
The PLDs further defined each level: Level 1 students would need academic support, Level 2 students would likely need academic support, Level 3 students would be prepared, and Level 4 students would be well prepared to be successful in further studies in that content area.
The PLDs also contained specific abilities that students at a given level could demonstrate.
Study Expectations
The principal research question was: Will the consistency of ratings at the end of Round 1 of the standard setting process increase? That is, will developing items with known PLDs help panelists be more consistent in their initial ratings and more congruent with the item p-values prior to any feedback?
Why?
Why is this an important question?
If panelists are more consistent in their Round 1 ratings, they may come to closure faster in subsequent rounds, perhaps reducing the number of rounds (sometimes three are used) and thus making the process more efficient.
Panelists often become frustrated if there are no, or too few, items at a performance level, which leads them to question the validity of the process.
How?
How will we know if there is greater consistency among panelists?
The distribution of students across levels will be consistent with expectations – most students will be classified at Levels 2 and 3.
There will be greater congruence between actual item difficulty and the panelists' estimates of item difficulty.
The correlation between actual item difficulty and panelists' item difficulty estimates will be higher.
The range of panelists' cut scores will be lower.
The percentage of panelists within one point of the recommended cut score at the end of Round 1 will be higher.
The standard deviation of the panelists' cut scores at each level will be lower.
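The panel-level consistency indices listed above (range, percent of panelists within one point of the Round 1 median, standard deviation, and the p-value correlation) can be sketched as follows. The data and function names are hypothetical, for illustration only:

```python
from statistics import median, pstdev

def consistency_indices(cut_scores):
    """Summarize the spread of one panel's Round 1 cut score ratings."""
    med = median(cut_scores)
    return {
        "range": max(cut_scores) - min(cut_scores),
        "pct_within_1_of_median":
            100 * sum(abs(c - med) <= 1 for c in cut_scores) / len(cut_scores),
        "std_dev": pstdev(cut_scores),
    }

def pearson_r(x, y):
    """Correlation between actual item p-values and mean panelist ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical Level 2 cut score ratings from a 15-person panel:
ratings = [5, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 10]
print(consistency_indices(ratings))
```

A smaller range, a higher percent near the median, a smaller standard deviation, and a higher correlation with actual p-values would each indicate a more consistent panel.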
Result – 1 Distribution of students across levels
Distribution of students - ELA
In virtually every grade in the 2009 standard setting, many students were assigned to Achievement Level 4, the highest level. In 2013, the distribution was much more appropriate, with most students assigned to Levels 2 and 3.
Distribution of students - Math
In 2009, several of the grades showed appropriate distributions, but many still had large numbers of students assigned to Levels 1 and 4. In 2013, relatively few students were assigned to Levels 1 and 4, and the preponderance of students were classified as Level 2 or 3.
Congruence of actual and panelists' item difficulty
It was expected that the actual item difficulty value for an item (i.e., the percent of students in the population who get the item correct) would be greater than or equal to the corresponding Level 2 cut score rating and less than the corresponding Level 4 cut score rating; hence, the actual p-value would fall between the Level 2 and Level 4 cut score ratings.
The exception: a relatively small number of items should have difficulties outside this range – those that virtually all examinees answer correctly (items targeted at Level 1) and those that virtually no one answers correctly (items targeted at Level 4).
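The congruence check described above amounts to testing whether each item's p-value lies between the panel's Level 2 and Level 4 ratings for that item. A hypothetical sketch (the numbers and function name are invented for illustration):

```python
def is_congruent(p_value, level2_rating, level4_rating):
    """An item is congruent when its actual difficulty (p-value) falls at
    or above the Level 2 borderline rating and below the Level 4 rating."""
    return level2_rating <= p_value < level4_rating

# Hypothetical per-item p-values and panel ratings on the same 0-1 scale:
items = [
    (0.70, 0.55, 0.85),  # congruent
    (0.95, 0.55, 0.85),  # too easy -- expected for a Level 1-targeted item
    (0.40, 0.55, 0.85),  # too hard -- expected for a Level 4-targeted item
]
flags = [is_congruent(p, l2, l4) for p, l2, l4 in items]
print(flags)  # [True, False, False]
```

Counting the non-congruent items, and checking whether they are the ones targeted at the extreme levels, gives the congruence summary reported below.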
Congruence of actual and panelists' item difficulty
Congruence - summary
Correlation Analysis
A correlation analysis compared the relationship between actual item difficulty values and the average item rating at each achievement level.
Expectation: a direct relationship between an item's difficulty value and its average item rating – as the item difficulty value increases (i.e., the item becomes easier), the greater the chance a borderline student will respond to the item correctly. This trend was expected for all three cut scores.
Correlation Analysis
Results – the reverse of expectations:
2009 item ratings generally had moderate to strong positive correlations with the corresponding item difficulty values, whereas 2013 ratings tended to have only moderate correlations at best.
The 2009 ratings correlated higher with the p-values than did the 2013 ratings.
Correlation Analysis
Why were the 2009 correlations higher?
One possible explanation: the 2009 panel only had to make Yes/No judgments, whereas the 2013 panel had to judge whether a student would score 0, 1, or 2 points on each item.
Another possible explanation: the items on the 2013 exams may have had more similar difficulty values around the intended PLDs than the 2009 items.
Also, it was learned that in 2013 few students were assigned a score of 0, resulting in a restriction of range in the p-values.
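Restriction of range attenuates correlations: the same linear relationship with the same noise yields a smaller r when only a narrow slice of the difficulty scale is observed. A small synthetic illustration (the data are invented purely for demonstration):

```python
def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Synthetic "difficulties" (x) and "ratings" (y = x plus fixed noise):
x = list(range(20))
noise = [(-2 if i % 2 == 0 else 2) for i in range(20)]
y = [xi + e for xi, e in zip(x, noise)]

full_r = pearson_r(x, y)
# Restrict to the middle of the scale, as when few students score 0:
mid = [(xi, yi) for xi, yi in zip(x, y) if 7 <= xi <= 12]
restricted_r = pearson_r([p[0] for p in mid], [p[1] for p in mid])
print(full_r > restricted_r)  # True
```

This is consistent with the third explanation above: even if 2013 panelists judged items just as well, a narrower spread of p-values would by itself depress the observed correlations.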
Internal Consistency – Range
Internal consistency was evaluated several ways.
First, by comparing the range of recommended cut scores following Round 1 for each level and panel. A smaller range would indicate that the given year's panel was more consistent in its ratings than the other year's panel.
Internal Consistency – Range: ELA
Internal Consistency – Range: Math
Internal Consistency – Proximity to the median
Internal consistency was evaluated several ways.
Second, by calculating the percent of panelists whose ratings were within one point (plus or minus) of their panel's recommended Round 1 median cut score. (All median cut scores ended up as possible scores, i.e., no median cut score ended in ".5".) If the panelists' cut scores were all relatively close together, most would fall close to the median. For example, the median Level 2 cut score recommendation for the "Math – 4" exam was 6 out of 30 points in 2009 and 7 out of 30 points in 2013.
Internal Consistency – Proximity to the median ELA
Internal Consistency – Proximity to the median Math
Internal Consistency – Standard Deviation
Internal consistency was evaluated several ways.
Third, by comparing the standard deviations of the panelists' ratings across years. If the panelists are more consistent, the standard deviations will be smaller.
Internal Consistency – Standard Deviation ELA
Internal Consistency – Standard Deviation Math
Recall this slide
How will we know if there is greater consistency among panelists?
The distribution of students across levels will be consistent with expectations – most students will be classified at Levels 2 and 3.
There will be greater congruence between actual item difficulty and the panelists' estimates of item difficulty.
The correlation between actual item difficulty and panelists' item difficulty estimates will be higher.
The range of panelists' cut scores will be lower.
The percentage of panelists within one point of the recommended cut score at the end of Round 1 will be higher.
The standard deviation of the panelists' cut scores at each level will be lower.
How did we do in terms of distribution of students?
Expected result: The distribution of students across levels will be consistent with expectations – most students classified at Levels 2 and 3.
Actual result: In both ELA and Math, the results were as expected in virtually every grade and performance level.
Thus, positive results.
How did we do in terms of congruence of item difficulty?
Expected result: There will be greater congruence between actual item difficulty and the panelists' estimates of item difficulty.
Actual result: There was greater congruence between actual item difficulties and panelists' estimates of item difficulty at all levels and grades. However, there were few cases in which the actual p-values fell outside the cut score boundaries, even though a small number of such cases was expected for items targeted at Levels 1 and 4.
Thus, somewhat positive results, but somewhat problematic.
How did we do in terms of correlation of actual and estimated item difficulty?
Expected result: The correlation between actual item difficulty and panelists' item difficulty estimates will be higher in 2013.
Actual result: The 2009 ratings correlated higher with the p-values than did the 2013 ratings.
Thus, negative results.
How did we do in terms of the ranges of panelists' cut scores?
Expected result: The range of panelists' cut scores will be lower in 2013.
Actual result: In the majority of grades and levels, the range of cut scores was lower in 2013, particularly at Levels 3 and 4.
Thus, mostly positive results.
How did we do in terms of the proximity of panelists' cut scores to the median?
Expected result: The percent of panelists who were within one point of the recommended cut score at the end of Round 1 will be higher in 2013.
Actual result: In the majority of comparisons, the percent of panelists' ratings within one point of the median was higher in 2013.
Thus, mostly positive results.
How did we do in terms of the standard deviations of panelists' cut scores?
Expected result: The standard deviation of the panelists' cut scores at each level will be lower in 2013.
Actual result: In the majority of comparisons, the 2013 panels had lower standard deviations than the 2009 panels.
Thus, mostly positive results.
Overall & Conclusion
Overall, the results supported providing test developers with the PLDs.
However, this study had many limitations, and more specifically designed research is needed.
If future studies support providing test developers with the PLDs, with item development explicitly targeted to those PLDs, the result could be a more efficient standard setting process and greater satisfaction among standard setting panelists.
Questions? Thank you for your attention. Are there any questions?