Session A: Psychometrics 101: The Foundations and Terminology of Quality Assessment Design. Date: May 14th from 3-5 PM
Session B: Psychometrics 101: Test Blueprints (Standards Alignment, Working Backwards, and Documentation). Date: May 16th from 3-5 PM
Session C: Psychometrics 101: Understanding and Decreasing Threats to Validity, AKA What’s the purpose of my assessment? Date: May 30th from 3-5 PM
Session D: Psychometrics 101: Understanding and Decreasing Threats to Reliability, AKA What does it mean to have noisy assessments? Date: June 4th from 3-5 PM
Session E: Designing Quality Qualitative Measures: Understanding, Interpreting, and Using Survey Data. Date: June 18th from 3-5 PM
Session G: Putting the Cart Behind the Horse: Designing Action Research and Inquiry Questions to Inform Teaching and Learning. Date: June 25th from 3-5 PM
Reliability: An indication of how consistently an assessment measures its intended target and the extent to which scores are relatively free of error. Low reliability means that scores cannot be trusted for decision making. Reliability is a necessary but not sufficient condition for validity.
How consistent are my assessment results? Reliability concerns the strength, or consistency, of an assessment’s results when it is given at different times, scored by different teachers, or administered in a different way.
There are multiple ways of assessing reliability: alternate-form reliability, split-halves reliability coefficients, the Spearman-Brown double-length formula, the Kuder-Richardson reliability coefficient, the Pearson product-moment correlation coefficient, and others.
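As a sketch of one of these statistics: the Spearman-Brown double-length (prophecy) formula predicts the reliability of a full-length test from the correlation between its two halves. The function name and the 0.70 split-half correlation below are illustrative choices, not figures from this deck:

```python
def spearman_brown(r_half: float) -> float:
    """Predict full-test reliability from the correlation
    between two half-tests (Spearman-Brown double-length formula)."""
    return 2 * r_half / (1 + r_half)

# Hypothetical split-half correlation of 0.70 for a test:
print(round(spearman_brown(0.70), 3))  # 0.824
```

The formula shows why longer tests tend to be more reliable: doubling the number of comparable items pushes a 0.70 half-test correlation up to an estimated 0.82 for the whole test.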
...and three general ways to collect evidence of reliability:
Stability: How consistent are the results of an assessment when it is given on two time-separated occasions?
Alternate Form: How consistent are the results of an assessment when it is given in two different forms?
Internal Consistency: How consistently do the test’s items function?
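Stability (test-retest) evidence usually comes down to a Pearson product-moment correlation between two administrations. A minimal Python sketch follows; the two lists of student scores are invented for illustration:

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical scores from the same ten students on two occasions:
time1 = [78, 85, 62, 90, 74, 88, 69, 95, 81, 73]
time2 = [75, 88, 65, 93, 70, 85, 72, 94, 84, 70]
print(round(pearson(time1, time2), 2))
```

A coefficient near 1.0 indicates that students kept roughly the same rank order across the two occasions, which is what stability evidence looks for.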
Noise
1. Formatting. Do students have enough space to write their responses?
Text or features that pull the student out of the test create noise:
A question stem on one page, answer choices on another
Three choices on one page, the fourth choice on the next page
2. Typos. Typos popped up in every department. They happen. “Final eyes” (a second reader checking the finished test) are the best way to avoid them.
Compare: test from Period 1 vs. test from Period 2
What accommodations can be made to ensure there is quality control?
3. Having to hunt for the right answer
Compare with...
4. Using the question to answer the question. Two options in the word bank were two-word phrases, so a student knows they must be the right answers for those two items without knowing the content.
You don’t need to know the answer to know it’s not her... or her... and we can be pretty sure the president of France isn’t Bono.
5. Not having one clear answer
6. Unclear Questions. As compared to what? If a student needs to infer what you want, there’s noise.
One assessment does not an assessment system make.
Fairness and Bias Fair tests are accessible and enable all students to show what they know. Bias emerges when features of the assessment itself impede students’ ability to demonstrate their knowledge or skills.
In 1876, General George Custer and his troops fought Lakota and Cheyenne warriors at the Battle of the Little Big Horn. If there had been a scoreboard on hand at the end of that battle, which of the following scoreboard representations would have been most accurate?
A. Soldiers > Indians
B. Soldiers = Indians
C. Soldiers < Indians
D. All of the above scoreboards are equally accurate
My mother’s field is court reporting. Choose the sentence below in which the word field means the same as it does in the sentence above.
A. The first baseman knew how to field his position.
B. Farmer Jones added fertilizer to his field.
C. What field will you enter when school is complete?
D. The doctor checked my field of vision.
What are other attributes of quality assessments?
Implications: Generally speaking, schools should perform at least two statistical tests to document evidence of reliability: a correlation coefficient and the standard error of measurement (SEM). Nitko and Brookhart (2011) recommend coefficients of 0.85-0.95 for multiple-choice-only tests and 0.65-0.80 for extended-response tests. So the user needs to understand that a coefficient of 0.80 is low for multiple choice but high for extended response.
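As a sketch of how those recommended ranges might be applied in practice, the small Python helper below (its name and the "mc"/"er" labels are my own, not from the deck) flags a coefficient as below, within, or above the Nitko and Brookhart (2011) range cited here:

```python
def interpret_reliability(r: float, item_type: str) -> str:
    """Compare a reliability coefficient against the ranges
    Nitko and Brookhart (2011) recommend, as cited in this deck.
    item_type: 'mc' (multiple choice only) or 'er' (extended response)."""
    low, high = {"mc": (0.85, 0.95), "er": (0.65, 0.80)}[item_type]
    if r < low:
        return "below recommended range"
    if r > high:
        return "above recommended range"
    return "within recommended range"

# The same 0.80 coefficient reads very differently by item type:
print(interpret_reliability(0.80, "mc"))  # below recommended range
print(interpret_reliability(0.80, "er"))  # within recommended range
```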
Standard Error of Measurement: An estimate of the consistency of a student’s score if the student had retaken the test innumerable times
How is the SEM calculated? In classical test theory, the standard error of measurement for an individual student’s score is the score SD multiplied by the square root of (1 minus the reliability coefficient). Note that the Excel recipe below computes a different quantity, the standard error of the mean (the SD divided by the square root of N), which describes how precisely a group average is known; the distinction is worth remembering, as it can help you interpret published data.
Calculating the standard error of a mean with Excel: Excel does not have a built-in function for it, but it is easy enough to compute from the SD using this formula:
=STDEV()/SQRT(COUNT())
For example, to compute it for the values in cells B1 through B10, use:
=STDEV(B1:B10)/SQRT(COUNT(B1:B10))
The COUNT() function counts the numbers in the range. If you are not worried about missing values, you can just enter N directly. In that case, the formula becomes:
=STDEV(B1:B10)/SQRT(10)
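The classical-test-theory standard error of measurement can be sketched in a few lines of Python; the SD of 10 points and reliability of 0.91 below are made-up values for illustration, not figures from this deck:

```python
from math import sqrt

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical-test-theory SEM: the score SD times sqrt(1 - reliability)."""
    return sd * sqrt(1 - reliability)

# Hypothetical values: score SD of 10 points, reliability coefficient of 0.91.
sem = standard_error_of_measurement(10.0, 0.91)
print(round(sem, 1))  # 3.0

# Roughly 68% of a student's hypothetical retakes would land within 1 SEM
# of the observed score, e.g. 75 +/- 3 points for a student who scored 75.
```

Note how the SEM shrinks as reliability rises: a perfectly reliable test (r = 1.0) would have an SEM of zero.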