1
Measuring Student Growth
2
Feedback Loop
3
Key Concept: "All models are wrong, but some are useful." (George Box)
4
Two Conflicting Models: Ptolemaic vs. Copernican
5
Key Problems
- Theta
- Lack of randomness: neither teachers nor students are randomly assigned
6
Topics
- What are the main types of growth models and their applications?
- What are the major problems presented by the use of growth models?
- What does the actual application of growth models look like?
- Assessment and scoring
7
Item Analysis
Difficulty Index: p = (number correct) / (number of responses)
Discrimination Index: d = ((number correct, upper group) - (number correct, lower group)) / (number in each group)
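A minimal Python sketch of these two indices (the function names and example numbers are illustrative, not from the presentation):

```python
def difficulty_index(num_correct, num_responses):
    """Difficulty index p: proportion of examinees answering the item correctly."""
    return num_correct / num_responses

def discrimination_index(correct_upper, correct_lower, group_size):
    """Discrimination index d: difference in correct-response rates between
    the upper- and lower-scoring groups of equal size."""
    return (correct_upper - correct_lower) / group_size

# Example: 60 total responses, 36 correct; 30 students in each extreme group,
# with 24 correct in the upper group and 12 correct in the lower group.
p = difficulty_index(36, 60)          # 0.6
d = discrimination_index(24, 12, 30)  # 0.4
print(f"p = {p:.2f}, d = {d:.2f}")
```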
8
Raw Score v. Scale Score
Raw Score: total point value of correct responses to valid items
Scale Score: equating adjustment to ensure that any given assessment is comparable to previous assessments
Vertical Scale: equating adjustment that requires a higher score in higher grades
9
CRT v. NRT
CRT: measure of performance relative to a delimited domain of learning tasks
NRT: measure of performance relative to an individual's standing in a group
In practice, the primary difference is in the way the scores are interpreted.
10
Three Measures
Center: mean, median, and mode
Spread: variance and standard deviation
Distribution: what does the data look like?
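A quick illustration of the three kinds of measures using Python's standard library (the scores below are invented):

```python
import statistics

scores = [650, 675, 675, 700, 710, 725, 780]

# Center
print(statistics.mean(scores))    # arithmetic mean
print(statistics.median(scores))  # middle value
print(statistics.mode(scores))    # most frequent value (675)

# Spread
print(statistics.variance(scores))  # sample variance
print(statistics.stdev(scores))     # sample standard deviation

# Distribution: to see "what the data looks like", inspect quantiles
# or plot a histogram (e.g., with matplotlib).
print(statistics.quantiles(scores, n=4))  # quartiles
```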
11
Distribution
12
The Mean Follows the Tail
13
Grade 6 Math 2010
15
Main Types of Growth Models
- Trajectory
- Value/Transition Table
- Projection/Linear
16
Trajectory (Growth to Proficiency)
Begins with the current score and the score needed for proficiency at a future point, then divides the required gain into annual targets. Usually dependent on assessments with a vertical scale.
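A minimal sketch of the trajectory calculation, assuming a vertically scaled assessment and equal annual targets (all names and numbers are illustrative):

```python
def annual_targets(current_score, proficiency_cut, years_remaining):
    """Split the total gain needed to reach proficiency into equal annual targets."""
    total_gain = proficiency_cut - current_score
    annual_gain = total_gain / years_remaining
    return [round(current_score + annual_gain * year, 1)
            for year in range(1, years_remaining + 1)]

# Example: a student at 425 must reach a 600 proficiency cut within 3 years.
print(annual_targets(425, 600, 3))  # [483.3, 541.7, 600.0]
```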
17
Trajectory Model
18
Arkansas Example
19
Value Table/Transition Model
Creates subdivisions of performance and awards credit for moving students to higher levels. A categorical approximation of a trajectory model that does not depend on a vertical scale.
20
Value Table/Transition Model
Credit awarded for each transition from Year 1 level (rows) to Year 2 level (columns):

Year 1 Level   Level 1A   Level 1B   Level 2A   Level 2B   Proficient
Level 1A       0          150        225        250        300
Level 1B       0          0          175        225        300
Level 2A       0          0          0          200        300
Level 2B       0          0          0          0          300
Proficient     0          0          0          0          300
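A sketch of how a value table like the one above could be applied in Python: the table is stored as rows keyed by the Year 1 level, and a student earns the credit listed for their Year 1 to Year 2 transition. The code is illustrative; the point values are those shown on the slide.

```python
# Credit awarded for each (Year 1 level, Year 2 level) transition,
# copied from the value table above.
LEVELS = ["Level 1A", "Level 1B", "Level 2A", "Level 2B", "Proficient"]
CREDIT = {
    "Level 1A":   [0, 150, 225, 250, 300],
    "Level 1B":   [0,   0, 175, 225, 300],
    "Level 2A":   [0,   0,   0, 200, 300],
    "Level 2B":   [0,   0,   0,   0, 300],
    "Proficient": [0,   0,   0,   0, 300],
}

def transition_credit(year1_level, year2_level):
    """Look up the credit earned for moving from year1_level to year2_level."""
    return CREDIT[year1_level][LEVELS.index(year2_level)]

print(transition_credit("Level 1B", "Level 2A"))      # 175
print(transition_credit("Proficient", "Proficient"))  # 300
```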
21
Arkansas Example
Change in performance level (current-year level minus previous-year level); rows are the previous-year level, columns the current-year level:

Previous    1      1.5    2      2.5    3      3.5    4      4.5
1           0      0.5    1      1.5    2      2.5    3      3.5
1.5        -0.5    0      0.5    1      1.5    2      2.5    3
2          -1     -0.5    0      0.5    1      1.5    2      2.5
2.5        -1.5   -1     -0.5    0      0.5    1      1.5    2
3          -2     -1.5   -1     -0.5    0      0.5    1      1.5
3.5        -2.5   -2     -1.5   -1     -0.5    0      0.5    1
4          -3     -2.5   -2     -1.5   -1     -0.5    0      0.5
4.5        -3.5   -3     -2.5   -2     -1.5   -1     -0.5    0
22
Projection (Linear) Model
Uses current and past scores to predict performance in the future. Such models can be quite complex and difficult for stakeholders to understand.
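A heavily simplified sketch of the projection idea: ordinary least squares on prior-year scores predicting a later score, fit with NumPy. Operational projection models (e.g., EVAAS) are far more elaborate, and the data below is invented.

```python
import numpy as np

# Each row: [grade 4 score, grade 5 score]; target: grade 6 score.
prior = np.array([[410, 440], [520, 555], [480, 470], [600, 640], [450, 500]])
grade6 = np.array([455, 590, 490, 670, 520])

# Fit grade6 ~ b0 + b1*grade4 + b2*grade5 by least squares.
X = np.column_stack([np.ones(len(prior)), prior])
coef, *_ = np.linalg.lstsq(X, grade6, rcond=None)

# Project a new student with grade 4 = 500 and grade 5 = 530.
new_student = np.array([1, 500, 530])
print(float(new_student @ coef))
```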
23
Projection Model
24
EVAAS Example
25
SGP Example
26
GMPP Evaluation
27
GMPP Participation

State            Growth Model
Delaware         Transition Matrix
Iowa             Transition Matrix
Alaska           Trajectory
Arizona          Trajectory
Arkansas         Trajectory
Florida          Trajectory
North Carolina   Trajectory
Ohio             Projection
Tennessee        Projection
28
Results by State
29
Schools Meeting AYP
Overall, including growth increased the number of schools making AYP by 16%. The largest increases were in Ohio (50%), Arkansas (13%), and Tennessee (10%). Excluding Ohio's results, the overall increase is only 4%.
30
Growth Model Comparisons
31
Two Viewpoints
32
Classification Errors
False negative: an effective teacher is classified as a less-effective teacher.
False positive: a less-effective teacher is classified as an effective teacher.
33
The Widget Effect The vast majority of school districts presently employ teacher evaluation systems that result in all teachers receiving the same (top) rating…. In districts that used binary ratings more than 99 percent of teachers were rated satisfactory. In districts using a broader range of ratings, 94 percent received one of the top two ratings and less than 1 percent received an unsatisfactory rating.
34
Teaching is Complex
…student test scores alone are not sufficiently reliable and valid indicators of teacher effectiveness to be used in high-stakes personnel decisions, even when the most sophisticated statistical applications such as value-added modeling are employed.
35
The use of imprecise measures to make high stakes decisions that place societal or institutional interests above those of individuals is widespread and accepted in fields outside of teaching…. Nearly all selective colleges use SAT or ACT scores as a heavily weighted component of their admission decisions even though that produces substantial false negative rates (students who could have succeeded but are denied entry).
36
Perverse Incentives …research and experience indicate that approaches to teacher evaluation that rely heavily on test scores can lead to narrowing and over-simplifying the curriculum, and to misidentifying both successful and unsuccessful teachers. These and other problems can undermine teacher morale, as well as provide disincentives for teachers to take on the neediest students. When attached to individual merit pay plans, such approaches may also create disincentives for teacher collaboration.
37
Perverse Incentives
Much of the fear concerning growth is about "use." Take that fear away, and what is left is something very useful that teachers are interested in knowing as well. At least in terms of SGP, the "neediest" students represent a teacher's best chance to demonstrate superior growth.
38
Data Requirements
All growth models depend on the ability to track a large percentage of students over time. This is especially difficult at the teacher level, where a host of issues will probably never be fully resolved.
39
Terminology Is the term “value-add” loaded? Is the term “growth” more palatable and perhaps more descriptive?
40
Other Populations

Ethnicity   Count     SGP_Median
A           5028      60
B           82290     43
H           37857     50
N           2808      51
P           1575      48
T           4707      52
W           259692    52

GT          Count     SGP_Median
N           341949    48
Y           52299     63

Sped        Count     SGP_Median
N           354723    51
Y           39619     38

ESICode     Count     SGP_Median
AU          1539      43
DB          8         42.5
ED          467       34
HI          408       37.5
MD          214       36.5
MR          2345      28
OHI         7374      34
OI          135       44
SI          9396      44
SLD         17515     37
TBI         82        33
VI          136       41
41
Who Had More Growth?

Student   G5 2008 Math   G6 2009 Math   Increase
Noah      702            775            73
Ben       425            527            102
42
The Normal Distribution (figure; percentile markers at 2, 16, 50, 84, and 98)
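Those percentile markers (2, 16, 50, 84, 98) are the normal-curve percentiles at -2, -1, 0, +1, and +2 standard deviations; a quick check with SciPy (SciPy is an assumption, not something the presentation uses):

```python
from scipy.stats import norm

# Percent of the distribution falling below each z-score.
for z in (-2, -1, 0, 1, 2):
    print(z, round(norm.cdf(z) * 100))  # -> 2, 16, 50, 84, 98
```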
44
Student Growth Percentiles
45
Quantile Regression (figure: Ben and Noah)
46
Who Had More Growth?

Student   G5 2008 Math   G6 2009 Math   2009 Math SGP
Noah      702            775            54
Ben       425            527            44
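Operational SGPs are estimated with quantile regression, but the intuition can be sketched more simply: a student's growth percentile is the percentile rank of their current score among students with similar prior scores. The rough, binned approximation below is illustrative only, including the synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: prior-year scores and current-year scores.
prior = rng.normal(600, 80, 5000)
current = 50 + 0.9 * prior + rng.normal(0, 40, 5000)

def sgp(student_prior, student_current, band=20):
    """Percentile rank of the student's current score among peers whose
    prior score falls within +/- band points of the student's prior score."""
    peers = current[np.abs(prior - student_prior) <= band]
    return round(100 * np.mean(peers < student_current))

# Two students with very different scale scores can still be compared on growth.
print(sgp(700, 690))
print(sgp(450, 520))
```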
47
Math Scale and Growth
48
Aggregated Growth Percentiles
50
Goodness of Fit
51
Distribution
52
Distribution by Cohort
53
Prior/Current Math Scale
54
Math Scale and Growth
55
Prior/Current Math Growth
56
Density Plot
57
Current Math/Literacy Scale
58
Current Math/Literacy Growth
59
Density Plot
60
MET Project: Thomas Kane, http://metproject.org/
- Wide variation among teachers
- Wide variation within schools
- Little (if any) difference in teacher preparation
- Teachers improve, but improvement flattens after the third year
- No schools where all teachers are highly effective
62
Evaluating Teachers
65
Feedback?