Using School Choice Lotteries to Test Measures of School Effectiveness
David Deming, Harvard University and NBER
Measuring School Effectiveness
- School rankings, ratings, league tables
- Gain score or "value-added" modeling approach (VAMs)
  – School VAMs now in ~30 U.S. states (Blank 2010)
  – Teacher VAMs used in evaluation, retention
- Accuracy of VAMs is important for incentive design and student welfare (Baker 2002, Rothstein 2010)
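To make the gain-score idea concrete, here is a minimal sketch of a school value-added regression in Python. The column names (score, lagged_score, school_id) and the statsmodels specification are illustrative assumptions, not the model estimated in the paper: current scores are regressed on prior scores plus school indicators, and the school fixed effects serve as the VAM estimates.

```python
# Minimal sketch of a gain-score ("value-added") model for schools.
# Column names (score, lagged_score, school_id) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_school_vam(df: pd.DataFrame) -> pd.Series:
    # Regress current scores on prior scores plus school indicators;
    # the estimated school fixed effects (relative to the omitted
    # school) serve as the value-added estimates.
    fit = smf.ols("score ~ lagged_score + C(school_id)", data=df).fit()
    return fit.params.filter(like="C(school_id)")
```

In practice VAMs also condition on demographics and use multiple prior cohorts; the point here is only the basic structure.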
VAM Research
- Large literature on measurement / technical issues
- First-order issue mostly untested: is the assignment of teachers to classes within schools as good as random?
- Kane and Staiger (2008) and Kane et al. (2013) randomly assign classes to teachers to test the validity of VAMs
  – Chetty, Friedman, and Rockoff (2013) use a quasi-experimental design based on teacher mobility
- School VAMs require conditionally exogenous sorting of students across schools
  – Would you consent to this experiment?
Using School Choice Lotteries
- Oversubscribed public schools in Charlotte-Mecklenburg
  – Random assignment within a self-selected group of applicants
- Estimate VAMs using data from prior cohorts
  – Vary model specification, sample, and outcome
  – Out-of-sample predictions of "school effects"
- Use VAM estimates to predict the treatment effect of winning the lottery (see the sketch below)
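As a rough illustration of the last step, the sketch below forecasts each applicant's gain from winning as the difference in out-of-sample VAM estimates between the first-choice school and the fallback school, then checks whether winners' realized outcomes move one-for-one with that forecast within lotteries. The column names and the exact regression are hypothetical placeholders, not the paper's specification.

```python
# Rough sketch: use out-of-sample VAM estimates to forecast lottery effects
# and check whether forecasts line up with realized outcomes.
# All column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def forecast_test(applicants: pd.DataFrame, vam: pd.Series) -> float:
    df = applicants.copy()
    # Predicted gain from winning: VAM of the first-choice school minus
    # VAM of the school the applicant would otherwise attend.
    df["predicted_gain"] = (
        df["first_choice_school"].map(vam) - df["fallback_school"].map(vam)
    )
    # Within each lottery, compare winners and losers; the coefficient on
    # won_lottery * predicted_gain should be near 1 if the VAM is unbiased.
    fit = smf.ols(
        "outcome ~ won_lottery:predicted_gain + won_lottery + C(lottery_id)",
        data=df,
    ).fit()
    return fit.params["won_lottery:predicted_gain"]
```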
Data and VAMs
Lottery Data and Sample
- 2,599 students in 118 separate lotteries for "marginal" priority groups
- Applicants ranked their top 3 choices, but nearly all randomization was over 1st choices
Empirical Strategy
4 Possible Explanations for Bias
1. Sorting on unobserved determinants of student achievement (Rothstein 2010)
2. Estimation error
3. "True" school effects may vary over time independent of estimation error
4. Lottery sample is self-selected, so treatment effect is different for them
No correlation between average test scores (in levels) and lottery impacts
1. Huge improvement from adding lagged scores (gains model)
2. Need 2+ years of data to fail to reject unbiasedness (a triple negative: with two or more prior years, the test cannot reject that the VAM forecasts are unbiased)
“Shrinkage”
1. Shrinkage improves prediction when only one year of prior data is used (see the shrinkage sketch below)
   – With a longer panel, unshrunken estimates are more accurate
2. "Drift" adjustment is best here
3. Shrinkage overcompensates if there is true variation in school effects
   – CFR call this "teacher bias"
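For intuition, here is a minimal empirical Bayes shrinkage sketch under simple assumptions: a raw school effect equal to the mean student residual, shrunk toward zero by a reliability weight built from hypothetical variance components (var_true, var_noise). It is not the paper's or CFR's estimator; it only shows why shrinkage matters most when a single year of data leaves a large noise variance.

```python
import numpy as np

def shrink_school_effects(raw_effects: np.ndarray,
                          n_obs: np.ndarray,
                          var_true: float,
                          var_noise: float) -> np.ndarray:
    """Empirical Bayes shrinkage of noisy school effects toward zero.

    raw_effects - mean student residual for each school (hypothetical input)
    n_obs       - number of student-year observations behind each estimate
    var_true    - estimated variance of true school effects
    var_noise   - residual (within-school) variance of student scores
    """
    # Reliability: share of the raw estimate's variance that is signal.
    # With more prior years (larger n_obs), the noise term shrinks and the
    # raw estimate is left nearly untouched.
    reliability = var_true / (var_true + var_noise / n_obs)
    return reliability * raw_effects
```

With a longer panel, n_obs grows, the reliability weight approaches one, and the shrunken and unshrunken estimates converge, consistent with point 1 above.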
Concluding thoughts
- Despite sorting concerns, school VAMs are surprisingly accurate
  – Best fit was the gains-only model on the full panel, but that's not a theorem
  – Large standard errors admit non-trivial bias – need to replicate in other settings
- Good news for policies that use VAMs, but:
  1. Biases might be offsetting
  2. Other outcomes are important too!
  3. "School effects" may include contextual factors that are beyond the school's control (peers, neighborhoods)
Thanks!