Statistical power in educational settings
Workshop at the Wellcome seminar on educational research, May 2008
Dylan Wiliam, Institute of Education, University of London
www.dylanwiliam.net
The argument…
Premise 1: Learning is insensitive to instruction, and measures of learning even more so, so even small system-wide gains in learning are educationally important.
Premise 2: Education systems are inherently multi-levelled, and taking account of clustering in data lowers statistical power, so educational experiments are inherently weak.
Conclusion: RCTs in education frequently need to be very large, and therefore expensive.
Learning is slow…
860 + 570 = ?
Source: Leverhulme Numeracy Research Programme
…especially for deep learning… (Hart, 1981)
…and measures are insensitive…
Sequential Tests of Educational Progress (ETS, 1957)
…and measures are insensitive…
NAEP and TIMSS
…so small gains in learning are worthwhile
- The average rate of progress of cohorts is 0.3 standard deviations per year
- The average cost of one year's education for a cohort in England is £3bn
- An effect size of 0.05 sd might be regarded as "small", but it represents one-sixth of a year's learning
- System-wide, such a gain is worth £6bn
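The per-cohort value can be checked with a back-of-envelope calculation from the slide's own figures (the variable names are mine, and the scaling from a single cohort to the slide's system-wide total is an assumption):

```python
# Back-of-envelope check of the slide's figures (variable names are illustrative)
annual_progress_sd = 0.3            # average cohort progress per year, in sd units
cohort_year_cost = 3_000_000_000    # cost of one year's education for a cohort (GBP)
effect_size = 0.05                  # a "small" effect, in sd units

# 0.05 sd is one-sixth of a year's learning...
years_of_learning = effect_size / annual_progress_sd

# ...so for a single cohort it is worth about 500 million pounds
value_per_cohort = years_of_learning * cohort_year_cost
print(f"{years_of_learning:.3f} years of learning, GBP {value_per_cohort / 1e6:.0f}m per cohort")
```

Applied across every cohort in the system at once, totals in the billions (as on the slide) follow quickly.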
…but hard to detect…
Statistical power: the likelihood that a statistical test will reject a false null hypothesis. It depends on:
- the level set for statistical significance
- the size of the difference between the compared groups (effect size)
- the sensitivity of the measures
Clustering reduces statistical power, but it is an inherent feature of educational settings (teacher quality, ability grouping), and especially of school-wide interventions.
…especially in educational settings (Konstantopoulos, 2006)
Notation:
- p = number of students
- n = number of classrooms
- δ = effect size
- c = classroom-level clustering
- s = school-level clustering
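The clustering penalty the slide points at can be sketched with a two-level design-effect approximation (a simplification of Konstantopoulos's multi-level power formulas; the function names, the normal approximation, and the example parameters are all mine):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cluster_power(delta, clusters_per_arm, students_per_cluster, icc, z_alpha=1.96):
    """Approximate power of a two-arm cluster-randomised trial.

    Deflates the per-arm sample size by the design effect
    1 + (m - 1) * icc, then applies a normal approximation to the
    two-sample z-test on standardised (sd-unit) outcomes.
    """
    n_per_arm = clusters_per_arm * students_per_cluster
    design_effect = 1.0 + (students_per_cluster - 1) * icc
    n_effective = n_per_arm / design_effect
    se = math.sqrt(2.0 / n_effective)   # SE of the mean difference, in sd units
    return normal_cdf(delta / se - z_alpha)

# With no clustering, 20 classes of 25 students per arm detect delta = 0.2 sd
# fairly reliably; an intra-class correlation of 0.2 cuts power dramatically.
print(round(cluster_power(0.2, 20, 25, 0.0), 2))
print(round(cluster_power(0.2, 20, 25, 0.2), 2))
```

This is why school-level clustering forces educational RCTs to recruit many more schools than a naive per-student power calculation suggests.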
So…
The most important question is not "Are RCTs good?" but "When are RCTs good?"
How should we answer?
Institute of Education Sciences (USA)
Five goals:
1. identify existing programs, practices, and policies that may have an impact on student outcomes, and the factors that may mediate or moderate the effects of these programs, practices, and policies;
2. develop programs, practices, and policies that are theoretically and empirically based;
3. evaluate the efficacy of fully developed programs, practices, and policies;
4. evaluate the impact of programs, practices, and policies implemented at scale;
5. develop and/or validate data and measurement systems and tools.