Assessment Tomorrow
Robert Coe (@ProfCoe)
Centre for Evaluation and Monitoring (CEM), Durham University
Assessment Tomorrow Conference, Edinburgh, 22nd November 2012
Why are we here? CEM aims to:
- Create the best assessments in the world
- Empower teachers with information for self-evaluation
- Promote evidence-based practices and policies, based on scientific evaluation
- Help educators measurably improve educational outcomes
CEM activity
- The largest educational research unit in a UK university
- 1.1 million assessments taken each year
- More than 50% of UK secondary schools use one or more CEM systems
- CEM systems are used in over 50 countries
- The largest provider of computerised adaptive tests outside the US
Outline
- Assessment is the most powerful lever we have
- Quality matters
- Technology can make assessment efficient, diagnostic, embedded, fun, valid, standardised, secure, and informative
Good assessment:
- Makes learning visible
- Makes us focus on learning
- Allows us to evaluate what students do and don't know, against appropriate norms, and the effectiveness of teaching
- Allows us to diagnose specific learning needs
EEF Toolkit
[Chart: interventions plotted by effect size (months of gain) against cost per pupil (£0–£1000), with regions labelled "Promising", "May be worth it", and "Not worth it". Promising or worth it: feedback, meta-cognitive strategies, peer tutoring, pre-school, 1-to-1 tutoring, homework, summer schools, ICT, smaller classes, AfL, parental involvement, individualised learning, sports. Not worth it: learning styles, after school, arts, performance pay, teaching assistants, ability grouping.]
Definition of a grade: "An inadequate report of an inaccurate judgment by a biased and variable judge of the extent to which a student has attained an undefined level of mastery of an unknown proportion of an indefinite material." Dressel, P. (1983) "Grades: one more tilt at the windmill", in A.W. Chickering (ed.), Bulletin. Memphis: Memphis State University Center for the Study of Higher Education, December 1983, p. 12.
Would you let this test into your classroom?
- Does the test discriminate adequately between different levels of performance?
- How well do the test scores predict later performance?
- How long does the test (or each element of it) take each student?
- How clearly defined are the acceptable interpretations and uses of test scores?
- Do repeated administrations of the test give consistent results?
- Do the responses have to be marked? How much time is needed for this?
- How well do the test scores correlate with other measures of the same thing?
- What does the test claim to measure?
- Do the test items look appropriate?
- How well does the measure correspond with measures of the same and related constructs, using the same and other methods of assessment?
- Do test scores reflect factors other than the intended construct (such as gender, social class, or race/ethnicity)?
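Some of these questions have standard quantitative answers. As a minimal sketch (with invented response data, not CEM's procedures), "Do repeated administrations give consistent results?" maps to a reliability coefficient such as Cronbach's alpha, and "Does the test discriminate adequately?" maps to item discrimination indices such as the point-biserial correlation:

```python
# Hypothetical worked example of two test-quality checks:
# internal consistency (Cronbach's alpha) and item discrimination
# (point-biserial correlation of each item with the total score).
from statistics import mean, pvariance

def cronbach_alpha(responses):
    """responses: one list per student, each a list of 0/1 item scores."""
    k = len(responses[0])                       # number of items
    items = list(zip(*responses))               # transpose to per-item columns
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

def point_biserial(item_scores, totals):
    """Correlation between one 0/1 item and students' total scores."""
    mx, my = mean(item_scores), mean(totals)
    cov = mean((x - mx) * (y - my) for x, y in zip(item_scores, totals))
    return cov / (pvariance(item_scores) ** 0.5 * pvariance(totals) ** 0.5)
```

Values of alpha above roughly 0.8, and item discriminations well above zero, are conventional (rule-of-thumb) signs that a test is internally consistent and its items separate stronger from weaker performers.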
Computer Adaptive Testing
- Right answers lead to harder questions; wrong answers lead to easier questions
- Can give the same information in half the time
- More accurate at the extremes
- More pleasant testing experience
- Requires access to computers
- Development costs are higher
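The adaptive loop can be sketched as follows. This is a minimal illustration with made-up item difficulties and a simple step-size update, not CEM's actual algorithm (a production CAT would re-estimate ability by maximum likelihood against a calibrated item-response model):

```python
# Minimal sketch of a computer-adaptive test loop (hypothetical item bank
# and ability scale on roughly -2..+2 logits).
import math

def rasch_prob(ability, difficulty):
    """Probability of a correct answer under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def next_item(item_bank, ability, asked):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate -- where the item yields the most information."""
    return min((i for i in item_bank if i not in asked),
               key=lambda d: abs(d - ability))

def run_cat(answer_fn, item_bank, n_items=10, step=0.5):
    """Adaptive loop: harder items after right answers, easier after wrong.
    answer_fn(difficulty) returns True/False for the student's response."""
    ability = 0.0                       # start at the middle of the scale
    asked = []
    for _ in range(n_items):
        item = next_item(item_bank, ability, asked)
        asked.append(item)
        correct = answer_fn(item)
        # Step-size update: up on a right answer, down on a wrong one,
        # with shrinking steps as the estimate settles.
        ability += step if correct else -step
        step = max(step * 0.8, 0.1)
    return ability
```

Because each question is pitched near the student's current estimate, the test spends almost no items on questions that are far too easy or too hard, which is why a CAT can match a fixed test's precision in roughly half the time.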
PIPS Baseline: start of school
InCAS: diagnostic assessment through primary school
Computer Adaptive Baseline Test
In the future, technology allows
Teachers to author, share and evaluate test items ‘Home-made’ tests with standardised norms Adaptive presentation Automatic marking of complex responses Platforms for efficient and quality-controlled human judgement (marking) Cheat detection Sophisticated feedback to students and teachers