Improving student learning and experience through changing assessment environments at programme level: a practical guide
Graham Gibbs
Introductions
Me
TESTA project
Carol
My assumptions about you...
You... and afterwards
Programme
Coffee
Presentation 1: The TESTA project and its focus
Activity 1: The assessment problem in participants’ own contexts
Coffee
Presentation 2: Theory and evidence about effects of assessment
Activity 2: Speculation about effects in participants’ own contexts
Lunch
Presentation 3: Interpreting case study data – examples
Activity 3: Interpreting case study data
Tea
Presentation 4: The quality assurance environment
Activity 4: Discussion of QA in participants’ own institutions
Presentation 5: Implementing changes and evaluating their impact
Closing discussion: Implementation of the TESTA approach
TESTA project – background
Conceptual framework: ‘Conditions under which assessment supports student learning’
Measuring the extent to which students experience these conditions: the Assessment Experience Questionnaire (AEQ)
Changing assessment methods at module level to meet the conditions (FAST project, Leeds Met booklet)
Discovering that programmes and whole institutions had ‘assessment patterns’ through wide use of the AEQ
Auditing programme-level assessment regimes
Discovering powerful links between features of assessment regimes and students’ learning responses (HEA, three institutions, three disciplines)
TESTA as an R&D project – using the research to identify issues to address and measure the impact of changes to assessment – and changing institutional QA rules
Four institutions and seven programmes (initially)
Assessment case study: what is going on?
Committed and innovative teachers
Lots of coursework, of very varied forms
Very few exams
Masses of written feedback on assignments
Four-week turn-round of feedback
Learning outcomes and criteria clearly specified
…looks like a ‘model’ assessment environment, but students:
Don’t put in a lot of effort, and they distribute their effort across few topics
Don’t think there is a lot of feedback or that it is very useful, and don’t make use of it
Don’t think it is at all clear what the goals and standards are
Changes in assessment in the UK
Formative to summative (often 1:10, Oxford 10:1)
More summative (up to 95 times in three years)
Summative assessment that is redundant
Most students can, in their first year, predict their final results with some accuracy
As few as 5% of assessments are necessary to produce the same overall degree classification
The high volume of summative assessment in UK universities has not resulted in students doing much work
Changes in assessment in the UK
Formative to summative (often 1:10, Oxford 10:1)
More summative (up to 95 times in three years)
Less feedback (worst NSS scores)
Exams to coursework (90%:10% to 10%:90%)
Diversity of assessment methods (1 to 18)
–innovation
–fragmentation of modular courses
–multiplicity of learning outcomes
Strategic behaviour by students
Resource constraints: large classes, no economies of scale in assessment
Slow increase in computer-based assessment
Wide differences between universities
What is the ‘assessment problem’ in your context that led you to come to this workshop?
Student experience of assessment
“I just don’t bother doing the homework now. I approach the courses so I can get an ‘A’ in the easiest manner, and it’s amazing how little work you have to do if you really don’t like the course.”
“I am positive there is an examination game. You don’t learn certain facts, for instance, you don’t take the whole course, you go and look at the examination papers and you say ‘looks as though there have been four questions on a certain theme this year, last year the professor said that the examination would be much the same as before’, so you excise a good bit of the course immediately…”
“The feedback on my assignments comes back so slowly that we are already on the topic after next and I’ve already submitted the next assignment. I just look at the mark and throw it in the bin.”
“The tutor likes to see the right answer circled in red at the bottom of the problem sheet. He likes to think you’ve got it right first time. You don’t include any workings or corrections – you make it look perfect. The trouble is when you go back to it later you can’t work out how you did it and you make the same mistakes all over again”
“One course I tried to understand the material and failed the exam. When I took the resit I just concentrated on passing and got 98%. My tutor couldn’t understand how I failed the first time. I still don’t understand the subject so it defeated the object, in a way”
Literature reporting assessment that improves learning
The case of the Engineer
The case of the Psychologist
The case of the engineer
Weekly lectures, problem sheets and classes
Marking impossible
Problem classes large enough to hide in
Students didn’t tackle the problems
Exam marks: 45%
The case of the engineer
Course requirement to complete 50 problems
Peer assessed in six ‘lecture’ slots
Marks do not count
Lectures, problems, classes, exams unchanged
The case of the engineer
Course requirement to complete 50 problems
Peer assessed in six ‘lecture’ slots
Marks do not count
Lectures, problems, classes, exams unchanged
Exam marks increased from 45% to 85%
Why did it work?
The case of the engineer
time on task
social learning and peer pressure
timely and influential feedback
learning by assessing
–error spotting
–developing judgement
–self-supervision
‘meta-cognitive awareness and control’
Literature reporting assessment that improves learning
The case of the Engineer
The case of the Psychologist
“Conditions under which assessment supports student learning”
Quantity and distribution of student effort
1 Assessed tasks capture sufficient student time and effort
2 These tasks distribute student effort evenly across topics and weeks
Quality and level of student effort
3 These tasks engage students in productive learning activity
4 Assessment communicates clear and high expectations to students
Quantity and timing of feedback
5 Sufficient feedback is provided, both often enough and in enough detail
6 The feedback is provided quickly enough to be useful to students
Quality of feedback
7 Feedback focuses on learning rather than on marks or students themselves
8 Feedback is understandable to students, given their sophistication
Student response to feedback
9 Feedback is received by students and attended to, and is acted upon by students to improve their work or their learning
Which of these conditions are well met, and poorly met, in the context you work in?
1 Assessed tasks capture sufficient student time and effort
2 These tasks distribute student effort evenly across topics and weeks
3 These tasks engage students in productive learning activity
4 Assessment communicates clear and high expectations to students
5 Sufficient feedback is provided, both often enough and in enough detail
6 The feedback is provided quickly enough to be useful to students
7 Feedback focuses on learning rather than on marks or students themselves
8 Feedback is understandable to students, given their sophistication
9 Feedback is received by students and attended to, and is acted upon by students to improve their work or their learning
Assessment Experience Questionnaire
Measures the extent to which the ‘conditions’ are perceived to be met:
–Quantity and distribution of effort
–Quality, quantity and timeliness of feedback
–Use of feedback
–Impact of exams on quality of learning
–Deep approach
–Surface approach
–Clarity of goals and standards
–Appropriateness of assessment
Research question What are the characteristics of programme level assessment environments that are associated with positive student learning responses?
Research design
Three contrasting universities
Three contrasting programmes in each (Humanities, Science, Applied Social Science)
Characterise assessment environments
–Read documentation (all modules)
–Interview programme leader, lecturers and students
Administer AEQ
Explore relationships between characteristics of programme level assessment design and qualities of student learning
– with Harriet Dunbar-Goddet, Chris Rust and Sue Law
– funded by the Higher Education Academy
Characteristics of programme level assessment environments
% marks from examinations
Volume of summative assessment
Volume of formative-only assessment
Volume of (formal) oral feedback
Volume of written feedback
Timeliness: days after submission before feedback provided
Explicitness of criteria and standards
Alignment of goals and assessment
Range of characteristics of programme level assessment environments
% marks from exams: 3% – 100%
number of times work marked: 11 – 95
variety of assessment:
number of times formative-only assessment: 0 – 134
number of hours of oral feedback: 1 – 68
number of words of written feedback: 2,700 – 15,412
turn-round time for feedback: 1 day – 28 days
Patterns of assessment features within programmes
Every programme that is low on the volume of summative assessment is high on the volume of formative assessment
No examples of high volume of summative assessment and high volume of feedback
There may be enough resources to mark student work many times, or to give feedback many times, but not enough resources to do both
Relationships between assessment characteristics and student learning
Explicitness of criteria and standards, alignment of goals and assessment, and variety of assessment are all associated with a negative learning experience: less of a deep approach, less coverage of the syllabus, less clarity about goals and standards, less use of feedback…
…explicitness, alignment and variety are also associated with more summative assessment, less formative-only assessment, less oral feedback and less prompt feedback
Relationships between assessment characteristics and student learning
Formative-only assessment, oral feedback and prompt feedback are all associated with a positive learning experience: more effort, more of a deep approach, more coverage of the syllabus, greater clarity about goals, more use of feedback, more overall satisfaction…
…even when they are also associated with lack of explicitness of criteria and standards, lack of alignment of goals and assessment, and a narrow range of assessment.
Why?
Being explicit does not result in students being clear …but ‘engagement in a community of practice’ does…
Explicitness helps students to be ‘selectively negligent’
Students experience varied forms of assessment as confusing: ambiguity + anxiety = surface approach
Feedback improves learning most when there are no marks
More time to give feedback when you don’t have to mark
Possible to turn feedback round quickly when there are no QA worries about marks (or cheating)
Assessment case study: what is going on?
Lots of coursework, of very varied forms (lots of innovation)
Very few exams
Masses of written feedback on assignments
Four-week turn-round of feedback
Learning outcomes and criteria clearly specified
…looks like a ‘model’ assessment environment
But students:
Don’t put in a lot of effort and distribute their effort across few topics
Don’t think there is a lot of feedback or that it is very useful, and don’t make use of it
Don’t think it is at all clear what the goals and standards are
Assessment case study: what is going on?
All assignments are marked, and they are all that students spend any time on. Not possible to mark enough assignments to keep students busy. No exams or other unpredictable demands to spread effort across topics. Almost no required formative assessment.
Teachers are all assessing something interestingly different, but this utterly confuses students. Far too much variety and no consistency between teachers about what criteria mean or what standards are being applied. Students never get better at anything because they don’t get enough practice.
Feedback is no use to students as the next assignment is completely different. Four weeks is much too slow for feedback to be useful, and results in students focussing on marks.
Assessment case study: what to change?
Reduce the variety of assignments and plan progression across three years for each type, with invariant criteria, and many exemplars of different quality, for each type of assignment
Increase formative assessment: dry runs at what will later be marked, sampling for marking
Reduce summative assessment: one per module is enough (24 in three years), or longer/bigger modules
Separate formative assessment and feedback from summative assessment and marks … give feedback quickly, marks later, or feedback on drafts and marks on final submission
Teachers to accept that the whole is currently less than the sum of the parts (and that current feedback effort is largely wasted) and to give up some autonomy within modules for the sake of students’ overall experience of the programme – so teachers’ effort is worthwhile
Case studies
1 Full data plus Programme Leader’s interpretation (Carol)
2 Full data for groups to make sense of and decide what to do about
Quality assurance issues
QAA and Bologna requirements for specification of learning outcomes at module level
Institutional requirements for all modules to be ‘free standing’ in terms of assessment, regardless of issues of consistency, progression or alignment at programme level
QAA audits critical of
–inconsistent standards (markers and modules)
–volume and timeliness of feedback
Quality assurance issues
Documentation about
–outcomes but not feedback
–no. of lectures but not volume of feedback or who provides it
–criteria, but not how students come to understand them
–module level tactics but not programme level strategy
Regulations about
–course requirements and pass/fail assignments
–no feedback before return of approved marks
–providing feedback on drafts
Meetings with PVCs, separately and together
–parallel changes in QA arrangements/QAA audit
–changes to documentation and regulations
–changes to evaluation data collected
–pilots with programmes coming up for revalidation
What are the framing QA issues in your institution?
Implementing and evaluating changes
Idiosyncratic nature of:
–Contexts
–Evidence
–Interpretation
–Parallel changes
–Proposals
–Time scales
Need for matching before-and-after sets of data:
–Same programme
–Equivalent cohort of students
–Audit
–AEQ
–Focus groups
–Same staff to interpret it?
Implementing TESTA in your institution
Methodology on TESTA web site – free and updated
Comparative data available – and updated. Send TESTA your data so it can be added to the database.
Case studies published on TESTA web site over time
Advice from TESTA staff available
One day of Graham’s time available to five institutions: write to me with proposals. Criteria: likely scale of implementation and impact; logistics
TESTA-arranged meeting between participating institutions (if felt useful)
Free to publish your own studies independently, and to get TESTA help with drafting and data analysis...
Good luck!