
1 How is evidence-based literature informing e-Assessment practice: Findings from an HEA project
Denise Whitelock, d.m.whitelock@open.ac.uk, CALRG 2012

2 The Challenge
Assessment drives learning (Rowntree, 1987). How do e-assessment and feedback support student learning?

3 I hate marking but want the tasks and feedback to assist student learning

4 HEA-funded project to:
Consult the academic community on useful references (seminar series, survey, advisors, invited contributors)
Prioritise evidence-based references
Synthesise the main points
For readers: academics using technology enhancement for assessment and feedback; learning technologists; managers of academic departments

5 Evidence-based literature
142 references, reviewed for: technology-enhanced methods; use for assessment and feedback; type of evidence; ease of access (18 could not be retrieved)

6 Categories of evidence used
Category 1a: Peer-reviewed generalizable study providing effect size estimates and which includes (i) some form of control group or treatment (may involve participants acting as their own control, such as before and after), and/or (ii) a blind or preferably double-blind protocol.
Category 1b: Peer-reviewed generalizable study providing effect size estimates, or sufficient information to allow estimates of effect size.
Category 2: Peer-reviewed 'generalizable' study providing quantified evidence (counts, percentages, etc.) short of allowing estimates of effect sizes.
Category 3: Peer-reviewed study.
Category 4: Other reputable study providing guidance.

7 Number of references recommended in each evidence category
Category 1a: 15 references (cumulative 12.1%)
Category 1b: 8 references (cumulative 18.5%)
Category 2: 12 references (cumulative 28.2%)
Category 3: 49 references (cumulative 67.7%)
Category 4: 40 references (cumulative 100.0%)
Total: 124 references

8 MCQ & EVS use with large learning gains, Draper (2009)
Assertion-reason questions, e.g. Pauli, Heisenberg, Planck, de Broglie: learners generate reasons for and against each answer
Confidence marking (Gardner-Medwin); see the scoring sketch below
Mazur's method of brain teasing: the role of peers and cognitive conflict
Students create MCQs for EVS: reflection on counter-arguments before you hear them
Students create MCQs for the final exam, which will increase exam performance
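Confidence (certainty-based) marking rewards an answer according to the certainty the student declares, so a confident wrong answer costs more than a tentative one. The Python sketch below illustrates the idea; the specific weights (1/2/3 for a correct answer, 0/-2/-6 for a wrong one) follow the commonly cited LAPT scheme and should be read as an illustrative assumption, not a definitive specification.

```python
# Illustrative certainty-based marking (CBM) in the style of
# Gardner-Medwin's LAPT scheme. Weights are assumptions for illustration.
CBM_MARKS = {
    1: (1, 0),    # low certainty: small reward, no penalty
    2: (2, -2),   # medium certainty
    3: (3, -6),   # high certainty: big reward, heavy penalty if wrong
}

def cbm_mark(correct: bool, certainty: int) -> int:
    """Return the mark for one answer given the declared certainty (1-3)."""
    reward, penalty = CBM_MARKS[certainty]
    return reward if correct else penalty

# Example: a confident wrong answer costs far more than a tentative one.
print(cbm_mark(correct=False, certainty=3))  # -6
print(cbm_mark(correct=True, certainty=1))   # 1
```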

9 Mobile Technologies and Assessment
MCQs on PDAs (Valdivia & Nussbaum, 2009)
Polls and instant surveys (Simpson & Oliver, 2007)
EVS (Draper, 2009)

10 CAP peer assessment system
BSc Network Management & Security (Intl.), Glamorgan, Phil Davies

11 Peer Assessment and the WebPA Tool
Loughborough (Loddington et al., 2009)
Students self-assess and peer-assess against given criteria; the group mark is awarded by the tutor
Students rated: more timely feedback, reflection, and fair rewards for hard work
Staff rated: time savings, administrative gains, automatic calculation (see the sketch below), and students having faith in the administrative system
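The "automatic calculation" refers to deriving individual marks from the tutor's group mark and the peer ratings. The sketch below shows the general idea (normalise each rater's scores, sum them per student, then scale the group mark); it is a simplified illustration of a WebPA-style weighting, not the exact WebPA algorithm, and the function and data layout are assumptions made here.

```python
from typing import Dict

def webpa_style_marks(group_mark: float,
                      ratings: Dict[str, Dict[str, float]]) -> Dict[str, float]:
    """Derive individual marks from a group mark and peer ratings.

    `ratings[rater][ratee]` is the score the rater gave the ratee
    (self-ratings included). Each rater's scores are normalised to sum
    to 1, the normalised scores for each student are summed across
    raters, and the result is scaled so that an average contributor
    receives exactly the group mark. Simplified illustration only.
    """
    members = list(ratings)
    totals = {m: 0.0 for m in members}
    for rater, scores in ratings.items():
        given = sum(scores.values())
        for ratee, score in scores.items():
            totals[ratee] += score / given  # normalise per rater
    n = len(members)
    scale = n / sum(totals.values())
    return {m: group_mark * totals[m] * scale for m in members}

# Example: three students; Bea is rated as carrying more of the work.
ratings = {
    "Ann": {"Ann": 3, "Bea": 5, "Cal": 2},
    "Bea": {"Ann": 3, "Bea": 4, "Cal": 3},
    "Cal": {"Ann": 2, "Bea": 5, "Cal": 3},
}
print(webpa_style_marks(70.0, ratings))  # Ann 56.0, Bea 98.0, Cal 56.0
```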

12 Authentic assessments: e-portfolios

13 Candidate Assessment Records section, OCR IT Practitioner, EAIHFE, Robert Wilsdon

14 Building e-portfolios on a chef’s course
Evidence of food preparation for the e-portfolio: Modern Apprenticeship in Hospitality and Catering, West Suffolk College, Mike Mulvihill

15 Sharing e-portfolios: The Netfolio concept
Social constructivism: connecting e-portfolios (Barbera, 2009) to share and build upon a joint body of evidence
Trialled with 31 PhD students at a virtual university
A control group was used, and the Netfolio group obtained higher grades
Greater visibility of the revision process and peer assessment in the Netfolio system

16 MCQs: Variation on a theme (1)
An example of a COLA assessment used at Reid Kerr College, Paisley: a multiple response question used in one of their modules. This question was developed using Questionmark Perception at the University of Dundee and is part of a set of formative assessments for medical students.

17 MCQs: Variation on a theme (2)
Example of LAPT Certainty-Based Marking: a UK cabinet ministers demo exercise showing feedback, University College London, Tony Gardner-Medwin
Drug Chart Errors and Omissions, Medicines Administration Assessment, Chesterfield Royal Hospital

18 Self-diagnosis
Basic IT skills for first-year medical students (Sieber, 2009)
Competency-based testing; repeating tests for revision; enables remedial intervention

19 Students want more support with assessment
More feedback, quicker feedback, full feedback, user-friendly feedback, and the National Student Survey

20 Gains from Interactivity with Feedback: Formative Assessment
Mean effect size on standardised tests between 0.4 and 0.7 (Black & Wiliam, 1998)
Particularly effective for students who have not done well at school
Can keep students to a timescale and motivate them
How can we support our students to become more reflective learners and enter a digital discourse?
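For reference, the effect size quoted here is the standardised mean difference (Cohen's d) between the formative-assessment and control conditions,

d = \frac{\bar{x}_{\text{formative}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}

so a value of 0.4 to 0.7 means the average student receiving interactive formative feedback scores roughly half a standard deviation above the control-group mean on a standardised test.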

21 Kent University ab-initio Spanish module (LISC: Aily Fowler)
Large student numbers; a skills-based course
Providing sufficient formative assessment meant unmanageable marking loads
Impossible to provide immediate feedback, leading to fossilisation of errors

22 The LISC solution: developed by Ali Fowler
A CALL system designed to enable students to:
Independently practise sentence translation
Receive immediate (and robust) feedback on all errors
Attend immediately to the feedback (before fossilisation can occur)

23 How is the final mark arrived at in the LISC System?
The two submissions are unequally weighted
It is best to give more weight to the first attempt, since this ensures that students give careful consideration to the construction of their first answer but can still improve their mark by refining it
The marks ratio can vary depending on the assessment/feedback type: the more information given in the feedback, the lower the weight the second mark should carry
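A minimal sketch of how such an unequally weighted two-attempt mark could be combined is shown below in Python. The 70/30 split is an illustrative assumption, not the ratio used in LISC, which the slides note varies with the feedback type.

```python
def lisc_style_mark(first_attempt: float, second_attempt: float,
                    first_weight: float = 0.7) -> float:
    """Combine two attempt marks (each 0-100) into a final mark.

    The first attempt carries more weight so that students construct
    their initial answer carefully; the 0.7/0.3 split is illustrative
    only, since the actual ratio depends on the feedback given.
    """
    second_weight = 1.0 - first_weight
    return first_weight * first_attempt + second_weight * second_attempt

# Example: 60 on the first attempt, 90 after acting on the feedback.
print(lisc_style_mark(60, 90))  # 69.0
```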

24 Heuristics for the final mark
If the ratio is skewed too far in favour of the first attempt, students are less inclined to try hard to correct non-perfect answers
If the ratio is skewed too far in favour of the second attempt, students exhibit less care over the construction of their initial answer

25 Free text entry
IAT (Jordan & Mitchell, 2009)
Open Comment (Whitelock & Watt, 2008)
McFeSPA (Kochakornjarupong & Brna, 2010): supports teaching assistants in marking and giving feedback on undergraduate computer programming assignments; a support tool for semi-automated marking and scaffolding of feedback

26 McFeSPA system
Supports teaching assistants in marking and giving feedback on undergraduate computer programming assignments
A support tool for semi-automated marking and scaffolding of feedback
Findings suggest the feedback model would be helpful in training tutors, similar to the Open Comment findings

27 Audio Feedback (Middleton & Nortcliffe, 2010)
Timely and meaningful
Manageable for tutors to produce and for the learner to use
Clear in purpose, adequately introduced and pedagogically embedded
Technically reliable and not adversely determined by technical constraints or difficulties
Targeted at specific students, groups or cohorts, addressing their needs with relevant points in a structured way
Produced within the context of local assessment strategies and, if appropriate, in combination with other feedback methods, using each medium to good effect
Brief, engaging and clearly presented, with emphasis on key points that demand a specified response from the learner
Of adequate technical quality to avoid technical interference in the listener's experience
Encouraging, promoting self-esteem
Formative, challenging and motivational

28 Elliott's characteristics of Assessment 2.0 activities
Authentic: involving real-world knowledge and skills
Personalised: tailored to the knowledge, skills and interests of each student
Negotiated: agreed between the learner and the teacher
Engaging: involving the personal interests of the students
Recognises existing skills: willing to accredit the student's existing work
Deep: assessing deep knowledge, not memorisation
Problem oriented: original tasks requiring genuine problem-solving skills
Collaboratively produced: produced in partnership with fellow students
Peer and self assessed: involving self-reflection and peer review
Tool supported: encouraging the use of ICT

29 Creating teaching and learning dialogues: towards guided learning supported by technology
Learning to judge
Providing reassurance
Providing a variety of signposted routes to achieve learning goals

30 Key Messages
Effective, regular online testing can encourage student learning and improve performance in tests (JISC, 2008)
Automated marking can be more reliable than human markers, and there is no medium effect between paper and computerised exams (Lee and Weerakoon, 2001)
The success of technology-enhanced assessment and feedback lies with the pedagogy rather than the technology itself; technology is an enabler (Draper, 2009)

31 Key Messages 2
Technology-enhanced assessment is not restricted to simple questions with clear-cut right and wrong answers; much more sophisticated questions are being used as well (Whitelock & Watt, 2008)
The design of appropriate and constructive feedback plays a vital role in the success of assessment, especially assessment for learning (Beaumont, O'Doherty & Shannon, 2008)

32 Key Messages 3
Staff development is essential to the process (Warburton, 2009)
Prepare students to take technology-enhanced assessments by practising with similar assessments using the same equipment and methods (Shephard et al., 2006)
The reports generated by many technology-enhanced assessment systems, including commercial systems, are very helpful in checking the reliability and validity of each test item and of the test as a whole (McKenna and Bull, 2000)

33 References
Beaumont, C., O'Doherty, M. and Shannon, L. (2008). Staff and student perceptions of feedback quality in the context of widening participation. Higher Education Academy. Retrieved May 2010.
Draper, S. (2009). Catalytic assessment: understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology, 40(2).
JISC, HE Academy and ALT (2008). Exploring Tangible Benefits of e-Learning. Retrieved May 2010.
Lee, G. and Weerakoon, P. (2001). The role of computer-aided assessment in health professional education: a comparison of student performance in computer-based and paper-and-pen multiple-choice tests. Medical Teacher, 23(2).
McKenna, C. and Bull, J. (2000). Quality assurance of computer-assisted assessment: practical and strategic issues. Quality Assurance in Education, 8(1).

34 References 2
Middleton, A. and Nortcliffe, A. (2010). Audio feedback design: principles and emerging practice. In D. Whitelock and P. Brna (eds), Special Issue 'Focusing on electronic feedback: feasible progress or just unfulfilled promises?', International Journal of Continuing Engineering Education and Life-Long Learning, 20(2).
Shephard, K., Warburton, B., Maier, P. and Warren, A. (2006). Development and evaluation of computer-assisted assessment in higher education in relation to BS7988. Assessment & Evaluation in Higher Education, 31(5), 583-595.
Strang, K.D. (2010). Measuring self-regulated e-feedback, study approach and academic outcome of multicultural university students. In D. Whitelock and P. Brna (eds), Special Issue 'Focusing on electronic feedback: feasible progress or just unfulfilled promises?', International Journal of Continuing Engineering Education and Life-Long Learning, 20(2).
Warburton, B. (2009). Quick win or slow burn: modelling UK HE CAA uptake. Assessment & Evaluation in Higher Education, 34(3), 257-272.
Whitelock, D. and Watt, S. (2008). Reframing e-assessment: adopting new media and adapting old frameworks. Learning, Media and Technology, 33(3), 153-156. Routledge, Taylor & Francis Group.

35 Four Assessment Special Issues
Whitelock, D. and Warburton, W. (in press). Special Issue of the International Journal of e-Assessment (IJEA), 'Computer Assisted Assessment: Supporting Student Learning'.
Brna, P. and Whitelock, D. (Eds.) (2010). Special Issue of the International Journal of Continuing Engineering Education and Life-Long Learning, 'Focusing on electronic feedback: Feasible progress or just unfulfilled promises?', Vol. 20, No. 2.
Whitelock, D. (Ed.) (2009). Special Issue on e-Assessment: Developing new dialogues for the digital age. British Journal of Educational Technology, Vol. 40, No. 2.
Whitelock, D. and Watt, S. (Eds.) (2008). Reframing e-assessment: adopting new media and adapting old frameworks. Learning, Media and Technology, Vol. 33, No. 3.

