Small group consensus discussion tasks: CA-driven criteria
Chris Heady, INTO Newcastle University
Task example
What is the task?
Candidates must:
- Give their views
- Be able to respond to the task and the materials
- Be able to respond appropriately to others in the group
- Discuss their views and the views of others
- Work towards a consensus (not necessarily achieve one?)
- Show they can work collaboratively
- Be able to listen actively to others
- Show they can make useful, relevant and sensible points
- And more…
Conceptual assumptions
- Construct – small group study discussions (in class or outside)
- Weir (2005) model of test validity – criterion, scoring, consequential
- Learning on the Foundation programme
- Learning in School(s) at Newcastle University
- Co-construction – cf. IELTS, FCE, CAE?
- University – co-construction commonplace
Process 1: dissertation
'Features of Spoken Interaction in a Peer-Group Oral English Test and Evidence of Differential Performance'
- Foundation Architecture EAP module
- Video recordings, middle vs upper groups: 5.0-5.5 vs 6.5-8.0
- CA transcription and analysis = criterial features which support evidence of discrimination
Summary
Upper group (6.5-8.0):
- Listenership (McCarthy, 2003): clarifications, confirmations, back-channelling, overlaps, turn completion, comments
- Mix of long and short turns
- Able to pick up and develop other group members' points – from turn to turn and across turns
- Collaborative points (Galaczi, 2008)
- Complexity of points
Middle group (5.0-5.5):
- Listenership: fewer overlaps and back-channelling, rare turn completions
- Shorter turns
- Responds to others but little development of points
- Agrees and disagrees: responding (but not always developing)
- Separate opinions = parallel (Galaczi, 2008)
- Some task management
Process 2: criteria development
- Collaborative process
- What should we rate? What can we rate?
- Watch, observe, identify and reflect – narrowing of features
- Consensus
- Balance between interactional features and linguistic features
- Trialled with old and new sets of criteria – three examiners
- Standardisation and on-task moderation
- All assessments videoed for EE consideration
- User guide, student-facing guide
Criteria v1
Rater feedback
- Concerns about listenership and task preparation
- Language effectiveness sometimes problematic
- Positive about ease of use and a movement away from adverbs!
- Positive about use of the criteria as a teaching aid
- Listen and respond is an excellent addition and really differentiates this assessment from the presentation
- It disadvantages our higher-level students who really strive to reach top marks of 90; they dislike ending with a lower score than the one they entered with
- Accuracy of language is missing (links with the point above)
Further development:
- More clarification for assessors needed for 'specialist or topic vocabulary'
- Some disagreement amongst teachers with the overlapping/finishing turns section
- The language aspects can be difficult to keep track of, i.e. emphatic language and longer noun phrases
Opportunities / limitations
- Multimodality
- Paralinguistic features
- Washback
- Consequential validity – evidence collection
- Task achievement?
- Reference outside the test context?
- Scoring debates: should turn completion override others? Can students prepare for back-channelling etc.?
Very selected bibliography
Bachman, L. (1990) Fundamental Considerations in Language Testing. Oxford: Oxford University Press.
Bonk, W. J. and J. G. Ockey (2003) A many-facet Rasch analysis of the second language group oral discussion task. Language Testing 20(1), 89-110.
Brooks, L. (2009) Interacting in pairs in a test of oral proficiency: co-constructing a better performance. Language Testing 26(3), 341-366.
Galaczi, E. (2008) Peer-peer interaction in a speaking test: the case of the First Certificate in English examination. Language Assessment Quarterly 5(2), 89-119.
Gan, Z. (2010) Interaction in group oral assessment: a case study of higher- and lower-scoring students. Language Testing 27(4), 585-602.
Lazaraton, A. (1998) An analysis of differences in linguistic features of candidates at different levels of the IELTS Speaking Test. Report prepared for the EFL Division, University of Cambridge Local Examinations Syndicate, Cambridge.
McCarthy, M. (2003) Talking back: 'small' interactional response tokens in everyday conversation. Research on Language and Social Interaction 36(1), 33-63.
May, L. (2011) Interactional competence in a paired speaking test: features salient to raters. Language Assessment Quarterly 8(2), 127-145. Published online at http://dx.doi.org/10.1080/15434303.2011.565845, accessed 6 June 2014.
Seedhouse, P. (2012) What kind of interaction receives high and low ratings in Oral Proficiency Interviews? English Profile Journal 3, August 2012. Available at http://journals.cambridge.org/EPJ, last accessed 23 May 2014.
Van Moere, A. (2006) Validity evidence in a university group oral test. Language Testing 23(4), 411-440. Available at http://ltj.sagepub.com, last accessed 6 June 2014.
Van Moere, A. and M. Kobayashi (2003) Who speaks most in this group? Does that matter? Paper presented at the Language Testing Research Colloquium.
Weir, C. J. (2005) Language Testing and Validation. Basingstoke, UK: Palgrave Macmillan.