World Englishes, English as a Lingua Franca and Language Testing: change on the horizon?
Lynda Taylor – Discussant
Language Testing Research Colloquium
Seoul, Korea – July 2013
Overview
- The background context
- An historical/evolutionary perspective?
- Key themes addressed and insights gained
- Where next… some controversial ‘hypotheses’?
Background
Perceived challenges for language testing and language testers:
- the issue of whose norms should take primacy in assessment standards (the WE challenge)
- the issue of whether current models of language proficiency sufficiently reflect the communicative demands of speech communities where norms are fluid (the ELF challenge)
An evolutionary expectation?
- World Englishes (1993) to the present day (2013) – a 20-year period seemingly long enough to expect some degree of change to take place? (‘change on the horizon’)
- growing awareness of linguistic/lingua-cultural variety/diversity and its (sociolinguistic) implications for assessment practices
An evolutionary expectation?
‘despite these conceptual advances, in practical terms only small progress has been made in answering each challenge in language testing design’
- why is that?
- what do we think could/should have happened?
- what might such test design look like?
- how do we think the WE/ELF challenges might have been addressed in practical terms?
An evolutionary expectation?
might we have expected to see by now:
- the inclusion of Hispanic or Indian English speakers in the TOEFL iBT/IELTS listening tests?
- the development by the Council of Europe of tests of English as a Lingua Franca (ELF) designed for the European context (e.g. the European Parliament)?
- would these have moved us forward? or something else even more innovative?
- how far can we expect research to provide us with solutions?
Key themes addressed
1. the complex area of listener/speaker attitudes and behaviours
2. the complex issues surrounding purpose and context of communication
3. the complex endeavour of construct definition for assessment purposes
1. Attitudes and behaviours
- the reality of the individual listener's experience of listening and (not) understanding (clarity?)
- the reality of individual listener perceptions regarding accentedness (fluency + pronunciation?)
- the importance of distinguishing between ‘comprehensibility’ and ‘intelligibility’
1. Attitudes and behaviours
- the impact of listener/speaker attitudes on behaviours (though not necessarily in predictable ways?)
- the likely interaction between individual attitudes and behaviours, and interactional contexts involving high-pressure/high-stakes communicative demands
2. Communicative contexts
- the communicative reality and demands in a multilingual assessment context (generally high-stakes for at least one of the participants)
- the communicative reality and demands in a multilingual professional context (high-pressure – due to urgency, safety, distress – and therefore high-stakes for a wide range of stakeholders)
2. Communicative contexts
- the complexity of decision-making processes – especially when made under time pressure, heavy cognitive load and/or in an emotionally charged situation, e.g. as a rater, a pilot, an air traffic controller (but also in healthcare, the military)
- complexities independent of L1/L2 distinctions:
  - professional competence/knowledge/experience
  - procedural/convention compliance
  - linguistic competence, including accommodation and listener/speaker effort
3. Construct definition
- what should be included when defining the construct for assessment purposes?
- how does linguistic variety interface with construct representativeness for listening test material?
- how is accentedness associated with construct underrepresentation and construct-irrelevant variance?
- could rater variability regarding perceptions of accentedness somehow be ‘embraced’ as construct-relevant?
3. Construct definition
potential additional content for inclusion in a broader listening/speaking construct:
- ‘problematising’ accentedness (positively or negatively?)
- ‘compensating’ in response to accent-related difficulty, e.g. inferencing
- embracing a stronger social dimension to interactional competence (tolerance of ‘the other/stranger’, the dynamic of co-construction)
- but not just ELF-driven constructs/competencies?
World Englishes (1993)
- the title of the special issue = ‘Testing across cultures’
- emphasised ‘distinctive’ linguistic differences across WE categories/groupings
- similarly, the more recent ELF enterprise stresses the distinctive features of ELF
- but dangers of ever greater diversification? time to move in the opposite direction?
- refocusing on the features of linguistic competence shared by listener/speakers regardless of L1 status
What next… some controversial ‘hypotheses’?
Hypothesis One
The native speaker/non-native speaker paradigm has largely outlived its usefulness in language testing; for construct definition, test design and rating scale development we should instead move towards a paradigm premised upon notions of expertise (i.e. a novice–expert user continuum), as discussed in the psycholinguistic literature.
Hypothesis Two
Neither the WE construct nor the more recent ELF construct offers language testers much help in the practice of designing and constructing language tests. The first (WE) is too ideologically driven and its fixed categories have not evolved as the world has changed; the second (ELF) remains too underspecified, e.g. it does not properly account for the role of suprasegmentals in comprehensibility.
Hypothesis Three
The language proficiency construct underpinning English language tests should be reconceptualised to reflect the reality that all listener/speakers of English (regardless of L1) need to be able to cope with the challenges of intelligibility and the demands of co-construction, including accommodation and listener/speaker effort; proficiency tests should be redesigned to reflect this, in accordance with test purpose and context.
Hypothesis Four
Rater variability (bias) should no longer be perceived as a negative dimension when assessing spoken language proficiency; instead it should be reconceptualised as a potentially desirable component when evaluating spoken performance, contributing valuable information which can be incorporated into the measurement outcomes and the interpretation of scores.
Change on the horizon…? Time for some discussion!