Lecture 8 Teaching Writing in EFL/ESL Joy Robbins


Assessing Writing (1)

Today’s Session
- Your own experiences of assessment
- The purposes of assessment
- The concepts of reliability and validity in assessment
- 3 different approaches to the scoring of writing tests:
  1. Holistic scoring
  2. Analytic scoring
  3. Primary and multiple trait scoring

Assessment: Introductory discussion
- What’s the point of assessing writing?
- How have your teachers at school and university assessed your writing in your 1st and 2nd languages? Do you think there was any point in assessing you? Why (not)?
- In what ways have the scores and grades you have received on your writing (in L1 and L2) helped you improve your writing?
- If you are an experienced language teacher, what do you feel are your greatest challenges in evaluating student writing? If you aren’t an experienced teacher, what makes you nervous about assessing student writing? Why?
(Based on questions in Ferris & Hedgcock 1998: 227)

What’s the point of assessment?
Brindley (2001) lists the following purposes of assessment:
- selection: e.g. to determine whether learners have sufficient language proficiency to be able to undertake tertiary study;
- certification: e.g. to provide people with a statement of their language ability for employment purposes;
- accountability: e.g. to provide educational funding authorities with evidence that intended learning outcomes have been achieved and to justify expenditure;
- diagnosis: e.g. to identify learners’ strengths and weaknesses;
- instructional decision-making: e.g. to decide what material to present next or what to revise;
- motivation: e.g. to encourage learners to study harder. (p.138)

2 key terms Two key terms in the literature on testing and assessment are reliability and validity. Let’s have a closer look at what each of these means…

Reliability
‘reliability refers to the consistency with which a sample of student writing is assigned the same rank or score after multiple ratings by trained evaluators’ (Ferris & Hedgcock 1998: 230)
For example: if we’re marking an essay out of 20, the test is far more reliable if 2 markers award the essay the same grade (or more or less the same grade), say 16 and 17. However, if 1 marker awards 10 and the other awards 15, the test isn’t reliable.
The obvious way to try to achieve reliability is to design criteria (e.g. for content, organization, grammar, etc.) which the markers refer to when they’re marking the essay.
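This consistency idea can be put into numbers by comparing two markers’ scores for the same set of essays. A minimal sketch in Python (all the marks here are invented for illustration, and the 2-mark tolerance is an arbitrary rule of thumb, not a standard from the assessment literature):

```python
# Hypothetical marks out of 20 awarded by two trained markers
# to the same five essays (invented data for illustration only).
marker_a = [16, 12, 18, 9, 14]
marker_b = [17, 11, 18, 10, 15]

# Mean absolute difference: how far apart the two markers are on average.
diffs = [abs(a - b) for a, b in zip(marker_a, marker_b)]
mean_abs_diff = sum(diffs) / len(diffs)

print(mean_abs_diff)  # 0.8 -> the markers rarely differ by more than a point
print(max(diffs))     # 1  -> worst-case disagreement

# Crude rule of thumb: if the markers ever differ by more than
# 2 marks out of 20, treat the scoring as not reliable enough.
reliable = all(d <= 2 for d in diffs)
print(reliable)       # True
```

In practice, testing researchers use more formal statistics (e.g. inter-rater correlations), but the underlying question is the same one the quote raises: do independent trained raters assign essentially the same score to the same script?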

Validity
Validity refers to whether the test actually measures what it is supposed to measure.
Researchers have talked about several types of validity, for example:
- face validity
- content validity

Face validity
Face validity refers to how acceptable and credible a test is to its users (Alderson et al 1995). So if a test has high face validity, teachers and learners believe it tests what it is supposed to test.
A test would have low face validity among learners if, for example, they had been told a writing test was mainly assessing the quality of their ideas, but they believed that teachers actually marked according to how good their grammar was.

Content validity
If a test has content validity, it elicits enough language for us to make a judgement about the student’s ability. So if a writing test is to have content validity, we need to be confident that we have asked the student to do enough writing to display their writing skills.

2 approaches to scoring writing There are 2 main ways of scoring writing tests, the holistic approach and the analytic approach Let’s look at each of these in turn…

Holistic scoring
Holistic scoring means that the assessor assesses the text as a whole, rather than focusing on 2 or 3 specific aspects. The idea is that the assessor quickly reads through a text, gets a global impression, and awards a grade accordingly. The holistic approach is supposed to respond to the writing positively, rather than negatively focusing on the things the writer has failed to do.
Let’s look at an example of holistic grading criteria...

Holistic writing assessment: an example Have a look at the example of a holistic marking scheme I’ve given you on the handout, and discuss the questions… Afterwards, based on this example, make a list of pros and cons of using a holistic approach to assessing writing

Holistic scoring: advantages Quick and easy, because there are few categories for the teacher to choose from

Holistic scoring: disadvantages
- Holistic scoring can’t provide the writing teacher with diagnostic information about students’ writing, because it doesn’t focus on tangible aspects of writing (e.g. organization, grammar, etc.)
- The holistic approach only produces a single score, so it’s less reliable than the analytic approach, which produces several scores (e.g. for content, organization, grammar, etc.)…unless more than 1 assessor marks the tests
- A single score can be difficult to interpret for both teachers and students (‘What does 70% actually mean?’ ‘What did I do well?’ ‘What did I do badly?’)

Holistic disadvantages (contd.)
‘…the same score assigned to two different texts may represent entirely distinct sets of characteristics even if raters’ scores reflect a strict and consistent application of the rubric. This can happen because a holistic score compresses a range of interconnected evaluations about all levels of the texts in question (i.e., content, form, style, etc.)’. (Ferris & Hedgcock 1998: 234)
Even though assessors are supposed to assess a range of features in holistic scoring (e.g. style, content, organization, grammar, spelling, punctuation, etc.), this isn’t easy to do. So some assessors may (consciously or unconsciously) treat 1 or 2 of these criteria as more important than the others, and give more weighting to these in their scores (Lumley & McNamara 1995; McNamara 1996).

Analytic scoring
Analytic scoring separates different aspects of writing (e.g. organization, ideas, spelling) and grades them separately.
Let’s look at an example of analytic grading criteria...

Analytic writing assessment: an example Have a look at the example of an analytic marking scheme I’ve given you on the handout, and discuss the questions… Afterwards, based on this example, make a list of pros and cons of using an analytic approach to assessing writing

Analytic scoring: advantages
- Analytic schemes provide learners with much more meaningful feedback than holistic schemes. Teachers can hand students’ essays back with the marks awarded for each criterion (e.g. marks out of 10 for organization, spelling, etc.) circled
- Analytic schemes can be designed to reflect the priorities of the writing course. So, for instance, if you have stressed the value of good organization on your course, you can weight the analytic criteria so that organization is worth 60% of the marks
- Because assessors are assessing specific criteria, it’s easier to train them than assessors who are using holistic schemes (Cohen 1994; McNamara 1996; Omaggio Hadley 1993; Weir 1990)
- Analytic assessment is more dependable than holistic assessment (Jonsson & Svingby 2007: 135)
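The weighting idea is just arithmetic. A minimal sketch (the category names, marks, and weights below are invented for illustration; they are not from a published scheme):

```python
# Hypothetical analytic marks, each out of 10 (invented data).
scores = {"organization": 8, "content": 7, "grammar": 6, "spelling": 9}

# Weights reflecting course priorities: organization counts for 60%.
# The weights must sum to 1.0.
weights = {"organization": 0.6, "content": 0.2, "grammar": 0.1, "spelling": 0.1}

# Weighted total as a percentage.
total = sum(scores[c] / 10 * weights[c] for c in scores) * 100
print(round(total, 1))  # 77.0
```

Because organization carries most of the weight, the strong organization mark (8/10) pulls the overall percentage up despite the weaker grammar mark.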

Analytic scoring: disadvantages
- Surely a piece of good writing can’t be judged on just 3 or 4 criteria?
- The scales may not be used separately (even though they should be). So, for instance, if the assessor gives a student a very high mark on the ‘ideas’ scale, this may influence the marks they award the student on the other scales
- Descriptors for each scale may be difficult to use (e.g. ‘What does “adequate organization” mean?’)

Primary and multiple trait scoring
We’ve seen how the analytic approach can be criticized for trying to assess a piece of writing on just 3 or 4 criteria. Although primary and multiple trait scoring also uses specific criteria to assess writing, the advantage of this approach is that the criteria assessed depend on what kind of writing the student is doing. So primary and multiple trait scoring involves ‘devising and deploying a scoring guide that is unique to each prompt and the student writing that it generates’ (Ferris & Hedgcock 1998: 241).

Primary and multiple trait scoring: examples
If the writing exam consisted of persuasive writing (e.g. Justify the case for the legalization of drugs), we might design a scoring scheme based exclusively on the ability to develop an argument.
If we were using primary trait scoring, just 1 trait would be assessed; if we were using multiple trait scoring, 2 or more traits would be assessed.
So in the persuasive writing exam described above, we might design a scoring scheme which not only assessed the student’s ability to develop an argument, but also their use of counterargument, the credibility of the sources they use to support their argument, etc.

Sample multiple trait scoring guide (Ferris & Hedgcock 2005: 317)
Timed writing #3 – Comparative Analysis
In their respective essays, Chang (2004) and Hunter (2004) express conflicting perspectives on how technology has influenced the education and training of the modern workforce. You will have 90 minutes in which to explain which author presents the most persuasive argument and why. On the basis of a brief summary of each author’s point of view, compare the two essays and determine which argument is the strongest for you. State your position clearly, giving each essay adequate coverage in your discussion.

Sample multiple trait scoring guide (Ferris & Hedgcock 2005: 317)

Multiple trait scoring: advantages
- Multiple trait scoring doesn’t treat all writing as the same: it assesses (or should assess) the really important skills involved in different types of writing
- Provided the teacher has discussed the scoring criteria with the class before the exam, the students know exactly what they are being assessed on

Multiple trait scoring: disadvantages
- It can be extremely time-consuming to design specific assessment criteria for each type of writing (Perkins 1983)
- Scoring criteria would need to be extensively piloted to ensure they really are assessing the writing fairly
Having discussed the holistic, analytic, and primary/multiple trait approaches, we’re now going to try scoring an assignment using the holistic approach…

Application and discussion: holistic scoring
Use Ferris & Hedgcock’s holistic marking scheme to assess a paper written by a student on a pre-master’s academic English course at a UK university. You need to do 2 things:
1. Give the paper a score based on the holistic criteria;
2. Write on the paper, making specific comments on the writing

Application and discussion (contd.)
In pairs or groups, compare your score and comments with those of your colleagues. On what points did you agree or disagree? Why? If you disagreed, try to arrive at a consensus evaluation of the essay.
After identifying the sources of your agreement and disagreement, formulate a list of future suggestions for using holistic scoring rubrics. (Ferris & Hedgcock 1998: 261)

References
- Alderson JC et al (1995) Language Test Construction and Evaluation. Cambridge: Cambridge University Press.
- Brindley G (2001) Assessment. In R. Carter & D. Nunan (eds.), The Cambridge Guide to Teaching English to Speakers of Other Languages. Cambridge: Cambridge University Press, pp.137-143.
- Cohen A (1994) Assessing Language Ability in the Classroom (2nd ed.). Boston: Heinle & Heinle.
- Ferris D & Hedgcock JS (1998) Teaching ESL Composition: Purpose, Process, and Practice. Mahwah: Lawrence Erlbaum.
- Jonsson A & Svingby G (2007) The use of scoring rubrics: reliability, validity and educational consequences. Educational Research Review 2(2): 130-144.
- Lumley T & McNamara T (1995) Rater characteristics and rater bias: implications for training. Language Testing 12: 54-71.
- McNamara T (1996) Measuring Second Language Performance. London: Longman.
- Omaggio Hadley A (1993) Teaching Language in Context (2nd ed.). Boston: Heinle & Heinle.
- Perkins K (1983) On the use of composition scoring techniques, objective measures, and objective tests to evaluate ESL writing ability. TESOL Quarterly 17: 651-671.
- Weir CJ (1990) Communicative Language Testing. New York: Prentice Hall.

This week’s reading Chapters 5 and 6 of: Ferris D & Hedgcock JS (2005) Teaching ESL Composition: Purpose, Process, and Practice. Mahwah: Lawrence Erlbaum. Min H-T (2005) Training students to become successful peer reviewers. System 33: 293-308.

Homework task
Use the analytic scoring scale to grade the pre-sessional piece of writing you graded holistically earlier today. Then work through the following questions:
- How well do your analytic ratings match your holistic ratings?
- Where do the two sets of scores and comments differ? Why?
- Given the nature of the writing tasks you evaluated, which of the two scales do you feel is most appropriate? Why?
- How might you modify one or both of the scales to suit the students you teach?
(Adapted from Ferris & Hedgcock 1998: 261-2)