Criterial features
- If you have examples of language use by learners (differentiated by L1, etc.) at different levels, you can use them to find the criterial features associated with each level of proficiency.
- Those criterial features can be used in two ways:
  - as a measure of expected proficiency and as a guide to the focus of language teaching
  - as input to automatic testing of learners' levels (we return to this topic in a later class)
Criterial features
There are different ways you can gather criterial features, for example:
- Lexicogrammatical features, perhaps using an n-gram analysis (a minimal sketch follows below)
- A large set of grammatical features; you could count features such as passives, relative pronouns, participial clauses, and nominalisations
- A grammatical analysis: parsing
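This is not from the reading; it is a minimal, generic sketch of the kind of n-gram count a lexicogrammatical analysis starts from, using only the Python standard library.

```python
from collections import Counter

def ngrams(tokens, n):
    """All n-grams (as tuples) in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

text = "the cat sat on the mat because the cat was tired"
tokens = text.lower().split()

# Bigrams that are frequent in learner texts at one proficiency level but
# rare at another are candidate criterial features.
print(Counter(ngrams(tokens, 2)).most_common(3))
```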
Criterial features
The reading opts for the latter; note that this differs from the lexico-grammatical view I have described.
- Step 1 is to tag the words with part-of-speech tags.
- There are about 50 or so POS tags, not just V, N, P, etc. (e.g., the CLAWS7 tagset).
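As an illustration (not the tagger used in the reading), NLTK's off-the-shelf tagger assigns Penn Treebank tags, a smaller set than CLAWS7 but the same idea:

```python
import nltk
# One-time model downloads:
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

tokens = nltk.word_tokenize("The cat sat on the mat.")
print(nltk.pos_tag(tokens))
# -> [('The', 'DT'), ('cat', 'NN'), ('sat', 'VBD'),
#     ('on', 'IN'), ('the', 'DT'), ('mat', 'NN'), ('.', '.')]
```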
Criterial features
The next stage is parsing: analysing sentences in terms of verb frames.
- NP V PP (Noun Phrase – Verb – Prepositional Phrase): the cat sat on the mat
- Parsing in this case is based on a probabilistic grammar and so is corpus-based.
- In addition, the grammatical relations are extracted; thus "the cat" above is identified as SUBJECT.
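A sketch of the same idea with spaCy's statistical dependency parser (not the parser used in the reading); it assumes the small English model is installed.

```python
import spacy

# Assumes: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

for token in nlp("The cat sat on the mat."):
    print(token.text, token.dep_, "<-", token.head.text)
# "cat" is labelled nsubj (subject of "sat"), and "on" attaches to
# "sat" as a prepositional modifier, recovering the NP V PP frame.
```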
Analysis of L2 language
Criterial features
Having done this analysis, it is possible to analyse and compare students at different levels of proficiency, as discussed in the paper.
Automatic Writing Evaluation
AWE
- Why might it be impossible?
- Why might it be possible?
AWE
Why is it needed?
- Shift from product to process approaches to writing
- Use of editing software (e.g., Word), and hence multiple drafts rather than the one or two drafts of the typewriter era
- Writing as a way to demonstrate proficiency
- Use for testing and for feedback
Warschauer & Page
Foltz, Lochbaum & Rosenstein (Pearson)
What kinds of features potentially distinguish writing proficiency levels?
Ideas from previous classes and readings?
Samples
- Levels 1 to 5: can you assess them correctly?
- Are there features that can be used in an automatic system?
New tools
- We will probably need to make use of a variety of tools and types of analysis.
- Basic methodology: create a large database of manually evaluated writing samples, categorised by relevant features (one possible data layout is sketched below):
  - type of writing
  - set prompts
  - L1 (possibly)
  - discipline
- Concept: patterns can be extracted from large corpora (patterns may not be identifiable by humans).
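The field names below are illustrative assumptions, not from the slides; this is just one way the sample database could be laid out in code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WritingSample:
    """One manually evaluated sample; fields mirror the categories above."""
    text: str
    level: int                        # human-assigned proficiency level
    text_type: str                    # type of writing
    prompt: str                       # the set prompt
    l1: Optional[str] = None          # writer's first language, if recorded
    discipline: Optional[str] = None

samples = [
    WritingSample("My town is small...", level=2,
                  text_type="description", prompt="Describe your town",
                  l1="Korean"),
]
```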
General resources for language analysis
- Taggers (POS)
- Parsers/tree taggers
- Semantic analysis:
  - WordNet
  - FrameNet
  - Semantic tagging
Language resources
- We can consult these resources manually.
- In computer-based systems, they can be accessed automatically.
WordNet
WordNet
Hypernym
Another search
WordNet
- Here we checked the web pages manually.
- Computational systems access WordNet directly.
- They use levels and "distances" (in the tree structure) to assess different kinds and degrees of semantic relatedness, as sketched below.
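A minimal sketch with NLTK's WordNet interface; the specific synset names are simply the first senses of each word.

```python
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

bus = wn.synset('bus.n.01')
car = wn.synset('car.n.01')
biscuit = wn.synset('biscuit.n.01')

print(car.hypernyms())                # one level up the is-a tree
print(bus.path_similarity(car))       # closer in the tree -> higher score
print(bus.path_similarity(biscuit))   # distant branches -> lower score
```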
FrameNet
Search for "give"
Semantic tags
There are different ways of attempting semantic tagging:
- Give "sense" definitions of a word, e.g., run a company → run [10] (sense 10 in the COBUILD dictionary)
- Provide semantic type information
Semantic tags
Similar to a thesaurus organisation
Semantic tagging
Semantic tag list
Semantic tags
Examples:
- doubted_A7- → UNLIKELY
- vague_A4.2- → GENERAL
- best_A policy_X7+ → EVALUATION: GOOD, WANTED
- truth_A → EVALUATION: TRUE
- lies_A → EVALUATION: FALSE
AWE features
Different kinds of features and methods are used in AWE.
- Surface features: word structure, lexical items, POS, sentence structure (the latter two require tagging and parsing, so they are not overtly "surface").
- Typically scores are higher with (a rough counting sketch follows below):
  - more words
  - longer words
  - subordination
  - passives
  - nominalizations
  - prepositions
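A crude illustrative sketch of such counts; real systems use a parser, and the passive heuristic here (a form of "be" followed by a past participle) is my own simplification, not how any particular scorer works.

```python
import nltk   # assumes the punkt and tagger models are downloaded

BE = {"am", "is", "are", "was", "were", "be", "been", "being"}

def surface_features(text):
    tokens = nltk.word_tokenize(text)
    words = [t for t in tokens if t.isalpha()]
    tagged = nltk.pos_tag(tokens)
    # Crude passive heuristic: a form of "be" followed by a past participle.
    passives = sum(1 for (w, _), (_, t) in zip(tagged, tagged[1:])
                   if w.lower() in BE and t == "VBN")
    return {"n_words": len(words),
            "mean_word_len": sum(map(len, words)) / max(len(words), 1),
            "n_passives": passives}

print(surface_features("The essay was written quickly. It is quite long."))
```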
More sophisticated analyses
- Cohesion analysis
- Latent Semantic Analysis (LSA)
- Move analysis
AWE features
Cohesive features (Crossley and McNamara, Journal of Research in Reading, 35(2)); two of these are sketched below:
- Lexical diversity: more word "types"
- Word overlap: to increase cohesion
- Synonyms
- Connectives
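A toy sketch of two of these features: adjacent-sentence word overlap, and type/token ratio as a lexical-diversity proxy. This is not Coh-Metrix, just the underlying idea.

```python
def sentence_overlap(sentences):
    """Shared words between adjacent sentences - a rough overlap measure."""
    sets = [set(s.lower().rstrip('.').split()) for s in sentences]
    return [len(a & b) for a, b in zip(sets, sets[1:])]

def type_token_ratio(text):
    """More word types per token = greater lexical diversity."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

essay = ["The reef is dying.", "The reef needs help.", "Tourism grew fast."]
print(sentence_overlap(essay))          # [2, 0]: cohesion break at sentence 3
print(type_token_ratio(" ".join(essay)))
```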
Coh-Metrix
Coh-Metrix
Crossley and McNamara 2012
Latent Semantic Analysis
- Large corpora are used to assess meaning similarity among words or passages.
- The meaning of a word relates to all the passages in which the word appears, which can be represented as a matrix.
- The matrix row (vector) for bus will be more similar to that for car than to that for biscuit.
Latent Semantic Analysis
- The meaning of a passage relates to all the words that appear in the passage.
- These vectors are the input to processing that is something like a factor analysis.
- The number of dimensions is reduced to give a more abstract and deeper analysis of word-context relations (maybe similar to human cognition).
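A minimal sketch of the pipeline with scikit-learn: a word-passage matrix, a factor-analysis-like dimensionality reduction (truncated SVD), and cosine similarity in the reduced space. The four passages are stand-in data; a real LSA space would use a large corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "the bus stopped and the car stopped",
    "a car and a bus carry passengers",
    "she ate a biscuit with her tea",
    "tea and a biscuit make a snack",
]

X = TfidfVectorizer().fit_transform(passages)   # word-passage matrix
svd = TruncatedSVD(n_components=2)              # the factor-analysis-like step
vecs = svd.fit_transform(X)                     # passages in the reduced space

sim = cosine_similarity(vecs)
print(sim[0, 1], sim[0, 2])   # bus/car passages closer than bus/biscuit ones
```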
Latent Semantic Analysis
Initial analysis: an LSA space is created from instructional materials (or something similar) plus student essays.
Different methods:
- Some essays are graded by instructors, and test essays are compared with these using LSA (sketched below).
- An exemplary essay is written, and the closeness of the test essays to it is judged.
- The distance between each sentence of a standard text and each sentence of the student essay is measured, and a cumulative score is given.
- All the student essays are compared with one another and then ranked.
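Continuing the previous sketch, a toy version of the first method: score a test essay from the grades of its most similar instructor-graded essays in the LSA space. The function name and the choice of k are my own, not from the readings.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def lsa_score(test_vec, graded_vecs, grades, k=3):
    """Average the grades of the k nearest graded essays in LSA space."""
    sims = cosine_similarity(test_vec.reshape(1, -1), graded_vecs)[0]
    nearest = np.argsort(sims)[-k:]
    return float(np.mean([grades[i] for i in nearest]))

# e.g., with `vecs` from the previous sketch standing in for essay vectors:
# lsa_score(vecs[0], vecs[1:], grades=[4, 2, 3])
```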
Latent Semantic Analysis
- Depth is based on word choice and content, not syntax, etc.
- Students choose from a pre-existing set of prompts.
- Typically requires a set of texts against which the test essays are judged.
Move analysis
Elena Cotos's project at Iowa State:
- Database of articles in different disciplines
- Examined "moves" in different sections of papers
- Created the Intelligent Academic Discourse Evaluator (IADE), which is being used across the university
What is a Move?
"Move" is a term used by Swales, referring to the function of each rhetorical unit (Swales 1990).
Using CorpusTools software
Annotate or tag a text using an annotation scheme you devise, for example:
- for errors
- for evaluative terms
- for moves
(One possible way to represent such a scheme is sketched below.)
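A hypothetical representation of such a scheme in plain Python; the layer names come from the list above, and the move labels are the familiar ones from Swales (1990), but none of this is CorpusTools's actual format.

```python
# Layers and their tag sets (labels are illustrative).
scheme = {
    "errors": ["spelling", "agreement", "word-choice"],
    "evaluative-terms": ["positive", "negative"],
    "moves": ["establishing-a-territory", "establishing-a-niche",
              "occupying-the-niche"],
}

# One annotation: a tag from one layer applied to a span of the text.
annotation = {"layer": "moves", "tag": "establishing-a-niche",
              "start": 120, "end": 188}
```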
Corpus and layers
Create a tagging scheme
Corpus and layers
Assign Tags to each unit
Move: describe characteristics of cases/informants
Search Instances Tag Instances
kristopherkyle.com
AWE
For testing and for feedback:
- What are the aims of an AWE system?
- What kind of feedback should be given?
Warschauer & Page
E-rater
- Identifies discourse elements: background, thesis, main ideas, supporting ideas, and conclusion; this leads to a "development" score and a length score (number of words).
- Lexical complexity:
  - type/token ratio
  - vocabulary level
  - average word length
E-rater
- Prompt-specific vocabulary, based on content analysis (a vector analysis comparing the words of the essay with the reference corpus; sketched below)
- Essay length (number of words)
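One simple way to implement such a vector comparison (not E-rater's actual algorithm): count-vector cosine similarity between the essay and a reference corpus of essays on the same prompt, with stand-in texts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = ["warming oceans bleach coral and damage reefs",
             "coral reefs decline as the ocean warms"]
essay = "the essay discusses coral bleaching in warming oceans"

vec = CountVectorizer().fit(reference)
sim = cosine_similarity(vec.transform([essay]),
                        vec.transform([" ".join(reference)]))
print(sim[0, 0])   # higher = vocabulary closer to the prompt's reference set
```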
Weighting features
- All the assessment procedures are based on a set of features: average word length, pronoun use, passives, ...
- Statistical analyses allow a weighting of each feature to produce the system that gives the best results, i.e., the most accurate scoring of the essays (a minimal version is sketched below).
- Features for feedback purposes are likely to be different ("Write more!").
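The weighting can be as simple as a linear regression fitted to human scores; the feature values and scores below are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows: one essay each; columns: word count, mean word length, passives.
X = np.array([[120, 4.1, 2], [300, 4.8, 5], [450, 5.2, 9], [200, 4.4, 3]])
y = np.array([2, 3, 5, 3])               # human-assigned scores

model = LinearRegression().fit(X, y)
print(model.coef_)                       # learned weight per feature
print(model.predict([[350, 5.0, 6]]))    # score for a new essay
```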
Intelligent Essay Assessor (Pearson)
Pearson System
Holt Online Essay Scoring
Move feedback
IADE – Elena Cotos
Move-annotated corpus
IADE is based on a move-annotated corpus: 20 articles × 50 disciplines = 1,000 articles.
Main functions of IADE
- Look for DEFINITIONS of each step
- Check STEP STATISTICS
- Search the ANNOTATED CORPUS
- Get REVISION TIPS
DEFINITIONS of each step:
Examples (left), definition (right)
STEP STATISTICS
ANNOTATED CORPUS
REVISION TIPS
Criterion
A commercial AWE system from ETS
AWE
- What is writing?
- What is the relation between AWE and learning?