Slide 1: Automatic Writing Evaluation
Slide 2: AWE
Why might it be impossible? Why might it be possible?
Slide 3: AWE - Why is it needed?
- Shift from product to process approaches to writing
- Use of editing software (Word), and hence multiple drafts, rather than typewriters (one or two drafts)
- Writing as a way to demonstrate proficiency
- Use for testing and for feedback
Slide 4: Warschauer & Page
Slide 5: Foltz, Lochbaum & Rosenstein (Pearson)
Slide 6: What kinds of features potentially distinguish writing proficiency levels?
Ideas from previous classes and readings?
Slide 7: Samples, Levels 1 to 5 - Can you assess them correctly?
Are there features that could be used in an automatic system?
Slide 8: New tools
We will probably need to make use of a variety of tools and types of analysis.
Basic methodology: create a large database of manually evaluated writing samples, categorized by relevant features:
- type of writing
- set prompts
- L1 (possibly)
- discipline
Concept: patterns can be extracted from large corpora (patterns that may not be identifiable by humans).
Slide 9: General resources for language analysis
- Taggers (POS)
- Parsers/tree taggers (last week)
- Semantic analysis: WordNet, FrameNet, semantic tagging
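To make this concrete, here is a minimal sketch of calling one such resource programmatically. The slides do not name a toolkit; NLTK is assumed here, along with its tokenizer and tagger model data (fetched once via nltk.download()).

```python
# Minimal sketch: POS tagging with NLTK (an assumed toolkit; the slides
# do not prescribe one). Requires NLTK's tokenizer and tagger data,
# installable once via nltk.download().
import nltk

sentence = "The students revised their essays after receiving feedback."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('students', 'NNS'), ('revised', 'VBD'), ...]
```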
Slide 10: WordNet
Slide 11: WordNet
Slide 15: Hypernym
Slide 16: Another search
Slide 17: WordNet
Here we checked the web pages. Computational systems access WordNet directly and use levels and "distances" in the hierarchy to assess different kinds of semantic relatedness.
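As an illustration, here is a minimal sketch of direct WordNet access through NLTK's interface (an assumption; the slide does not specify an implementation). Path similarity uses distance in the hypernym hierarchy as a crude measure of relatedness.

```python
# Minimal sketch: querying WordNet via NLTK and using path-based
# "distances" for semantic relatedness. Assumes the 'wordnet' data
# has been installed via nltk.download().
from nltk.corpus import wordnet as wn

bus = wn.synsets('bus', pos=wn.NOUN)[0]
car = wn.synsets('car', pos=wn.NOUN)[0]
biscuit = wn.synsets('biscuit', pos=wn.NOUN)[0]

# Path similarity: higher means closer in the hypernym hierarchy.
print(bus.path_similarity(car))      # expected to be relatively high
print(bus.path_similarity(biscuit))  # expected to be relatively low

# Walking up the hierarchy: hypernyms of the first sense of 'bus'.
print(bus.hypernyms())
```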
Slide 18: FrameNet
Slide 19: Search for "give"
Slide 21: Semantic tags
Different ways of attempting semantic tags:
- Give "sense" definitions of a word, e.g. run a company = run [10] (sense 10 in the COBUILD dictionary)
- Provide semantic type information
Slide 22: Semantic tags
Similar to a thesaurus organisation.
Slide 23: Semantic tagging
Slide 24: Semantic tag list
Slide 25: Semantic tags - examples
- doubted_A7- (UNLIKELY)
- vague_A4.2- (GENERAL)
- best_A (EVALUATION: GOOD), policy_X7+ (WANTED)
- truth_A (EVALUATION: TRUE)
- lies_A (EVALUATION: FALSE)
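A toy sketch of how lexicon-based semantic tagging of this kind might look in code. The lexicon below contains only tags from the examples above; it is illustrative, not the real USAS lexicon or tagger.

```python
# Toy sketch of lexicon-based semantic tagging in the style of the
# examples above. Entries are illustrative, not the real USAS lexicon.
SEM_LEXICON = {
    "doubted": "A7-",    # UNLIKELY
    "vague":   "A4.2-",  # GENERAL
    "policy":  "X7+",    # WANTED
}

def sem_tag(tokens):
    # Tag each token with its semantic category; 'Z99' marks unmatched
    # words, following USAS practice.
    return [(t, SEM_LEXICON.get(t.lower(), "Z99")) for t in tokens]

print(sem_tag("The vague policy was doubted".split()))
```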
Slide 26: AWE features
Different kinds of features and methods are used in AWE.
Surface features: word structure, lexical items, POS, sentence structure (the latter two require tagging and parsing, so they are not strictly "surface"). Typically, scores are higher with:
- more words
- longer words
- subordination
- passives
- nominalizations
- prepositions
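As a concrete illustration, here is a minimal sketch of extracting a few of these surface features from raw text. The suffix and word-list heuristics are stand-ins for the tagging and parsing a real system would use.

```python
# Minimal sketch of "surface" feature extraction. The nominalization
# and preposition counts are crude heuristics, not tagger output.
import re

def surface_features(text):
    words = re.findall(r"[A-Za-z']+", text)
    n = len(words)
    return {
        "n_words": n,
        "avg_word_len": sum(len(w) for w in words) / n if n else 0.0,
        # Heuristic: common nominalizing suffixes.
        "n_nominalizations": sum(
            1 for w in words if re.search(r"(tion|ment|ness|ity)$", w.lower())
        ),
        # Heuristic: a small closed list of prepositions.
        "n_prepositions": sum(
            1 for w in words
            if w.lower() in {"in", "on", "at", "by", "for", "with", "of", "to"}
        ),
    }

print(surface_features("The evaluation of the arguments relies on nominalization."))
```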
Slide 27: More sophisticated analyses
- Cohesion analysis
- Latent Semantic Analysis (LSA)
- Move analysis
Slide 28: AWE features - cohesive features
(Crossley and McNamara, Journal of Research in Reading, 35(2))
- Lexical diversity: more word "types"
- Word overlap: to increase cohesion
- Synonyms
- Connectives
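A minimal sketch of two of these cohesive features, lexical diversity and word overlap between adjacent sentences; synonym matching (e.g. via WordNet) and connective counting are left out for brevity.

```python
# Minimal sketch of two cohesion measures: type/token lexical
# diversity and word overlap between adjacent sentences.
def lexical_diversity(words):
    return len(set(words)) / len(words) if words else 0.0

def adjacent_overlap(sentences):
    # Proportion of adjacent sentence pairs sharing at least one word.
    pairs = list(zip(sentences, sentences[1:]))
    shared = sum(1 for a, b in pairs if set(a) & set(b))
    return shared / len(pairs) if pairs else 0.0

essay = [s.lower().split() for s in
         ["The bus was late", "The bus driver apologised", "Biscuits are nice"]]
print(lexical_diversity([w for s in essay for w in s]))
print(adjacent_overlap(essay))
```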
Slide 29: Coh-Metrix
Slide 30: Coh-Metrix
Slide 34: Crossley and McNamara 2012
Slide 36: Latent Semantic Analysis
Large corpora are used to assess meaning similarity among words or passages. The meaning of a word relates to all the passages in which the word appears, which can be represented as a matrix: the matrix row (vector) for bus will be more similar to that for car than to that for biscuit.
Slide 37: Latent Semantic Analysis
The meaning of a passage relates to all the words that appear in the passage. These vectors are the input to processing that is something like a factor analysis: the number of dimensions is reduced to give a more abstract and deeper analysis of word-context relations (perhaps similar to human cognition).
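The pipeline can be sketched in a few lines: build a word-by-passage count matrix, apply a truncated SVD (the "factor-analysis-like" step), and compare the reduced word vectors by cosine similarity. The tiny corpus and the choice of k = 2 dimensions are illustrative only.

```python
# Minimal LSA sketch: word-by-passage counts, truncated SVD,
# cosine comparison of the reduced word vectors.
import numpy as np

passages = [
    "the bus was late so i took the car",
    "the car and the bus share the road",
    "i ate a biscuit with my tea",
    "tea and a biscuit make a nice snack",
]
vocab = sorted({w for p in passages for w in p.split()})
index = {w: i for i, w in enumerate(vocab)}

# M[i, j] = count of word i in passage j.
M = np.zeros((len(vocab), len(passages)))
for j, p in enumerate(passages):
    for w in p.split():
        M[index[w], j] += 1

# Truncated SVD keeps only the k strongest dimensions.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]   # reduced word vectors

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(word_vecs[index["bus"]], word_vecs[index["car"]]))      # higher
print(cos(word_vecs[index["bus"]], word_vecs[index["biscuit"]]))  # lower
```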
Slide 38: Latent Semantic Analysis
Initial analysis: an LSA space is created from instructional materials (or something similar) plus student essays. Different methods:
- Some essays are graded by instructors, and test essays are compared with these using LSA.
- An exemplary essay is written, and the closeness of the test essays to it is judged.
- The distance between each sentence of a standard text and each sentence of the student essay is measured, and a cumulative score is given.
- All the student essays are compared with one another and then ranked.
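A minimal sketch of the first method, assuming essays have already been mapped into an LSA space as in the previous sketch: the test essay receives the grade of the most similar instructor-graded essay. The vectors and grades below are placeholders.

```python
# Minimal sketch: score a test essay by its nearest instructor-graded
# neighbour in LSA space. Vectors and grades are placeholders.
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_essay(test_vec, graded):
    # graded: list of (lsa_vector, instructor_grade) pairs.
    _, best_grade = max(graded, key=lambda vg: cos(test_vec, vg[0]))
    return best_grade

graded = [(np.array([0.9, 0.1]), 5), (np.array([0.1, 0.9]), 2)]
print(score_essay(np.array([0.8, 0.3]), graded))  # -> 5
```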
Slide 39: Latent Semantic Analysis
- Depth is based on word choice and content, not syntax etc.
- Students choose from a pre-existing set of prompts.
- Typically requires a set of texts against which the test essays are judged.
Slide 40: Move analysis
Elena Cotos's project at Iowa State:
- Database of articles in different disciplines
- Examined "moves" in different sections of papers
- Created the Intelligent Academic Discourse Evaluator (IADE), which is being used across the university
Slide 41: What is a move?
"Move" is a term used by Swales, referring to the function of each rhetorical unit (Swales 1990).
Slide 42: Using CorpusTools
Annotate or tag a text using an annotation scheme you devise. Examples:
- for errors
- for evaluative terms
- for moves
Slide 43: Corpus and layers
Slide 44: Create a tagging scheme
Slide 45: Corpus and layers
Slide 46: Assign tags to each unit
Move: describe characteristics of cases/informants
Slide 47: Search instances, tag instances
Slide 48: AWE - for testing, for feedback
- What are the aims of an AWE system?
- What kind of feedback should be given?
Slide 49: Warschauer & Page
Slide 50: E-rater
- Identifies discourse elements: background, thesis, main ideas, supporting ideas, and conclusion; this leads to a "development" score and a length score (number of words)
- Lexical complexity: type/token ratio, vocabulary level, average word length
Slide 51: E-rater
- Prompt-specific vocabulary, based on content analysis (vector analysis comparing the words of the essay with a reference corpus)
- Essay length (number of words)
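A minimal sketch of that content-vector comparison: represent the essay and a reference corpus as word-count vectors and take their cosine. A real system would weight the counts (e.g. with tf-idf) rather than use them raw.

```python
# Minimal sketch: cosine similarity between an essay's word counts
# and a reference corpus for the same prompt. Raw counts for brevity.
from collections import Counter
import math

def cosine(c1, c2):
    num = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    den = (math.sqrt(sum(v * v for v in c1.values()))
           * math.sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

reference = Counter("pollution traffic emissions city transport policy".split())
essay = Counter("traffic in the city causes pollution and emissions".split())
print(cosine(essay, reference))  # higher = more prompt-specific vocabulary
```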
Slide 52: Weighting features
All the assessment procedures are based on sets of features: average word length, pronoun use, passives, … Statistical analyses allow a weighting of each feature to produce the system that gives the best results, i.e. the most accurate scoring of the essays. Features for feedback purposes are likely to be different ("Write more!").
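A minimal sketch of such feature weighting, using ordinary least squares to fit one weight per feature against human scores. The slide does not specify e-rater's actual statistical procedure; the feature values and scores below are made up.

```python
# Minimal sketch: fit weights so a linear combination of features
# best predicts human scores (ordinary least squares).
import numpy as np

# Rows = essays; columns = features (e.g. avg word length, word count).
X = np.array([[4.1, 150], [4.8, 320], [5.2, 410], [4.5, 260]], float)
y = np.array([2, 4, 5, 3], float)  # human scores

X1 = np.column_stack([X, np.ones(len(X))])  # add intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)  # fitted weights
print(w)        # one weight per feature, plus intercept
print(X1 @ w)   # predicted scores for the training essays
```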
Slide 53: Intelligent Essay Assessor (Pearson)
Slide 54: Pearson system
Slide 55: Holt Online Essay Scoring
Slide 57: Move feedback - IADE (Elena Cotos)
Slide 58: Move-annotated corpus
IADE is based on a move-annotated corpus: 20 articles × 50 disciplines = 1,000 articles.
Slide 59: Main functions of IADE
- Look for DEFINITIONS of each step
- Check STEP STATISTICS
- Search the ANNOTATED CORPUS
- Get REVISION TIPS
Slide 60: DEFINITIONS of each step
Examples (left), definition (right)
Slide 61: STEP STATISTICS
Slide 62: ANNOTATED CORPUS
Slide 63: REVISION TIPS
Slide 64: Criterion
A commercial AWE system from ETS.
Slide 65: AWE
What is writing? What is the relation between AWE and learning?