1
Identifying Collocations for Recognizing Opinions
Janyce Wiebe, Theresa Wilson, Matthew Bell
University of Pittsburgh
Office of Naval Research grant N00014-95-1-0776
2
Introduction
Subjectivity: aspects of language used to express opinions and evaluations (Banfield 1982)
Relevant for many NLP applications, such as information extraction and text categorization
This paper: identifying collocational clues of subjectivity
3
Outline
Subjectivity
Data and annotation
Unigram features
N-gram features
Generalized N-gram features
Document classification
4
Subjectivity Tagging
Recognizing opinions and evaluations (subjective sentences), as opposed to material objectively presented as true (objective sentences)
Banfield 1982, Fludernik 1993, Wiebe 1994, Stein & Wright 1995
5
Examples
At several different levels, it's a fascinating tale. (subjective)
Bell Industries Inc. increased its quarterly to 10 cents from 7 cents a share. (objective)
6
Subjectivity
"Complained"  "You idiot!"  "Terrible product"  "Speculated"  "Maybe"  "Enthused"  "Wonderful!"  "Great product"
7
Examples
Strong addressee-oriented negative evaluation
Recognizing flames (Spertus 1997); personal e-mail filters (Kaufer 2000)
"I had in mind your facts, buddy, not hers."
"Nice touch. 'Alleges' whenever facts posted are not in your persona of what is 'real.'"
8
Examples
Opinionated, editorial language
IR, text categorization (Kessler et al. 1997): do the writers purport to be objective?
"Look, this is a man who has great numbers."
"We stand in awe of the Woodstock generation's ability to be unceasingly fascinated by the subject of itself."
9
Examples
Belief and speech reports
Information extraction, summarization, intellectual attribution (Teufel & Moens 2000)
"Northwest Airlines settled the remaining lawsuits, a federal judge said."
"The cost of health care is eroding our standard of living and sapping industrial strength," complains Walter Maher.
10
Other Applications
Review mining (Terveen et al. 1997)
Clustering documents by ideology (Sack 1995)
Style in machine translation and generation (Hovy 1987)
11
Potential Subjective Elements
"The cost of health care is eroding standards of living and sapping industrial strength," complains Walter Maher.
"Sap" is a potential subjective element (PSE); in this sentence it appears as an actual subjective element.
12
Subjectivity
Multiple types, sources, and targets:
"We stand in awe of the Woodstock generation's ability to be unceasingly fascinated by the subject of itself."
"Somehow grown-ups believed that wisdom adhered to youth."
13
Annotations
Three levels: expression level, sentence level, document level
Manually tagged + existing annotations
14
Expression Level Annotations
[Perhaps you'll forgive me] for reposting his response
They promised [e+ 2 yet] more for [e+ 3 really good] [e? 1 stuff]
15
Expression Level Annotations
Difficult for manual and automatic tagging: detailed, with no predetermined classification unit
To date: used for training and bootstrapping
Probably the most natural level
16
Expression Level Data
1000 WSJ sentences (2 judges)
462 newsgroup messages (2 judges)
15,413 words of newsgroup data (1 judge)
Single round of tagging; results promising
Used to generate features, not for evaluation
17
Sentence Level Annotations
"The cost of health care is eroding our standard of living and sapping industrial strength," complains Walter Maher.
"What an idiot," the idiot presumably complained.
A sentence is labeled subjective if any significant expression of subjectivity appears in it.
18
Document Level Annotations
This work: opinion pieces in the WSJ (editorials, letters to the editor, arts & leisure reviews)
+ Free source of data
+ More directly related to applications
Other work: flames; 1-star to 5-star reviews
19
Document Level Annotations
Opinion pieces contain objective sentences, and non-opinion pieces contain subjective sentences
Editorials contain facts supporting the argument
News reports present reactions (van Dijk 1988): "Critics claim ...", "Supporters argue ..."
Reviews contain information about the product
20
Class Proportions in WSJ Sample
Non-opinion pieces: 57% objective sentences, 43% subjective sentences (noise)
Opinion pieces: 70% subjective sentences, 30% objective sentences (noise)
21
Word Distribution
13-17% of words are in opinion pieces
83-87% of words are in non-opinion pieces
22
Evaluation Metric for Feature S with Respect to Opinion Pieces
Baseline for comparison: # words in opinion pieces / total # words
Precision(S) = # instances of S in opinion pieces / total # instances of S
Given these class distributions, even perfect subjectivity clues would have low precision
Improvement over the baseline is taken as evidence of a promising PSE
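A minimal sketch of this metric, assuming the corpus is a list of (token, in_opinion_piece) pairs and a feature is a membership predicate (both hypothetical representations):

    def baseline(tokens):
        # Fraction of all words that fall in opinion pieces.
        return sum(1 for _, in_op in tokens if in_op) / len(tokens)

    def precision(tokens, is_instance):
        # Fraction of the feature's instances that fall in opinion pieces.
        instances = [in_op for tok, in_op in tokens if is_instance(tok)]
        return sum(instances) / len(instances) if instances else 0.0

    tokens = [("fascinating", True), ("increased", False), ("idiot", True)]
    print(baseline(tokens), precision(tokens, lambda t: t == "idiot"))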
23
Data
Opinion pieces vs. non-opinion pieces
24
Document Level Data
Existing opinion-piece annotations used for training
Manually refined classifications used for testing: identified editorials not marked as such
3 hours per edition; Kappa = .93 for 2 judges
3 WSJ editions, each more than 150K words
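For reference, a minimal sketch of Cohen's kappa for two judges (the standard formula; the slides do not show its computation):

    def cohens_kappa(labels_a, labels_b):
        # Observed agreement versus agreement expected by chance.
        n = len(labels_a)
        categories = set(labels_a) | set(labels_b)
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                  for c in categories)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical opinion/non-opinion judgments from two judges.
    a = ["op", "op", "non", "non", "op"]
    b = ["op", "non", "non", "non", "op"]
    print(cohens_kappa(a, b))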
25
Automatically Generated Unigram Features
Adjective and verb features were generated using distributional similarity (Lin 1998, Wiebe 2000)
Existing opinion-piece annotations used for training
Manually refined annotations used for testing
26
Unigram Feature Results

Feature   WSJ-10 (baseline 17%)   WSJ-33 (baseline 13%)
          +prec / freq            +prec / freq
Adjs      +21 / 373               +09 / 2137
Verbs     +16 / 721               +07 / 3193
27
Example Adjective Feature
conclusive, undiminished, brute, amazing, unseen, draconian, insurmountable, unqualified, poetic, foxy, vintage, jaded, tropical, distributional, discernible, adept, paltry, warm, reprehensible, astonishing, surprising, commonplace, crooked, dreary, virtuoso, trashy, sandy, static, virulent, desolate, ours, proficient, noteworthy, insistent, daring, unforgiving, agreeable, uncritical, homicidal, comforting, erotic, resonant, ephemeral, believable, epochal, dense, exotic, topical, ...
28
Unique Words (Hapax Legomena)
More single-instance words appear in subjective elements than expected
Unique-1-gram feature: all words that appear exactly once in the test data
Precision is 1.5 times the baseline precision
A frequent feature!
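A minimal sketch of extracting this feature, assuming the test data is already tokenized:

    from collections import Counter

    def unique_1_grams(tokens):
        # Words that occur exactly once in the test data (hapax legomena).
        counts = Counter(tokens)
        return {w for w, c in counts.items() if c == 1}

    print(unique_1_grams("we stand in awe of the subject of itself".split()))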
29
Unigram Feature Results

Feature         WSJ-10 (baseline 17%)   WSJ-33 (baseline 13%)
Adjs            +21 / 373               +09 / 2137
Verbs           +16 / 721               +07 / 3193
Unique-1-gram   +10 / 6065              +06 / 6048

Results are consistent, even with different identification procedures (similarly for WSJ-22)
30
Collocational PSEs
Examples: "get out", "what a", "for the last time", "just as well", "here we go again"
Started with the observation that low-precision words often compose higher-precision collocations
31
Identifying Collocational PSEs
Searching for 2-grams, 3-grams, and 4-grams
No grammatical generalizations or constraints yet
Train on the data annotated with subjective elements (expression level)
Test on the manually refined opinion-piece data (document level)
32
Identifying Collocational PSEs: Training Data (reminder)
1000 WSJ sentences (2 judges)
462 newsgroup messages (2 judges)
15,413 words of newsgroup data (1 judge)
[Perhaps you'll forgive me] for reposting his response
They promised [e+ 2 yet] more for [e+ 3 really good] [e? 1 stuff]
33
N-Grams
Each position is filled by a word|POS pair: in|prep the|det air|noun
34
Identifying Collocational PSEs: Training, Step 1
Precision(n-gram) = # subjective instances of the n-gram / total # instances of the n-gram
Precision with respect to subjective elements is calculated for all 1-, 2-, 3-, and 4-grams in the training data
An instance of an n-gram is subjective if each word in the instance is in a subjective element
35
Identifying Collocational PSEs: Training
[Perhaps you'll forgive me] for reposting his response
They promised [e+ 2 yet] more for [e+ 3 really good] [e? 1 stuff]
An instance of an n-gram is subjective if each word in the instance is in a subjective element
36
Identifying Collocational PSEs: Training, Step 2
N-gram PSEs are selected based on their precisions, using two criteria:
1. Precision >= 0.1
2. Precision >= the maximum precision of its constituents
37
Identifying Collocational PSEs: Training, Step 2
Precision >= maximum precision of its constituents:
prec(w1,w2) >= max(prec(w1), prec(w2))
prec(w1,w2,w3) >= max(prec(w1,w2), prec(w3)) or
prec(w1,w2,w3) >= max(prec(w1), prec(w2,w3))
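A minimal sketch of steps 1 and 2 for bigrams; the 0.1 threshold and the constituent test follow the slides, while the token representation (a list of (word, in_subjective_element) pairs) is illustrative:

    from collections import Counter

    def ngram_precisions(tokens, n):
        # Step 1: precision of each n-gram w.r.t. subjective elements.
        total, subj = Counter(), Counter()
        for i in range(len(tokens) - n + 1):
            window = tokens[i:i + n]
            gram = tuple(w for w, _ in window)
            total[gram] += 1
            # An instance is subjective iff every word is in a subjective element.
            if all(in_subj for _, in_subj in window):
                subj[gram] += 1
        return {g: subj[g] / c for g, c in total.items()}

    def select_bigram_pses(tokens, threshold=0.1):
        # Step 2: keep bigrams beating the threshold and both constituents.
        p1 = ngram_precisions(tokens, 1)
        p2 = ngram_precisions(tokens, 2)
        return {g for g, p in p2.items()
                if p >= threshold and p >= max(p1[(g[0],)], p1[(g[1],)])}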
38
Results

Feature         WSJ-10 (baseline 17%)   WSJ-33 (baseline 13%)
Adjs            +21 / 373               +09 / 2137
Verbs           +16 / 721               +07 / 3193
Unique-1-gram   +10 / 6065              +06 / 6048
2-grams         +07 / 2182              +04 / 2080
3-grams         +09 / 271               +06 / 262
4-grams         +05 / 32                -03 / 30
39
Generalized Collocational PSEs
Replace each single-instance word in the training data with "UNIQUE"
Rerun the same training procedure, finding collocations such as highly|adverb UNIQUE|adj
To test the new collocations on test data, first replace each single-instance word in the test data with "UNIQUE"
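A minimal sketch of the UNIQUE substitution, over the same illustrative token representation as above:

    from collections import Counter

    def generalize_uniques(tokens):
        # Replace each word occurring exactly once with the token UNIQUE.
        counts = Counter(w for w, _ in tokens)
        return [("UNIQUE" if counts[w] == 1 else w, in_subj)
                for w, in_subj in tokens]

The n-gram precision and selection procedure is then rerun on the generalized tokens, and the test data is generalized the same way before matching.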
40
Results

Feature         WSJ-10 (baseline 17%)   WSJ-33 (baseline 13%)
Adjs            +21 / 373               +09 / 2137
Verbs           +16 / 721               +07 / 3193
Unique-1-gram   +10 / 6065              +06 / 6048
2-grams         +07 / 2182              +04 / 2080
3-grams         +09 / 271               +06 / 262
4-grams         +05 / 32                -03 / 30
U-2-grams       +24 / 294               +14 / 288
U-3-grams       +27 / 132               +13 / 144
U-4-grams       +83 / 3                 +15 / 27
41
Example
highly|adverb UNIQUE|adj: highly unsatisfactory, highly unorthodox, highly talented, highly conjectural, highly erotic
42
Example
UNIQUE|verb out|IN: farm out, chuck out, ruling out, crowd out, flesh out, blot out, spoken out, luck out
43
Examples
UNIQUE|adj to|TO UNIQUE|verb: impervious to reason, strange to celebrate, wise to temper
UNIQUE|noun of|IN its|pronoun: sum of its, usurpation of its, proprietor of its
they|pronoun are|verb UNIQUE|noun: they are fools, they are noncontenders
44
How Do Fixed and U-Collocations Compare?
Recall the original motivation for investigating fixed n-gram PSEs: low-precision words often compose higher-precision collocations
But unique words are probably not low precision
Are we finding the same collocations in two different ways, or are we finding new PSEs?
45
Comparison

WSJ-10                   2-grams   3-grams   4-grams
Intersecting instances   4         2         0
% overlap                0.0016    0.0049    0

WSJ-33: all 0s
46
Opinion-Piece Recognition using Linear Regression

Features                        %correct   TP   FP
Adjs, verbs                     .896       5    4
N-grams                         .899       5    3
Adjs, verbs, n-grams            .909       9    4
All features (+ max density)    .912       11   4

Max density: the maximum feature count in an 11-word window
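A minimal sketch of the max-density feature, assuming a per-token 0/1 indicator of whether a feature instance occurs there (an illustrative representation):

    def max_density(hits, window=11):
        # Maximum number of feature hits in any 11-word window.
        if len(hits) <= window:
            return sum(hits)
        best = cur = sum(hits[:window])
        for i in range(window, len(hits)):
            cur += hits[i] - hits[i - window]
            best = max(best, cur)
        return best

    print(max_density([0, 1, 1, 0, 1]))  # 3 for this short document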
47
Future Work
Methods for recognizing non-compositional phrases (e.g., Lin 1999)
Mutual bootstrapping (Riloff and Jones 1999) to alternately recognize sequences and subjective fillers
48
Sentence Classification
Binary features: pronoun, adjective, number, modal other than "will", adverb other than "not", new paragraph
Lexical feature: good for subjective; good for objective; good for neither
Probabilistic classifier; 10-fold cross-validation; 51% baseline
72% average accuracy across folds
82% average accuracy on sentences rated certain
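A minimal sketch of such a classifier, using scikit-learn's Bernoulli naive Bayes as a stand-in for the paper's probabilistic model; the feature rows are hypothetical and feature extraction is omitted:

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    # One row per sentence; binary columns such as has-pronoun, has-adjective,
    # has-number, modal-other-than-will, adverb-other-than-not, new-paragraph.
    X = np.array([[1, 1, 0, 0, 1, 0],
                  [0, 0, 1, 0, 0, 1],
                  [1, 0, 0, 1, 1, 0],
                  [0, 0, 1, 0, 0, 0]])
    y = np.array([1, 0, 1, 0])  # 1 = subjective, 0 = objective

    clf = BernoulliNB().fit(X, y)
    print(clf.predict(X))
    # The slides' setup evaluates with 10-fold cross-validation.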
49
Test for Bias: Marginal Homogeneity
Tests whether the two judges use the categories C1-C4 with the same marginal frequencies: X_{i+} = X_{+i} for all i (row marginals of the judge-vs-judge agreement table equal the column marginals)
The worse the fit, the greater the bias
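One standard way to test marginal homogeneity is the Stuart-Maxwell test; a sketch with numpy and scipy (a common formulation, not necessarily the exact procedure used here):

    import numpy as np
    from scipy.stats import chi2

    def stuart_maxwell(table):
        # Marginal homogeneity test for a k x k agreement table.
        table = np.asarray(table, dtype=float)
        k = table.shape[0]
        d = (table.sum(axis=1) - table.sum(axis=0))[:k - 1]
        # Covariance of d under the null hypothesis.
        s = -(table + table.T)[:k - 1, :k - 1]
        np.fill_diagonal(s, (table.sum(axis=1) + table.sum(axis=0)
                             - 2 * np.diag(table))[:k - 1])
        stat = d @ np.linalg.solve(s, d)
        return stat, chi2.sf(stat, k - 1)

    # Hypothetical 4-category judge-vs-judge table (C1..C4).
    table = [[20, 5, 3, 2], [4, 18, 6, 1], [2, 5, 22, 4], [1, 2, 3, 19]]
    print(stuart_maxwell(table))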
50
Test for Symmetric Disagreement: Quasi-Symmetry
Tests relationships among the off-diagonal counts of the judge-vs-judge agreement table (C1-C4)
The better the fit, the higher the correlation
51
Unigram PSEs
Adjectives and verbs identified using Lin's distributional similarity (Lin 1998)
Distributional similarity is often used in NLP to find synonyms
Motivating hypothesis: words may be similar because they have similar subjective usages
52
Unigram Feature Generation
A sketch of the generation loop in Python, with most_similar and precision assumed as given:

    adj_feature = set()
    for adj in training_adjectives:
        s = {adj} | set(most_similar(adj, n))  # adj plus its N most similar words
        if precision(s, training_data) > t:
            adj_feature |= s

Many runs with various settings for N and T
Choose the values of N and T on a validation set
Evaluate on a new test set
53
Lin's Distributional Similarity (Lin 1998)
A sentence such as "I have a brown dog" is parsed into dependency triples (Word, R, W), e.g. (I, R1, have), (have, R2, dog), (brown, R3, dog), (a, R4, dog)
Each word is then described by the set of (R, W) contexts it occurs with
54
Filtering
Seed words -> words + clusters -> filtered set
A word + cluster is removed if its precision on the training set is below a threshold
55
Parameters
Seed words -> words + clusters
Two parameters: cluster size and precision threshold
56
Lin's Distributional Similarity
Each word is described by the (R, W) pairs statistically correlated with it
sim(Word1, Word2) =
    [ Σ over (R,W) pairs shared by Word1 and Word2 of I(Word1,R,W) + I(Word2,R,W) ]
    / [ Σ over Word1's pairs of I(Word1,R,W) + Σ over Word2's pairs of I(Word2,R,W) ]
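A minimal sketch of this measure, assuming the mutual-information values I have already been computed per word (hypothetical data):

    def lin_similarity(mi1, mi2):
        # mi1, mi2 map (relation, word) contexts to mutual information,
        # restricted to contexts positively correlated with each word.
        shared = set(mi1) & set(mi2)
        numerator = sum(mi1[c] + mi2[c] for c in shared)
        denominator = sum(mi1.values()) + sum(mi2.values())
        return numerator / denominator if denominator else 0.0

    dog = {("mod", "brown"): 2.1, ("obj-of", "have"): 1.3}
    cat = {("mod", "brown"): 1.8, ("obj-of", "feed"): 0.9}
    print(lin_similarity(dog, cat))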