1
Creating Subjective and Objective Sentence Classifiers from Unannotated Texts Ellen Riloff University of Utah (Joint work with Janyce Wiebe at the University of Pittsburgh)
2
What is Subjectivity? Subjective language includes opinions, rants, allegations, accusations, suspicions, and speculation. Distinguishing factual information from subjective information could benefit many applications, including:
– information extraction
– question answering
– summarization
– spam filtering
3
Previous Work on Subjectivity Classification
Document-level subjectivity classification (e.g., [Turney 2002; Pang et al. 2002; Spertus 1997]). But most documents contain both subjective and objective sentences: [Wiebe et al. 01] reported that 44% of the sentences in their news corpus were subjective!
Sentence-level subjectivity classification [Dave et al. 2003; Yu et al. 2003; Riloff, Wiebe, & Wilson 2003]
4
Goals of our research
– Create classifiers that label sentences as subjective or objective.
– Learn subjectivity and objectivity clues from unannotated corpora.
– Use information extraction techniques to learn subjective nouns.
– Use information extraction techniques to learn subjective and objective patterns.
5
Outline of Talk
– Learning subjective nouns with extraction patterns
– Automatically generating training data with high-precision classifiers
– Learning subjective and objective extraction patterns
– Naïve Bayes classification and self-training
6
Information Extraction
Information extraction (IE) systems identify facts related to a domain of interest. Extraction patterns are lexico-syntactic expressions that identify the role of an object, for example: "was killed", "assassinated", "murder of".
7
Learning Subjective Nouns
Goal: learn subjective nouns from unannotated texts.
Method: apply IE-based bootstrapping algorithms that were designed to learn semantic categories.
Hypothesis: extraction patterns can identify subjective contexts that co-occur with subjective nouns.
Example: the pattern "expressed [NP]" extracts nouns such as concern, hope, support.
8
Extraction Examples
expressed: condolences, hope, grief, views, worries
indicative of: compromise, desire, thinking
inject: vitality, hatred
reaffirmed: resolve, position, commitment
voiced: outrage, support, skepticism, opposition, gratitude, indignation
show of: support, strength, goodwill, solidarity
was shared: anxiety, view, niceties, feeling
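To make the mechanics concrete, here is a minimal sketch of harvesting extractions for one such pattern. The real systems match lexico-syntactic patterns over parsed text; the regex stand-in and the example sentences below are purely illustrative.

```python
import re

# A minimal sketch of harvesting extractions for one pattern ("expressed [NP]").
# The real systems match patterns over parsed text; this plain regex is a
# simplification for illustration only.
PATTERN = re.compile(r"\bexpressed\s+(\w+)")

def harvest(sentences):
    """Collect the words that fill the pattern's extraction slot."""
    found = set()
    for sent in sentences:
        for match in PATTERN.finditer(sent.lower()):
            found.add(match.group(1))
    return found

sents = ["They expressed condolences to the families.",
         "Officials expressed hope that talks would resume soon."]
print(harvest(sents))  # {'condolences', 'hope'}
```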
9
Meta-Bootstrapping [Riloff & Jones 99]
(Diagram: a bootstrapping loop over unannotated texts. The best extraction pattern, e.g. "expressed", yields noun extractions such as hope, grief, joy, concern, worries; the growing lexicon then selects the next best pattern, yielding e.g. happiness, relief, condolences.)
10
Basilisk [Thelen & Riloff 02]
(Diagram: the Basilisk bootstrapping loop. Seed words initialize a semantic lexicon; extraction patterns and their extractions are collected from the corpus; the best patterns enter a Pattern Pool, their extractions enter a Candidate Word Pool, and the 5 best candidate words are added to the lexicon each cycle.)
11
Subjective Seed Words
cowardice, crap, delight, disdain, dismay, embarrassment, fool, gloom, grievance, happiness, hatred, hell, hypocrisy, love, nonsense, outrage, sigh, slander, twit, virtue
12
Subjective Noun Results
Bootstrapping corpus: 950 unannotated FBIS documents (English-language foreign news).
We ran each bootstrapping algorithm for 400 cycles, generating ~2000 words. We manually reviewed the words and labeled them as strongly subjective or weakly subjective. Together, the two algorithms learned 1052 subjective nouns (454 strong, 598 weak).
13
Examples of Strong Subjective Nouns
anguish, antagonism, apologist, atrocities, barbarian, belligerence, bully, condemnation, denunciation, devil, diatribe, evil, exaggeration, exploitation, fallacies, genius, goodwill, humiliation, ill-treatment, injustice, innuendo, insinuation, liar, mockery, pariah, repudiation, revenge, rogue, sanctimonious, scum, smokescreen, sympathy, tyranny, venom
14
Examples of Weak Subjective Nouns
aberration, allusion, apprehensions, assault, beneficiary, benefit, blood, controversy, credence, distortion, drama, eternity, eyebrows, failures, inclination, intrigue, liability, likelihood, peaceful, persistent, plague, pressure, promise, rejection, resistant, risk, sincerity, slump, spirit, success, tolerance, trick, trust, unity
15
Outline of Talk
– Learning subjective nouns with extraction patterns
– Automatically generating training data with high-precision classifiers
– Learning subjective and objective extraction patterns
– Naïve Bayes classification and self-training
16
Initial Training Data Creation
(Diagram: subjective clues feed a rule-based subjective sentence classifier and a rule-based objective sentence classifier; applied to unlabeled texts, the two classifiers produce subjective & objective sentences for training.)
17
Subjective Clues
– entries from manually developed resources [Levin 93; Ballmer & Brennenstuhl 81]
– FrameNet lemmas with frame element experiencer [Baker et al. 98]
– adjectives manually annotated for polarity [Hatzivassiloglou & McKeown 97]
– n-grams learned from corpora [Dave et al. 03; Wiebe et al. 01]
– words distributionally similar to subjective seed words [Wiebe 00]
– subjective nouns learned from extraction pattern bootstrapping [Riloff et al. 03]
18
Creating High-Precision Rule-Based Classifiers
GOAL: use subjectivity clues from previous research to build a high-precision (low-recall) rule-based classifier.
A sentence is subjective if it contains ≥ 2 strong subjective clues.
A sentence is objective if:
– it contains no strong subjective clues
– the previous and next sentences combined contain ≤ 1 strong subjective clue
– the current, previous, and next sentences combined contain ≤ 2 weak subjective clues
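A minimal sketch of these rules, assuming the strong/weak clue counts have already been computed for each sentence (the clue lists themselves come from the resources on the previous slide):

```python
def classify_sentence(prev, cur, nxt):
    """Apply the high-precision rules above. Each argument is a
    (strong, weak) clue-count pair for one sentence. Returns
    'subjective', 'objective', or None for sentences left unlabeled,
    which is what makes the classifiers high-precision but low-recall."""
    if cur[0] >= 2:
        return "subjective"
    if (cur[0] == 0                              # no strong clues here
            and prev[0] + nxt[0] <= 1            # at most 1 strong clue nearby
            and prev[1] + cur[1] + nxt[1] <= 2): # at most 2 weak clues in window
        return "objective"
    return None
```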
19
Data Set
The MPQA Corpus contains 535 FBIS texts that have been manually annotated for subjectivity. Our test set consisted of 9,289 sentences from the MPQA corpus. We consider a sentence to be subjective if it has at least one private state of strength medium or higher. 54.9% of the sentences in our test set are subjective.
20
Accuracy of Rule-Based Classifiers
            SubjRec  SubjPrec  SubjF
Subj RBC       34.2      90.4   46.6
            ObjRec   ObjPrec   ObjF
Obj RBC        30.7      82.4   44.7
21
Generated Data
We applied the rule-based classifiers to 298,809 sentences from (unannotated) FBIS documents:
– 52,918 were labeled subjective
– 47,528 were labeled objective
This gave a training set of over 100,000 labeled sentences!
22
Outline of Talk
– Learning subjective nouns with extraction patterns
– Automatically generating training data with high-precision classifiers
– Learning subjective and objective extraction patterns
– Naïve Bayes classification and self-training
23
Representing Subjective Expressions with Extraction Patterns
Extraction patterns can represent linguistic expressions that are not fixed word sequences.
drove [NP] up the wall
– drove him up the wall
– drove George Bush up the wall
– drove George Herbert Walker Bush up the wall
step on [modifiers] toes
– step on her toes
– step on the mayor's toes
– step on the newly elected mayor's toes
gave [NP] a [modifiers] look
– gave his annoying sister a really really mean look
24
The Extraction Pattern Learner
We used AutoSlog-TS [Riloff 96] to learn extraction patterns. AutoSlog-TS needs relevant and irrelevant texts as input, and generates statistics measuring each pattern's association with the relevant texts. The subjective sentences were treated as relevant, and the objective sentences as irrelevant.
25
Examples of pattern types learned by AutoSlog-TS (syntactic form: example pattern):
passive-vp: was satisfied
active-vp: complained
active-vp dobj: dealt blow
active-vp infinitive: appears to be
passive-vp infinitive: was thought to be
auxiliary dobj: has position
active-vp: endorsed
infinitive: to condemn
active-vp infinitive: get to know
passive-vp infinitive: was meant to show
subject auxiliary: fact is
noun prep: opinion on
active-vp prep: agrees with
passive-vp prep: was worried about
infinitive prep: to resort to
26
AutoSlog-TS (Step 1)
(Diagram: the corpus is split into relevant and irrelevant texts; a parser applies syntactic templates to every sentence and proposes an extraction pattern for every noun phrase. Example sentence: "[The World Trade Center], [an icon] of [New York City], was intentionally attacked very early on [September 11, 2001]." Extraction patterns generated: was attacked; icon of; was attacked on.)
27
AutoSlog-TS (Step 2)
(Diagram: the patterns are applied to the relevant and irrelevant texts, and statistics are gathered for each.)
Extraction Pattern   Freq   Prob
was attacked          100    .90
icon of                 5    .20
was attacked on        80    .79
28
Identifying Subjective and Objective Patterns
AutoSlog-TS generates 2 statistics for each pattern:
F = pattern frequency
P = relevant frequency / pattern frequency
We call a pattern subjective if F ≥ 5 and P ≥ .95 (6364 subjective patterns were learned).
We call a pattern objective if F ≥ 5 and P ≤ .15 (832 objective patterns were learned).
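A small sketch of this selection step, assuming each pattern match has been paired with the label of the sentence it occurred in:

```python
from collections import Counter

def select_patterns(matches, f_min=5, p_subj=0.95, p_obj=0.15):
    """matches: (pattern, label) pairs, one per pattern instance, where
    label is 'subjective' or 'objective'. Applies the slide's thresholds:
    F >= 5 with P >= .95 (subjective) or P <= .15 (objective)."""
    freq, subj = Counter(), Counter()
    for pattern, label in matches:
        freq[pattern] += 1
        if label == "subjective":
            subj[pattern] += 1
    subj_patterns, obj_patterns = set(), set()
    for pattern, f in freq.items():
        if f < f_min:
            continue
        p = subj[pattern] / f          # P = relevant freq / pattern freq
        if p >= p_subj:
            subj_patterns.add(pattern)
        elif p <= p_obj:
            obj_patterns.add(pattern)
    return subj_patterns, obj_patterns
```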
29
Examples of Learned Extraction Patterns
Subjective patterns: believes; was convinced; aggression against; to express; support for
Objective patterns: increased production; took effect; delegation from; occurred on; plans to produce
30
Patterns with Interesting Behavior
PATTERN             FREQ   P(Subj | Pattern)
asked                128    .63
was asked             11   1.00
was expected          45    .42
was expected from      5   1.00
talk                  28    .71
talk of               10    .90
is talk                5   1.00
put                  187    .67
put end               10    .90
is fact               38   1.00
fact is               12   1.00
31
Augmenting the Rule-Based Classifiers with Extraction Patterns
                     SubjRec  SubjPrec  SubjF
Subj RBC                34.2      90.4   46.6
Subj RBC w/Patterns     58.6      80.9   68.0
                     ObjRec   ObjPrec   ObjF
Obj RBC                 30.7      82.4   44.7
Obj RBC w/Patterns      33.5      82.1   47.6
32
Outline of Talk
– Learning subjective nouns with extraction patterns
– Automatically generating training data with high-precision classifiers
– Learning subjective and objective extraction patterns
– Naïve Bayes classification and self-training
33
Naïve Bayes Classifier
We created an NB classifier using the initial training set and several set-valued features:
– strong & weak subjective clues from RBCs
– subjective & objective extraction patterns
– POS tags (pronouns, modals, adjectives, cardinal numbers, adverbs)
– separate features for each of the current, previous, and next sentences
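A hedged sketch of how such set-valued features might be assembled for one sentence window; the field names are illustrative, not the authors' exact feature encoding:

```python
def make_features(window):
    """window: (prev, cur, next) sentence records, each a dict holding the
    clue, pattern, and POS sets already computed for that sentence.
    Produces binary features keyed by position, mirroring the slide's
    separate features for the current, previous, and next sentences."""
    feats = {}
    for pos, sent in zip(("prev", "cur", "next"), window):
        for kind in ("strong_clues", "weak_clues",
                     "subj_patterns", "obj_patterns", "pos_tags"):
            for item in sent.get(kind, ()):
                feats[f"{pos}:{kind}:{item}"] = 1
    return feats
```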
34
Naïve Bayes Training
(Diagram: the training set feeds the extraction pattern learner, which produces subjective and objective patterns; these, together with the subjective clues and POS features, are used to train the Naïve Bayes classifier.)
35
Naïve Bayes Results
                     SubjRec  SubjPrec  SubjF
Naïve Bayes             70.6      79.4   74.7
RWW03 (supervised)        77        81     79
                     ObjRec   ObjPrec   ObjF
Naïve Bayes             77.6      68.4   72.7
RWW03 (supervised)        74        70     72
36
Self-Training Process
(Diagram: the Naïve Bayes training loop from the previous slide, extended with self-training. The trained Naïve Bayes classifier labels unlabeled sentences; the best N sentences are added to the training set, and the extraction pattern learner and classifier are retrained.)
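A rough sketch of the loop pictured above, with `fit` and `predict` standing in for Naïve Bayes training and classification (the full process also relearns extraction patterns from the enlarged training set, which is omitted here; `rounds` and the function names are illustrative):

```python
def self_train(train, unlabeled, n_best, rounds, fit, predict):
    """Generic self-training loop. `fit(train)` returns a model;
    `predict(model, sent)` returns a (label, confidence) pair."""
    for _ in range(rounds):
        model = fit(train)
        # score every unlabeled sentence, most confident first
        scored = sorted(((predict(model, s), s) for s in unlabeled),
                        key=lambda pair: pair[0][1], reverse=True)
        # move the N most confidently labeled sentences into the training set
        train = train + [(s, label) for (label, _), s in scored[:n_best]]
        unlabeled = [s for _, s in scored[n_best:]]
    return fit(train)
```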
37
Self-Training Results
(1 = before self-training, 2 = after one self-training iteration)
                     SubjRec  SubjPrec  SubjF
Subj RBC w/Patts 1      58.6      80.9   68.0
Subj RBC w/Patts 2      62.4      80.4   70.3
Naïve Bayes 1           70.6      79.4   74.7
Naïve Bayes 2           86.3      71.3   78.1
RWW03 (supervised)        77        81     79
                     ObjRec   ObjPrec   ObjF
Obj RBC w/Patts 1       33.5      82.1   47.6
Obj RBC w/Patts 2       34.8      82.6   49.0
Naïve Bayes 1           77.6      68.4   72.7
Naïve Bayes 2           57.6      77.5   66.1
RWW03 (supervised)        74        70     72
38
Conclusions
– We can build effective subjective sentence classifiers using only unannotated texts.
– Extraction pattern bootstrapping can learn subjective nouns.
– Extraction patterns can represent richer subjective expressions.
– Learning methods can discover subtle distinctions between very similar expressions.
39
THE END Thank you!
40
Related Work
Genre classification (e.g., [Karlgren and Cutting 1994; Kessler et al. 1997; Wiebe et al. 2001])
Learning adjectives, adjective phrases, verbs, and N-grams [Turney 2002; Hatzivassiloglou & McKeown 1997; Wiebe et al. 2001]
Semantic lexicon learning [Hearst 1992; Riloff & Shepherd 1997; Roark & Charniak 1998; Caraballo 1999]
– Meta-Bootstrapping [Riloff & Jones 99]
– Basilisk [Thelen & Riloff 02]
41
What is Information Extraction?
Extracting facts relevant to a specific topic from narrative text. Example domains:
– Terrorism: perpetrator, victim, target, date, location
– Management succession: person fired, successor, position, organization, date
– Infectious disease outbreaks: disease, organism, victim, symptoms, location, date
42
Information Extraction from Narrative Text
Role relationships define the information of interest; keywords and named entities are not sufficient. For example, both sentences below mention anthrax, yet neither one reports a disease outbreak:
"Researchers have discovered how anthrax toxin destroys cells and rapidly causes death..."
"Troops were vaccinated against anthrax, cholera, …"
43
Ranking and Manual Review
The patterns are ranked using the metric:
RlogF(pattern_i) = (F_i / N_i) * log2(F_i)
where F_i is the number of instances of pattern_i in relevant texts and N_i is the number of instances of pattern_i in all texts.
A domain expert reviews the top-ranked patterns and assigns thematic roles to the good ones.
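In code, the ranking step might look like this (a sketch; the per-pattern statistics are assumed to come from AutoSlog-TS):

```python
import math

def rlogf(f_rel, n_total):
    """RlogF(pattern_i) = (F_i / N_i) * log2(F_i), as defined above."""
    if f_rel == 0:
        return float("-inf")  # patterns never seen in relevant texts rank last
    return (f_rel / n_total) * math.log2(f_rel)

def rank_patterns(stats):
    """stats: {pattern: (F_i, N_i)}. Returns patterns best-first, ready
    for the domain expert's review described above."""
    return sorted(stats, key=lambda p: rlogf(*stats[p]), reverse=True)
```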
44
Semantic Lexicons
A semantic lexicon assigns categories to words, e.g., politician → human, truck → vehicle, grenade → weapon. Semantic dictionaries are hard to come by, especially for specialized domains. WordNet [Miller 90] is popular but is not always sufficient: [Roark & Charniak 98] found that 3 of every 5 words learned by their system were not present in WordNet.
45
The Bootstrapping Era
(Slide graphic: Unannotated Texts + … = KNOWLEDGE!)
46
Meta-Bootstrapping
(Diagram: the same bootstrapping loop, here for disease names. The best extraction pattern, e.g. "outbreak of", yields extractions such as anthrax, ebola, cholera, flu, plague; the growing lexicon then selects the next best pattern, yielding e.g. smallpox, tularemia, botulism.)
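A rough sketch of the inner (mutual bootstrapping) loop pictured above, with `score` and `extractions_of` standing in for the corpus statistics; the meta level, which retains only the five best new words per cycle, is omitted:

```python
def mutual_bootstrap(patterns, seed_words, extractions_of, score, n_iters=10):
    """Repeatedly pick the best-scoring pattern, add everything it extracts
    to the lexicon, and rescore. `extractions_of(p)` returns the nouns a
    pattern extracts from the corpus; `score(p, lexicon)` rates a pattern
    by how many known lexicon members it extracts. Both are assumed
    helpers, not part of the published system's API."""
    lexicon = set(seed_words)
    used = set()
    for _ in range(n_iters):
        candidates = [p for p in patterns if p not in used]
        if not candidates:
            break
        best = max(candidates, key=lambda p: score(p, lexicon))
        used.add(best)
        lexicon |= set(extractions_of(best))
    return lexicon
```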
47
Semantic Lexicon (NP) Results
Iter  Company (Web)  Location (Web)  Title (Web)   Location (Terror)  Weapon (Terror)
1     5/5 (1.0)      5/5 (1.0)       0/1 (0)       5/5 (1.0)          4/4 (1.0)
10    25/32 (.78)    46/50 (.92)     22/31 (.71)   32/50 (.64)        31/44 (.70)
20    52/65 (.80)    88/100 (.88)    63/81 (.78)   66/100 (.66)       68/94 (.72)
30    72/113 (.64)   129/150 (.86)   86/131 (.66)  100/150 (.67)      85/144 (.59)
48
Basilisk
(Diagram: the Basilisk bootstrapping loop, as before. Seed words initialize the semantic lexicon; extraction patterns and their extractions are collected from the corpus; the best patterns enter a Pattern Pool, their extractions enter a Candidate Word Pool, and the 5 best candidate words are added to the lexicon each cycle.)
49
The Pattern Pool
Every extraction pattern is scored and the best patterns are put into a Pattern Pool. The scoring function is:
RlogF(pattern_i) = (F_i / N_i) * log2(F_i)
where F_i is the number of category members extracted by pattern_i and N_i is the total number of nouns extracted by pattern_i.
50
Scoring Candidate Words
Each candidate word is scored by:
1. collecting all patterns that extracted it
2. computing the average number of category members extracted by those patterns:
AvgLog(word_i) = ( Σ_{j=1}^{N_i} log2(F_j + 1) ) / N_i
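A direct transcription of this scoring step (the argument names and data layout are illustrative):

```python
import math

def avg_log(word, patterns_for, members_extracted):
    """AvgLog score as defined above. `patterns_for[word]` lists the
    patterns that extracted the word; `members_extracted[p]` is F_j, the
    number of known category members extracted by pattern j."""
    pats = patterns_for[word]
    return sum(math.log2(members_extracted[p] + 1) for p in pats) / len(pats)
```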
52
Bootstrapping a Single Category
53
Bootstrapping Multiple Categories
54
A Smarter Scoring Function
We incorporated knowledge about competing semantic categories directly into the scoring function. The modified scoring function computes the difference between the score for the target category and the best score among competing categories:
diff(w_i, c_a) = AvgLog(w_i, c_a) - max_{b ≠ a} AvgLog(w_i, c_b)
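A small sketch of the modified score, assuming the AvgLog scores have already been computed per category (the nested-dict layout is illustrative):

```python
def diff_score(word, target, avglog):
    """diff(w_i, c_a) as defined above: the target category's AvgLog score
    minus the best AvgLog score among competing categories.
    avglog: {category: {word: AvgLog score}}."""
    rival = max(scores[word] for cat, scores in avglog.items() if cat != target)
    return avglog[target][word] - rival
```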