COLING 2012
Extracting and Normalizing Entity-Actions from Users' Comments
Swapna Gottipati, Jing Jiang
School of Information Systems, Singapore Management University, Singapore
Outline
► Introduction
► Task Definition
► Nature of actionable comments
  Sentence level study
  Phrase level study
► Solution Method
  Entity-action extraction
  Entity-action normalization
► Dataset
► Experiments
► Conclusion
Introduction
► We define an actionable comment as an expression containing an entity, such as a person or an organization, and a suggestion that can be acted upon.
► The following comments are in response to a news article about a car accident:
[C1] The government should lift diplomatic immunity of the ambassador.
[C2] Govt must inform the romanian government of what happened immediately.
[C3] SG government needs to cooperate closely with romania in persecuting this case.
[C4] Hope the government help the victims by at least paying the legal fees.
[C5] I believe that goverment will help the victims for legal expenses.
► All of these sentences contain an action and the corresponding entity who should take the action.
► The entities in all the above sentences refer to the same entity, Government, but it is expressed in various forms.

Entity      Action
government  lift diplomatic immunity of the ambassador and get him to face..
government  inform the romanian government of what happened immediately..
government  cooperate closely with romania in persecuting..
government  help victims by at least paying the legal fees
Task Definition
► Goal: extract and normalize actionable comments from user-generated content posted in response to a news article.
► Each actionable comment is represented as an entity-action pair.
► Given a news article A and the corresponding candidate comments C = {c_1, c_2, ..., c_n} extracted using keywords, our goal is to detect pairs {ne_i, na_i}, where ne_i is a normalized entity and na_i is a normalized action.
Sentence level study
► To understand how frequently users write actionable comments, we randomly selected 500 sentences from AsiaOne.com.
► 13.6% of the sentences are actionable comments.
► 88.3% of the actionable comments use one of the keywords listed below.

Keyword       Frequency
should        54.24%
hope           8.47%
believe        3.39%
may be         5.08%
have to        1.69%
ought to be    3.39%
suggest        1.69%
suppose to     3.39%
need to        1.69%
must
advise
needs to
request
Phrase level study
► Entity extraction: identify the correct entity in the actionable comment.
► Normalization: normalize the entity mentions to their canonical form.
► Redundancy: normalize similar actions to aid in redundancy elimination.
Solution Method
► Entity-action extraction: based on a CRF model.
► Entity-action normalization: based on clustering techniques for entity and action normalization.
Entity-action extraction
► A comment sentence is x = (x_1, x_2, ..., x_n), where each x_i is a single token. We assign a sequence of labels y = (y_1, y_2, ..., y_n) to x. The tag set is {BE, IE, BA, IA, O}.
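As an illustration of this tag set, the sketch below hand-labels comment C1 and shows how entity and action phrases are recovered from a tag sequence. The labels here are assigned by hand for illustration; in the paper they are predicted by the CRF.

```python
# Hand-labeled illustration (not CRF output) of the tag set
# {BE, IE, BA, IA, O} on comment C1.
tokens = ["The", "government", "should", "lift", "diplomatic",
          "immunity", "of", "the", "ambassador", "."]
tags = ["O", "BE", "O", "BA", "IA", "IA", "IA", "IA", "IA", "O"]

def spans(tokens, tags, begin, inside):
    """Recover phrases from a begin/inside-style tag sequence."""
    out, cur = [], []
    for tok, tag in zip(tokens, tags):
        if tag == begin:
            if cur:
                out.append(" ".join(cur))
            cur = [tok]
        elif tag == inside and cur:
            cur.append(tok)
        else:
            if cur:
                out.append(" ".join(cur))
            cur = []
    if cur:
        out.append(" ".join(cur))
    return out

entity = spans(tokens, tags, "BE", "IE")
action = spans(tokens, tags, "BA", "IA")
```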
Features
► POS features
► Positional features
► Dependency tree features
POS features
► POS tags are obtained with the Stanford POS tagger; we combine the POS features of neighboring words in a [-2, +2] window.
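A minimal sketch of the windowed POS features, assuming the tags have already been produced by an external tagger (they are supplied by hand here); the function name and feature keys are illustrative, not from the paper.

```python
def pos_window_features(pos_tags, i, window=2):
    """POS features for token i: its own tag plus the tags of
    neighbours at offsets -2..+2, padded at sentence edges."""
    feats = {}
    for off in range(-window, window + 1):
        j = i + off
        tag = pos_tags[j] if 0 <= j < len(pos_tags) else "PAD"
        feats[f"pos[{off:+d}]"] = tag
    return feats

# Tags written by hand for illustration (a real run would use a tagger).
pos = ["DT", "NN", "MD", "VB", "JJ", "NN"]
f = pos_window_features(pos, 2)
```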
Positional features
► We find the position of each word x_i with respect to the keyword in the given sentence: positive numbers for words preceding the keyword and negative numbers for words succeeding it. We do the same for neighboring words in a [-2, +2] window.
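The signed-offset scheme can be sketched as follows (the window extension to neighbours works the same way as for the POS features):

```python
def positional_features(tokens, keyword):
    """Signed offset of every token from the keyword: positive for
    tokens before the keyword, negative after it, 0 for the keyword."""
    k = tokens.index(keyword)
    return [k - i for i in range(len(tokens))]

tokens = ["The", "government", "should", "lift", "diplomatic", "immunity"]
positions = positional_features(tokens, "should")
```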
Dependency tree features
► For each word x_i, we check whether it is the nominal subject of the sentence, represented by nsubj. The dependency tree features are extracted with the Stanford dependencies tool.
► The output of the extraction step is S = {e_i, a_i}, a set of entity-action pairs.
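A sketch of the nsubj feature, assuming dependency triples (relation, head index, dependent index) have already been produced by a parser; the triples below are written by hand for illustration.

```python
def nsubj_flags(tokens, deps):
    """Per-token flag that is 'nsubj' when the token is a nominal
    subject, according to parser-supplied dependency triples."""
    subjects = {d for rel, h, d in deps if rel == "nsubj"}
    return ["nsubj" if i in subjects else "-" for i in range(len(tokens))]

tokens = ["The", "government", "should", "lift", "immunity"]
# Hand-written triples standing in for real parser output.
deps = [("det", 1, 0), ("nsubj", 3, 1), ("aux", 3, 2), ("dobj", 3, 4)]
flags = nsubj_flags(tokens, deps)
```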
Entity-action normalization
► Given S = {e_i, a_i}, a set of entity-action pairs, the goal is to generate NS = {ne_i, na_i}, a set of normalized entity-action pairs.
Entity normalization
► We use agglomerative clustering, a hierarchical clustering method that works bottom-up (Olson, 1995).
► Each entity is expanded with features from Google and with semantic-similarity sieves adopted from the Stanford coreference algorithm (Raghunathan et al., 2010).
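A minimal bottom-up (single-link) clustering sketch. Character-level string similarity from difflib stands in for the paper's alias and semantic-similarity features, and the threshold is illustrative.

```python
from difflib import SequenceMatcher

def sim(a, b):
    """Toy mention similarity; the paper's alias / semantic-similarity
    features would replace this simple stand-in."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def agglomerate(mentions, threshold):
    """Bottom-up clustering: repeatedly merge the closest pair of
    clusters (single link) until no similarity exceeds the threshold."""
    clusters = [[m] for m in mentions]
    while True:
        best, pair = threshold, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = max(sim(x, y)
                        for x in clusters[i] for y in clusters[j])
                if s > best:
                    best, pair = s, (i, j)
        if pair is None:
            return clusters
        i, j = pair
        clusters[i] += clusters.pop(j)

clusters = agglomerate(["government", "Govt", "SG government",
                        "the victims"], 0.5)
```

With this toy similarity, the three government mentions merge into one cluster while "the victims" stays separate.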
Features
► Alias features
► Semantic-similarity features
Alias features
► Given an entity mention, it is first expanded with the title of the news article, and this query is fed to the Google API, which returns ranked matching results.
Ex: alias features for "Ionescu + title" are Dr. Ionescu, Silvia Ionescu, Romanian Diplomat Ionescu, etc.
Semantic-similarity features
► Following the steps of the Stanford coreference resolution tool, for both named and unnamed entities:
(a) Remove the text following the mention head word.
(b) Select the lowest noun phrase (NP) in the parse tree that includes the mention head word.
(c) Use the longest proper noun (NNP*) sequence that ends with the head word.
(d) Select the head word.
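A simplified token-level rendering of steps (a), (c) and (d); the actual sieves operate on a parse tree, and step (b) (lowest NP) requires one, so it is omitted here. The tokens, head index and POS tags are supplied by hand.

```python
def sieve_forms(tokens, head_idx, pos_tags):
    """Candidate forms of a mention, from longest to shortest:
    (a) truncate after the head word, (c) longest NNP* run ending
    at the head word, (d) the head word alone."""
    truncated = tokens[:head_idx + 1]                 # step (a)
    start = head_idx                                  # step (c)
    while start > 0 and pos_tags[start - 1].startswith("NNP"):
        start -= 1
    nnp_run = tokens[start:head_idx + 1]
    return [" ".join(truncated), " ".join(nnp_run),
            tokens[head_idx]]                         # step (d)

tokens = ["Romanian", "Diplomat", "Ionescu", "who", "fled"]
forms = sieve_forms(tokens, 2, ["NNP", "NNP", "NNP", "WP", "VBD"])
```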
Action normalization
► To remove redundant actions, we apply the same clustering technique to normalize the actions associated with the same normalized entity.
► The feature set for this task is simply bag-of-words with stop-word removal. The representative action is also chosen as for entities.
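A sketch of redundancy removal with bag-of-words overlap. The stop-word list, the overlap measure, the threshold and the choice of the shortest member as representative are all stand-ins for the paper's settings.

```python
STOP = {"the", "of", "to", "a", "in", "by", "for", "with"}

def bow(action):
    """Bag of words with stop-word removal."""
    return {w for w in action.lower().split() if w not in STOP}

def overlap(a, b):
    """Word overlap, normalized by the smaller bag."""
    A, B = bow(a), bow(b)
    return len(A & B) / max(1, min(len(A), len(B)))

def normalize_actions(actions, threshold=0.5):
    """Greedily group near-duplicate actions and keep one
    representative (here: the shortest member) per group."""
    groups = []
    for act in actions:
        for g in groups:
            if overlap(act, g[0]) >= threshold:
                g.append(act)
                break
        else:
            groups.append([act])
    return [min(g, key=len) for g in groups]

reps = normalize_actions([
    "help the victims by paying the legal fees",
    "help victims with legal expenses",
    "inform the romanian government immediately",
])
```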
Dataset
► The dataset consists of 5 contentious news articles and their comments from AsiaOne.com.
► The listed keywords are used to extract candidate sentences from all the comments (each comment has one or more sentences) in the 5 articles.
► We randomly select 110 candidate sentences from each article, 550 candidates in total, for the experiments.
► The inter-annotator agreement level (Cohen's kappa) is 0.7679.
Experiments
► To prepare the ground truth, we engaged two annotators to label the 550 candidate sentences.
► Entities are labeled with BE (beginning of an entity) and IE (inside an entity).
► Actions are labeled with BA (beginning of an action) and IA (inside an action); all other tokens are labeled O (other).
► If both an entity and an action are found, the sentence is a valid suggestion and is labeled 1; otherwise it is labeled 0.
Baseline
► We perform 10-fold cross-validation for all our experiments and use a pattern-matching technique as the baseline.
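A sketch of a keyword pattern baseline and a deterministic 10-fold split, under the assumption that the baseline simply matches the suggestion keywords from the sentence-level study; the exact pattern used in the paper is not shown on the slide.

```python
KEYWORDS = ("should", "hope", "believe", "must", "have to",
            "ought to be", "suggest", "need to", "needs to",
            "advise", "request")

def is_candidate(sentence):
    """Pattern-matching baseline: flag a sentence if it contains
    any suggestion keyword."""
    s = sentence.lower()
    return any(k in s for k in KEYWORDS)

def ten_folds(items):
    """Deterministic 10-fold partition (every 10th item per fold)."""
    return [items[i::10] for i in range(10)]

flag = is_candidate("The government should lift diplomatic immunity.")
folds = ten_folds(list(range(550)))
```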
Actionable knowledge detection results
► Precision of 88.26%, recall of 93.12%, and F-score of 90.63% in classifying actionable comments.
► The model fails to detect actionable comments when the sentences have poor grammatical structure, e.g., "Don't need to call the helpline.."
Entity extraction results

             Exact Match          Overlap Match
Metric       Baseline   CRF      Baseline   CRF
Recall       0.8799     0.8352   0.9032     0.9306
Precision    0.5866     0.6849   0.9597     0.8578
F-score      0.7039     0.7509   0.9306     0.8927
Action extraction results

             Exact Match          Overlap Match
Metric       Baseline   CRF      Baseline   CRF
Recall       0.8947     0.8944   0.9200     0.9169
Precision    0.5519     0.6741   0.7468     0.7544
F-score      0.6827     0.7643   0.8244     0.8270
Experiments on entity-action normalization
► Single link vs. complete link: which technique is more suitable for this problem?
► How does the clustering-based solution perform in normalizing the entity-action pairs?
Single Link vs Complete Link

           Single Link                  Complete Link
Article    Pre      Recall   F          Pre      Recall   F
A1         0.5161   0.5039   0.5100     0.8462   0.6929   0.7619
A2         1.0000   0.3333   0.5000     0.7143   0.5238   0.6044
A3         0.7368   0.3218   0.4480     0.5664   0.7356   0.6400
A4         0.6258   0.4567   0.5280     0.5328   0.6689   0.5931
A5         0.9661   0.4560   0.6196     0.7282   0.6000   0.6579
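The difference between the two linkage criteria can be sketched as follows; the numeric similarity function is a toy stand-in.

```python
def linkage_sim(cluster_a, cluster_b, sim, mode):
    """Single link scores two clusters by their MOST similar pair;
    complete link by their LEAST similar pair."""
    sims = [sim(x, y) for x in cluster_a for y in cluster_b]
    return max(sims) if mode == "single" else min(sims)

# Toy similarity on numbers: closer values are more similar.
sim = lambda x, y: 1.0 / (1.0 + abs(x - y))

single = linkage_sim([1, 2], [3, 9], sim, "single")      # pair (2, 3)
complete = linkage_sim([1, 2], [3, 9], sim, "complete")  # pair (1, 9)
```

Complete link is the stricter criterion: two clusters merge only when all cross-cluster pairs are similar, which is consistent with its higher F-scores on all five articles in the table above.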
Entity-Action Normalization Results
► We asked a human judge to validate the normalized entity-action pairs.
► A pair is labeled valid only if both the entity and the action are correctly normalized.
Related Work
► Actionable content: prior work is focused on manufacturing applications, in which problems are identified to aid designers in improving product designs.
► To the best of our knowledge, the problem of extracting and normalizing entity-action pairs from users' comments has not been studied.
Conclusion
► Ex: Obama's State of the Union address. Apart from political and news forums, the public was asked to express opinions on Twitter using specific hashtags. This triggers the need for gathering actionable content from microblogs.
► Along the same lines, diagnostic opinion detection, which addresses what could have happened, who should be blamed, etc., is also an interesting problem.
THE END
F-score
► Precision = tp / (tp + fp)
► Recall = tp / (tp + fn)
► F-score = 2 × (Precision × Recall) / (Precision + Recall)
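The formulas above as a small helper (the function name and example counts are illustrative):

```python
def f_score(tp, fp, fn):
    """Precision, recall and F-score from counts of true positives,
    false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

p, r, f = f_score(tp=100, fp=20, fn=10)
```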