Summarizing Email Conversations with Clue Words Giuseppe Carenini, Raymond T. Ng, Xiaodong Zhou (WWW ’07) Advisor : Dr. Koh Jia-Ling Speaker : Tu Yi-Lang Date :
2 Outline Introduction. Building The Fragment Quotation Graph. Creating nodes Creating edges Summarization Methods. CWS MEAD RIPPER Result 1 : User Study. Result 2 : Evaluation Of CWS. Conclusion.
3 Introduction With the ever increasing popularity of emails, email overload becomes a major problem for many users. This paper proposes a different form of email support : email summarization. The goal is to provide a concise, informative summary of the emails contained in a folder, thus saving the user from browsing through each email one by one.
4 Introduction The summary is intended to be multi-granularity, in that the user can specify the size of the concise summary. Email summarization can also be valuable for users reading emails on mobile devices; given the small screen size of handheld devices, efforts have been made to re-design the user interface.
5 Introduction Many emails are asynchronous responses to previous messages, and as such they constitute a conversation, which may be hard to reconstruct in detail. A conversation may involve many users, each of whom may have a different writing style. A hidden email is an email quoted by at least one email in the folder but not itself present in the user’s folders; hidden emails may carry information important enough to be part of the summary.
6 Building The Fragment Quotation Graph Here we assume that if one email quotes another email, the two emails belong to the same conversation. A fragment quotation graph is used to represent conversations : the graph G = (V, E) is a directed graph, where each node u ∈ V is a text unit (fragment) in the folder, and an edge (u, v) means node u is in reply to node v.
7 Building The Fragment Quotation Graph Identifying quoted and new fragments : Here we assume that there exist one or more quotation markers (e.g., “>”) used as a prefix of every quoted line. Quotation depth : the number of quotation markers “>” in the prefix; it reflects the number of times that this line has been quoted since the original message containing it was sent.
8 Building The Fragment Quotation Graph Identifying quoted and new fragments : (cont.) Quoted fragment : a maximally contiguous block of quoted lines having the same quotation depth. New fragment : a maximally contiguous block of lines that are not prefixed by the quotation markers. An email can be viewed as an alternating sequence of quoted and new fragments.
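The alternating-sequence decomposition above can be sketched as follows; `split_into_fragments` is an illustrative helper, not code from the paper, and assumes “>” is the only quotation marker:

```python
def split_into_fragments(email_body: str):
    """Split an email into an alternating sequence of fragments:
    maximal runs of lines sharing the same quotation depth
    (the number of leading '>' markers)."""
    def depth(line: str) -> int:
        d = 0
        for ch in line:
            if ch == ">":
                d += 1
            elif ch in " \t":
                continue          # allow "> > quoted" style prefixes
            else:
                break
        return d

    fragments, current, cur_depth = [], [], None
    for line in email_body.splitlines():
        d = depth(line)
        if d != cur_depth and current:
            fragments.append((cur_depth, "\n".join(current)))
            current = []
        cur_depth = d
        current.append(line)
    if current:
        fragments.append((cur_depth, "\n".join(current)))
    return fragments
```

Depth 0 runs are new fragments; runs with depth ≥ 1 are quoted fragments at that quotation depth.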
9 Building The Fragment Quotation Graph Creating nodes : Given an email folder Fdr = {M 1,...,M n }, the first step is to identify distinct fragments, each of which will be represented as a node in the graph. Quoted and new fragments from all emails in Fdr are matched against each other to identify overlaps. There is an overlap between two fragments if they share a common substring that is sufficiently long (given an overlap threshold).
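The overlap test can be sketched with a longest-common-substring check. The function name and the character-based threshold value are assumptions for illustration; the slide only specifies “a sufficiently long common substring” given an overlap threshold:

```python
def overlaps(frag_a: str, frag_b: str, threshold: int = 40) -> bool:
    """Return True if the two fragments share a common substring of at
    least `threshold` characters (the overlap threshold)."""
    # Classic dynamic-programming longest-common-substring,
    # keeping only one previous row to save memory.
    m, n = len(frag_a), len(frag_b)
    longest = 0
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            if frag_a[i - 1] == frag_b[j - 1]:
                cur[j] = prev[j - 1] + 1
                longest = max(longest, cur[j])
        prev = cur
    return longest >= threshold
```

Overlapping fragments are then merged into a single node so that a fragment quoted by several emails appears in the graph only once.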
10 Building The Fragment Quotation Graph
11 Building The Fragment Quotation Graph Creating edges : Assuming that any new fragment is a potential reply to its neighboring quotations – the quoted fragments immediately preceding or following it. Because of possible differences in quotation depth, a quoted block may contain multiple fragments. In the general situation, when quoted block QS p precedes new block NS, which is then followed by quoted block QS f, we create an edge (v, u) for each fragment u ∈ (QS p ∪ QS f ) and v ∈ NS.
12 Building The Fragment Quotation Graph Creating edges : (cont.) For hidden fragments, additional edges are created within the quoted block, following the same neighboring-quotation assumption. The minimum equivalent graph, which is transitively equivalent to the original graph, is used as the fragment quotation graph. In the fragment quotation graph, each conversation is reflected as a weakly connected component.
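Assuming the quotation graph is acyclic, its minimum equivalent graph is the transitive reduction: drop every edge (u, v) whose endpoints remain connected through a longer path. A minimal sketch, not the paper’s implementation:

```python
from collections import defaultdict


def minimum_equivalent_graph(edges):
    """Transitive reduction of a DAG: keep edge (u, v) only if v is NOT
    reachable from u once that edge is removed."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)

    def reachable(src, dst, skip_edge):
        # Depth-first search that ignores the candidate edge itself.
        stack, seen = [src], set()
        while stack:
            node = stack.pop()
            for nxt in adj[node]:
                if (node, nxt) == skip_edge or nxt in seen:
                    continue
                if nxt == dst:
                    return True
                seen.add(nxt)
                stack.append(nxt)
        return False

    return {(u, v) for u, v in edges if not reachable(u, v, (u, v))}
```

For example, with edges (a, b), (b, c), (a, c), the shortcut (a, c) is redundant and is removed.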
13 Summarization Methods Clue words : A clue word in node (fragment) F is a word which also appears in a semantically similar form in a parent or a child node of F in the fragment quotation graph. In this paper, only stemming is applied to identify clue words : Porter’s stemming algorithm computes the stem of each word, and the stems are used to judge reoccurrence.
14 Summarization Methods Fragments (a) and (b) are two adjacent nodes with (b) as the parent node of (a).
15 Summarization Methods Clue words : (cont.) Three major kinds of reoccurrence are observed : The same root (stem) with different forms, e.g., “settle” vs. “settlement” and “discuss” vs. “discussed” as in the example above. Synonyms/antonyms or words with similar/contrary meaning, e.g., “talk” vs. “discuss” and “peace” vs. “war”. Words that have a looser semantic link, e.g., “deadline” with “Friday morning”.
16 Summarization Methods Algorithm CWS : Algorithm ClueWordSummarizer (CWS) uses clue words as the main feature for summarization. The assumption is that if those words reoccur between parent and child nodes, they are more likely to be relevant and important to the conversation.
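A sketch of clue-word identification. The paper uses Porter’s stemming algorithm; here a deliberately crude suffix-stripper stands in so the example stays self-contained, and both function names are illustrative assumptions:

```python
def crude_stem(word: str) -> str:
    """Very rough stand-in for Porter stemming: strip a few common
    suffixes. NOT the real Porter algorithm."""
    w = word.lower()
    if w.endswith("ss"):            # keep 'discuss', 'class', ...
        return w
    for suffix in ("ment", "ing", "ed", "es", "s"):
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            return w[: -len(suffix)]
    return w


def clue_words(fragment: str, neighbor_fragments: list) -> set:
    """Words in `fragment` whose stem also occurs in a parent or child
    fragment of the quotation graph."""
    neighbor_stems = {
        crude_stem(w) for frag in neighbor_fragments for w in frag.split()
    }
    return {w for w in fragment.split() if crude_stem(w) in neighbor_stems}
```

With this stand-in, “settle” matches “settlement” and “discuss” matches “discussed”, mirroring the first kind of reoccurrence on the previous slide (a full stemmer handles far more forms).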
17 Summarization Methods
18 Summarization Methods Algorithm CWS : (cont.) The significance of clue words is evaluated first at the word level and then aggregated to the sentence level :
19 Summarization Methods ClueScore(CW, F) = Σ_{G : parent or child node of F} freq(CW, G), and at the sentence level, ClueScore(s) = Σ_{CW ∈ s} ClueScore(CW, F).
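A minimal sketch of the sentence-level ClueScore, assuming the frequency-based definition (sum, over the words of sentence s, of how often each word’s stem occurs in the parent and child fragments); the default identity-like `stem` is a placeholder for a real stemmer:

```python
from collections import Counter


def clue_score_sentence(sentence_words, parent_child_fragments,
                        stem=lambda w: w.lower()):
    """ClueScore(s): for each word in the sentence, count how often its
    stem reoccurs in the parent/child fragments, and sum the counts."""
    neighbor_counts = Counter(
        stem(w) for frag in parent_child_fragments for w in frag
    )
    return sum(neighbor_counts[stem(w)] for w in sentence_words)
```

CWS ranks all sentences by this score and selects the top ones until the requested summary size is reached.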
20 Summarization Methods MEAD : A centroid-based multi-document summarizer. MEAD computes the centroid of all emails in one conversation; a centroid is a vector of the words’ average TFIDF values over all documents. MEAD compares each sentence s with the centroid and assigns it a score (MEADScore) equal to the sum of the centroid values of the words shared by sentence s and the centroid.
21 Summarization Methods MEAD : (cont.) All sentences are ranked by MEADScore, and the top ones are included in the summary. Compared with MEAD, CWS may appear to use more “local” features, while MEAD may capture more “global” salience related to the whole conversation.
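The centroid scoring can be sketched as below; this is a simplified stand-in for MEAD (plain TF-IDF averaging over tokenized documents, with none of MEAD’s smoothing or re-ranking), and both function names are assumptions:

```python
import math
from collections import Counter


def centroid(documents):
    """Centroid: for each word, the average TF-IDF value across the
    documents (each document is a list of tokens)."""
    n = len(documents)
    df = Counter()                      # document frequency per word
    for doc in documents:
        df.update(set(doc))
    cen = Counter()
    for doc in documents:
        tf = Counter(doc)
        for w, f in tf.items():
            cen[w] += f * math.log(n / df[w]) / n
    return cen


def mead_score(sentence, cen):
    """MEADScore(s): sum of the centroid values of the words the
    sentence shares with the centroid."""
    return sum(cen[w] for w in set(sentence))
```

Note that a word appearing in every document gets IDF 0 here, so only words that discriminate between documents contribute to a sentence’s score.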
22 Summarization Methods Hybrid ClueScore with MEADScore : ClueScore and MEADScore tend to represent different aspects of the importance of a sentence, so it is natural to combine both methods together. Using the linear combination of ClueScore and MEADScore : LinearClueMEAD(s) = α ∗ ClueScore(s) + (1 − α) ∗ MEADScore(s), where α ∈ [0, 1].
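The linear combination itself is a one-liner; the normalization caveat in the comment is an assumption (the two raw scores live on different scales), not something stated on this slide:

```python
def linear_clue_mead(clue: float, mead: float, alpha: float) -> float:
    """LinearClueMEAD(s) = alpha * ClueScore(s) + (1 - alpha) * MEADScore(s).
    In practice both scores should be normalized to a common scale first."""
    assert 0.0 <= alpha <= 1.0
    return alpha * clue + (1 - alpha) * mead
```

Setting α = 1 recovers pure CWS and α = 0 recovers pure MEAD.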
23 Summarization Methods RIPPER : Using the RIPPER system to induce a classifier for determining if a given sentence should be included in the summary. RIPPER is trained on a corpus in which each sentence is described by a set of 14 features and annotated with the correct classification (i.e., whether it should be included in the summary or not). 8 “basic” linguistic features 2 “basic+” features 4 “basic++” features
24 Result 1 : User Study Dataset setup : 20 conversations were collected from the Enron dataset as the testbed, and 25 human summarizers were recruited to review them. The threads can be divided into two types : single chains and thread hierarchies (trees). 4 single chains and 16 trees were randomly selected.
25 Result 1 : User Study Dataset setup : (cont.) Each summarizer reviewed 4 distinct conversations in one hour, and each conversation was reviewed by 5 different human summarizers. The generated summary contained about 30% of the original sentences. The human summarizers were asked to classify each selected sentence as either essential or optional.
26 Result 1 : User Study Result of the user study : A GSValue is assigned to each sentence to evaluate its importance according to the human summarizers’ selections : for each sentence s, an essential selection scores 3 and an optional selection scores 1. Out of the 741 sentences in the 20 conversations, 88 are overall essential sentences, which is about 12% of all sentences.
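The GSValue arithmetic (5 reviewers per conversation, essential = 3 points, optional = 1 point) can be sketched as; the function name is illustrative:

```python
def gs_value(essential_votes: int, optional_votes: int) -> int:
    """GSValue(s): each 'essential' selection scores 3 points and each
    'optional' selection 1 point. With 5 reviewers per conversation the
    value ranges from 0 (nobody selected s) to 15 (all marked essential)."""
    assert essential_votes + optional_votes <= 5
    return 3 * essential_votes + 1 * optional_votes
```

Sentences are then ranked by GSValue to build the gold standard against which the automatic summarizers are compared.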
27 Result 1 : User Study Result of the user study : (cont.) Sorting all sentences by their GSValue, about 17% of the top-30% sentences come from hidden emails; among the 88 overall essential sentences, about 18% come from hidden emails. The average ClueScore of the overall essential sentences in the gold standard is also compared with the average of all other sentences.
28 Result 2 : Evaluation Of CWS Comparing CWS with MEAD and RIPPER : Sentence-level training. Conversation-level training.
29 Result 2 : Evaluation Of CWS Comparing CWS with MEAD :
30 Result 2 : Evaluation Of CWS Hybrid methods : Evaluating LinearClueMEAD for different values of α, the best accuracy is obtained at 100% CWS (α = 1).
31 Conclusion In this paper, we study how to generate accurate email summaries and build a novel structure, the fragment quotation graph, to represent the conversation structure. The experiments with the Enron dataset not only indicate that hidden emails have to be considered for summarization, but also show that CWS can generate more accurate summaries than the other methods.