1
LexRank: Graph-based Centrality as Salience in Text Summarization
Güneş Erkan and Dragomir R. Radev, Journal of Artificial Intelligence Research 22 (2004)
Presented by Yu-Mei Chang, National Taiwan Normal University
2
Abstract
They consider a new approach, LexRank, for computing sentence importance based on the concept of eigenvector centrality in a graph representation of sentences. Salience is typically defined in terms of either the presence of particular important words or similarity to a centroid pseudo-sentence. They discuss several methods to compute centrality using the similarity graph. The results show that degree-based methods (including LexRank) outperform both centroid-based methods and the other systems participating in DUC in most of the cases, and that LexRank with threshold outperforms the other degree-based techniques, including continuous LexRank. Their approach is also insensitive to noise in the data.
3
Sentence Centrality and Centroid-based Summarization
Centrality of a sentence is often defined in terms of the centrality of the words that it contains. A common way of assessing word centrality is to look at the centroid of the document cluster in a vector space. The centroid of a cluster is a pseudo-document which consists of the words that have tf×idf scores above a predefined threshold. In centroid-based summarization, the sentences that contain more words from the centroid of the cluster are considered central (Algorithm 1). This is a measure of how close the sentence is to the centroid of the cluster.
4
Algorithm 1: Centroid scores
Algorithm 1 takes the set S of n sentences and a tf×idf threshold t, and returns the centroid scores C. Lines 4-8 compute the tf×idf score of every word; lines 9-18 build the cluster centroid from the words whose scores exceed the threshold t; lines 19-26 compute the score of each sentence. A sketch of the procedure follows below.
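A minimal Python sketch of this procedure, assuming tokenized sentences and a precomputed idf table; the function and variable names are ours, not the paper's, and counting each centroid word once per sentence is one possible convention:

```python
from collections import Counter

def centroid_scores(sentences, idf, t):
    """Algorithm 1 in spirit: score sentences by their centroid words.

    sentences: list of token lists; idf: dict word -> idf value;
    t: tf*idf threshold for admitting a word into the centroid.
    """
    # Lines 4-8: tf*idf score of every word in the cluster.
    tf = Counter(w for s in sentences for w in s)
    tfidf = {w: tf[w] * idf.get(w, 0.0) for w in tf}
    # Lines 9-18: the centroid keeps only words scoring above t.
    centroid = {w: v for w, v in tfidf.items() if v > t}
    # Lines 19-26: a sentence's score sums the centroid values of its words.
    return [sum(centroid.get(w, 0.0) for w in set(s)) for s in sentences]
```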
5
Centrality-based Sentence Salience
They propose several other criteria to assess sentence salience. All approaches are based on the concept of "prestige" in social networks, which has also inspired many ideas in computer networks and information retrieval. A cluster of documents can be viewed as a network of sentences that are related to each other. They hypothesize that the sentences that are similar to many of the other sentences in a cluster are more central (or salient) to the topic. To define similarity, they use the bag-of-words model to represent each sentence as an N-dimensional vector, where N is the number of all possible words in the target language. A cluster of documents may then be represented by a cosine similarity matrix, where each entry is the similarity between the corresponding sentence pair.
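A sketch of how such a matrix can be built, using the paper's idf-modified cosine between bag-of-words vectors; the function names and the idf-table argument are our assumptions:

```python
import math
from collections import Counter

def idf_modified_cosine(x, y, idf):
    """Cosine similarity between two token lists with tf*idf weights
    (the paper's idf-modified cosine)."""
    tx, ty = Counter(x), Counter(y)
    num = sum(tx[w] * ty[w] * idf.get(w, 0.0) ** 2 for w in tx.keys() & ty.keys())
    nx = math.sqrt(sum((tx[w] * idf.get(w, 0.0)) ** 2 for w in tx))
    ny = math.sqrt(sum((ty[w] * idf.get(w, 0.0)) ** 2 for w in ty))
    return num / (nx * ny) if nx and ny else 0.0

def similarity_matrix(sentences, idf):
    """Pairwise similarity matrix for a cluster of tokenized sentences."""
    return [[idf_modified_cosine(x, y, idf) for y in sentences] for x in sentences]
```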
6
Centrality-based Sentence Salience (cont.)
Sentence ID dXsY indicates the Yth sentence in the Xth document; for example, d3s1 is the first sentence of the third document. Figure 1 shows the sentence-by-sentence similarity matrix just described.
Figure 1: Intra-sentence cosine similarities in a subset of cluster d1003t from DUC 2004.
7
Centrality-based Sentence Salience (cont.)
That matrix can also be represented as a weighted graph where each edge shows the cosine similarity between a pair of sentences, with edge thickness reflecting the strength of the similarity (Figure 2).
Figure 2: Weighted cosine similarity graph for the cluster in Figure 1.
8
Degree Centrality
In a cluster of related documents, many of the sentences are expected to be somewhat similar to each other, since they are all about the same topic. Since they are interested in significant similarities, they eliminate the low values in this matrix by defining a threshold, so that the cluster can be viewed as an (undirected) graph: each sentence of the cluster is a node, and significantly similar sentences are connected to each other. They define the degree centrality of a sentence as the degree of the corresponding node in the similarity graph (see the sketch below).
Table 1: Degree centrality scores for the graphs in Figure 3. Sentence d4s1 is the most central sentence for thresholds 0.1 and 0.2.
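A minimal sketch of degree centrality over a thresholded similarity matrix; whether self-loops count toward the degree is a convention, and here we exclude them:

```python
def degree_centrality(sim, threshold):
    """Degree of each node in the thresholded, undirected similarity graph."""
    n = len(sim)
    return [sum(1 for j in range(n) if j != i and sim[i][j] > threshold)
            for i in range(n)]
```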
9
Degree Centrality (cont.)
Figure 3: Similarity graphs that correspond to thresholds 0.1, 0.2, and 0.3, respectively, for the cluster in Figure 1.
The choice of cosine threshold dramatically influences the interpretation of centrality. Too low a threshold may mistakenly take weak similarities into consideration, while too high a threshold may lose many of the similarity relations in a cluster.
10
Eigenvector Centrality and LexRank
When computing degree centrality, they have treated each edge as a vote to determine the overall centrality value of each node. This is a totally democratic method where each vote counts the same. In many types of social networks, however, not all of the relationships are considered equally important: the prestige of a person depends not only on how many friends he has, but also on who his friends are. This suggests considering where the votes come from and taking the centrality of the voting nodes into account when weighting each vote. A straightforward way of formulating this idea is to consider every node as having a centrality value and distributing this centrality to its neighbors:

p(u) = Σ_{v ∈ adj[u]} p(v) / deg(v)

where p(u) is the centrality of node u, adj[u] is the set of nodes that are adjacent to u, and deg(v) is the degree of the node v.
11
Eigenvector Centrality and LexRank (cont.)
A Markov chain is irreducible if any state is reachable from any other state, i.e. for all i, j there exists an n such that X^n(i, j) ≠ 0, where X^n(i, j) gives the probability of reaching state j from state i in n transitions. A Markov chain is aperiodic if for all i, gcd{n : X^n(i, i) > 0} = 1. If a Markov chain has reducible or periodic components, a random walker may get stuck in these components and never visit the other parts of the graph. To solve this problem, Page et al. (1998) suggest reserving some low probability for jumping to any node in the graph. Assigning a uniform probability for jumping to any node in the graph leaves us with the following modified version of Equation 3, which is known as PageRank:

p(u) = d/N + (1 - d) Σ_{v ∈ adj[u]} p(v) / deg(v)

where N is the total number of nodes in the graph, and d is a "damping factor", which is typically chosen in the interval [0.1, 0.2].
12
Eigenvector Centrality and LexRank (cont.)
The convergence property of Markov chains also provides a simple iterative algorithm, called the power method, to compute the stationary distribution (Algorithm 2). Unlike in the original PageRank method, the similarity graph for sentences is undirected, since cosine similarity is a symmetric relation.
Algorithm 2: Power Method for computing the stationary distribution of a Markov chain.
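A minimal NumPy sketch of this power iteration; the function name and the convergence tolerance eps are our choices:

```python
import numpy as np

def power_method(M, eps=1e-8):
    """Power method for the stationary distribution of a Markov chain.

    M is a row-stochastic transition matrix; iterate p <- M^T p
    until the change falls below eps, as in Algorithm 2."""
    n = M.shape[0]
    p = np.full(n, 1.0 / n)  # start from the uniform distribution
    while True:
        p_next = M.T @ p
        if np.linalg.norm(p_next - p) < eps:
            return p_next
        p = p_next
```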
13
Eigenvector Centrality and LexRank (cont.)
They call this new measure of sentence similarity lexical PageRank, or LexRank. Algorithm 3 computes the LexRank scores: the cosine matrix is thresholded, each row is divided by the degree of its node, and the stationary distribution is found with the power method (a sketch follows below).
Algorithm 3: Computing LexRank scores.
Table 2: LexRank scores for the graphs in Figure 3. All the values are normalized so that the largest value of each column is 1. Sentence d4s1 is the most central sentence for thresholds 0.1 and 0.2, with the damping factor set to 0.85.
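Putting the pieces together, a sketch of LexRank with threshold, reusing power_method and the numpy import from the previous sketch; sim is a similarity matrix as built earlier, and d is the uniform-jump probability (a damping factor of 0.85 corresponds to d = 0.15 in this convention):

```python
def lexrank(sim, threshold=0.1, d=0.15):
    """LexRank with threshold: binarize the cosine matrix, row-normalize
    by degree, add the uniform-jump term, and find the stationary vector."""
    n = len(sim)
    adj = np.array([[1.0 if sim[i][j] > threshold else 0.0 for j in range(n)]
                    for i in range(n)])
    deg = adj.sum(axis=1)  # self-similarity keeps every degree >= 1
    M = d / n + (1 - d) * adj / deg[:, None]  # row-stochastic matrix
    return power_method(M)
```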
14
Continuous LexRank
The similarity graphs they have constructed to compute Degree centrality and LexRank are unweighted. This is due to the binary discretization they perform on the cosine matrix using an appropriate threshold, which loses information. To make use of the strength of the similarity links, they multiply the LexRank values of the linking sentences by the weights of the links. Weights are normalized by the row sums, and the damping factor d is added for the convergence of the method:

p(u) = d/N + (1 - d) Σ_{v ∈ adj[u]} [idf-modified-cosine(u, v) / Σ_{z ∈ adj[v]} idf-modified-cosine(z, v)] p(v)
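A sketch of this weighted variant under the same assumptions as the previous snippets (power_method, the numpy import, and a similarity matrix sim):

```python
def continuous_lexrank(sim, d=0.15):
    """Continuous LexRank: keep the raw cosine weights instead of
    thresholding, and normalize each row by its sum."""
    W = np.array(sim, dtype=float)
    M = d / len(W) + (1 - d) * W / W.sum(axis=1, keepdims=True)
    return power_method(M)
```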
15
Experimental Setup: Data Set and Evaluation Method
Task 2: DUC 2003 (30 clusters) and DUC 2004 (50 clusters).
Task 4a: composed of Arabic-to-English machine translations of 24 news clusters.
Task 4b: the human translations of the same clusters.
All data sets are in English.
Evaluation: ROUGE.
16
MEAD Summarization Toolkit
They implemented their methods inside the MEAD summarization system. MEAD is a publicly available toolkit for extractive multi-document summarization. Although it comes as a centroid-based summarization system by default, its feature set can be extended to implement any other method.
The MEAD summarizer consists of three components (a combiner sketch follows below):
1. Feature extraction: each sentence in the input document (or cluster of documents) is converted into a feature vector using the user-defined features.
2. Combiner: the feature vector is converted to a scalar value; the combiner outputs a linear combination of the features using the predefined feature weights.
3. Reranker: the scores for sentences included in related pairs are adjusted upwards or downwards based on the type of relation between the sentences in the pair. The reranker penalizes sentences that are similar to the sentences already included in the summary, so that better information coverage is achieved.
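As an illustration of the combiner step only (this is not MEAD's actual API; MEAD itself is a separate toolkit), a linear combination of per-sentence feature scores might look like:

```python
def combine(feature_vector, weights):
    """Linear combination of named feature scores with predefined weights."""
    return sum(weights[name] * value for name, value in feature_vector.items())

# Hypothetical usage with three features:
score = combine({"Centroid": 0.7, "Position": 1.0, "LexRank": 0.9},
                {"Centroid": 1.0, "Position": 1.0, "LexRank": 2.0})
```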
17
MEAD Summarization Toolkit (cont.)
Three default features that come with the MEAD distribution are Centroid, Position, and Length.
Position: the first sentence of a document gets the maximum Position value of 1, and the last sentence gets the value 0 (see the sketch below).
Length: not a real feature score, but a cutoff value that ignores sentences shorter than the given threshold.
Several rerankers are implemented in MEAD; the default reranker of the system is based on Cross-Sentence Informational Subsumption (CSIS) (Radev, 2000).
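The slide only pins down the endpoints of the Position feature; a linear interpolation between them is one plausible reading, sketched here as an assumption rather than MEAD's actual formula:

```python
def position_feature(i, n):
    """Position score for sentence i of n: 1.0 for the first sentence,
    0.0 for the last, linear in between (our assumption)."""
    return (n - 1 - i) / (n - 1) if n > 1 else 1.0
```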
18
MEAD Summarization Toolkit (cont.)
A MEAD policy is a combination of three components: (a) the command lines for all features, (b) the formula for converting the feature vector to a scalar, and (c) the command line for the reranker. A sample policy is shown in Figure 4. In the example, the feature command lines reference a precomputed list of idf's for English words; the formula gives the relative weight of the three default MEAD features; the reranker is a word-based MMR reranker with a cosine similarity threshold of 0.5; and the number 9 indicates the threshold for selecting a sentence based on the number of words in the sentence.
19
Results and discussion
They have implemented Degree centrality, LexRank with threshold, and continuous LexRank as separate features in MEAD. In addition to these centrality features, they have used the Length and Position features of MEAD as supporting heuristics. The Length cutoff value is set to 9: all the sentences that have less than 9 words are discarded. The weight of the Position feature is fixed to 1 in all runs. Other than these two heuristic features, they used each centrality feature alone, without combining it with the other centrality methods, to allow a better comparison between them. For each centrality feature, they have performed 8 different MEAD runs, setting the weight of the feature to 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 5.0, and 10.0, respectively.
20
Effect of Threshold on Degree and LexRank Centrality
They have demonstrated that very high thresholds may lose almost all of the information in a similarity matrix (Figure 3). To support this claim, they have run Degree and LexRank centrality with different thresholds on their data sets.
Figure 5: ROUGE-1 scores for (a) Degree centrality and (b) LexRank centrality with different thresholds on DUC 2004 Task 2 data.
21
Comparison of Centrality Methods
Table 3 shows the ROUGE scores for their experiments on DUC 2003 Task 2, DUC 2004 Task 2, DUC 2004 Task 4a, and DUC 2004 Task 4b, respectively. They also include two baselines for each data set:
random: extracting random sentences from the cluster; they performed five random runs for each data set, and the results in the tables are for the median runs.
lead-based: using only the Position feature, without any centrality method.
22
Comparison of Centrality Methods (cont.)
Table 4: Summary of official ROUGE scores for DUC 2003 Task 2. Peer codes: manual summaries [A-J] and top five system submissions.
Table 5: Summary of official ROUGE scores for DUC 2004 Tasks 2 and 4. Peer codes: manual summaries [A-Z] and top five system submissions. Systems numbered 144 and 145 are University of Michigan's submissions; 144 uses LexRank in combination with Centroid, whereas 145 uses Centroid alone.
23
Experiments on Noisy Data
The graph-based methods they have proposed consider a document cluster as a whole: the centrality of a sentence is measured by looking at the overall interaction of the sentence within the cluster rather than the local value of the sentence in its document. As a result, their methods are relatively unaffected by the noise; only the lead-based and random baselines are significantly affected by it.
24
Conclusions
They have presented a new approach to defining sentence salience based on graph-based centrality scoring of sentences. Constructing the similarity graph of sentences provides a better view of important sentences than the centroid approach, which is prone to over-generalization of the information in a document cluster. They have introduced three different methods for computing centrality in similarity graphs, and the results of applying these methods to extractive summarization are quite promising. Even the simplest approach they have taken, degree centrality, is a good enough heuristic to perform better than lead-based and centroid-based summaries.
25
Conclusions (cont.)
In LexRank, they have tried to make use of more of the information in the graph, and got even better results in most of the cases. Lastly, they have shown that their methods are quite insensitive to noisy data, which often occurs as a result of imperfect topical document clustering algorithms. In traditional supervised or semi-supervised learning, one cannot make effective use of the features solely associated with unlabeled examples; an eigenvector centrality method, however, can associate a probability with each object, labeled or unlabeled.