NUS at DUC 2007: Using Evolutionary Models of Text. Ziheng Lin, Tat-Seng Chua, Min-Yen Kan, Wee Sun Lee, Long Qiu and Shiren Ye. Department of Computer Science, National University of Singapore.

Presentation transcript:

NUS at DUC 2007: Using Evolutionary Models of Text Ziheng Lin, Tat-Seng Chua, Min-Yen Kan, Wee Sun Lee, Long Qiu and Shiren Ye Department of Computer Science National University of Singapore, Singapore

Summarization
– Traditionally, weighted heuristics were used to select sentences
– With the advent of machine learning, heuristic weights can be tuned
– In the last few years, graphical representations of text have shed new light on the summarization problem
– TextRank and LexRank allow us to naturally incorporate context as a continuum
– How can we enhance this representational model?

Prestige as sentence selection
– One motivation for using graphical methods was to model the problem as finding the prestige of nodes in a social network
– PageRank used a random walk to smooth the effect of non-local context
– This led to TextRank and LexRank
– Contrast with previous graphical approaches (Salton et al., 1994)
– Did we leave anything out of our representation for summarization? Yes: the notion of an evolving network

Social networks change!
Naturally evolving networks (Dorogovtsev and Mendes, 2001):
– Citation networks: new papers can cite old ones, but the old network is static
– The Web: new pages are added, with an old page connecting each new page to the web graph; old pages may update their links

Evolutionary models for summarization
– Writers and readers often follow conventional rhetorical styles: articles are not written or read in an arbitrary way
– Consider the evolution of texts using a very simplistic model: writers write from the first sentence of a text onwards, and readers read from the first sentence of a text onwards
– A simple model: sentences get added incrementally to the graph

Timestamped Graph Construction
Approach:
– These assumptions suggest that we iteratively add sentences into the graph in chronological order
– At each iteration, consider which edges to add to the graph
– For a single document this is simple and straightforward: add the 1st sentence, followed by the 2nd, and so forth, until the last sentence is added
– For multi-document input, treat it as multiple instances of single documents which evolve in parallel; i.e., add the 1st sentences of all documents, followed by all 2nd sentences, and so forth
– This does not really model the chronological ordering between articles; we fix that later with the skew degree (a construction sketch follows below)
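A minimal sketch of the unskewed construction described above, assuming a networkx graph, a generic sentence-level similarity function, and the simplification that only the newly added sentence gains edges at each timestep (the e = 1 setting in the formalism introduced later); names and structure are illustrative, not the authors' code.

```python
import networkx as nx

def build_timestamped_graph(documents, similarity, edges_per_step=1):
    """documents: list of documents, each a list of sentence strings.
    Sentences are added column by column (all 1st sentences, then all 2nd, ...);
    each new sentence is linked to the most similar node(s) already in the graph."""
    graph = nx.Graph()
    max_len = max(len(doc) for doc in documents)
    for t in range(max_len):                           # timestep = sentence index
        for d, doc in enumerate(documents):
            if t >= len(doc):
                continue
            node = (d, t)                              # (document id, sentence number)
            graph.add_node(node, text=doc[t])
            candidates = [v for v in graph.nodes if v != node]
            candidates.sort(key=lambda v: similarity(doc[t], graph.nodes[v]["text"]),
                            reverse=True)
            for v in candidates[:edges_per_step]:
                graph.add_edge(node, v,
                               weight=similarity(doc[t], graph.nodes[v]["text"]))
    return graph
```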

Timestamped Graph Construction
Model:
– Documents as columns: d_i = document i
– Sentences as rows: s_j = the j-th sentence of a document

Timestamped Graph Construction
A multi-document example (figure: timestep-by-timestep growth of the graph over documents doc1–doc3 and sentences sent1–sent3)

An example TSG: DUC 2007 cluster D0703A-A (figure)

Timestamped Graph Construction
Formalization of TSGs:
– The example above is just one instance of a TSG
– Let's generalize and formalize the TSG algorithm
– A timestamped graph algorithm tsg(M) is a 9-tuple (d, e, u, f, σ, t, i, s, τ) that specifies a resulting algorithm which takes as input a set of texts M and outputs a graph G
Salient parameters for TSGs (a configuration sketch follows below):
– e: number of edges to add per vertex per time step
– u: unweighted or weighted edges
– σ: vertex selection function σ(u, G)
– s: skew degree
– For a description of the other parameters, see our TextGraphs-2 paper
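A hedged sketch of the 9-tuple as a configuration object. Only the four parameters described on this slide are annotated with their meanings; the remaining fields are placeholders whose semantics are defined in the TextGraphs-2 paper, not here.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class TSGConfig:
    """One concrete reading of the tuple (d, e, u, f, sigma, t, i, s, tau)."""
    d: Any                    # see the TextGraphs-2 paper
    e: int                    # number of edges to add per vertex per time step
    u: int                    # 0 = unweighted edges, 1 = weighted edges
    f: Any                    # see the TextGraphs-2 paper
    sigma: Callable           # vertex selection function sigma(u, G)
    t: str                    # text unit (the examples later use "sentence")
    i: Any                    # see the TextGraphs-2 paper
    s: int                    # skew degree
    tau: Optional[Callable]   # see the TextGraphs-2 paper
```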

Timestamped Graph Construction
Vertex selection function σ(u, G):
– One strategy: among the nodes not yet connected to u in G, choose the one that has the highest similarity with u (a minimal sketch is given below)
– Similarity functions: Jaccard, cosine, concept links (Ye et al., 2005)
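A minimal sketch of this vertex selection strategy, shown here with Jaccard similarity (one of the listed options) and the networkx-style graph from the earlier construction sketch; function names are illustrative.

```python
def jaccard(a, b):
    """Jaccard similarity between the token sets of two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def max_similarity_selection(u, graph, similarity=jaccard):
    """sigma(u, G): among nodes not yet connected to u in G, return the one
    whose sentence has the highest similarity with u's sentence."""
    candidates = [v for v in graph.nodes
                  if v != u and not graph.has_edge(u, v)]
    if not candidates:
        return None
    return max(candidates,
               key=lambda v: similarity(graph.nodes[u]["text"],
                                        graph.nodes[v]["text"]))
```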

Timestamped Graph Construction
Skew degree s:
– Models how nodes in multi-document graphs are added
– Skew degree = how many iterations to wait before adding the 1st sentence of the next document
Motivation:
– Up to now, the TSG models have assumed that the authors start writing all the documents at the same time
– In reality, some documents are authored later than others, giving updates or reporting changes
– This information can be inferred from the timestamps of the articles or from date extraction on the articles themselves

Skew Degree Examples
(Figures: graphs over documents d1–d4, with time(d1) < time(d2) < time(d3) < time(d4), constructed skewed by 1, skewed by 2, and freely skewed.)
– Freely skewed: only add a new document when it would be linked to by some node via the vertex selection function σ
A sketch of fixed-skew construction follows below.
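A hedged sketch of fixed-skew construction, building on the earlier sketches (networkx, max_similarity_selection): document k only starts contributing sentences after k × s timesteps. The freely skewed variant, which waits until σ selects a node of the new document, is not sketched here.

```python
def build_skewed_tsg(documents, similarity, skew_degree=1):
    """Like build_timestamped_graph, but document k (in chronological order)
    begins at timestep k * skew_degree, modelling later-authored documents."""
    graph = nx.Graph()
    max_len = max(len(doc) for doc in documents)
    total_steps = max_len + skew_degree * (len(documents) - 1)
    for t in range(total_steps):
        for d, doc in enumerate(documents):
            local = t - d * skew_degree          # sentence index within document d
            if local < 0 or local >= len(doc):
                continue
            node = (d, local)
            graph.add_node(node, text=doc[local])
            v = max_similarity_selection(node, graph, similarity)
            if v is not None:
                graph.add_edge(node, v,
                               weight=similarity(doc[local], graph.nodes[v]["text"]))
    return graph
```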

Timestamped Graph Construction
Representations:
– We can model a number of different algorithms using this 9-tuple formalism (d, e, u, f, σ, t, i, s, τ)
– The toy example given earlier: (f, 1, 0, 1, max-cosine-based, sentence, 1, 0, null)
– LexRank graphs: (u, N, 1, 1, cosine-based, sentence, L_max, 0, null), where N = the total number of sentences in the cluster and L_max = the maximum document length; i.e., all sentences are added into the graph in one timestep, each connected to all others, and cosine scores are used as edge weights
These two settings are shown below using the configuration sketch from earlier.
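Under the hypothetical TSGConfig sketch from before, the two configurations on this slide would look roughly as follows; N, L_max and the σ arguments are illustrative stand-ins.

```python
N, L_max = 250, 30   # e.g. 250 sentences in the cluster, longest document has 30

# Toy example: forward edges, 1 edge per vertex per timestep, unweighted,
# max-cosine-style vertex selection, one sentence added per timestep, no skew.
toy_config = TSGConfig(d="f", e=1, u=0, f=1, sigma=max_similarity_selection,
                       t="sentence", i=1, s=0, tau=None)

# LexRank-style graph: all sentences added in one timestep (i = L_max), each
# connected to all others (e = N), cosine scores as edge weights (u = 1).
lexrank_config = TSGConfig(d="u", e=N, u=1, f=1, sigma=max_similarity_selection,
                           t="sentence", i=L_max, s=0, tau=None)
```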

Summarization using TSGs

System Overview
Sentence splitting (a minimal sketch follows below):
– Detect and mark sentence boundaries
– Annotate each sentence with the doc ID and the sentence number
– E.g., XIE : 4 March 1998 from Xinhua News; XIE : the 14th sentence of this document
Graph construction:
– Construct the TSG in this phase
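A hedged sketch of the preprocessing step: a naive regex sentence splitter plus (doc ID, sentence number) annotation; the splitter actually used by the system is not specified on the slide.

```python
import re

def split_and_annotate(doc_id, text):
    """Split a document into sentences and tag each with (doc_id, sentence_no)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [((doc_id, n + 1), s) for n, s in enumerate(sentences)]
```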

System Overview
Sentence ranking:
– Apply a topic-sensitive random walk on the graph to redistribute the weights of the nodes
Sentence extraction:
– Extract the top-ranked sentences
– Two different modified MMR re-rankers are used, depending on whether it is the main or the update task

Differences in main and update task processing
Main task:
1. Construct a TSG for the input cluster
2. Run topic-sensitive PageRank on the TSG
3. Apply the first modified version of MMR to extract sentences
Update task (a sketch of this incremental loop follows below):
Cluster A:
– Construct a TSG for cluster A
– Run topic-sensitive PageRank on the TSG
– Apply the second modified version of MMR to extract sentences
Cluster B:
– Construct a TSG for clusters A and B
– Run topic-sensitive PageRank on the TSG; only retain sentences from B
– Apply the second modified version of MMR to extract sentences
Cluster C:
– Construct a TSG for clusters A, B and C
– Run topic-sensitive PageRank on the TSG; only retain sentences from C
– Apply the second modified version of MMR to extract sentences
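A hedged sketch of the update-task loop above. The arguments build_tsg, pagerank and mmr_update stand in for the components sketched elsewhere in this deck, and nodes are assumed to be (document index, sentence number) pairs as in the earlier construction sketches.

```python
def update_task_pipeline(clusters, query, build_tsg, pagerank, mmr_update):
    """clusters: ordered list of clusters (A, B, C, ...), each a list of documents."""
    summaries, previously_read = [], []
    for k in range(len(clusters)):
        seen = [doc for cluster in clusters[:k + 1] for doc in cluster]
        graph = build_tsg(seen)                      # TSG over all clusters read so far
        scores = pagerank(graph, query)              # topic-sensitive random walk
        # only retain sentences that come from the current (newest) cluster
        first_new = len(seen) - len(clusters[k])
        current = {node: s for node, s in scores.items() if node[0] >= first_new}
        selected = mmr_update(current, graph, previously_read)
        summaries.append(selected)
        previously_read.extend(selected)
    return summaries
```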

Sentence Ranking
– Once a timestamped graph is built, we want to compute a prestige score for each node
– PageRank: use an iterative method that lets the weights of the nodes redistribute until stability is reached
– Similarities as edges → weighted edges; query → topic-sensitive
(Equation on slide: a topic-sensitive (Q) portion combined with the standard random-walk term; a hedged sketch follows below.)
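A hedged sketch of a topic-sensitive random walk of the kind the slide describes: a query-relevance (Q) jump term mixed with a standard weighted random-walk term and iterated to convergence. The damping value and the exact equation used by the authors are not reproduced here.

```python
import numpy as np

def topic_sensitive_pagerank(graph, query_relevance, jump=0.15, tol=1e-6, max_iter=100):
    """graph: weighted networkx graph; query_relevance: node -> similarity to the query."""
    nodes = list(graph.nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)

    # Row-normalised transition matrix built from the (symmetric) edge weights.
    W = np.zeros((n, n))
    for a, b, data in graph.edges(data=True):
        w = data.get("weight", 1.0)
        W[idx[a], idx[b]] = w
        W[idx[b], idx[a]] = w
    row_sums = W.sum(axis=1, keepdims=True)
    W = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

    # Topic-sensitive (Q) jump distribution from the query relevance scores.
    q = np.array([max(query_relevance.get(v, 0.0), 0.0) for v in nodes])
    q = q / q.sum() if q.sum() > 0 else np.full(n, 1.0 / n)

    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        p_next = jump * q + (1 - jump) * (W.T @ p)   # Q portion + random-walk term
        if np.abs(p_next - p).sum() < tol:
            p = p_next
            break
        p = p_next
    return dict(zip(nodes, p))
```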

Sentence Extraction – Main task
– The original MMR integrates a penalty based on the maximal similarity between the candidate and any single selected item
– Ye et al. (2005) introduced a modified MMR that integrates a penalty based on the total similarity between the candidate sentence and all selected sentences
– Score(s) = PageRank score of s; S = the set of selected sentences
– This re-ranker is used in the main task
(Equation on slide: penalty term summing similarity to all previously selected sentences; a hedged sketch follows below.)
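A hedged sketch of the main-task re-ranker: a candidate's PageRank score is traded off against its total similarity to all sentences selected so far. λ and the exact combination are illustrative; only the shape of the penalty is taken from the slide.

```python
def mmr_rerank_main(scores, texts, similarity, budget, lam=0.8):
    """scores: node -> PageRank score; texts: node -> sentence string."""
    selected, candidates = [], set(scores)
    while candidates and len(selected) < budget:
        def mmr(c):
            # penalty: total similarity to all previously selected sentences
            penalty = sum(similarity(texts[c], texts[s]) for s in selected)
            return lam * scores[c] - (1 - lam) * penalty
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```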

Sentence Extraction – Update task
– The update task assumes readers have already read the previous cluster(s), which implies that we should not select sentences carrying information redundant with the previous cluster(s)
– We propose a modified MMR for the update task: consider the total similarity of the candidate sentence with all selected sentences and with sentences in the previously-read cluster(s)
– P contains some top-ranked sentences from the previous cluster(s)
(Equation on slide: penalty term including overlap with the previous cluster(s); a hedged sketch follows below.)
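A hedged sketch of the update-task re-ranker: the penalty additionally counts similarity to P, a set of top-ranked sentences from the previously-read cluster(s). The relative weighting of the two penalty terms is illustrative.

```python
def mmr_rerank_update(scores, texts, similarity, previous_sentences, budget, lam=0.8):
    """previous_sentences: P, top-ranked sentences from previously-read cluster(s)."""
    selected, candidates = [], set(scores)
    while candidates and len(selected) < budget:
        def mmr(c):
            # penalty: similarity to selected sentences plus to previous clusters
            penalty = sum(similarity(texts[c], texts[s]) for s in selected)
            penalty += sum(similarity(texts[c], p) for p in previous_sentences)
            return lam * scores[c] - (1 - lam) * penalty
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```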

Evaluation and Analysis

Macroscopic Evaluation
Main task parameterization:
– Graph construction: (u, 1, 1, 1, concept-link-based, sentence, 1, 0, null)
– Sentence extraction: λ = 0.8 and δ = 6, tuned on the DUC 2005 and 2006 datasets
– Ranked 12th for ROUGE-2 and 10th for ROUGE-SU4 among 32 systems

Macroscopic Evaluation
Update task parameterization:
– Graph construction: (u, 1, 1, 1, concept-link-based, sentence, 1, 0, null)
– Sentence extraction: λ = 0.8, δ = 3 and γ = 6, based on our experience
– Ranked 3rd in ROUGE-2, 4th in ROUGE-SU4 and 6th in average pyramid score among 24 systems

What do we think?
– Better performance on the update task
– The TSG is better tailored to dealing with update summaries
– The second modified version of MMR works better at filtering out information that is redundant with the previously-read cluster(s)

Conclusion
– Proposed a timestamped graph model for text understanding and summarization: sentences are added one at a time
– Parameterized the model with nine variables; explained several variables important to the iterative TSG formalism
– Modified MMR re-ranking to fit the update task
Future Work:
– The freely skewed model
– Empirical and theoretical properties of TSGs (e.g., in-degree distribution)

Backup Slides

References
– Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22.
– Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of EMNLP 2004.
– S. N. Dorogovtsev and J. F. F. Mendes. 2001. Evolution of networks. Submitted to Advances in Physics, 6 March 2001.
– Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7).
– Jon M. Kleinberg. 1998. Authoritative sources in a hyperlinked environment. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms.
– Shiren Ye, Long Qiu, Tat-Seng Chua, and Min-Yen Kan. 2005. NUS at DUC 2005: Understanding documents via concept links. In Proceedings of DUC 2005.