Generating Impact-Based Summaries for Scientific Literature
Qiaozhu Mei, ChengXiang Zhai
University of Illinois at Urbana-Champaign
Motivation
Fast growth of publications
– >100k papers in DBLP; >10 references per paper
How to summarize a scientific paper?
– Author's view: abstracts, introductions
  May not be what the readers received; may change over time
– Reader's view: the impact of the paper
  Impact factor is a single number; what about a summary of the content?
Example: the author's view ("proof of xxx; new definition of xxx; apply xxx technique") vs. the reader's view 20 years later ("state-of-the-art algorithm; evaluation metric")
What should an impact summary look like?
Citation Contexts: Impact, but…
Describe how other authors view/comment on the paper
– Imply the impact
Similar to anchor text on the web graph, but:
– Usually more than one sentence (informative)
– Usually mixed with discussions/comparisons of other papers (noisy)
Example: "… They have been also successfully used in part of speech tagging [7], machine translation [3, 5], information retrieval [4, 20], transliteration [13] and text summarization [14]. … For example, Ponte and Croft [20] adopt a language modeling approach to information retrieval. …"
Our Definition of Impact Summary
Target: an extractive summary (picked sentences) of the impact of a paper
– Author-picked sentences (abstract, introduction, content): good for a summary, but do not reflect the impact
– Reader-composed sentences (citation contexts, e.g., "… Ponte and Croft [20] adopt a language modeling approach to information retrieval. …", "… probabilistic models, as well as to the use of other recent models [19, 21], the statistical properties …"): a good signal of impact, but too noisy to be used as a summary
Solution: citation contexts infer the impact; the original content supplies the summary sentences
Rest of this Talk
A feasibility study: a language-modeling-based approach
– Sentence retrieval
– Estimation of impact language models
Experiments
Conclusion
Language Modeling in Information Retrieval
[Figure: documents d1 … dN are estimated as document LMs (smoothed with the collection LM); the query q is estimated as a query LM; documents are ranked by negative KL-divergence]
Impact-based Summarization as Sentence Retrieval
[Figure: sentences s1 … sN of paper D are estimated as sentence LMs; the document D together with its citation contexts c1 … cM yields an impact LM θI; sentences are ranked by negative KL-divergence, and the top-ranked sentences form the summary]
Key problem: estimating θI
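The ranking step above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the uniform reference model, and the Dirichlet prior `mu` are all assumptions for the sketch.

```python
import math
from collections import Counter

def unigram_lm(text, vocab, mu=10, ref=None):
    # Unigram LM with Dirichlet smoothing toward a reference model
    # (uniform over the vocabulary by default, as an assumption).
    counts = Counter(text.lower().split())
    n = sum(counts.values())
    ref = ref or {w: 1.0 / len(vocab) for w in vocab}
    return {w: (counts[w] + mu * ref[w]) / (n + mu) for w in vocab}

def neg_kl(impact_lm, sent_lm):
    # Negative KL-divergence -D(theta_I || theta_s); higher means the
    # sentence LM is closer to the impact LM.
    return -sum(p * math.log(p / sent_lm[w])
                for w, p in impact_lm.items() if p > 0)

def rank_sentences(sentences, impact_lm, vocab, k=2):
    # Score every sentence against the impact LM and keep the top k
    # as the extractive summary.
    scored = [(neg_kl(impact_lm, unigram_lm(s, vocab)), s) for s in sentences]
    return [s for _, s in sorted(scored, reverse=True)[:k]]
```

A sentence sharing vocabulary with the impact LM ranks above an unrelated one, which is exactly the behavior the retrieval view of summarization relies on.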
Estimating Impact Language Models
Interpolation of the document language model and the citation-context language models (D with c1 … cM)
– Constant coefficient
– Dirichlet smoothing
– Set λj with features of cj: f1(cj) = |cj|, and…
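One way to realize the interpolation above is sketched below, assuming the deck's f1(cj) = |cj| feature as the per-citation weight; the mixing coefficient `lam` and the helper names are illustrative, not the paper's exact estimator.

```python
from collections import Counter

def ml_lm(text):
    # Maximum-likelihood unigram LM over whitespace tokens.
    counts = Counter(text.lower().split())
    n = sum(counts.values())
    return {w: c / n for w, c in counts.items()}

def interpolate(lms, weights):
    # Mixture of unigram LMs with the given weights (normalized here).
    total = sum(weights)
    mix = {}
    for lm, w in zip(lms, weights):
        for word, p in lm.items():
            mix[word] = mix.get(word, 0.0) + (w / total) * p
    return mix

def impact_lm(doc, citations, lam=0.5):
    # theta_I = (1 - lam) * doc LM + lam * citation mixture, where each
    # citation context c_j is weighted by its length |c_j| (f1 feature).
    doc_lm = ml_lm(doc)
    lens = [len(c.split()) for c in citations]
    cite_mix = interpolate([ml_lm(c) for c in citations], lens)
    return interpolate([doc_lm, cite_mix], [1 - lam, lam])
```

Words that appear only in citation contexts still receive probability mass, which is what lets reader commentary pull the summary toward the paper's impact.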
Specific Feature – Citation-based Authority
Assumption: a high-authority paper has more trustworthy comments (citation contexts)
– Weight it more in the impact language model
Authority: PageRank on the citation graph
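PageRank on a citation graph can be computed by plain power iteration, as in this self-contained sketch (the damping factor `d` and iteration count are conventional defaults, not values from the deck):

```python
def pagerank(graph, d=0.85, iters=50):
    # Power-iteration PageRank; graph maps each paper to the list of
    # papers it cites (edges point from citing to cited paper).
    nodes = set(graph) | {v for outs in graph.values() for v in outs}
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            outs = graph.get(u, [])
            if outs:
                share = pr[u] / len(outs)
                for v in outs:
                    nxt[v] += d * share
            else:
                # Dangling node: spread its mass uniformly.
                for v in nodes:
                    nxt[v] += d * pr[u] / n
        pr = nxt
    return pr
```

A paper cited by many others accumulates score, so its citation contexts can be weighted more heavily in the impact LM.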
Specific Feature – Citation Context Proximity
Weight citation sentences according to their proximity to the citation label (k = distance to the citation label)
Example citation context: "… There has been a lot of effort in applying the notion of language modeling and its variations to other problems. For example, Ponte and Croft [20] adopt a language modeling approach to information retrieval. They argue that much of the difficulty for IR lies in the lack of an adequate indexing model. Instead of making prior parametric assumptions about the similarity of documents, they propose a non-parametric approach to retrieval based on probabilistic language modeling. Empirically, their approach significantly outperforms traditional tf*idf weighting on two different collections and query sets. …"
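The proximity weighting can be folded directly into the citation LM, using the Pr(s) = 1/α^k decay that appears in the component study; the function below is a sketch under that assumption, with `alpha` and the sentence-level distance measure as illustrative choices.

```python
from collections import defaultdict

def proximity_lm(sentences, label_idx, alpha=2.0):
    # Unigram LM over one citation context where sentence i contributes
    # with weight 1 / alpha**k, k = |i - label_idx| being its distance
    # from the sentence containing the citation label.
    mass = defaultdict(float)
    for i, sent in enumerate(sentences):
        w = 1.0 / alpha ** abs(i - label_idx)
        for word in sent.lower().split():
            mass[word] += w
    total = sum(mass.values())
    return {w: m / total for w, m in mass.items()}
```

Sentences next to the citation label (which tend to actually discuss the cited paper) dominate the estimate, while off-topic surrounding discussion decays geometrically.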
Experiments
Gold standard:
– Human-generated summaries for the 14 most-cited papers in SIGIR
Baselines:
– Random; LEAD (likely to cover abstract/introduction)
– MEAD – single document
– MEAD – document + citations (multi-document)
Evaluation metrics:
– ROUGE-1 (unigram co-occurrence), ROUGE-L (longest common subsequence)
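The two metrics reduce to simple token computations; here is a recall-oriented sketch of both (a simplification of the full ROUGE toolkit, without stemming or multi-reference handling):

```python
from collections import Counter

def rouge1(candidate, reference):
    # ROUGE-1 recall: fraction of reference unigrams covered by the candidate.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], c) for w, c in ref.items())
    return overlap / sum(ref.values())

def rouge_l(candidate, reference):
    # ROUGE-L recall: longest common subsequence length over reference length,
    # computed by standard dynamic programming.
    a, b = candidate.lower().split(), reference.lower().split()
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1] / len(b)
```

ROUGE-1 rewards any shared words; ROUGE-L additionally rewards shared word order, which is why the deck reports both.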
Basic Results
Baselines: Random, LEAD, MEAD-Doc, MEAD-Doc+Cite. Relative improvement of LM (KL-Div) over the best baseline:

Length | ROUGE-1 | ROUGE-L
3      | +7.3%   | +12.8%
5      | +16.5%  | +22.7%
10     | +12.9%  | +16.2%
15     | +6.6%   | +8.5%
Component Study
Impact language model (by ROUGE-1 and ROUGE-L):
– Impact LM = Doc LM << Impact LM = Citation LM << Interpolation(Doc LM, Cite LM)
– Dirichlet interpolation > constant coefficient
Component Study (Cont.)
Authority and proximity (PageRank on/off × proximity off / Pr(s) = 1/α^k):
– Both PageRank and proximity improve results
– PageRank + proximity improves only marginally over either alone
– Q: how best to combine PageRank and proximity?
Non-impact-based Summary
Paper = "A study of smoothing methods for language models applied to ad hoc information retrieval"
1. Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition.
2. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model.
3. On the one hand, theoretical studies of an underlying model have been developed; this direction is, for example, represented by the various kinds of logic models and probabilistic models (e.g., [14, 3, 15, 22]).
A good big picture of the field (LM-based IR), but not about the contribution of the paper (smoothing in LMIR)
Impact-based Summary
Paper = "A study of smoothing methods for language models applied to ad hoc information retrieval"
1. Figure 5: Interpolation versus backoff for Jelinek-Mercer (top), Dirichlet smoothing (middle), and absolute discounting (bottom).
2. Second, one can de-couple the two different roles of smoothing by adopting a two-stage smoothing strategy in which Dirichlet smoothing is first applied to implement the estimation role and Jelinek-Mercer smoothing is then applied to implement the role of query modeling.
3. We find that the backoff performance is more sensitive to the smoothing parameter than that of interpolation, especially in Jelinek-Mercer and Dirichlet prior.
Specific to smoothing LMs in IR, especially the concrete smoothing techniques (Dirichlet and Jelinek-Mercer)
Related Work
Text summarization (extractive)
– E.g., Luhn '58; McKeown and Radev '95; Goldstein et al. '99; Kraaij et al. '01 (using language modeling)
Technical paper summarization
– Paice and Jones '93; Saggion and Lapalme '02; Teufel and Moens '02
Citation context
– Ritchie et al. '06; Schwartz et al. '07
Anchor text and hyperlink structure
Language modeling for information retrieval
– Ponte and Croft '98; Zhai and Lafferty '01; Lafferty and Zhai '01
Conclusion
A novel problem: impact-based summarization
A language modeling approach
– Citation contexts inform an impact language model
– Accommodating authority and proximity features
A feasibility study rather than an optimized system
Future work
– Optimize features/methods
– Large-scale evaluation
Thanks!
Feature Study
What we have explored:
– Unigram language models: document; citation context
– Length features
– Authority features
– Proximity features
– Position-based re-ranking
What we haven't done:
– Redundancy removal (diversity)
– Deeper NLP features; n-gram features
– Learning to weight features
Scientific Literature with Citations
[Figure: a paper and the papers citing it, each citation surrounded by a citation context, e.g.:]
"… They have been also successfully used in part of speech tagging [7], machine translation [3, 5], information retrieval [4, 20], transliteration [13] and text summarization [14]. … For example, Ponte and Croft [20] adopt a language modeling approach to information retrieval. …"
"… While the statistical properties of text corpora are fundamental to the use of probabilistic models, as well as to the use of other recent models [19, 21], the statistical properties …"
Language Modeling in Information Retrieval
Estimate document language models
– Unigram multinomial distribution of words: θd = {P(w|d)}
Rank documents with query likelihood
– R(d, q) ∝ P(q|d), a special case of negative KL-divergence: R(d, q) ∝ -D(θq || θd)
Smooth the document language model
– Interpolation-based: p(w|d) ≈ λ p_ML(w|d) + (1 - λ) p(w|REF)
– Dirichlet smoothing empirically performs well
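The query-likelihood ranking with Dirichlet smoothing above can be sketched directly; the corpus strings and the value of `mu` are illustrative, and the collection model here is just the pooled corpus counts.

```python
import math
from collections import Counter

def query_likelihood(query, doc, collection, mu=100):
    # log P(q|d) under Dirichlet smoothing:
    #   p(w|d) = (c(w; d) + mu * p(w|C)) / (|d| + mu)
    d = Counter(doc.lower().split())
    coll = Counter(collection.lower().split())
    n_d, n_c = sum(d.values()), sum(coll.values())
    score = 0.0
    for w in query.lower().split():
        p_c = coll.get(w, 0) / n_c
        if p_c == 0:
            p_c = 1e-9  # floor for out-of-collection words
        score += math.log((d.get(w, 0) + mu * p_c) / (n_d + mu))
    return score
```

Smoothing keeps the score finite when a query word is absent from the document, while documents that actually contain the query words still score higher.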