Yin Yang (Hong Kong University of Science and Technology)
Nilesh Bansal (University of Toronto)
Wisam Dakka (Google)
Panagiotis Ipeirotis (New York University)
Nick Koudas (University of Toronto)
Dimitris Papadias (Hong Kong University of Science and Technology)
- Explosion of Web 2.0 content: blogs, micro-blogs, social networking
- Need for "cross reference" on the web: after we read a news article, we wonder whether any blogs discuss it, and vice versa
- A service of the BlogScope system, a real blog search engine serving 20K users/day
- Input: a text document
- Output: relevant blog posts
- Methodology:
  - extract key phrases from the input document
  - use these phrases to query BlogScope
- Novel Query-by-Document (QBD) model
- Practical phrase extractor
- Phrase set enhancement with Wikipedia knowledge (QBD-W)
- Evaluation of all proposed methods using Amazon Mechanical Turk
  - human annotators take the tasks seriously because they are paid for them
Example of relevance feedback (RF).
Distinctions between RF and QBD:
- RF involves interaction, while QBD does not
- RF is most effective for improving recall, whereas QBD aims at both high precision and recall
- RF starts with a keyword query; QBD directly takes a document as input
Two classes of phrase extraction methods:
- very slow but accurate, from the machine learning community
- practical but less accurate (our method falls in this category)
Phrase extraction in QBD has distinct goals:
- document retrieval accuracy matters more than the accuracy of the phrase set itself
- a better phrase extractor is not necessarily more suitable for QBD, as shown in our experiments
- Query expansion: used when the user's keyword set does not properly express her information need
- PageRank, TrustRank, ...: QBD-W follows this framework
- Wikipedia mining
Recall that Query-by-Document
- extracts key phrases from the input document
- and then queries them against a search engine
Idea: given a query document D,
- identify all phrases in D
- score each individual phrase
- obtain the set of phrases with the highest scores, and refine it
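To make the pipeline concrete, here is a minimal Python sketch. The n-gram candidate generation, the TF/IDF-style score, and the names background_df and n_docs are illustrative assumptions, not the authors' implementation; the actual extractor and scoring functions appear on the following slides.

```python
from collections import Counter
import math
import re

def query_by_document(document, background_df, n_docs, k=10, max_len=3):
    """Extract candidate phrases from a document, score them, return the top-k.

    background_df maps a phrase to its document frequency in a background corpus
    of n_docs documents; both are stand-ins for BlogScope's corpus statistics.
    """
    words = re.findall(r"[A-Za-z][A-Za-z-]*", document.lower())

    # 1. Identify candidate phrases. Here: all word n-grams up to max_len;
    #    the actual extractor uses part-of-speech patterns (shown later).
    candidates = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            candidates[" ".join(words[i:i + n])] += 1

    # 2. Score each phrase. A TF/IDF-style score stands in for the paper's
    #    f_t / f_l scoring functions.
    def score(phrase, tf):
        return tf * math.log(n_docs / background_df.get(phrase, 1))

    ranked = sorted(candidates.items(), key=lambda kv: score(*kv), reverse=True)

    # 3. Take the top-k phrases; the refinement rules are described on a later
    #    slide. The result is then sent as queries to the search engine (BlogScope).
    return [phrase for phrase, _ in ranked[:k]]
```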
- Process the document with a part-of-speech (POS) tagger: nouns, adjectives, verbs, ...
- We compiled a list of POS patterns, indexed by a POS trie forest
- Each term sequence following such a POS pattern is considered a phrase (see the sketch after the pattern examples below)
Pattern   Instance
N         Nintendo
JN        global warming
NN        Apple computer
JJN       declarative approximate selection
NNN       computer science department
JCJN      efficient and effective algorithm
JNNN      Junior United States Senator
NNNN      Microsoft Host Integration Server
...       ...
NNNNN     United States President Barack Obama
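A small sketch of how a POS trie can be used to match such patterns, assuming tokens have already been tagged and tags abbreviated as in the table (N = noun, J = adjective, C = conjunction). The pattern list and trie layout are simplified illustrations, not the authors' exact data structure.

```python
PATTERNS = {("N",), ("J", "N"), ("N", "N"), ("J", "J", "N"),
            ("N", "N", "N"), ("J", "C", "J", "N")}

def build_trie(patterns):
    trie = {}
    for pattern in patterns:
        node = trie
        for tag in pattern:
            node = node.setdefault(tag, {})
        node["$"] = True          # marks the end of a complete pattern
    return trie

def extract_phrases(tagged_tokens, trie):
    """tagged_tokens: list of (word, simplified_pos_tag) pairs."""
    phrases = []
    for start in range(len(tagged_tokens)):
        node, words = trie, []
        for word, tag in tagged_tokens[start:]:
            if tag not in node:
                break
            node = node[tag]
            words.append(word)
            if "$" in node:                 # a pattern ends at this token
                phrases.append(" ".join(words))
    return phrases

# Example: "efficient and effective algorithm" matches the JCJN pattern.
tokens = [("efficient", "J"), ("and", "C"), ("effective", "J"), ("algorithm", "N")]
print(extract_phrases(tokens, build_trie(PATTERNS)))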
Two scoring functions:
- f_t, based on TF/IDF
- f_l, based on the concept of mutual information
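The slide does not give the formula for f_t; a standard TF/IDF-style phrase score, which the paper's definition presumably resembles, is

$$ f_t(p, D) = \mathrm{tf}(p, D)\cdot\log\frac{N}{\mathrm{df}(p)}, $$

where tf(p, D) is the frequency of phrase p in document D, df(p) is the number of documents in the background corpus containing p, and N is the corpus size.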
- Extracts the most characteristic phrases from the input document D
- but may return term sequences that are not really phrases
- Example: "moment Dow Jones" in "at this moment Dow Jones"
- Mutual information (MI): measures how much more likely a pair of events is to occur together than would be expected from their individual probabilities
- Eliminates non-phrases
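For reference, the standard pointwise mutual information of a two-term sequence t1 t2 is

$$ \mathrm{MI}(t_1, t_2) = \log\frac{P(t_1, t_2)}{P(t_1)\,P(t_2)}, $$

so a sequence such as "moment Dow" that co-occurs no more often than chance receives a low score. The paper's f_l may generalize this to longer phrases.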
- Take the top-k phrases with the highest scores
- Eliminate duplicates: two different phrases may carry similar meanings
- Remove phrases that are
  - subsumed by another phrase with a higher score
  - different from a better phrase only in the last term
  - and other rules ...
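A minimal sketch of the two named rules, applied to phrases sorted by descending score; the additional rules mentioned on the slide are omitted.

```python
def refine(scored_phrases):
    """scored_phrases: list of (phrase, score) pairs, highest score first."""
    kept = []
    for phrase, score in scored_phrases:
        # Rule 1: drop a phrase subsumed by an already-kept, higher-scoring phrase.
        subsumed = any(phrase in better for better, _ in kept)
        # Rule 2: drop a phrase that differs from a better phrase only in the last term.
        words = phrase.split()
        last_term_variant = any(
            len(words) > 1 and better.split()[:-1] == words[:-1]
            for better, _ in kept
        )
        if not subsumed and not last_term_variant:
            kept.append((phrase, score))
    return kept
```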
Motivation: the user may also be interested in web documents that are related to the given one but do not contain the same key phrases.
- Example: after reading an article on Michelle Obama, the user may also want to learn about her husband and past American presidents.
Main idea:
- obtain an initial phrase set with QBD
- use Wikipedia knowledge to identify phrases that are related to the initial phrases
- our method follows the spreading-activation framework
Given an initial phrase set:
- locate the nodes corresponding to these phrases on the Wikipedia graph
- assign weights to these nodes
- iteratively spread node weights to neighbors
  - assume the random surfer model
  - with a certain probability, return to one of the initial nodes
- S is the initial phrase set
- Initial weights are normalized
- s(c_v) is the score of phrase c_v, assigned by QBD
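A natural reading of this slide (an assumption, since the exact formula is not reproduced here) is that each initial node starts with weight proportional to its QBD score:

$$ w_0(v) = \frac{s(c_v)}{\sum_{u \in S} s(c_u)} \quad \text{for } v \in S, \qquad w_0(v) = 0 \text{ otherwise.} $$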
Example transition probabilities on the Wikipedia graph:

               Wii    Sony   Nintendo  Play Station  Tomb Raider
Wii            0      2/10   7/10      1/10          0
Sony           0      0      0         4/4           0
Nintendo       5/6    1/6    0         0             0
Play Station   2/11   6/11   1/11      0             2/11
Tomb Raider    0      0      0         1/1           0
Random surfer at node v':
- with probability α_{v'}, proceed to a neighbor
- otherwise, return to one of the initial nodes
- α_{v'} is a function of the node v'
- α_v is not a constant, unlike in other algorithms (e.g., TrustRank)
- α_v gets smaller, and eventually drops to zero, for nodes increasingly farther away from the initial ones
- this reduces the CPU overhead of RelevanceRank computation, since only a subset of nodes is considered
- important, as RelevanceRank is calculated online
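A minimal sketch of the weight-spreading iteration, assuming a simple linear decay of α with hop distance and a hard cutoff max_dist; the paper's exact decay schedule and update rule may differ.

```python
from collections import deque

def relevance_rank(graph, initial_weights, max_dist=3, iterations=50):
    """graph: {node: {neighbor: transition_probability}}
    initial_weights: normalized weights of the initial phrase nodes."""
    # Hop distance of every reachable node from the initial set (BFS).
    dist = {v: 0 for v in initial_weights}
    queue = deque(initial_weights)
    while queue:
        v = queue.popleft()
        for u in graph.get(v, {}):
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)

    # Distance-dependent continuation probability: alpha shrinks with distance
    # and is zero beyond max_dist, so far-away nodes are never touched.
    def alpha(v):
        return max(0.0, 0.85 * (1.0 - dist[v] / (max_dist + 1)))

    nodes = [v for v, d in dist.items() if d <= max_dist]
    rank = {v: initial_weights.get(v, 0.0) for v in nodes}
    for _ in range(iterations):
        new = {v: 0.0 for v in nodes}
        restart = 0.0
        for v in nodes:
            a = alpha(v)
            restart += (1.0 - a) * rank[v]        # mass that jumps back home
            for u, p in graph.get(v, {}).items():
                if u in new:
                    new[u] += a * rank[v] * p     # mass that follows an edge
        for v in nodes:                           # restart mass returns to the
            new[v] += restart * initial_weights.get(v, 0.0)  # initial nodes
        rank = new
    return rank
```

Run on the example transition matrix above, with all initial weight on a single node, such an iteration converges to a stationary distribution like the one shown in the next table.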
Example run of RelevanceRank (node weights per iteration):

Iteration   Wii    Sony   Nintendo  Play Station
0           0      0      1         0
1           0.67   0.13   0.1       0
2           0.13   0.06   0.74      0.06
3           0.49   0.11   0.38      0.02
4           0.25   0.08   0.62      0.05
5           0.41   0.10   0.46      0.03
...         ...    ...    ...       ...
Infinite    0.35   0.09   0.52      0.03
Methodology: employ human annotators at Amazon Mechanical Turk
Dataset: a random sample of news articles from the New York Times, the Economist, Reuters, and the Financial Times during Aug-Sep 2007
Competitors for phrase extraction:
- QBD-TFIDF (TF/IDF scoring)
- QBD-MI (mutual information scoring)
- QBD-YAHOO (Yahoo! phrase extractor)
- Quality of phrase retrieval
- Quality of document retrieval
- Efficiency: the total running time of QBD is negligible
l_max   Time (seconds)
1       0.160
2       1.142
3       10.262
4       57.915
5       143.828
We propose:
- the query-by-document model
- two effective phrase extraction algorithms
- enhancing the phrase set with the Wikipedia graph
Future work:
- more sophisticated phrase extraction (e.g., with additional background knowledge)
- blog matching using key phrases