CS276B Text Retrieval and Mining Winter 2005 Lecture 11.


This lecture: wrap up PageRank; anchor text; HITS; behavioral ranking.

PageRank: Issues and Variants. How realistic is the random surfer model? What if we modeled the back button? [Fagi00] Surfer behavior is sharply skewed towards short paths [Hube98]. Search engines, bookmarks & directories make jumps non-random. Biased surfer models: weight edge traversal probabilities based on the match with the topic/query (non-uniform edge selection); bias jumps to pages on topic (e.g., based on personal bookmarks & categories of interest).

Topic Specific PageRank [Have02]. Conceptually, we use a random surfer who teleports, with say 10% probability, using the following rule: select a category (say, one of the 16 top-level ODP categories) based on a query- and user-specific distribution over the categories, then teleport to a page uniformly at random within the chosen category. Sounds hard to implement: we can't compute PageRank at query time!

Topic Specific PageRank [Have02]: Implementation. Offline: compute PageRank distributions with respect to individual categories (query-independent model as before); each page has multiple PageRank scores, one for each ODP category, with teleportation only to that category. Online: a distribution of weights over categories is computed by query-context classification, and a dynamic PageRank score is generated for each page as the weighted sum of its category-specific PageRanks.
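A minimal sketch of this offline/online split, assuming the per-category PageRank vectors have already been precomputed offline (the category names, values, and query-classification weights below are hypothetical):

```python
import numpy as np

# Offline: one PageRank vector per ODP category, precomputed with
# teleportation restricted to pages of that category (assumed given here).
category_pageranks = {
    "Sports": np.array([0.05, 0.30, 0.10, 0.55]),   # hypothetical scores over 4 pages
    "Health": np.array([0.40, 0.10, 0.35, 0.15]),
}

def query_time_score(category_weights):
    """Online: combine the precomputed category PageRanks using the
    query-context distribution over categories."""
    total = np.zeros_like(next(iter(category_pageranks.values())))
    for cat, w in category_weights.items():
        total += w * category_pageranks[cat]
    return total

# e.g. a query classified as 70% Sports, 30% Health
print(query_time_score({"Sports": 0.7, "Health": 0.3}))
```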

Influencing PageRank ("Personalization"). Input: web graph W and influence vector v, where v: (page → degree of influence). Output: rank vector r: (page → page importance wrt v), i.e. r = PR(W, v).
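One way to realize PR(W, v) is power iteration with the influence vector v used as the teleport distribution. This is a rough sketch under simplifying assumptions (dangling pages are handled by renormalization only; the 0.9 damping factor corresponds to the 10% teleport probability mentioned above):

```python
import numpy as np

def pagerank(W, v, damping=0.9, iters=50):
    """Personalized PageRank sketch: r = PR(W, v).
    W: n x n adjacency matrix, W[i, j] = 1 if page i links to page j.
    v: teleport (influence) distribution over pages, summing to 1."""
    out_deg = W.sum(axis=1, keepdims=True)
    # Row-normalize out-links, then transpose so M @ r sums over in-links.
    M = np.divide(W, out_deg, out=np.zeros_like(W, dtype=float),
                  where=out_deg > 0).T
    r = np.array(v, dtype=float)
    for _ in range(iters):
        r = damping * (M @ r) + (1 - damping) * v
        r /= r.sum()      # renormalize (mass lost at dangling pages)
    return r

# Example: teleport only to page 0 (a "Sports" page) with 10% probability.
W = np.array([[0, 1, 1], [1, 0, 0], [0, 1, 0]], dtype=float)
v = np.array([1.0, 0.0, 0.0])
print(pagerank(W, v))
```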

Non-uniform Teleportation: teleport with 10% probability to a Sports page.

Interpretation of Composite Score. For a set of personalization vectors {v_j}: Σ_j [w_j · PR(W, v_j)] = PR(W, Σ_j [w_j · v_j]). A weighted sum of rank vectors itself forms a valid rank vector, because PR() is linear wrt v_j.

Interpretation: 10% Sports teleportation.

Interpretation: 10% Health teleportation.

Interpretation: pr = (0.9 PR_sports + 0.1 PR_health) gives you 9% sports teleportation and 1% health teleportation.

The Web as a Directed Graph. Assumption 1: a hyperlink between pages denotes author-perceived relevance (quality signal). Assumption 2: the anchor of the hyperlink describes the target page (textual context). (Figure: Page A links to Page B via a hyperlink with anchor text.)

Assumptions Tested. A link is an endorsement (quality signal), except when affiliated. Can we recognize affiliated links? [Davi00]: 1536 links manually labeled; 59 binary features (e.g., on-domain, meta-tag overlap, common outlinks); a C4.5 decision tree with 10-fold cross-validation showed 98.7% accuracy. Additional surrounding text has lower probability but can be useful.
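A hedged sketch of this style of classifier, substituting scikit-learn's decision tree for C4.5; the feature matrix and labels below are random stand-ins just to show the shape of the experiment, not the actual [Davi00] data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical binary features per link, e.g.
# [same_domain, meta_tag_overlap, common_outlinks, ...]
X = np.random.randint(0, 2, size=(1536, 59))   # stand-in for 1536 labeled links
y = np.random.randint(0, 2, size=1536)         # 1 = affiliated, 0 = not affiliated

clf = DecisionTreeClassifier()                 # analogue of C4.5
scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
print("mean accuracy:", scores.mean())
```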

Assumptions Tested. Anchors describe the target: topical locality [Davi00b]. ~200K pages (query results + their outlinks). Computed "page to page" similarity (TFIDF measure): Link-to-Same-Domain > Co-cited > Link-to-Different-Domain. Computed "anchor to page" similarity: mean anchor length; mean probability of an anchor term appearing in the target page.

Anchor Text: WWW Worm - McBryan [Mcbr94]. For the query [ibm], how do we distinguish between IBM's home page (mostly graphical), IBM's copyright page (high term frequency for 'ibm'), and a rival's spam page (arbitrarily high term frequency)? Anchors such as "ibm", "ibm.com", and "IBM home page" point to IBM's home page: a million pieces of anchor text with "ibm" send a strong signal.

Indexing anchor text. When indexing a document D, include anchor text from links pointing to D. (Figure: example link contexts pointing to IBM: "Armonk, NY-based computer giant IBM announced today"; "Joe's computer hardware links: Compaq, HP, IBM"; "Big Blue today announced record profits for the quarter".)

Indexing anchor text can sometimes have unexpected side effects, e.g., "evil empire". Can index anchor text with less weight.
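A rough sketch of the idea, using a toy in-memory index in which anchor-text terms are credited to the target document with a discounted weight (the 0.3 discount is an arbitrary assumption, not a value from the slides):

```python
from collections import defaultdict

ANCHOR_WEIGHT = 0.3          # assumed discount for anchor-text terms

# term -> {doc_id: accumulated weight}
index = defaultdict(lambda: defaultdict(float))

def index_document(doc_id, body_terms, incoming_anchor_terms):
    """Index a document's own text plus anchor text of links pointing to it."""
    for t in body_terms:
        index[t][doc_id] += 1.0
    for t in incoming_anchor_terms:
        index[t][doc_id] += ANCHOR_WEIGHT

# "ibm.com" gets credit for the anchor terms of pages that link to it.
index_document("ibm.com",
               body_terms=["welcome", "products", "services"],
               incoming_anchor_terms=["ibm", "big", "blue", "ibm", "home", "page"])
print(dict(index["ibm"]))    # {'ibm.com': 0.6}
```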

Anchor Text: other applications. Weighting/filtering links in the graph: HITS [Chak98], Hilltop [Bhar01]. Generating page descriptions from anchor text [Amit98, Amit00].

Hyperlink-Induced Topic Search (HITS) [Klei98]. In response to a query, instead of an ordered list of pages each meeting the query, find two sets of inter-related pages: hub pages are good lists of links on a subject (e.g., "Bob's list of cancer-related links"); authority pages occur recurrently on good hubs for the subject. Best suited for "broad topic" queries rather than for page-finding queries. Gets at a broader slice of common opinion.

Hubs and Authorities Thus, a good hub page for a topic points to many authoritative pages for that topic. A good authority page for a topic is pointed to by many good hubs for that topic. Circular definition - will turn this into an iterative computation.

The hope. (Figure: for the topic "long distance telephone companies", hub pages point to authority pages.)

High-level scheme. Extract from the web a base set of pages that could be good hubs or authorities. From these, identify a small set of top hub and authority pages via an iterative algorithm.

Base set Given text query (say browser), use a text index to get all pages containing browser. Call this the root set of pages. Add in any page that either points to a page in the root set, or is pointed to by a page in the root set. Call this the base set.

Visualization. (Figure: the base set surrounds and contains the root set.)

Assembling the base set [Klei98]. Root set typically nodes. Base set may have up to 5000 nodes. How do you find the base set nodes? Follow out-links by parsing root set pages. Get in-links (and out-links) from a connectivity server. (Actually, it suffices to text-index strings of the form href="URL" to get in-links to URL.)
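A sketch of assembling the base set, assuming hypothetical in-memory stand-ins for the text index and the connectivity server (in practice these would be real services, not dicts):

```python
def build_base_set(query_term, text_index, out_links, in_links, max_size=5000):
    """Root set = pages containing the query term; base set adds pages
    linked to or from the root set, capped at max_size.
    text_index: term -> set of page ids.
    out_links / in_links: page id -> set of page ids (e.g. from a
    connectivity server)."""
    root_set = set(text_index.get(query_term, set()))
    base_set = set(root_set)
    for page in root_set:
        base_set |= out_links.get(page, set())   # pages the root set points to
        base_set |= in_links.get(page, set())    # pages pointing into the root set
        if len(base_set) >= max_size:
            break
    return root_set, base_set

root, base = build_base_set("browser",
                            text_index={"browser": {"p1", "p2"}},
                            out_links={"p1": {"p3"}},
                            in_links={"p2": {"p4"}})
print(root, base)
```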

Distilling hubs and authorities. Compute, for each page x in the base set, a hub score h(x) and an authority score a(x). Initialize: for all x, h(x) ← 1; a(x) ← 1. Iteratively update all h(x), a(x). After iterations, output the pages with the highest h() scores as top hubs and the highest a() scores as top authorities.

Iterative update. Repeat the following updates, for all x: h(x) ← Σ_{x→y} a(y) and a(x) ← Σ_{y→x} h(y).

Scaling To prevent the h() and a() values from getting too big, can scale down after each iteration. Scaling factor doesn’t really matter: we only care about the relative values of the scores.
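A small sketch of the hub/authority iteration with per-step scaling, as described on the last two slides; the adjacency lists and the iteration count are illustrative assumptions:

```python
def hits(out_links, iters=5):
    """out_links: page -> list of pages it links to (within the base set)."""
    pages = set(out_links) | {y for ys in out_links.values() for y in ys}
    h = {x: 1.0 for x in pages}
    a = {x: 1.0 for x in pages}
    for _ in range(iters):
        # a(x) <- sum of h(y) over pages y linking to x
        a = {x: 0.0 for x in pages}
        for y, targets in out_links.items():
            for x in targets:
                a[x] += h[y]
        # h(x) <- sum of a(y) over pages y that x links to
        h = {x: sum(a[y] for y in out_links.get(x, [])) for x in pages}
        # Scale so the values don't blow up; only relative order matters.
        na, nh = sum(a.values()) or 1.0, sum(h.values()) or 1.0
        a = {x: v / na for x, v in a.items()}
        h = {x: v / nh for x, v in h.items()}
    return h, a

hubs, auths = hits({"p1": ["p2", "p3"], "p4": ["p2", "p3"], "p2": ["p3"]})
print(sorted(auths, key=auths.get, reverse=True))   # top authorities first
```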

How many iterations? Claim: the relative values of the scores will converge after a few iterations; in fact, suitably scaled, the h() and a() scores settle into a steady state! The proof of this comes later. We only require the relative orders of the h() and a() scores, not their absolute values. In practice, ~5 iterations get you close to stability.

Japan Elementary Schools. (Figure: top hubs and authorities returned for the query "Japanese elementary schools"; most are Japanese-language school home pages. Recognizable entries include "The American School in Japan: The Link Page", "Kids' Space", "KEIMEI GAKUEN Home Page (Japanese)", "welcome to Miasa E&J school", "100 Schools Home Pages (English) K-12 from Japan", and "FUZOKU Home Page".)

Things to note. Pulled together good pages regardless of the language of the page content. Only link analysis is used after the base set is assembled: the iterative scoring is query-independent. Iterative computation after text-index retrieval means significant overhead.

Proof of convergence. n × n adjacency matrix A: each of the n pages in the base set has a row and column in the matrix. Entry A_ij = 1 if page i links to page j, else A_ij = 0.

Hub/authority vectors. View the hub scores h() and the authority scores a() as vectors with n components. Recall the iterative updates: h(x) ← Σ_{x→y} a(y) and a(x) ← Σ_{y→x} h(y).

Rewrite in matrix form: h = A a and a = A^T h (recall A^T is the transpose of A). Substituting, h = A A^T h and a = A^T A a. Thus, h is an eigenvector of A A^T and a is an eigenvector of A^T A. Further, our algorithm is a particular, known algorithm for computing eigenvectors: the power iteration method. Guaranteed to converge.
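A quick numerical check of this claim on a small, assumed adjacency matrix: repeatedly applying h ← A (A^T h) with normalization drives h toward the principal eigenvector of A A^T.

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [0, 0, 0]], dtype=float)   # A[i, j] = 1 iff page i links to page j

h = np.ones(A.shape[0])
for _ in range(50):                       # power iteration on A @ A.T
    h = A @ (A.T @ h)                     # h <- A a, with a = A^T h
    h /= np.linalg.norm(h)

# h should now be (close to) the principal eigenvector of A @ A.T
w, V = np.linalg.eig(A @ A.T)
print(h)
print(np.abs(V[:, np.argmax(w.real)].real))   # same direction up to sign
```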

Issues. Topic drift: off-topic pages can cause off-topic "authorities" to be returned; e.g., the neighborhood graph can be about a "super topic". Mutually reinforcing affiliates: affiliated pages/sites can boost each other's scores, and linkage between affiliated pages is not a useful signal.

Solutions: ARC [Chak98] and Clever [Chak98b]. Distance-2 neighborhood graph. Tackling affiliated linkage: use an IP prefix (e.g., *.*) rather than hosts to identify "same author" pages. Tackling topic drift: weight edges by the match between the query and the extended anchor text, and distribute the hub score non-uniformly to outlinks. Intuition: regions of the hub page with links to good authorities get more of the hub score. (For follow-up based on the Document Object Model see [Chak01].)

Solutions (contd): Topic Distillation [Bhar98]. Tackling affiliated linkage: normalize the weights of edges from/to a single host. Tackling topic drift: query expansion; a "topic vector" is computed from the docs in the initial ranking, and the match with the topic vector is used to weight edges and remove off-topic nodes. Evaluation: 28 broad queries; pooled results, with blind ratings of results by 3 reviewers per query. Average at 10: Topic Distillation = 0.66, HITS = 0.46. (Figure: parallel edges from Host A to Host B are each given weight 1/3, vs. weight 1 for a single edge.)
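A hedged sketch of the host-level edge-weight normalization: if k pages on one host point to the same target page, each edge gets weight 1/k, so that host contributes a total authority weight of 1. Host extraction via the URL's netloc is a simplification.

```python
from collections import defaultdict
from urllib.parse import urlparse

def normalized_edge_weights(edges):
    """edges: list of (source_url, target_url).
    Returns {(source, target): weight}, where all edges from one host to a
    given target share a total weight of 1."""
    by_host_target = defaultdict(list)
    for src, dst in edges:
        by_host_target[(urlparse(src).netloc, dst)].append((src, dst))
    weights = {}
    for group in by_host_target.values():
        for edge in group:
            weights[edge] = 1.0 / len(group)   # e.g. 3 parallel edges -> 1/3 each
    return weights

edges = [("http://a.com/1", "http://b.com/x"),
         ("http://a.com/2", "http://b.com/x"),
         ("http://a.com/3", "http://b.com/x"),
         ("http://c.com/1", "http://b.com/x")]
print(normalized_edge_weights(edges))
```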

Hilltop [Bhar01]. Preprocessing: build a special index of "expert" hubs; select a subset of the web (~5%) with high out-degree to non-affiliated pages on a theme. At query time compute: an expert score (hub score), based on the text match between the query and the expert hub; and an authority score, based on the scores of non-affiliated experts pointing to the given page and on the match between the query and the extended anchor text (which includes enclosing headings + title). Return the top-ranked pages by authority score.

Behavior-based ranking

For each query Q, keep track of which docs in the results are clicked on. On subsequent requests for Q, re-order the docs in the results based on click-throughs. First due to DirectHit → AskJeeves. Relevance assessment based on behavior/usage vs. content.

Query-doc popularity matrix B (rows indexed by queries q, columns by docs j). B_qj = number of times doc j was clicked through on query q. When query q is issued again, order the docs by their B_qj values.
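A minimal sketch of the query-doc popularity matrix, using nested dicts rather than a dense matrix; the query string and doc ids are illustrative:

```python
from collections import defaultdict

B = defaultdict(lambda: defaultdict(int))   # B[q][j] = click count for doc j on query q

def record_click(query, doc_id):
    B[query][doc_id] += 1

def rerank(query, result_docs):
    """Re-order retrieved docs for a repeated query by click-through counts."""
    return sorted(result_docs, key=lambda d: B[query][d], reverse=True)

record_click("ferrari mondial", "doc7")
record_click("ferrari mondial", "doc7")
record_click("ferrari mondial", "doc3")
print(rerank("ferrari mondial", ["doc1", "doc3", "doc7"]))   # ['doc7', 'doc3', 'doc1']
```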

Issues to consider: weighting/combining text- and click-based scores. What identifies a query? "Ferrari Mondial", "Ferrari mondial", "ferrari mondial", and the quoted phrase "Ferrari Mondial" could all be the same query. Can use heuristics, but query parsing is slowed.
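One possible heuristic for identifying "the same query" is a simple canonicalization step before looking up click statistics; this is a sketch of an assumed normalization, not a prescribed one:

```python
import re

def normalize_query(q):
    """Heuristic canonical form so 'Ferrari Mondial', 'ferrari  mondial' and
    '"Ferrari Mondial"' map to the same click-statistics key."""
    q = q.strip().strip('"').lower()
    return re.sub(r"\s+", " ", q)

print({normalize_query(q) for q in ['Ferrari Mondial', 'ferrari   mondial', '"Ferrari Mondial"']})
```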

Vector space implementation. Maintain a term-doc popularity matrix C (as opposed to a query-doc popularity matrix), initialized to all zeros; each column represents a doc j. If doc j is clicked on for query q, update C_j ← C_j + ε q (here q is viewed as a term vector and ε is a small update weight). On a query q', compute its cosine proximity to C_j for all j, and combine this with the regular text score.
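A sketch of this vector-space variant with an assumed small update weight ε and an arbitrary linear combination with the text score; the vocabulary, doc count, and mixing weights are made up for illustration:

```python
import numpy as np

VOCAB = {"white": 0, "house": 1, "ferrari": 2}
N_DOCS = 3
EPSILON = 0.1                        # assumed update weight

C = np.zeros((len(VOCAB), N_DOCS))   # term-doc popularity matrix, initially all zeros

def query_vector(query):
    v = np.zeros(len(VOCAB))
    for term in query.split():
        if term in VOCAB:
            v[VOCAB[term]] += 1.0
    return v

def record_click(query, doc_j):
    C[:, doc_j] += EPSILON * query_vector(query)    # C_j <- C_j + eps * q

def click_score(query, doc_j):
    q, c = query_vector(query), C[:, doc_j]
    denom = np.linalg.norm(q) * np.linalg.norm(c)
    return float(q @ c / denom) if denom else 0.0   # cosine proximity

record_click("white house", 0)
text_score = 0.42                                   # stand-in text-retrieval score
combined = 0.7 * text_score + 0.3 * click_score("white house", 0)
print(combined)
```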

Issues: normalization of C_j after updating; the assumption of query compositionality ("white house" document popularity is derived from "white" and "house"); updating live or in batch?

Basic assumption: relevance can be directly measured by the number of click-throughs. Valid?

Validity of the basic assumption. Click-throughs to docs that turn out to be non-relevant: what does a click mean? Self-perpetuating ranking. Spam. All votes count the same.

Variants: time spent viewing the page (difficult session management; inconclusive modeling so far). Does the user back out of the page? Does the user stop searching? Does the user transact?