Ch 21 Link Analysis
Modified by Dongwon Lee from slides by Christopher Manning and Prabhakar Raghavan

Sec. 21.1 The Web as a Directed Graph
(Figure: Page A contains a hyperlink, with anchor text, pointing to Page B.)
Assumption 1: A hyperlink between pages denotes author-perceived relevance (a quality signal).
Assumption 2: The text in the anchor of the hyperlink describes the target page (textual context).

Sec. 21.1.1 Indexing anchor text
When indexing a document D, include anchor text from links pointing to D.
(Figure: three pages link to www.ibm.com with anchor text such as "IBM" in "Armonk, NY-based computer giant IBM announced today", "Big Blue" in "Big Blue today announced record profits for the quarter", and "IBM" in "Joe's computer hardware links: Sun, HP, IBM".)

Sec. 21.1.1 Indexing anchor text
Can sometimes have unexpected side effects, e.g., the query "evil empire".
Can score anchor text with a weight depending on the authority of the anchor page's website.
E.g., if we assume that content from cnn.com or yahoo.com is authoritative, then trust (weight more highly) the anchor text from them.
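To make the idea concrete, here is a minimal, hypothetical sketch (not the lecture's code) of an inverted index that credits anchor text to the target page; the URLs and helper name are made up for illustration.

```python
from collections import defaultdict

def build_index(docs, links):
    """docs: {url: body_text}; links: list of (source_url, target_url, anchor_text)."""
    index = defaultdict(set)               # term -> set of urls containing it
    for url, body in docs.items():
        for term in body.lower().split():
            index[term].add(url)
    for _, target, anchor in links:        # anchor text is credited to the *target* page
        for term in anchor.lower().split():
            index[term].add(target)
    return index

# Toy data (illustrative only)
docs = {"www.ibm.com": "servers software services"}
links = [("joes-hardware.example", "www.ibm.com", "IBM"),
         ("news.example", "www.ibm.com", "Big Blue")]
index = build_index(docs, links)
print(index["big"])   # {'www.ibm.com'} -- matched via anchor text, not body text
```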

Sec. 21.2 Pagerank scoring
Imagine a browser doing a random walk on web pages:
Start at a random page.
At each step, go out of the current page along one of the links on that page, equiprobably.
"In the steady state" each page has a long-term visit rate; use this as the page's score.
(Figure: a page with three out-links, each followed with probability 1/3; the long-term visit rate is the PageRank score.)

Will the random walk reach a steady state?
Random walks on a regular, undirected graph converge to a uniformly distributed sample.
Theorem [Markov chain folklore]: after a finite number of steps, a random walk reaches the stationary distribution; how many steps are needed depends on the graph structure and on N, the number of nodes.
Problem: the Web is neither regular nor undirected.

Sec. 21.2 Full of "sinks"
The web is full of dead ends.
The random walk can get stuck in dead ends, so it makes no sense to talk about long-term visit rates.
(Figure: a random walk stuck at pages with no way out.)

Sec. 21.2 Solution: Teleporting
At a dead end, jump to a random web page.
At any non-dead end, with probability 10%, jump to a random web page; with the remaining probability (90%), go out on a random link.
The 10% is a parameter.

Main Idea of PageRank
Combine the random walk and teleporting. Let α be the teleportation rate (say 0.1).
Main idea:
Repeat:
  do a teleport step with probability α;
  do a random-walk step with probability 1 − α;
until things become stable.
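As a sanity check on this idea, a short simulation of the teleporting random surfer approximates the long-term visit rates. This is a hedged sketch with a made-up three-page graph, not code from the lecture.

```python
import random

# Toy web graph: page -> list of pages it links to (page 2 is a dead end).
links = {0: [1, 2], 1: [0, 2], 2: []}
N = len(links)
alpha = 0.1                      # teleportation rate from the slide

visits = [0] * N
page = random.randrange(N)       # start at a random page
steps = 1_000_000
for _ in range(steps):
    visits[page] += 1
    if not links[page] or random.random() < alpha:
        page = random.randrange(N)            # dead end, or teleport
    else:
        page = random.choice(links[page])     # follow a random out-link
print([v / steps for v in visits])   # long-term visit rates ~ PageRank scores
```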

Sec. 21.2.1 Markov chains
A Markov chain consists of n states, plus an n×n transition probability matrix P.
At each step, we are in exactly one of the states.
For 1 ≤ i, j ≤ n, the matrix entry P_ij tells us the probability of j being the next state, given we are currently in state i. P_ii > 0 is OK.
(Figure: an edge from state i to state j, labeled P_ij.)

Sec. 21.2.1 Markov chains
Clearly, for all i, Σ_j P_ij = 1: each row of P is a probability distribution over the next state.
Markov chains are abstractions of random walks.

Sec. 21.2.1 Ergodic Markov chains
A Markov chain is ergodic if:
you have a path from any state to any other;
for any start state, after a finite transient time T0, the probability of being in any state at a fixed time T > T0 is nonzero.

Sec. 21.2.1 Ergodic Markov chains
For any ergodic Markov chain, there is a unique long-term visit rate for each state: the steady-state probability distribution.
Over a long time period, we visit each state in proportion to this rate.
It doesn't matter where we start.
This means: PageRank with random walk + teleport will reach the steady state.

Transition Probability Matrix P
P captures the idea of PageRank: do a teleport step with probability α, and a random-walk step with probability 1 − α:
P = α × teleport + (1 − α) × random-walk
Teleport: every entry is 1/(# of nodes) = 1/N.
Random-walk: in the adjacency matrix, divide each "1" by the number of 1s in its row.
Each row in P is a probability vector.
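One possible way to build P from an adjacency matrix in NumPy, following the teleporting rule above (a dead-end row becomes a pure teleport row); the function name and toy graph are my own, not from the slides.

```python
import numpy as np

def transition_matrix(A, alpha=0.1):
    """A: n x n adjacency matrix (A[i, j] = 1 if page i links to page j).
    Returns P = alpha * teleport + (1 - alpha) * random-walk,
    with dead-end rows replaced by pure teleport rows."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    teleport = np.full((n, n), 1.0 / n)
    out_degree = A.sum(axis=1, keepdims=True)
    walk = A / np.where(out_degree == 0, 1, out_degree)   # row-normalized adjacency
    return np.where(out_degree == 0,                      # dead end: jump anywhere
                    teleport,
                    alpha * teleport + (1 - alpha) * walk)

A = [[0, 1, 0],       # toy 3-page graph (illustrative only); page 1 is a dead end
     [0, 0, 0],
     [1, 1, 0]]
P = transition_matrix(A)
print(np.round(P, 3))
print(P.sum(axis=1))  # each row of P is a probability vector: [1. 1. 1.]
```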

Example
(Figure: a small three-node web graph used to illustrate building P.)

Sec. 21.2.1 Probability vectors
A probability (row) vector x = (x_1, …, x_n) tells us where the walk is at any point.
E.g., (0 0 0 … 1 … 0 0 0), with the 1 in position i, means we're in state i.
More generally, the vector x = (x_1, …, x_n) means the walk is in state i with probability x_i.

Sec. 21.2.1 Change in probability vector
If the probability vector is x = (x_1, …, x_n) at this step, what is it at the next step?
Recall that row i of the transition probability matrix P tells us where we go next from state i.
So from x, our next state is distributed as xP.

Sec. 21.2.2 How do we compute this vector?
Let a = (a_1, …, a_n) denote the row vector of steady-state probabilities.
If our current position is described by a, then the next step is distributed as aP.
But a is the steady state, so a = aP.
Solving this matrix equation gives us a. So a is the (left) eigenvector for P.
(It corresponds to the "principal" eigenvector of P, the one with the largest eigenvalue.)
Transition probability matrices always have largest eigenvalue 1.
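One hedged way to realize this numerically: a left eigenvector of P is a right eigenvector of P's transpose, so a standard eigensolver applied to P^T recovers a. The small stochastic matrix below is made up for illustration.

```python
import numpy as np

# A small, made-up transition matrix (each row sums to 1).
P = np.array([[0.10, 0.80, 0.10],
              [0.45, 0.10, 0.45],
              [0.90, 0.05, 0.05]])

w, V = np.linalg.eig(P.T)          # left eigenvectors of P = right eigenvectors of P^T
k = np.argmax(w.real)              # the largest eigenvalue of P is 1
a = V[:, k].real
a = a / a.sum()                    # normalize to a probability distribution
print(w[k].real)                   # ~1.0
print(a, a @ P)                    # a and aP agree: a is the steady state
```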

Sec. 21.2.2 One way of computing a
Recall: regardless of where we start, we eventually reach the steady state a.
Start with any distribution, say x = (1 0 … 0).
After one step, we're at xP; after two steps at xP^2, then xP^3, and so on.
"Eventually" means: for "large" k, xP^k ≈ a.
Algorithm: multiply x by increasing powers of P until the product looks stable.
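A minimal sketch of this power iteration, reusing the same toy matrix as above; the tolerance and iteration cap are arbitrary choices, not from the lecture.

```python
import numpy as np

# Same made-up stochastic matrix as in the previous sketch.
P = np.array([[0.10, 0.80, 0.10],
              [0.45, 0.10, 0.45],
              [0.90, 0.05, 0.05]])

def steady_state(P, tol=1e-12, max_iter=10_000):
    """Power iteration: repeatedly replace x with xP until it stops changing."""
    x = np.zeros(P.shape[0])
    x[0] = 1.0                         # start with x = (1 0 ... 0)
    for _ in range(max_iter):
        x_next = x @ P
        if np.abs(x_next - x).sum() < tol:
            break
        x = x_next
    return x_next

a = steady_state(P)
print(a)            # matches the eigenvector computation above
```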

Sec. 21.2.2 Pagerank summary
Preprocessing: given the graph of links, build the matrix P; from it compute a. The entry a_i is a number between 0 and 1: the pagerank of page i.
Query processing: retrieve the pages meeting the query and rank them by their pagerank. The order is query-independent.

Sec. 21.3 Hyperlink-Induced Topic Search (HITS)
In response to a query, instead of an ordered list of pages each meeting the query, find two sets of inter-related pages:
Hub pages are good lists of links on a subject, e.g., "Bob's list of cancer-related links."
Authority pages occur recurrently on good hubs for the subject.
Best suited for "broad topic" queries rather than for page-finding queries.
Gets at a broader slice of common opinion.

Sec. 21.3 Hubs and Authorities
Thus, a good hub page for a topic points to many authoritative pages for that topic.
A good authority page for a topic is pointed to by many good hubs for that topic.
Circular definition: we will turn this into an iterative computation.

Sec. 21.3 The hope
(Figure: hub pages on one side linking to authority pages on the other, for a query such as "mobile telecom companies".)

Sec. 21.3 High-level scheme
Extract from the web a base set of pages that could be good hubs or authorities.
From these, identify a small set of top hub and authority pages via an iterative algorithm.

Sec. 21.3 Base set
Given a text query (say, browser), use a text index to get all pages containing browser. Call this the root set of pages.
Add in any page that either points to a page in the root set, or is pointed to by a page in the root set. Call this the base set.

Sec. 21.3 Visualization
(Figure: the root set drawn as an inner region, surrounded by the larger base set of pages that link to it or are linked from it.)

Sec. 21.3 Assembling the base set
The root set typically has 200-1000 nodes; the base set may have thousands of nodes (topic-dependent).
How do you find the base set nodes? (See the sketch below.)
Follow out-links by parsing the root set pages.
Get in-links (and out-links) from a connectivity server.
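A small sketch of the base-set expansion, assuming we already have an out-link map for the crawled pages; the in-link lookup that a connectivity server would provide is simulated here by inverting that map. Names are my own.

```python
from collections import defaultdict

def base_set(root, out_links):
    """root: set of urls matching the query; out_links: {url: set of urls it points to}."""
    in_links = defaultdict(set)               # invert the graph: a stand-in for a
    for src, targets in out_links.items():    # connectivity-server in-link lookup
        for tgt in targets:
            in_links[tgt].add(src)
    base = set(root)
    for page in root:
        base |= out_links.get(page, set())    # pages the root set points to
        base |= in_links[page]                # pages that point into the root set
    return base
```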

Sec. 21.3 Distilling hubs and authorities
Compute, for each page x in the base set, a hub score h(x) and an authority score a(x).
Initialize: for all x, h(x) ← 1; a(x) ← 1.
Iteratively update all h(x), a(x) (the key step).
After the iterations, output the pages with the highest h() scores as top hubs and the highest a() scores as top authorities.

Sec. 21.3 Iterative update
Repeat the following updates, for all x:
h(x) ← Σ_{x→y} a(y)   (a page's hub score is the sum of the authority scores of the pages it points to)
a(x) ← Σ_{y→x} h(y)   (a page's authority score is the sum of the hub scores of the pages pointing to it)
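These updates can be sketched in matrix form (see the "Rewrite in matrix form" slide below). The function name, the normalization choice, and the simultaneous update order are my own assumptions, not the lecture's code.

```python
import numpy as np

def hits(A, iterations=50):
    """A[i, j] = 1 if page i links to page j.  Returns (hub, authority) score vectors."""
    n = A.shape[0]
    h = np.ones(n)
    a = np.ones(n)
    for _ in range(iterations):
        # h(x) = sum of a(y) over pages y that x links to
        # a(x) = sum of h(y) over pages y that link to x
        h, a = A @ a, A.T @ h          # simultaneous update from the previous values
        h /= h.sum()                   # scale down so the values don't blow up;
        a /= a.sum()                   # only relative values matter
    return h, a
```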

Sec. 21.3 Scaling
To prevent the h() and a() values from getting too big, we can scale them down (normalize) after each iteration.
The scaling factor doesn't really matter: we only care about the relative values of the scores.

Sec. 21.3 How many iterations?
Claim: the relative values of the scores will converge after a few iterations; in fact, suitably scaled, the h() and a() scores settle into a steady state!
We only require the relative order of the h() and a() scores, not their absolute values.
In practice, ~50 iterations gets you close to stability.

Sec. 21.3 Japan Elementary Schools
(Table: the top hubs and authorities returned for this query. Most entries are Japanese-language school home pages whose titles are garbled in this transcript; the readable entries include hubs/authorities such as "schools", "LINK Page-13", "100 Schools Home Pages (English)", "K-12 from Japan", "Koulutus ja oppilaitokset", "TOYODA HOMEPAGE", "Cay's Homepage (Japanese)", "DRAGON97-TOP", "The American School in Japan", "The Link Page", "Kids' Space", "KEIMEI GAKUEN Home Page (Japanese)", "Shiranuma Home Page", fuzoku-es.fukui-u.ac.jp, "welcome to Miasa E&J school", "fukui haruyama-es HomePage", "Torisu primary school", "goo", "Yakumo Elementary, Hokkaido, Japan", "FUZOKU Home Page", and "Kamishibun Elementary School".)

Sec. 21.3 Things to note
HITS pulled together good pages regardless of the language of the page content.
It uses only link analysis after the base set is assembled; the iterative scoring itself is query-independent.
The iterative computation happens after text-index retrieval: significant overhead.

Sec. 21.3 Proof of convergence
n×n adjacency matrix A: each of the n pages in the base set has a row and a column in the matrix.
Entry A_ij = 1 if page i links to page j, else 0.
Example (three-node graph):
      1  2  3
  1   0  1  0
  2   1  1  1
  3   1  0  0

Sec. 21.3 Hub/authority vectors
View the hub scores h() and the authority scores a() as vectors with n components.
Recall the iterative updates:
h(x) ← Σ_{x→y} a(y)
a(x) ← Σ_{y→x} h(y)

Sec. 21.3 Rewrite in matrix form
h = Aa and a = A^T h (recall A^T is the transpose of A).
Substituting, h = AA^T h and a = A^T Aa.
Thus, h is an eigenvector of AA^T and a is an eigenvector of A^T A.
Further, our algorithm is a particular, known algorithm for computing eigenvectors: the power iteration method. It is guaranteed to converge, and its convergence was already proved (e.g., in linear algebra).
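A quick numerical check of this claim, reusing the hits() sketch from the iterative-update slide and the three-node adjacency matrix above (as parsed from this transcript, so treat the matrix as an assumption):

```python
import numpy as np

A = np.array([[0, 1, 0],      # the 3-node adjacency matrix from the
              [1, 1, 1],      # "Proof of convergence" slide
              [1, 0, 0]], dtype=float)

h, a = hits(A)                             # hits() from the earlier sketch
w, V = np.linalg.eig(A @ A.T)              # eigenvectors of A A^T
v = np.abs(V[:, np.argmax(w.real)].real)   # principal eigenvector (made nonnegative)
print(h / h.sum())
print(v / v.sum())                         # the two directions agree
```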

Terms
An eigenvector of an n×n matrix is a vector that gives a direction in which that transformation is simply a scaling.
An eigenvalue of an n×n matrix A is a scalar c such that A x = c x holds for some non-zero vector x.
The eigenvector with the largest eigenvalue is called the principal eigenvector.

Example
(Figure: a four-page graph with pages A, B, C, D.)
Hub and authority scores per iteration (normalization ignored):

iteration  h(A) a(A)  h(B) a(B)  h(C) a(C)  h(D) a(D)
    1        1    1     1    1     1    1     1    1
    2        0    2     2    1     0    1     2    0
    3        0    4     3    2     0    2     3    0
    4        0    6     6    3     0    3     6    0
    5        0   12     9    6     0    6     9    0
    6        0   18    18    9     0    9    18    0

HITS output: "A is the most authoritative source (B & C are OK too); B and D are the best hubs ('index' pages)."
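The graph figure for this example is missing from the transcript, but an edge set of B→A, B→C, D→A, D→B, with simultaneous updates from the previous iteration's values and no normalization, reproduces the table exactly; the sketch below assumes that edge set.

```python
import numpy as np

# Assumed edges (inferred so that the table above is reproduced): B->A, B->C, D->A, D->B.
# Pages are indexed A=0, B=1, C=2, D=3.
A = np.array([[0, 0, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 0],
              [1, 1, 0, 0]], dtype=float)

h = np.ones(4)
a = np.ones(4)
for it in range(1, 7):
    print(it, h, a)              # h = (h(A), h(B), h(C), h(D)), likewise for a
    h, a = A @ a, A.T @ h        # simultaneous update, no normalization
```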

Example
(Figure only in the original slide; not recoverable from this transcript.)

Resources
IIR Chap 21
http://www2004.org/proceedings/docs/1p309.pdf
http://www2004.org/proceedings/docs/1p595.pdf
http://www2003.org/cdrom/papers/refereed/p270/kamvar-270-xhtml/index.html
http://www2003.org/cdrom/papers/refereed/p641/xhtml/p641-mccurley.html