

Presentation on theme: "Information Retrieval Link-based Ranking. Ranking is crucial… “.. From our experimental data, we could observe that the top 20% of the pages with the."— Presentation transcript:

1 Information Retrieval Link-based Ranking

2 Ranking is crucial… “…From our experimental data, we could observe that the top 20% of the pages with the highest number of incoming links obtained 70% of the new links after 7 months, while the bottom 60% of the pages obtained virtually no new incoming links during that period…” [Cho et al., 04]

3 Query processing First retrieve all pages meeting the text query. Order these by their link popularity, plus other scores: TF-IDF, title, anchor text, …

4 Query-independent ordering First generation: using link counts as simple measures of popularity. Two basic suggestions: Undirected popularity: each page gets a score given by the number of its in-links plus the number of its out-links (e.g., 3 + 2 = 5). Directed popularity: score of a page = number of its in-links (e.g., 3). Easy to SPAM!!
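The two counting schemes can be sketched in a few lines of Python (the toy graph and page names are hypothetical):

```python
# First-generation link popularity: score pages by raw link counts.
# The toy graph maps each page to the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def directed_popularity(links):
    """Directed popularity: score of a page = number of its in-links."""
    score = {page: 0 for page in links}
    for src, outs in links.items():
        for dst in outs:
            score[dst] = score.get(dst, 0) + 1
    return score

def undirected_popularity(links):
    """Undirected popularity: score = in-links + out-links."""
    score = directed_popularity(links)
    for src, outs in links.items():
        score[src] = score.get(src, 0) + len(outs)
    return score

print(directed_popularity(links))    # "c" collects the most in-links
```

The weakness flagged on the slide is visible here: a spammer who controls a few pages can point them all at "c" and inflate its score at will.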

5 Second generation: PageRank Each link has its own importance!! PageRank is independent of the query. Many interpretations…

6 Basic Intuition… What about nodes with no out-links?

7 Google’s Pagerank B(i): set of pages linking to i. #out(j): number of outgoing links from j. e: vector with all components equal to 1/√N. Random jump.
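The symbols above can be assembled into the usual PageRank equation (a sketch: the damping factor α, typically around 0.85, is an assumption, since the slide only names the random-jump vector e):

```latex
r(i) = \alpha \sum_{j \in B(i)} \frac{r(j)}{\#\mathrm{out}(j)} + (1 - \alpha)\,\frac{1}{N}
```

The second term is the random jump; in the eigenvector formulation it is carried by the vector e.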

8 Pagerank: use in Search Engines Preprocessing: Given graph of links, build matrix P Compute its principal eigenvector The i-th entry is the pagerank of page i We are interested in the relative order Query processing: Retrieve pages meeting query Rank them by their pagerank Order is query-independent

9 Three different interpretations Graph (intuitive interpretation) What is the random jump? What happens to the “real” web graph? Matrix (easy for computation) Turns out to be an eigenvector computation Or a linear system solution Markov Chain (useful to show convergence) a sort of Usage Simulation

10 Pagerank as Usage Simulation Imagine a user doing a random walk on the web: Start at a random page. At each step, go out of the current page along one of the links on that page, equiprobably (probability 1/3 each if the page has three out-links). “In the steady state” each page has a long-term visit rate - use this as the page’s score.

11 Not quite enough The web is full of dead-ends. Random walk can get stuck in dead-ends. Cannot talk about long-term visit rates.

12 Teleporting At each step, with probability 10%, jump to a random page (any node: 0.1). With the remaining probability (90%), go out on an out-link taken at random (neighbors: 0.9).
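A sketch of how the teleporting walk turns into a transition matrix (the graph, page names, and the dead-end handling are illustrative assumptions; dead-ends here teleport uniformly):

```python
# Build the row-stochastic transition matrix of the teleporting surfer:
# with probability `jump` go to a uniformly random page, otherwise follow
# a random out-link; pages with no out-links teleport uniformly.
def transition_matrix(links, pages, jump=0.1):
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}
    P = [[0.0] * n for _ in range(n)]
    for i, p in enumerate(pages):
        outs = links.get(p, [])
        if not outs:                        # dead-end: teleport uniformly
            for j in range(n):
                P[i][j] = 1.0 / n
        else:
            for j in range(n):
                P[i][j] = jump / n          # teleport part
            for q in outs:                  # link-following part
                P[i][idx[q]] += (1 - jump) / len(outs)
    return P

pages = ["a", "b", "c"]
links = {"a": ["b", "c"], "b": ["c"]}       # "c" is a dead-end
P = transition_matrix(links, pages)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)   # every row is stochastic
```

Making every row stochastic is exactly what lets the Markov-chain machinery of the next slides apply.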

13 Result of teleporting Now we CANNOT get stuck locally. There is a long-term rate at which any page is visited. Not obvious → Markov chains.

14 Markov chains A Markov chain consists of n states, plus an n × n transition probability matrix P. At each step, we are in exactly one of the states. For 1 ≤ i, j ≤ n, the matrix entry P_ij tells us the probability of going from state i to state j. Self-loops (P_ii > 0) are OK.

15 Markov chains Clearly, for all i, Σ_j P_ij = 1 (each row of P sums to one). Markov chains abstract random walks. A Markov chain is ergodic if you have a path from any state to any other, and you can be in any state at every time step with non-zero probability.

16 Ergodic Markov chains For any ergodic Markov chain, there is a unique long-term visit rate for each state. Steady-state distribution. Over a long time-period, we visit each state in proportion to this rate. It doesn’t matter where we start.

17 Computing steady-state probabilities Let x = (x_1, …, x_n) denote the row vector of steady-state probabilities. If our current position is described by x, then the next step is distributed as xP. But x is the steady state, so x = xP. Solving this matrix equation gives us x. So x is the (left) eigenvector of P. (It corresponds to the principal eigenvector of P, the one with the largest eigenvalue.)

18 Computing x by Power Method Recall: regardless of where we start, we eventually reach the steady state a. Start with any distribution, say x = (1, 0, …, 0). After one step, we’re at xP; after two steps at xP^2, then xP^3, and so on. “Eventually” means: for “large” k, a ≈ xP^k. Algorithm: multiply x by increasing powers of P until the product looks stable.
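The power method above can be sketched directly (the 3-state row-stochastic matrix is a made-up example):

```python
# Power method: repeatedly multiply the distribution x by P until it
# stabilizes at the steady-state vector a with a = aP.
def power_method(P, steps=100):
    n = len(P)
    x = [1.0] + [0.0] * (n - 1)        # start at state 0: x = (1, 0, ..., 0)
    for _ in range(steps):
        x = [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]
    return x

P = [[0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4],
     [0.3, 0.3, 0.4]]
x = power_method(P)

# Verify the fixed point: x should satisfy x = xP (up to rounding).
xP = [sum(x[i] * P[i][j] for i in range(3)) for j in range(3)]
assert all(abs(x[j] - xP[j]) < 1e-9 for j in range(3))
```

Since every row of P sums to one, each multiplication preserves the total probability mass, so x stays a distribution throughout.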

19 Why do we need fast ranking? “…The link structure of the Web is significantly more dynamic than the contents on the Web. Every week, about 25% new links are created. After a year, about 80% of the links on the Web are replaced with new ones. This result indicates that search engines need to update link-based ranking metrics very often…” [Cho et al., 04]

20 Accelerating PageRank Web Graph Compression to fit in internal memory Efficient external-memory implementation Mathematical approaches Combination of the above strategies

21 Personalized PageRank

22 Influencing PageRank (“Personalization”) Input: Web graph W, influence vector v (v: page → degree of influence). Output: rank vector r (r: page → importance wrt v), r = PR(W, v).

23 Personalized PageRank α is the damping factor; v is a personalization vector.
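In matrix form the personalized variant can be sketched as (assuming a row vector r and a row-stochastic link matrix P built from W):

```latex
\mathbf{r} = \alpha\,\mathbf{r}P + (1 - \alpha)\,\mathbf{v}
```

Plain PageRank is recovered when v is uniform (v_i = 1/N); concentrating v's mass on a user's preferred pages biases the random jump, and hence the ranking, toward them.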

24 HITS: Hubs and Authorities A good hub page for a topic points to many authoritative pages for that topic. A good authority page for a topic is pointed to by many good hubs for that topic. Circular definition - will turn this into an iterative computation.

25 HITS: Hypertext Induced Topic Search It is query-dependent Produces two scores per page: Authority score Hub score

26 Hubs & Authorities Calculation (figure: the query yields a root set, which is expanded into a base set).

27 Assembling the base set Root set typically 200-1000 nodes. Base set may have up to 5000 nodes. How do you find the base set nodes? Follow out-links by parsing root set pages. Get in-links (and out-links) from a connectivity server. (Actually, it suffices to text-index strings of the form href=“URL” to get in-links to URL.)

28 Authority and Hub scores Example (figure): pages 2, 3 and 4 link to page 1, and page 1 links to pages 5, 6 and 7, so: a(1) = h(2) + h(3) + h(4) and h(1) = a(5) + a(6) + a(7).

29 Iterative update Repeat the following updates, for all x: a(x) = Σ_{y→x} h(y) (sum of the hub scores of the pages linking to x) and h(x) = Σ_{x→y} a(y) (sum of the authority scores of the pages x links to).

30 HITS: Link Analysis Computation a = A^T h and h = A a (up to scaling), where a: vector of authorities’ scores; h: vector of hubs’ scores; A: adjacency matrix, in which A_{i,j} = 1 if i points to j. Thus h is an eigenvector of AA^T and a is an eigenvector of A^T A (this is the SVD of A).
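The iterative computation can be sketched as follows (toy graph and names are made up; the per-round scaling is the normalization mentioned above):

```python
# HITS iteration: alternately update authority scores from hub scores
# and hub scores from authority scores, rescaling each round.
def hits(links, pages, iters=50):
    a = {p: 1.0 for p in pages}
    h = {p: 1.0 for p in pages}
    for _ in range(iters):
        # a(x) = sum of h(y) over pages y that link to x
        a = {p: sum(h[q] for q in pages if p in links.get(q, [])) for p in pages}
        # h(x) = sum of a(y) over pages y that x links to
        h = {p: sum(a[q] for q in links.get(p, [])) for p in pages}
        for v in (a, h):                       # scale to keep values bounded
            norm = sum(s * s for s in v.values()) ** 0.5 or 1.0
            for p in v:
                v[p] /= norm
    return a, h

pages = ["x", "y", "z"]
links = {"x": ["y", "z"], "y": ["z"]}
a, h = hits(links, pages)
# "z" is pointed to by both hubs, so it earns the top authority score;
# "x" points to everything, so it earns the top hub score.
assert a["z"] == max(a.values())
assert h["x"] == max(h.values())
```

Unfolding the two updates gives h ← AA^T h and a ← A^T A a, which is exactly the power method converging to the principal eigenvectors named in the theorem on the next slide.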

31 Math Behind the Algorithm Theorem (Kleinberg, 1998). The vectors a(p) and h(p) converge to the principal eigenvectors of A^T A and AA^T, where A is the adjacency matrix of the (directed) Web subgraph.

32 How many iterations? Claim: relative values of scores will converge after a few iterations. We only require the relative orders of the h() and a() scores, not their absolute values. In practice, ~5 iterations get you close to stability. But scores are sensitive to small variations of the graph. Bad!!

33 HITS Example Results (figure: table of authority and hubness weights for pages 1–15).

34 Weighting links In hub/authority link analysis, can match text to query, then weight link.

35 Weighting links The weight should increase with the number of query terms in the anchor text. E.g.: 1 + number of query terms. Example (figure): a page links to www.ibm.com with anchor text “Armonk, NY-based computer giant IBM announced today”; the weight of this link for the query computer is 2.
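The weighting rule can be sketched as (function name is made up):

```python
# Anchor-text weighting: weight = 1 + number of query terms that
# appear in the anchor text of the link.
def link_weight(anchor_text, query_terms):
    words = anchor_text.lower().split()
    return 1 + sum(1 for t in query_terms if t.lower() in words)

anchor = "Armonk, NY-based computer giant IBM announced today"
assert link_weight(anchor, ["computer"]) == 2   # matches the slide's example
```

In the weighted variant of HITS, these per-link weights replace the implicit 1s of the adjacency matrix in the a() and h() updates.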

36 Practical HITS Used by ASK.com. How can we compute it efficiently?

37 Comparison Pagerank: Pros: hard to spam; quality signal for all pages. Cons: non-trivial to compute; not query specific; doesn’t work on small graphs. Proven to be effective for general-purpose ranking. HITS & variants: Pros: query specific; works on small graphs. Cons: local graph structure can be manufactured (spam!); real-time computation is hard. Well suited for supervised directory construction.

38 SALSA Proposed by Lempel and Moran, 2001. A probabilistic extension of the HITS algorithm. The random walk is carried out by following hyperlinks both in the forward and in the backward direction. Two separate random walks: a hub walk and an authority walk.

39 Forming a Bipartite Graph in SALSA

40 SALSA: Random Walks Hub walk: follow a forward-link from a hub u_h to an authority w_a, then traverse a back-link from w_a to some hub v_h'. Authority walk: follow a back-link from an authority w_a to a hub u_h, then traverse a forward-link from u_h to an authority z_a. Lempel and Moran (2001) proved that SALSA weights are more robust than HITS weights in some cases. The authority score converges to the in-degree, and the hub score converges to the out-degree.
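The degree convergence noted above gives SALSA a simple closed form per connected component of the bipartite support graph; here is a sketch assuming a single component (helper name and toy graph are made up):

```python
# SALSA closed form on a single-component graph: authority weight is
# proportional to in-degree, hub weight to out-degree, each divided by
# the total number of edges so the scores sum to one.
def salsa_scores(links, pages):
    total_edges = sum(len(outs) for outs in links.values())
    indeg = {p: 0 for p in pages}
    for outs in links.values():
        for q in outs:
            indeg[q] += 1
    authority = {p: indeg[p] / total_edges for p in pages}
    hub = {p: len(links.get(p, [])) / total_edges for p in pages}
    return authority, hub

pages = ["x", "y", "z"]
links = {"x": ["y", "z"], "y": ["z"]}
authority, hub = salsa_scores(links, pages)
assert authority["z"] == 2 / 3     # 2 of the 3 edges point at "z"
assert hub["x"] == 2 / 3           # "x" contributes 2 of the 3 edges
```

This is why SALSA is cheap to compute compared with the iterative HITS eigenvector calculation, at the price of ignoring the mutual-reinforcement effect.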

41 Information Retrieval Spamming

42 Spamming PageRank Spam Farm (SF), rules of thumb: Use all available own pages in the SF. Accumulate the maximum number of in-links to the SF. NO links pointing outside the SF. Avoid dangling nodes within the SF.

43 Spamming HITS Easy to spam: create a new page p pointing to many authority pages (e.g., Yahoo, Google, etc.) ⇒ p becomes a good hub page… Then, on p, add a link to your home page.

44 Many applications Ranking: Web, News, Financial institutions, Peers, Human trust, Viruses, Social networks. You propose!

45 Information Retrieval Other tools

46

47

48 snippet

49 Joint with A. Gullì, WWW 2005, Tokyo (JP)

50

51

52

53 SnakeT’s personalization Many interesting features: Full adaptivity to varied user needs/behaviors. Scalable to many users and their unbounded profiles. Privacy protection: no login or profile information is required/used. Black-box: no change in the underlying (unpersonalized) search engine. The user first gets a “glimpse” of all themes of results without scanning all of them, and then selects (filters) the results interesting to him/her.

