Scaling Personalized Web Search Glen Jeh, Jennifer Widom Stanford University Presented by Li-Tal Mashiach Search Engine Technology course (236620) Technion.


1 Scaling Personalized Web Search Glen Jeh, Jennifer Widom Stanford University Presented by Li-Tal Mashiach Search Engine Technology course (236620) Technion

2 Today’s topics Overview Motivation Personalized PageRank Vector Efficient calculation of PPV Experimental results Discussion

3 PageRank Overview A ranking method for web pages based on the link structure of the web. Important pages are those linked to by many important pages. Original PageRank has no initial preference for any particular pages.

4 PageRank Overview The ranking is based on the probability that a random surfer will visit a certain page at a given time. The jump distribution E(p) can be: uniformly distributed, or biased toward selected pages.

5 Motivation We would like to give higher importance to user-selected pages. A user may have a set P of preferred pages. Instead of jumping to any random page with probability c, the jump is restricted to P. That way, we increase the probability that the random surfer will stay in the near environment of the pages in P. Considering P creates a personalized view of the importance of pages on the web.

6 Personalized PageRank Vector (PPV) Restrict preference sets P to subsets of a set of hub pages H (pages with high PageRank). The PPV is a vector of length n, where n is the number of pages on the web; PPV[p] = the importance of page p.

7 PPV Equation v = (1−c)·A·v + c·u, where: u – preference vector, |u| = 1, and u(p) = the amount of preference for page p; A – n x n matrix derived from the web graph (A[i][j] = 1/|O(j)| if page j links to page i, else 0); c – the probability the random surfer jumps to a page in P.
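The PPV equation v = (1−c)·A·v + c·u can be solved by plain power iteration. A minimal sketch in Python, assuming a hypothetical 4-page link structure, c = 0.15, and an illustrative tolerance (none of these are from the paper's experiments):

```python
# Minimal sketch: solve the PPV fixed point v = (1-c)*A*v + c*u by
# power iteration. The 4-page graph, c = 0.15, and the tolerance are
# hypothetical illustrative values, not the paper's dataset.

def ppv(out_links, u, c=0.15, tol=1e-10):
    """Iterate v <- (1-c)*A*v + c*u until the change is below tol."""
    n = len(out_links)
    v = [1.0 / n] * n
    while True:
        nxt = [c * u[i] for i in range(n)]          # jump to preferred pages
        for j, targets in enumerate(out_links):
            share = (1 - c) * v[j] / len(targets)   # follow a random out-link
            for i in targets:
                nxt[i] += share
        if max(abs(a - b) for a, b in zip(nxt, v)) < tol:
            return nxt
        v = nxt

# Hypothetical link structure: out_links[j] lists the pages j links to.
graph = [[1, 2], [2], [0], [0, 2]]
u = [1.0, 0.0, 0.0, 0.0]   # all preference concentrated on page 0
v = ppv(graph, u)
print([round(x, 4) for x in v])
```

Because the preference is concentrated on page 0, the scores stay high in page 0's neighborhood, while page 3, which no page links to, ends up with score 0.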

8 PPV – Problem Not practical to compute PPVs during query time. Not practical to compute and store offline: there are 2^|H| possible preference sets. How to calculate PPV? How to do it efficiently?

9 Main Steps to Solution Break down preference vectors into common components. Divide the computation between offline (lots of time) and online (focused computation). This eliminates redundant computation.

10 Linearity Theorem The solution to a linear combination of preference vectors is the same linear combination of the corresponding PPVs. Let x_i be a unit vector, and let r_i be the PPV corresponding to x_i, called a hub vector.
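The theorem can be checked numerically. A self-contained sketch on a hypothetical 4-page graph (the graph, c = 0.15, the iteration count, and the 0.5/0.5 mix are all illustrative assumptions):

```python
# Sketch of the Linearity Theorem: the PPV of a blended preference
# u = 0.5*x0 + 0.5*x1 equals 0.5*r0 + 0.5*r1, where r_i is the hub
# vector for the unit preference vector x_i.

def ppv(out_links, u, c=0.15, iters=200):
    """Power iteration for v = (1-c)*A*v + c*u (fixed iteration count)."""
    n = len(out_links)
    v = [1.0 / n] * n
    for _ in range(iters):
        nxt = [c * u[i] for i in range(n)]
        for j, ts in enumerate(out_links):
            for i in ts:
                nxt[i] += (1 - c) * v[j] / len(ts)
        v = nxt
    return v

graph = [[1, 2], [2], [0], [0, 2]]    # hypothetical link structure
r0 = ppv(graph, [1, 0, 0, 0])         # hub vector for page 0
r1 = ppv(graph, [0, 1, 0, 0])         # hub vector for page 1
mixed = ppv(graph, [0.5, 0.5, 0, 0])  # PPV of the blended preference
combo = [0.5 * a + 0.5 * b for a, b in zip(r0, r1)]
assert all(abs(a - b) < 1e-8 for a, b in zip(mixed, combo))
print("linear combination of hub vectors matches the mixed PPV")
```

This is exactly why precomputing one hub vector per page of H suffices: any preference vector supported on H is answered by mixing stored hub vectors.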

11 Example [Figure: David’s personal preferences combine unit vectors x1, x2, x12, so his PPV is the same combination of the hub vectors r1, r2, r12.]

12 Good, but not enough… If the hub vector r_i for each page in H can be computed ahead of time and stored, then computing a PPV is easier. The number of pre-computed PPVs decreases from 2^|H| to |H|. But… each hub vector computation requires multiple scans of the web graph, and time and space grow linearly with |H|. The solution so far is impractical.

13 Decomposition of Hub Vectors In order to compute and store the hub vectors efficiently, we can further break them down into: Partial vectors – the unique component of each hub vector; Hubs skeleton – encodes the interrelationships among hub vectors. The full hub vector is constructed at query time. This saves computation time and storage due to sharing of components among hub vectors.

14 Inverse P-distance Hub vector r_p can be represented as an inverse P-distance vector: r_p(q) = Σ_t P[t]·c·(1−c)^l(t), summed over all paths t from p to q, where l(t) – the number of edges in path t, and P[t] – the probability of traveling on path t.

15 Partial Vectors Breaking r_p into two components: the partial vector (r_p − r_p^H) – computed from paths that do not pass through any page of H; and the rest, r_p^H – paths that go through some page of H. For well-chosen sets H, (r_p − r_p^H)(q) = 0 for many pages q, so partial vectors are small.

16 Good, but not enough… Precompute and store the partial vector (r_p − r_p^H): it is cheaper to compute and store than r_p, and its size decreases as |H| increases. Add r_p^H at query time to compute the full hub vector. But… computing and storing r_p^H could be as expensive as r_p itself.

17 Hubs Skeleton Breaking down r_p^H: the hubs skeleton r_p(H) – the set of distances among hub pages, giving the interrelationships among partial vectors. For each p, r_p(H) has size at most |H|, much smaller than the full hub vector. This handles the case where p or q is itself in H (paths that go through some hub page).

18 [Equation slide: the Hubs Theorem, expressing a full hub vector in terms of partial vectors and the hubs skeleton.]

19 Example [Figure: example web graph with hub set H and pages a, b, c, d.]

20 Putting it all together Given a chosen preference set P: 1. Form a preference vector u. 2. Calculate the hub vector r_i for each i, from the pre-computed partial vectors and the hubs skeleton (the hubs-skeleton part may be deferred to query time). 3. Combine the hub vectors.

21 Algorithms Decomposition theorem – basic dynamic programming algorithm. Partial vectors – selective expansion algorithm. Hubs skeleton – repeated squaring algorithm.

22 Decomposition theorem The basis vector r_p is the average of the basis vectors of its out-neighbors (scaled by 1−c), plus a compensation factor c·x_p. This defines relationships among basis vectors: having computed the basis vectors of p’s out-neighbors to a certain precision, we can use the theorem to compute r_p to greater precision.

23 Basic dynamic programming algorithm Using the decomposition theorem, we can build a dynamic programming algorithm that iteratively improves the precision of the calculation. On iteration k, only paths with length ≤ k−1 are considered. The error is reduced by a factor of 1−c on each iteration.
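As a rough illustration, the sketch below refines a single basis vector r_p by repeatedly applying its fixed-point equation r_p = c·x_p + (1−c)·A·r_p; this is a single-vector stand-in for the paper's all-vectors recurrence, on a hypothetical 4-page graph with c = 0.15. The unaccounted probability mass shrinks by a factor of 1−c per round, matching the stated error bound.

```python
# Hedged sketch: iterate the fixed-point equation for one basis vector
# r_p, starting from D_0 = c*x_p. Each round extends the set of paths
# reflected in the approximation by one edge, and the missing mass
# shrinks by a factor (1-c). Graph and c are hypothetical.

def dp_basis_vector(out_links, p, c=0.15, iters=50):
    n = len(out_links)
    d = [0.0] * n
    d[p] = c                       # D_0 = c * x_p
    for _ in range(iters):
        nxt = [0.0] * n
        nxt[p] = c                 # compensation factor at p
        for j, ts in enumerate(out_links):
            for i in ts:
                nxt[i] += (1 - c) * d[j] / len(ts)
        d = nxt
    return d

graph = [[1, 2], [2], [0], [0, 2]]   # hypothetical link structure
for k in (1, 5, 20):
    d = dp_basis_vector(graph, 0, iters=k)
    print(k, round(1.0 - sum(d), 6))  # residual shrinks like (1-c)^k
```

Printing the residual 1 − |D_k| for growing k makes the geometric 1−c decay visible directly.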

24 Computing partial vectors Selective expansion algorithm: tours passing through a hub page in H are never considered; the expansion from p stops when reaching a page of H.
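A sketch of the idea (not the paper's exact bookkeeping): residual mass is expanded out of ordinary pages but never out of hub pages, so tours whose interior touches H are dropped. The 4-page graph, c = 0.15, the round count, and the hub set H = {2} are hypothetical.

```python
# Hedged sketch of selective expansion for partial vectors. D collects
# partial-vector mass; E is residual mass still to be expanded.
# Residual sitting on a hub page (other than the start p, initially)
# is never expanded further, so paths through H's interior are excluded;
# paths whose only H-page is the endpoint still count.

def partial_vector(out_links, p, H, c=0.15, rounds=60):
    n = len(out_links)
    D = [0.0] * n
    E = [0.0] * n
    E[p] = 1.0
    first = True
    for _ in range(rounds):
        newE = [0.0] * n
        for q in range(n):
            if E[q] == 0.0:
                continue
            D[q] += c * E[q]       # paths ending here
            if q in H and not (first and q == p):
                continue           # stop: never expand out of a hub page
            for i in out_links[q]:
                newE[i] += (1 - c) * E[q] / len(out_links[q])
        E = newE
        first = False
    return D

graph = [[1, 2], [2], [0], [0, 2]]
full = partial_vector(graph, 0, H=set())   # no hubs: full hub vector r_0
part = partial_vector(graph, 0, H={2})     # partial vector avoiding page 2
print(sum(part), "<=", sum(full))
```

With H = ∅ the routine reproduces the full hub vector, so the partial vector is componentwise dominated by it; the dropped mass corresponds to the r_p^H part that the hubs skeleton is meant to reconstruct at query time.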

25 Computing hubs skeleton Repeated squaring algorithm: uses the intermediate results from the computation of partial vectors. The error is squared on each iteration, which reduces it much faster. Running time and storage depend only on the size of r_p(H), which allows the computation to be deferred to query time.

26 Experimental results Experiments were performed using real web data from Stanford’s WebBase, containing 80 million pages after removing leaf pages. Experiments were run using a 1.4 gigahertz CPU on a machine with 3.5 gigabytes of memory.

27 Experimental results The partial vector approach is much more effective when H contains high-PageRank pages. H was taken from the top 1,000 to the top 100,000 pages with the highest PageRank.

28 Experimental results Computed the hubs skeleton for |H| = 10,000. The average size is 9,021 entries, much less than the dimension of the full hub vectors.

29 Experimental results Instead of using the entire set r_p(H), only the highest m entries are used. A hub vector containing 14 million nonzero entries can be constructed from partial vectors in 6 seconds.

30 Discussion Are personalized PageRanks even useful? What if personally chosen pages are not representative enough, or too focused? Even if the overhead scales with the number of pages, do light web users want to accept that overhead? Performance depends on the choice of personal pages.

31 References Scaling Personalized Web Search, Glen Jeh and Jennifer Widom, WWW2003. Personalized PageRank seminar: Link mining, http://www.informatik.uni-freiburg.de/~ml/teaching/ws04/lm/20041207_PageRank_Alcazar.ppt

