Page Quality: In Search of an Unbiased Web Ranking
Seminar on Databases and the Internet, Hebrew University of Jerusalem, Winter 2008
Ofir Cooper (ofir.cooper@gmail.com)
References
- Page Quality: In Search of an Unbiased Web Ranking. Junghoo Cho, Sourashis Roy, Robert E. Adams. UCLA Computer Science Department, June 2005.
- Impact of Search Engines on Page Popularity. Junghoo Cho, Sourashis Roy. UCLA Computer Science Department, May 2004.
Overview
- The ranking algorithm search engines currently use.
- Motivation to improve it.
- The proposed method.
- Implementation.
- Experimental results.
- Conclusions (problems & future work).
Search engines today
Search engines today use a variant of the PageRank rating system to sort relevant results. PageRank (PR) tries to measure the "importance" of a page by measuring its popularity.
What is PageRank?
Based on the random-surfer web-user model:
- A person starts surfing the web at a random page.
- The person advances by clicking a link on the page, selected at random.
- At each step, there is a small chance the person will jump to a new, random page.
This model does not take search engines into account.
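As an illustration of the model (not code from the paper), here is a minimal random-surfer simulation on a hypothetical toy graph; the fraction of steps the surfer spends on each page approximates that page's PageRank:

```python
import random

# Toy web graph (assumed for illustration): page -> list of outgoing links.
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["a", "c"],   # nothing links to "d", so the surfer rarely lands there
}

def random_surfer(graph, steps=200_000, jump_prob=0.15):
    """Simulate the random-surfer model and return visit fractions per page."""
    pages = list(graph)
    visits = dict.fromkeys(pages, 0)
    page = random.choice(pages)
    for _ in range(steps):
        visits[page] += 1
        if not graph[page] or random.random() < jump_prob:
            page = random.choice(pages)          # random jump
        else:
            page = random.choice(graph[page])    # follow a random link
    return {p: round(v / steps, 3) for p, v in visits.items()}

print(random_surfer(graph))  # visit fractions approximate PageRank
```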
What is PageRank?
PageRank (PR) measures the probability that the random surfer is at page p at any given time. It is computed by the formula
PR(p) = (1 − d)/N + d · Σ_{q → p} PR(q) / out(q)
where N is the total number of pages, the sum runs over the pages q that link to p, out(q) is the number of links out of page q, and d is the probability of following a link (so 1 − d is the chance of a random jump).
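For concreteness, here is a minimal power-iteration sketch of this formula; the toy graph and the common damping value d = 0.85 are assumptions for illustration, not values from the slides:

```python
def pagerank(graph, d=0.85, iterations=50):
    """Power iteration for PR(p) = (1-d)/N + d * sum(PR(q)/out(q) over q -> p)."""
    pages = list(graph)
    n = len(pages)
    pr = dict.fromkeys(pages, 1.0 / n)
    for _ in range(iterations):
        new = dict.fromkeys(pages, (1 - d) / n)        # random-jump term
        for q, links in graph.items():
            if links:
                for p in links:
                    new[p] += d * pr[q] / len(links)   # link-following term
            else:
                for p in pages:                        # dangling page: spread evenly
                    new[p] += d * pr[q] / n
        pr = new
    return pr

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["a", "c"]}
print(pagerank(graph))
```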
The problem with PR
PageRank creates a "rich-get-richer" phenomenon: popular pages become even more popular over time, while unpopular pages are hardly ever visited because they remain unknown to users. They are doomed to obscurity.
The problem with PR
This was observed in an experiment: the Open Directory (http://dmoz.org) was sampled twice within a seven-month period, and the change in the number of incoming links to each page was recorded. Pages were divided into popularity groups; the results are shown next.
The bias against low-PageRank pages
In their study, Cho and Roy show that in a search-dominant world, the discovery time of new pages rises by a factor of 66!
What can be done to remedy the situation?
Ranking should reflect the quality of pages. Popularity is not a good enough measure of quality, because there are many good yet unknown pages. We want to give new pages an equal opportunity (if they are of high quality).
How to define "quality"?
Quality is a very subjective notion, but let's try to define it anyway.
Page quality: the probability that an average user will like the page when he/she visits it for the first time.
How to estimate quality?
Quality could be measured exactly if we showed the page to all users and asked their opinion, but it is impractical to ask users their opinion on every page they visit. (PageRank would be a good measure of quality if all pages had been given an equal opportunity to be discovered; that is no longer the case.)
How to estimate quality?
We want to estimate quality from measurable quantities. We can talk about these quantities:
- Q(p) – page quality: the probability that a user will like page p when exposed to it for the first time.
- P(p,t) – page popularity: the fraction of users who like p at time t.
- V(p,t) – visit popularity: the number of visits page p receives in a unit time interval at time t.
- A(p,t) – page awareness: the fraction of web users who are aware of page p at time t.
Estimating quality
Lemma 1: P(p,t) = A(p,t) · Q(p). Proof: follows directly from the definitions.
This alone is not sufficient, because we cannot measure awareness; we can only measure page popularity P(p,t). How do we estimate Q(p) from P(p,t) alone?
Estimating quality
Observation 1: popularity (as measured by incoming links) measures quality well for pages of the same age.
Observation 2: the popularity of new high-quality pages increases faster than the popularity of new low-quality pages. In other words, the time derivative of popularity is also a measure of quality.
Estimating quality
We need a web-user model to link popularity and quality. We start with these two assumptions:
1. Visit popularity is proportional to popularity: V(p,t) = r·P(p,t).
2. Random-visit hypothesis: a visit to page p can come from any user with equal probability.
Estimating quality
Lemma 2: A(p,t) can be computed from past popularity:
A(p,t) = 1 − e^(−(r/n) ∫₀ᵗ P(p,τ) dτ)
where n is the number of web users.
Estimating quality
Proof: by time t, page p has been visited k = ∫₀ᵗ V(p,τ) dτ = r·∫₀ᵗ P(p,τ) dτ times. We compute the probability that some user u is not aware of p after p has been visited k times.
Estimating quality
Pr(the i-th visitor to p is not u) = 1 − 1/n, so
Pr(u has not visited p | p was visited k times) = (1 − 1/n)^k ≈ e^(−k/n).
Substituting k = r·∫₀ᵗ P(p,τ) dτ gives A(p,t) = 1 − e^(−(r/n) ∫₀ᵗ P(p,τ) dτ).
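A quick Monte Carlo sanity check of this step (a sketch with assumed constants n and k, not an experiment from the paper): give page p k visitors drawn uniformly from n users, and compare the measured awareness with the formula.

```python
import math
import random

# Assumed illustrative constants: n users, k total visits to page p.
n, k = 10_000, 25_000

aware = set()
for _ in range(k):
    aware.add(random.randrange(n))   # random-visit hypothesis: any user, equal probability

print("simulated awareness:", len(aware) / n)
print("1 - e^(-k/n):       ", 1 - math.exp(-k / n))
```

Both numbers come out near 0.92 for these constants, matching the (1 − 1/n)^k ≈ e^(−k/n) approximation.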
Estimating quality
We can combine Lemmas 1 and 2 to get popularity as a function of time.
Theorem: P(p,t) = Q(p) / (1 + (Q(p)/P(p,0) − 1) · e^(−(r·Q(p)/n)·t)).
The proof is a bit long, so we won't go into it (available in hard copy, to those interested).
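A short numerical sketch of the curve this theorem predicts; the quality Q, visit rate r/n, and initial popularity P0 are illustrative assumptions:

```python
import math

Q = 0.8          # page quality (assumed)
r_over_n = 0.05  # visit rate per user (assumed)
P0 = 0.001       # initial popularity: the page starts out almost unknown

for t in range(0, 301, 50):
    P = Q / (1 + (Q / P0 - 1) * math.exp(-(r_over_n * Q) * t))
    print(f"t={t:3d}  P(p,t)={P:.4f}")
```

The printed values rise along an S-curve and converge to Q = 0.8, which is exactly the shape described on the next slide.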
Estimating quality
This is popularity vs. time as predicted by our formula: an S-shaped curve that rises slowly at first, accelerates, and then levels off. (This trend was seen in practice by companies such as NetRatings.)
Estimating quality
Important fact: popularity converges to quality over a long period of time. We will use this fact later to check our quality estimates.
Estimating quality
Lemma 3: dP(p,t)/dt = (r/n) · P(p,t) · (Q(p) − P(p,t)).
Proof: differentiate the equation P(p,t) = A(p,t)·Q(p) with respect to time, plug in the expression we found for A(p,t) in Lemma 2, and simplify.
Estimating quality
We define the "relative popularity increase function":
I(p,t) = (n/r) · (dP(p,t)/dt) / P(p,t).
Theorem: Q(p) = I(p,t) + P(p,t) at all times.
Proof: by Lemma 3, dP/dt = (r/n)·P·(Q − P), so I(p,t) = (n/r)·(dP/dt)/P = Q(p) − P(p,t). Adding P(p,t) to both sides gives the result.
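A quick numerical check of the theorem (a sketch; all constants are assumed): generate a popularity curve from the earlier theorem, approximate I(p,t) with a finite difference, and verify that I + P stays near Q.

```python
import math

# Assumed illustrative constants.
Q, r_over_n, P0, dt = 0.7, 0.05, 0.001, 1.0

def P(t):
    """Popularity curve from the popularity-evolution theorem."""
    return Q / (1 + (Q / P0 - 1) * math.exp(-(r_over_n * Q) * t))

for t in range(50, 301, 50):
    dP_dt = (P(t + dt) - P(t - dt)) / (2 * dt)   # centered finite difference
    I = (1 / r_over_n) * dP_dt / P(t)            # relative popularity increase
    print(f"t={t:3d}  P={P(t):.3f}  I+P={I + P(t):.3f}  (true Q = {Q})")
```

I + P stays at roughly 0.7 across the whole curve, even while P itself is still far from Q, which is exactly why the estimator works for young pages.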
Estimating quality
We can now estimate the quality of a page by measuring only its popularity. But what happens if quality changes over time? Is our estimate still good?
Quality change over time
In reality, the quality of pages changes: web pages change, and users' expectations rise as better pages appear all the time. Will the model handle changing quality well?
Quality change over time
Theorem: if quality changed at time T (from Q₁ to Q₂), then for t > T the estimate still recovers the current quality:
Q₂(p) = I(p,t) + P(p,t).
Quality change over time
Proof: after time T, we divide users into three groups:
1. Users who visited the page before T (group u₁).
2. Users who visited the page after T (group u₂).
3. Users who never visited the page.
Quality change over time
The fraction of users who like the page at time t > T:
P(p,t) = Q₁ · |u₁ \ u₂(t)| / n + Q₂ · |u₂(t)| / n.
As time passes, group u₂(t) expands while u₁ remains the same, so we will have to compute |u₂(t)|.
Quality change over time
From the proof of Lemma 2 (the calculation of awareness at time t) it is easy to see that:
|u₂(t)| / n = 1 − e^(−(r/n) ∫_T^t P(p,τ) dτ).
Quality change over time
Size of |u₁ \ u₂(t)|:
|u₁ \ u₂(t)| = |u₁| − |u₁|·|u₂(t)| / n.
The size of the intersection of u₁ and u₂(t) is the product of their fractions, because the groups are independent: according to the random-visit hypothesis, the probability that a user visits page p at time t is independent of his past visit history.
Quality change over time
Writing a₁ = |u₁|/n and a₂(t) = |u₂(t)|/n, we get P(p,t) = Q₁·a₁ + (Q₂ − Q₁·a₁)·a₂(t). Differentiating, and using da₂/dt = (r/n)·P·(1 − a₂), gives dP/dt = (r/n)·P·(Q₂ − P). Therefore I(p,t) = (n/r)·(dP/dt)/P = Q₂ − P(p,t), and so I(p,t) + P(p,t) = Q₂.
Q.E.D.
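A numeric illustration of this theorem (constants assumed; popularity evolves by the Lemma 3 dynamics with the current quality, so here I + P matches Q by construction, while a real estimator would use finite differences of measured popularity):

```python
# Quality switches from Q1 to Q2 at time T; I + P immediately tracks Q2.
Q1, Q2, T = 0.3, 0.8, 150        # assumed illustrative values
r_over_n, dt = 0.05, 1.0

P = 0.01                          # assumed initial popularity
for t in range(301):
    Q = Q1 if t < T else Q2
    dP_dt = r_over_n * P * (Q - P)           # Lemma 3 with the current quality
    I = (1 / r_over_n) * dP_dt / P           # relative popularity increase
    if t % 50 == 0:
        print(f"t={t:3d}  P={P:.3f}  I+P={I + P:.3f}  current Q={Q}")
    P += dP_dt * dt
```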
Implementation
The implementation of a quality-estimator system is very simple:
1. "Sample" the web at different times.
2. Compute popularity (PageRank) for each page, and the change in popularity.
3. Estimate the quality of each page.
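A minimal sketch of the estimation step, assuming two PageRank snapshots are already available; the function name and snapshot values are hypothetical, and the unknown scale n/r is taken to be 1 (it only rescales the correction term):

```python
def estimate_quality(pr_old, pr_new, dt=1.0, n_over_r=1.0):
    """Estimate Q ~= P + I from two PageRank snapshots (finite-difference I)."""
    quality = {}
    for page, p_new in pr_new.items():
        p_old = pr_old.get(page, 0.0)
        avg = (p_old + p_new) / 2
        if avg == 0:
            quality[page] = 0.0
            continue
        i = n_over_r * ((p_new - p_old) / dt) / avg   # relative popularity increase
        quality[page] = p_new + i
    return quality

pr_t1 = {"a": 0.30, "b": 0.10, "c": 0.60}   # hypothetical crawl-1 PageRank
pr_t2 = {"a": 0.28, "b": 0.20, "c": 0.52}   # hypothetical crawl-2 PageRank
print(estimate_quality(pr_t1, pr_t2))        # rising "b" gets boosted; falling "c" is penalized
```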
Implementation
But there are problems with this implementation:
1. Approximation error – we sample at discrete time points, not continuously.
2. Quality change between samples makes the estimate inaccurate.
3. There is a time lag: the quality estimate will never be up to date.
Implementation
Examining the approximation error: simulation with Q = 0.5 and ∆t = 1 (time units not specified!).
Implementation
Examining a slow change in quality: simulations with Q(p,t) = 0.4 + 0.0006t and Q(p,t) = 0.5 + ct.
Implementation
Examining a rapid change in quality.
The Experiment
Evaluating a web metric such as quality is difficult. Quality is subjective. There is no standard corpus. Doing a user survey is not practical.
The Experiment
The experiment is based on the observation that popularity converges to quality (assuming quality is constant). If we estimate the quality of pages and wait some time, we can check our estimates against the eventual popularity.
The Experiment
The test was done on 154 web sites obtained from the Open Directory (http://dmoz.org). All pages of these web sites were downloaded (~5 million).
Four snapshots were taken. The first three snapshots were used to estimate quality, and the fourth snapshot was used to check the prediction.
The Experiment
Only "stable" pages were included (pages whose quality estimates did not change much). This turned out to be most pages.
The Experiment
The "true" quality is taken to be the popularity PR at snapshot 4. The quality estimator (computed from the first three snapshots) is measured against the baseline of raw PR at snapshot 3.
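A sketch of how such an evaluation can be scored, with hypothetical snapshot values: each predictor is measured by its average relative error against PR at snapshot 4.

```python
# PR at snapshot 4 serves as ground-truth quality; lower average error is better.
def avg_relative_error(predicted, actual):
    errors = [abs(predicted[p] - actual[p]) / actual[p]
              for p in actual if actual[p] > 0]
    return sum(errors) / len(errors)

pr4 = {"a": 0.25, "b": 0.30, "c": 0.45}   # "true" quality: PR at snapshot 4
q3  = {"a": 0.24, "b": 0.28, "c": 0.46}   # quality estimate from snapshots 1-3
pr3 = {"a": 0.30, "b": 0.15, "c": 0.55}   # baseline: raw PR at snapshot 3

print("quality-estimator error:", avg_relative_error(q3, pr4))
print("PageRank baseline error:", avg_relative_error(pr3, pr4))
```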
The Experiment The results:
The quality-estimator metric seems better than PageRank. Its average error is smaller:
- Average error of the Q3 estimator = 45%
- Average error of the P3 estimator = 74%
The distribution of the error is also better.
Summary & Conclusions
Summary
We saw the bias created by search engines. A more desirable ranking would rank pages by quality, not popularity.
Summary
We can estimate quality from the link structure of the web (popularity and its evolution over time). Implementation is feasible, and only slightly different from the current PageRank system.
Summary
Experimental results show that the quality estimator is better than PageRank.
Conclusions
Problems & future work:
- Statistical noise is not negligible for pages with low popularity.
- The experiment was done on a small scale; it should be tried on a large scale.
Conclusions
Problems & future work:
- Can we use the number of "visits" to pages, instead of the number of incoming links, to estimate the popularity increase?
- The theory is based on a web-user model that does not take search engines into account. That is unrealistic in this day and age.
Follow-up suggestions
Many more interesting publications are available on Junghoo Cho's website: http://oak.cs.ucla.edu/~cho/
Such as:
- Estimating Frequency of Change
- Shuffling the deck: Randomizing search results
- Automatic Identification of User Interest for Personalized Search
Other algorithms for ranking
Extra material can be found at http://www.seoresearcher.com/category/link-popularity-algorithms/
Algorithms such as:
- Hubs and authorities (HITS)
- HUBAVG