1
Google Search and Information Retrieval on the Internet
Zhi-Ming Ma, May 16, 2008
3
About 626,000 results for "Academy of Mathematics and Systems Science, Chinese Academy of Sciences"; showing results 1-100. (Search took 0.45 seconds.)
How can Google produce a ranking of 626,000 pages in 0.45 seconds?
4
A main task of Internet (Web) Information Retrieval = the design and analysis of search engine (SE) algorithms, involving plenty of mathematics (algorithm-based web search technology).
5
HITS: Jon Kleinberg, Cornell University
PageRank: Sergey Brin and Larry Page, Stanford University
6
Nevanlinna Prize (2006): Jon Kleinberg
One of Kleinberg's most important research achievements focuses on the internetwork structure of the World Wide Web. Prior to Kleinberg's work, search engines focused only on the content of web pages, not on the link structure. Kleinberg introduced the ideas of "authorities" and "hubs": an authority is a web page that contains information on a particular topic, and a hub is a page that contains links to many authorities.
7
PageRank, the ranking system used by the Google search engine, is query-independent and content-independent: it uses only the web graph structure.
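The computation behind such a ranking is the power method on the web graph. Below is a minimal sketch in Python; the four-page graph and the damping factor 0.85 are illustrative assumptions, not data from the talk:

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10):
    """Power iteration for PageRank on a directed web graph."""
    n = adj.shape[0]
    out = adj.sum(axis=1)
    # Row-normalize; a dangling page (no out-links) jumps uniformly.
    P = np.where(out[:, None] > 0, adj / np.maximum(out, 1)[:, None], 1.0 / n)
    pi = np.full(n, 1.0 / n)
    while True:
        new = alpha * (pi @ P) + (1 - alpha) / n
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new

# Toy 4-page web graph (hypothetical): adj[i, j] = 1 if page i links to j.
adj = np.array([[0, 1, 0, 1],
                [0, 0, 1, 1],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
pi = pagerank(adj)  # the ranks sum to 1
```

Because the iteration contracts at rate alpha, only a few dozen passes over the graph are needed even for very large webs, which is what makes sub-second ranking feasible.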
15
PageRank as a Function of the Damping Factor (WWW paper)
Paolo Boldi, Massimo Santini, Sebastiano Vigna, DSI, Università degli Studi di Milano
3 General Behaviour; 3.1 Choosing the damping factor; 3.2 Getting close to 1
Can we somehow characterise the properties of the limit of the PageRank vector $\lim_{\alpha\to 1} r(\alpha)$? What makes it different from the other (infinitely many, if $P$ is reducible) limit distributions of $P$?
16
Conjecture 1: $\lim_{\alpha\to 1} r(\alpha)$ is the limit distribution of $P$ when the starting distribution is uniform, that is, $\lim_{\alpha\to 1} r(\alpha) = \lim_{k\to\infty} \tfrac{1}{n}\mathbf{1}^{\top} P^{k}$.
19
Websites provide plenty of information:
- Pages in the same website may share the same IP, run on the same web server and database server, and be authored/maintained by the same person or organization.
- There may be high correlations between pages in the same website, in terms of content, page layout, and hyperlinks.
- Websites contain a higher density of hyperlinks inside them (about 75%) and a lower density of edges between them.
22
The HostGraph loses much transition information. Can a surfer jump from page 5 of site 1 to a page in site 2?
23
From: s06-pc-chairs-email@u.washington
Sent: April 4, 2006, 8:36
To: Tie-Yan Liu
Subject: [SIGIR2006] Your Paper #191, Title: AggregateRank: Bringing Order to Web Sites
Congratulations!!
29th Annual International ACM SIGIR Conference on Research & Development on Information Retrieval (SIGIR'06, August 6-11, 2006, Seattle, Washington, USA).
24
Ranking Websites, a Probabilistic View
Internet Mathematics, Volume 3 (2007), Issue 3 Ying Bao, Gang Feng, Tie-Yan Liu, Zhi-Ming Ma, and Ying Wang
25
--- We suggest evaluating the importance of a website by the mean frequency with which it is visited by the Markov chain on the Internet Graph that describes random surfing.
--- We show that this mean frequency is equal to the sum of the PageRanks of all the webpages in that website (hence it is referred to as PageRankSum).
26
--- We propose a novel algorithm (the AggregateRank Algorithm), based on the theory of stochastic complementation, to calculate the rank of a website.
--- The AggregateRank Algorithm approximates PageRankSum accurately, while its computational complexity is much lower than that of PageRankSum.
27
--- By constructing return-time Markov chains restricted to each website, we also describe the probabilistic relation between PageRank and AggregateRank.
--- The complexity and the error bound of the AggregateRank Algorithm, with experiments on real data, are discussed at the end of the paper.
28
n webpages in N websites
29
The stationary distribution, known as the PageRank vector, is given by $\pi^{\top} = \pi^{\top}\big(\alpha P + (1-\alpha)\tfrac{1}{n}\mathbf{e}\mathbf{e}^{\top}\big)$. We may rewrite the stationary distribution as $\pi = (\pi_{1}, \pi_{2}, \ldots, \pi_{N})$, with $\pi_{k}$ a row vector of length $n_{k}$, the number of pages in website $S_{k}$.
30
We define the one-step transition probability from website $S_{i}$ to website $S_{j}$ by
$$c_{ij}(\alpha) = \frac{\pi_{i}\, G_{ij}(\alpha)\, \mathbf{e}}{\pi_{i}\,\mathbf{e}},$$
where $G_{ij}(\alpha)$ is the $(i,j)$ block of the full n × n transition matrix and $\mathbf{e}$ is a column vector of all ones of the appropriate dimension ($n_{j}$ in the numerator, $n_{i}$ in the denominator).
32
The N×N matrix C(α)=(cij(α)) is referred to as the coupling matrix, whose elements represent the transition probabilities between websites. It can be proved that C(α) is an irreducible stochastic matrix, so that it possesses a unique stationary probability vector. We use ξ(α) to denote this stationary probability, which can be obtained from $\xi(\alpha)\,C(\alpha) = \xi(\alpha)$, $\xi(\alpha)\,\mathbf{e} = 1$.
33
Since the PageRank vector $\pi$ is stationary, one can easily check that $\xi(\alpha) = (\pi_{1}\mathbf{e}, \ldots, \pi_{N}\mathbf{e})$ is the unique solution to $\xi(\alpha)\,C(\alpha) = \xi(\alpha)$, $\xi(\alpha)\,\mathbf{e} = 1$. We shall refer to $\xi(\alpha)$ as the AggregateRank.
34
That is, the probability of visiting a website is equal to the sum of the PageRanks of all the pages in that website. This conclusion is consistent with our intuition.
35
The transition probability from Si to Sj summarizes all the cases in which the random surfer jumps from any page in Si to any page in Sj in a one-step transition. Therefore, the transitions in this new HostGraph accord with the real behavior of Web surfers. In this regard, the rank calculated from the coupling matrix C(α) is more reasonable than those of previous works.
36
Let $N_{n}(A)$ denote the number of visits to website $A$ during the first $n$ steps, that is, $N_{n}(A) = \sum_{k=1}^{n} 1_{\{X_{k}\in A\}}$. We have $\lim_{n\to\infty} N_{n}(A)/n = \sum_{i\in A}\pi_{i}$ almost surely.
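This ergodic statement can be checked numerically: simulate a random surfer and compare the fraction of time spent inside a site with the sum of the stationary probabilities of its pages. The 4-page chain and the site {2, 3} below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-page stochastic matrix; pages {2, 3} form one hypothetical website.
P = np.array([[0.1, 0.4, 0.2, 0.3],
              [0.3, 0.1, 0.3, 0.3],
              [0.2, 0.2, 0.1, 0.5],
              [0.1, 0.1, 0.7, 0.1]])
site = {2, 3}

# Exact stationary distribution by the power method.
pi = np.full(4, 0.25)
for _ in range(1000):
    pi = pi @ P
site_mass = pi[list(site)].sum()

# Long-run visit frequency of the site along one simulated trajectory.
state, hits, steps = 0, 0, 100_000
for _ in range(steps):
    state = rng.choice(4, p=P[state])
    hits += state in site
freq = hits / steps  # approaches site_mass as steps grows
```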
37
Assume a starting state in website $A$, i.e. $X_{0}\in A$. We define $\tau_{0}=0$ and inductively $\tau_{k+1}=\inf\{n>\tau_{k}: X_{n}\in A\}$. It is clear that all the variables $\tau_{k}$ are stopping times for $X$.
41
Let $P^{A}$ denote the transition matrix of the return-time Markov chain $(X_{\tau_{k}})_{k\ge 0}$ for site $A$. Similarly, we obtain a return-time chain for each of the other sites.
42
Suppose that the AggregateRank, i.e. the stationary distribution of $C(\alpha)$, is $\xi(\alpha)=(\xi_{1}(\alpha),\ldots,\xi_{N}(\alpha))$. Since $\xi_{A}(\alpha)=\pi_{A}\mathbf{e}$, the stationary distribution of the return-time chain $P^{A}$ is therefore the normalized restriction $\pi_{A}/\xi_{A}(\alpha)$.
44
Based on the above discussion, the direct approach to computing the AggregateRank ξ(α) is to accumulate PageRank values (denoted PageRankSum). However, this approach is infeasible because the computation of PageRank is not a trivial task when the number of web pages is as large as several billions. Therefore, efficient computation becomes a significant problem.
45
AggregateRank Algorithm:
1. Divide the n × n transition matrix into N × N blocks according to the N sites.
2. Construct a stochastic matrix for each diagonal block by raising its diagonal elements to make each row sum up to 1.
46
3. Determine the stationary distribution $u_{i}$ of each modified diagonal block.
4. Form an approximation $C^{*}(\alpha)$ to the coupling matrix by evaluating $c^{*}_{ij}(\alpha)=u_{i}\,G_{ij}(\alpha)\,\mathbf{e}$, where $G_{ij}(\alpha)$ is the $(i,j)$ block of the full transition matrix.
5. Determine the stationary distribution of $C^{*}(\alpha)$ and denote it $\xi^{*}(\alpha)$, i.e., $\xi^{*}(\alpha)\,C^{*}(\alpha)=\xi^{*}(\alpha)$, $\xi^{*}(\alpha)\,\mathbf{e}=1$.
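The five steps can be sketched as follows; `stationary` is a helper power method, and the toy Google matrix and two-site partition in the usage example are invented for illustration, not taken from the paper:

```python
import numpy as np

def stationary(P, tol=1e-12):
    """Stationary distribution of an irreducible stochastic matrix."""
    x = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        y = x @ P
        if np.abs(y - x).sum() < tol:
            return y / y.sum()
        x = y

def aggregate_rank(G, sites):
    """Approximate AggregateRank from the n x n Google matrix G.

    `sites` lists the page indices of each website.  Steps 1-2: cut out
    each diagonal block and raise its diagonal until rows sum to 1.
    Step 3: stationary vector u_i of each block.  Step 4: coupling-matrix
    approximation c_ij = u_i G_ij e.  Step 5: its stationary distribution
    is the (approximate) AggregateRank."""
    N = len(sites)
    u = []
    for idx in sites:
        B = G[np.ix_(idx, idx)].copy()
        B[np.diag_indices_from(B)] += 1.0 - B.sum(axis=1)
        u.append(stationary(B))
    C = np.empty((N, N))
    for i, idx_i in enumerate(sites):
        for j, idx_j in enumerate(sites):
            C[i, j] = u[i] @ G[np.ix_(idx_i, idx_j)].sum(axis=1)
    return stationary(C)

# Hypothetical 4-page web split into two 2-page sites.
alpha, n = 0.85, 4
adj = np.array([[0, 1, 0, 1],
                [0, 0, 1, 1],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
G = alpha * adj / adj.sum(axis=1, keepdims=True) + (1 - alpha) / n
xi = aggregate_rank(G, [[0, 1], [2, 3]])
```

The point of the approximation is that each stationary-vector computation runs on a small block or on the N × N coupling matrix, never on the full n × n web matrix.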
47
Experiments. In our experiments, the data corpus is the benchmark data for the Web track of TREC 2003 and 2004, which was crawled from the .gov domain in 2002. It contains 1,247,753 webpages in total.
48
We get 731 sites in the .gov dataset. The largest website contains ,103 web pages, while the smallest one contains only 1 page.
50
Performance Evaluation of Ranking Algorithms based on Kendall's distance
51
Similarity between PageRankSum and the other three ranking results.
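Kendall's distance counts the item pairs on which two rankings disagree; divided by the total number of pairs, it lies in [0, 1]. A minimal O(n²) sketch (the example rankings are invented):

```python
from itertools import combinations

def kendall_distance(rank_a, rank_b, normalize=True):
    """Number of item pairs ordered differently by the two rankings.

    rank_a[i] is the position of item i under ranking a."""
    n = len(rank_a)
    disagreements = sum(
        1 for i, j in combinations(range(n), 2)
        if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) < 0
    )
    return disagreements / (n * (n - 1) / 2) if normalize else disagreements

d = kendall_distance([1, 2, 3, 4], [4, 3, 2, 1])  # reversed lists: d == 1.0
```

Identical rankings give distance 0 and fully reversed rankings give 1, so a small distance between PageRankSum and another ranking means the two orderings largely agree.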
54
From: pcchairs@sigir2008.confmaster
Sent: Thursday, April 03, 2008
Dear Yuting Liu, Bin Gao, Tie-Yan Liu, Ying Zhang, Zhiming Ma, Shuyuan He, Hang Li,
We are pleased to inform you that your paper, Title: BrowseRank: Letting Web Users Vote for Page Importance, has been accepted for oral presentation as a full paper and for publication as an eight-page paper in the proceedings of the 31st Annual International ACM SIGIR Conference on Research & Development on Information Retrieval. Congratulations!!
59
Building the model. Properties of the Q-process with generator $Q=(q_{ij})$:
Stationary distribution: $\pi Q = 0$, $\pi\mathbf{e} = 1$.
Jumping probability: $p_{ij} = q_{ij}/q_{i}$ for $j\neq i$, where $q_{i} = -q_{ii}$.
Embedded Markov chain: $(X_{n})$ is a Markov chain with the transition probability matrix $P=(p_{ij})$.
60
Main conclusion 1: $1/q_{i}$ is the mean staying time on page $i$; the more important a page is, the longer the staying time on it is. The mean of the first re-visit time at page $i$ behaves inversely: the more important a page is, the smaller the re-visit time is, and the larger the visit frequency is.
61
Main conclusion 2: the stationary distribution $\pi$ of the continuous-time process satisfies $\pi_{i}\propto \nu_{i}/q_{i}$, where $\nu$ is the stationary distribution of the embedded Markov chain. The stationary distribution of the discrete model is easy to compute: the power method yields $\nu$, and log data yield the $q_{i}$.
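The relation in conclusion 2 is the standard one for continuous-time chains: weight each page's embedded-chain probability by its mean staying time, then renormalize. A sketch with an invented 3-page example in which one page holds visitors twice as long:

```python
import numpy as np

def ctmc_stationary(P_embed, mean_stay, iters=1000):
    """Stationary distribution of a continuous-time chain, computed from
    the embedded jump chain and the mean staying time at each state."""
    nu = np.full(P_embed.shape[0], 1.0 / P_embed.shape[0])
    for _ in range(iters):          # power method on the jump chain
        nu = nu @ P_embed
    w = nu * mean_stay              # time spent per visit at each state
    return w / w.sum()

# Hypothetical example: symmetric surfing, but page 2 holds visitors
# twice as long, so it ends up with half of the total mass.
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
stay = np.array([1.0, 1.0, 2.0])
pi = ctmc_stationary(P, stay)  # -> approximately [0.25, 0.25, 0.5]
```

This is why staying times estimated from browsing logs can shift importance toward pages users actually dwell on, even when the jump structure is unchanged.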
63
Further questions. What about inhomogeneous processes? Statistical results show that different periods of time possess different visiting frequencies: Poisson processes with different intensities; marked point processes. Hyperlinks are not reliable; users' real behavior should be considered.
64
Relevance Ranking. Many features for measuring relevance:
- Term distribution (anchor, URL, title, body, proximity, ...)
- Recommendation & citation (PageRank, click-through data, ...)
- Statistics or knowledge extracted from web data
Questions:
- What is the optimal ranking function to combine different features (or evidences)?
- How to measure relevance?
The semantic structure of a web page can be obtained by analyzing its visual representation (e.g., lines, blank areas, color, font size, images) -> Web Page Blocks. The importance of each block can be measured using its content and position features (e.g., Location, (LinkNum, LinkText), (InteractionNum, InteractionSize), (ImageNum, ImageSize), (FormNum, FormSize), InnerText, FontSize, ...) -> Block Importance Model.
65
Learning to Rank. What are the optimal weightings for combining the various features? Use machine learning methods to learn the ranking function.
Human relevance system (HRS): a method of utilizing human judges to explicitly measure the relevance of results generated for various query terms across multiple search engines; the industry-standard means of measuring relevancy, used by MSN, Google, Inktomi/Yahoo, and FAST. HRS does not measure page layout, speed of page, navigation aids, or contextual descriptions. HRS is used for training, measurement, and steering.
Relevance verification tests (RVT): the process of computing relevance scores. RVT has three important components: language/market, static set version, and source search engine. RVT uses HRS relevance ratings for query/result pairs; RVT is unaware of specific HRS judges/ratings.
Wei-Ying Ma, Microsoft Research Asia
66
Learning to Rank: [diagram: a Learning System produces a Model by minimizing a loss; the Model is used by the Ranking System]
Wei-Ying Ma, Microsoft Research Asia
67
Learning to Rank (Cont). State-of-the-art algorithms for learning to rank take the pairwise approach: Ranking SVM, RankBoost, RankNet (employed at Live Search).
Wei-Ying Ma, Microsoft Research Asia
68
Learning to rank. The goal of learning to rank is to construct a real-valued function that can generate a ranking of the documents associated with a given query. The state-of-the-art methods transform the learning problem into classification and then perform the learning task:
69
For each query, it is assumed that there are two categories of documents: positive and negative (representing relevant and irrelevant with respect to the query). Then document pairs are constructed between positive documents and negative documents. In the training process, the query information is actually ignored.
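The pair construction can be sketched as follows: within one query, every (relevant, irrelevant) pair yields a classification example on the feature difference. The feature vectors and labels here are synthetic:

```python
import numpy as np

def make_pairs(features, labels):
    """Pairwise reduction for one query: each (positive, negative) document
    pair becomes a classification example on the feature difference."""
    X, y = [], []
    pos = [f for f, l in zip(features, labels) if l == 1]
    neg = [f for f, l in zip(features, labels) if l == 0]
    for fp in pos:
        for fn in neg:
            X.append(fp - fn)
            y.append(+1)        # the positive doc should outrank the negative
            X.append(fn - fp)
            y.append(-1)        # mirrored example keeps the data balanced
    return np.array(X), np.array(y)

# Synthetic query: 2 relevant documents, 2 irrelevant, 2 features each.
feats = [np.array([3.0, 1.0]), np.array([2.5, 0.5]),
         np.array([0.5, 0.2]), np.array([1.0, 0.1])]
labels = [1, 1, 0, 0]
X, y = make_pairs(feats, labels)  # 2 x 2 x 2 = 8 pairwise examples
```

Any linear classifier trained on (X, y) then yields a scoring function whose sort order respects the learned pairwise preferences.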
70
[5] Y. Cao, J. Xu, T.-Y. Liu, H. Li, Y. Huang, and H.-W. Hon. Adapting Ranking SVM to document retrieval. In Proc. of SIGIR'06, pages 186-193, 2006.
[11] T. Qin, T.-Y. Liu, M.-F. Tsai, X.-D. Zhang, and H. Li. Learning to search web pages with query-level loss functions. Technical Report MSR-TR, 2006.
72
We re-represent the learning-to-rank problem by introducing the concepts of 'query' and 'distribution given query' into its mathematical formulation. More precisely, we assume that queries are drawn independently from a query space Q according to an (unknown) probability distribution.
73
It should be noted that the bound makes sense only under a certain condition; this condition can be satisfied in many practical cases. As case studies, we investigate Ranking SVM and RankBoost. We show that after introducing query-level normalization to its objective function, Ranking SVM has query-level stability. For RankBoost, query-level stability can be achieved if we introduce both query-level normalization and regularization to its objective function. These analyses agree largely with our experiments and the experiments in [5] and [11].
74
Rank aggregation combines the ranking results of entities from multiple ranking functions in order to generate a better ranking. The individual ranking functions are referred to as base rankers, or simply rankers.
75
Score-based aggregation
Rank aggregation can be classified into two categories [2]. In the first category, the entities in individual ranking lists are assigned scores and the rank aggregation function is assumed to use the scores (denoted as score-based aggregation) [11][18][28].
76
order-based aggregation
In the second category, only the orders of the entities in the individual ranking lists are used by the aggregation function (denoted as order-based aggregation). Order-based aggregation is employed in meta-search, for example, in which only order (rank) information from the individual search engines is available.
77
Previously, order-based aggregation was mainly addressed with the unsupervised learning approach, in the sense that no training data is utilized; methods like Borda Count [2][7][27], median rank aggregation [9], genetic algorithms [4], fuzzy-logic-based rank aggregation [1], Markov-chain-based rank aggregation [7], and so on were proposed.
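Borda Count, the simplest of the unsupervised methods listed above, scores each entity by its positions across the base rankers; a minimal sketch with three invented base rankings:

```python
def borda_count(rankings):
    """Order-based aggregation: an item at position p in a list of
    length n earns n - p points; the highest total score ranks first."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Three base rankers over the same four items (illustrative).
aggregated = borda_count([["a", "b", "c", "d"],
                          ["b", "a", "c", "d"],
                          ["a", "c", "b", "d"]])
# -> ["a", "b", "c", "d"]
```

Note that the method uses only orders, never scores, which is exactly what makes it applicable to meta-search.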
85
It turns out that the optimization problems for the Markov-chain-based methods are hard, because they are not convex. We are able to develop a method for the optimization of one Markov-chain-based method, called Supervised MC2: we prove that the optimization problem can be transformed into a semidefinite program, and as a result we can solve it efficiently.
87
Next Generation Web Search ? (Web Search 2.0 --> 3.0)
Directions for new innovations: process-centric vs. data-centric; infrastructure for Web-scale data mining; intelligence & knowledge discovery.
Wei-Ying Ma, Microsoft Research Asia
88
Web Search – Past, Present, and Future
Wei-Ying Ma, Web Search and Mining Group, Microsoft Research Asia