Websearch: Multi-Task Learning and Web Search Ranking. Gordon Sun (孙国政), Yahoo! Inc, March 2007.


Slide 1: Multi-Task Learning and Web Search Ranking. Gordon Sun (孙国政), Yahoo! Inc, March 2007.

Slide 2: Outline
1. Brief review: machine learning in web search ranking and multi-task learning.
2. MLR with adaptive target value transformation – each query is a task.
3. MLR for multiple languages – each language is a task.
4. MLR for multiple query classes – each type of query is a task.
5. Future work and challenges.

Slide 3: MLR (Machine Learning Ranking)
General function estimation and risk minimization:
- Input: x = {x1, x2, …, xn}
- Output: y
- Training set: {yi, xi}, i = 1, …, n
- Goal: estimate the mapping function y = F(x)
In MLR:
- x = x(q, d) = {x1, x2, …, xn} --- ranking features
- y = judgment label, e.g. {P, E, G, F, B} mapped to {0, 1, 2, 3, 4}
- Loss function: L(y, F(x)) = (y - F(x))^2
- Algorithm: MLR with regression
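The pointwise setup above can be sketched with ordinary least squares standing in for the real learner; the label-to-value direction and all features/targets below are synthetic assumptions, not the talk's data:

```python
import numpy as np

# Pointwise MLR as regression: minimize sum_i (y_i - F(x_i))^2.
# A linear F fit by ordinary least squares stands in for the actual
# (stronger) learner used in the talk.
label_value = {"P": 4, "E": 3, "G": 2, "F": 1, "B": 0}  # assumed direction

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # ranking features x(q, d)
w_true = np.array([1.0, -0.5, 0.3, 0.0, 2.0])  # synthetic ground truth
y = X @ w_true + 0.1 * rng.normal(size=200)    # noisy synthetic targets

w, *_ = np.linalg.lstsq(X, y, rcond=None)      # argmin_w ||y - Xw||^2
empirical_risk = float(np.mean((y - X @ w) ** 2))
```

With enough examples per feature, the fitted weights recover the generating weights and the empirical squared-loss risk approaches the noise floor.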

Slide 4: Ranking feature construction
- Query features: query language, query word types (Latin, Kanji, …), …
- Document features: page_quality, page_spam, page_rank, …
- Query-document dependent features: text-match scores in body, title, anchor text (TF/IDF, proximity), …
Evaluation metric: DCG (Discounted Cumulative Gain),
DCG_n = sum_{i=1..n} G_i / log2(1 + i),
where the grades G_i are the grade values for {P, E, G, F, B} (an NDCG variant with 2^n gains is also noted).
DCG5 (n = 5), DCG10 (n = 10).
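A sketch of the metric, assuming the common 1/log2(1 + i) discount and an illustrative grade mapping; the talk's exact gains (and its 2^n NDCG variant) are not fully specified:

```python
import math

def dcg(grades, n=5):
    """Discounted Cumulative Gain over the top-n ranked results.

    grades: numeric grade values of the ranked results, best-first
    order being the ideal. Discount 1/log2(1 + i) is an assumption.
    """
    return sum(g / math.log2(1 + i) for i, g in enumerate(grades[:n], start=1))

grade_value = {"P": 4, "E": 3, "G": 2, "F": 1, "B": 0}  # illustrative mapping
ranked = [grade_value[g] for g in ["P", "G", "E", "B", "F"]]
score = dcg(ranked, n=5)
```

Because the discount decays with position, moving a Perfect result toward the top always increases DCG.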

Slide 5: Distribution of judgment grades (chart).

Slide 6: Multi-Task Learning
Single-task learning (STL):
- One prediction task (classification/regression): estimate a function from one training/testing set T = {yi, xi}, i = 1, …, n.
Multi-task learning (MTL):
- Multiple prediction tasks, each with its own training/testing set: Tk = {yki, xki}, k = 1, …, m, i = 1, …, nk.
- Goal: solve multiple tasks together.
- Tasks share the same input space (at least partially).
- Tasks are related (in MLR, say, they share one mapping function).

Slide 7: Multi-Task Learning: intuition and benefits
Empirical intuition:
- Data from "related" tasks can help: it is equivalent to increasing the effective sample size.
Goal: share data and knowledge from task to task --- transfer learning.
Benefits:
- When the number of training examples per task is limited.
- When the number of tasks is too large to build an MLR for each task.
- When it is difficult or expensive to obtain examples for some tasks.
- When meta-level knowledge can be obtained.

Slide 8: Multi-Task Learning: "relatedness" approaches
- Probabilistic modeling of task generation [Baxter '00], [Heskes '00], [Teh, Seeger, Jordan '05], [Zhang, Ghahramani, Yang '05]
- Latent-variable correlations:
  - Noise correlations [Greene '02]
  - Latent-variable modeling [Zhang '06]
- Hidden common data structure and latent variables:
  - Implicit structure (common kernels) [Evgeniou, Micchelli, Pontil '05]
  - Explicit structure (PCA) [Ando, Zhang '04]
- Transformation relatedness [Shai '05]

Slide 9: Multi-Task Learning for MLR
Different levels of relatedness:
- Group data by query: each query is one task.
- Group data by query language: each language is one task.
- Group data by query class: each class of queries is one task.

Slide 10: Outline
1. Brief review: machine learning in web search ranking and multi-task learning.
2. MLR with adaptive target value transformation – each query is a task.
3. MLR for multiple languages – each language is a task.
4. MLR for multiple query classes – each type of query is a task.
5. Future work and challenges.

Slide 11: Adaptive Target Value Transformation
Intuition:
- Ranking features vary a lot from query to query, and from sample to sample with the same label.
- MLR is a ranking problem, but regression minimizes prediction error.
Solution: adaptively adjust the training target values per query,
y'_qi = g_q(y_qi) = α_q · y_qi + β_q,
where a linear (monotonic) transformation is required: a nonlinear g() may not preserve the order of E(y | x).

Slide 12: Adaptive Target Value Transformation
Implementation: empirical risk minimization,
min over F, {α_q, β_q} of  Σ_q Σ_i (α_q · y_qi + β_q − F(x_qi))² + λ_α Σ_q |α_q − 1|^p + λ_β Σ_q |β_q|^p,
where the linear transformation weights are regularized: λ_α and λ_β are the regularization parameters and |·|^p is the p-norm penalty. The solution is obtained by alternating minimization.

Slide 13: Adaptive Target Value Transformation
Norm p = 2 solution, for each (λ_α, λ_β):
1. For the initial (α, β), find F(x) by solving the regression on the transformed targets.
2. For the given F(x), solve for each (α_q, β_q), q = 1, 2, …, Q.
3. Repeat from step 1 until convergence.
Norm p = 1 solution: solve a conditional quadratic program [Lasso/LARS].
Convergence analysis: assuming …
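The p = 2 procedure above can be sketched as alternating least squares. The linear model for F, the regularizer centering α_q at 1 and β_q at 0, and the closed-form 2x2 per-query solve are assumptions consistent with the stated objective, not the talk's exact implementation:

```python
import numpy as np

def fit_atvt(Xs, ys, lam_a=1.0, lam_b=1.0, iters=10):
    """Alternating minimization for adaptive target value transformation.

    Assumed objective (p = 2):
      sum_q sum_i (a_q*y_qi + b_q - F(x_qi))^2
        + lam_a*(a_q - 1)^2 + lam_b*b_q^2
    Xs, ys: per-query feature matrices and label vectors.
    """
    Q = len(Xs)
    a, b = np.ones(Q), np.zeros(Q)
    for _ in range(iters):
        # Step 1: with (a, b) fixed, fit a linear F to the transformed targets.
        X = np.vstack(Xs)
        t = np.concatenate([a[q] * ys[q] + b[q] for q in range(Q)])
        w, *_ = np.linalg.lstsq(X, t, rcond=None)
        # Step 2: with F fixed, solve each query's (a_q, b_q) in closed form.
        for q in range(Q):
            f, y, n = Xs[q] @ w, ys[q], len(ys[q])
            A = np.array([[y @ y + lam_a, y.sum()],
                          [y.sum(), n + lam_b]])
            rhs = np.array([f @ y + lam_a, f.sum()])
            a[q], b[q] = np.linalg.solve(A, rhs)
    return w, a, b
```

Each step lowers the (regularized) objective, so the alternation converges; on data where the identity transform is already optimal, the fit recovers α_q = 1, β_q = 0.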

Slide 14: Adaptive Target Value Transformation: experiment data (table).

Slide 15: Adaptive Target Value Transformation: evaluation of aTVT on US and CN data (chart).

Slides 16–17: Adaptive Target Value Transformation (results charts).

Slide 18: Adaptive Target Value Transformation
Observations:
1. The relevance gain (DCG5 ~ 2%) is visible.
2. Regularization is needed.
3. Different query types gain differently from aTVT.

Slide 19: Outline
1. Brief review: machine learning in web search ranking and multi-task learning.
2. MLR with adaptive target value transformation – each query is a task.
3. MLR for multiple languages – each language is a task.
4. MLR for multiple query classes – each type of query is a task.
5. Future work and challenges.

Slide 20: Multi-Language MLR
Objective: make MLR globally scalable (>100 languages, >50 regions).
- Improve MLR for small regions/languages using data from other languages.
- Build a universal MLR for regions that have no data or editorial support.

Slide 21: Multi-Language MLR, Part 1
1. Feature differences between languages.
2. MLR function differences between languages.

Slide 22: Multi-Language MLR: distribution of text score (charts).
Legend: JP, CN, DE, UK, KR; panels: Perfect+Excellent URLs vs. Bad URLs.

Slide 23: Multi-Language MLR: distribution of spam score (charts).
Legend: JP, CN, DE, UK, KR; panels: Perfect+Excellent URLs vs. Bad URLs.
JP and KR are similar; DE and UK are similar.

Slide 24: Multi-Language MLR: training and testing on different languages
% DCG improvement over the base function (rows: training language; columns: test language):

Train\Test     UK      DE      KR      JP      CN
UK            6.22    2.29   -0.21    2.96    0.32
DE            6.96   13.1     6.25    6.05    3.94
KR            1.50   -0.55    5.69    4.49    3.86
JP           -1.25   -3.79   -0.30    4.48    1.30
CN            1.91   -3.53    0.29    2.50    7.47

Slide 25: Multi-Language MLR: language differences, observations
- The feature differences across languages are visible but not huge.
- An MLR trained for one language does not work well for other languages.

Slide 26: Multi-Language MLR, Part 2: transfer learning with region features.

Slide 27: Multi-Language MLR: query region feature
- New feature: query region, encoded as multiple binary-valued features.
- Feature vector: qr = (CN, JP, UK, DE, KR)
  CN queries: (1, 0, 0, 0, 0)
  JP queries: (0, 1, 0, 0, 0)
  UK queries: (0, 0, 1, 0, 0)
  …
- To test the trained universal MLR on a new language (e.g. FR): qr = (0, 0, 0, 0, 0).
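The encoding above is a one-hot vector with an all-zeros fallback; a minimal sketch (the function name is hypothetical):

```python
REGIONS = ("CN", "JP", "UK", "DE", "KR")

def query_region_feature(region):
    """Binary query-region features qr = (CN, JP, UK, DE, KR).

    A region outside the training set (e.g. FR) maps to the all-zeros
    vector, which is how the universal MLR is applied to new languages.
    """
    return [1.0 if region == r else 0.0 for r in REGIONS]
```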

Slide 28: Multi-Language MLR: query region feature, experiment results
% DCG5 improvement over the base function:

Language   Combined model   Combined model with query region feature
JP             3.07%            3.53%
CN             6.24%            7.02%
UK             4.34%            5.92%
DE             9.86%           10.51%
KR             5.79%            6.83%

Slide 29: Multi-Language MLR: query region feature, experiment results for the CJK and UK+DE models
All models include the query region feature:

Test language   All-language model   Near-language model (CJK for JP/CN/KR; UK+DE for UK/DE)
JP                  3.53%                4.39%
CN                  7.02%                7.17%
UK                  5.92%                5.93%
DE                 10.51%               12.5%
KR                  6.83%                6.14%

Slide 30: Multi-Language MLR: query region feature, observations
- The query region feature seems to improve combined-model performance in every case, though not always statistically significantly.
- It helped more when we had less data (KR).
- It helped more when introducing "near-language" models (CJK, EU).
- It would not help for languages with large training sets (JP, CN).

Slide 31: Multi-Language MLR: experiments overweighting the target language
- This method addresses the common case of a language with only a small amount of data available.
- Use all available data, but change the weight of the data from the target language.
- When weight = 1, this is the universal language model.
- As weight → ∞, it becomes the single-language model.
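The scheme can be sketched as per-example training weights passed to whatever learner is used; `language_weights` is a hypothetical helper, not the talk's code:

```python
def language_weights(query_langs, target, w_target=10.0):
    """Per-example training weights for overweighting a target language.

    w_target = 1 reproduces the universal model; as w_target grows the
    fit approaches the single-language model. The default of 10 echoes
    the value that worked best on average in the experiments.
    """
    return [w_target if lang == target else 1.0 for lang in query_langs]
```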

Slides 32–36: Multi-Language MLR: overweighting results for Germany, the UK, China, Korea, and Japan (charts).

Slide 37: Multi-Language MLR: average DCG gain for JP, CN, DE, UK, KR (chart).

Slide 38: Multi-Language MLR: overweighting the target language, observations
- It helps for certain languages with small amounts of data (KR, DE).
- It does not help for some languages (CN, JP); for languages with enough data, it will not help.
- A weight of 10 seems better than 1 or 100 on average.

Slide 39: Multi-Language MLR, Part 3: transfer learning with language-neutral data and regression diff.

Slide 40: Multi-Language MLR: selection of language-neutral queries
- For each of CN, JP, KR, DE, UK, train an MLR on its own data.
- Test the queries of each language with all languages' MLRs.
- Select the queries that show the best DCG across the different languages' MLRs.
- Treat these queries as language-neutral; they can be shared in all languages' MLR development.
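The selection step might look like the following sketch. The worst-case-DCG scoring rule and the function name are assumptions; the slide says only that queries with the best DCG across the different language MLRs are kept:

```python
def language_neutral_queries(dcg_by_model, top_k=500):
    """Select queries that rank well under every per-language MLR.

    dcg_by_model: {query: {language_model: dcg}}.
    Scores each query by its worst DCG across models (an assumed proxy
    for 'best DCG across models') and keeps the top_k.
    """
    ranked = sorted(dcg_by_model,
                    key=lambda q: min(dcg_by_model[q].values()),
                    reverse=True)
    return ranked[:top_k]
```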

Slide 41: Multi-Language MLR: evaluation of language-neutral queries from the CN-simplified dataset (2,753 queries)
DCG5 on all queries vs. language-neutral queries only (top ~500):

Test set         All queries   Language-neutral only
CN-Traditional      5.64          5.79 (+2.7%)
Korean              5.19          5.50 (+6%)
Japanese            5.85          5.83

Slide 42: Outline
1. Brief review: machine learning in web search ranking and multi-task learning.
2. MLR with adaptive target value transformation – each query is a task.
3. MLR for multiple languages – each language is a task.
4. MLR for multiple query classes – each type of query is a task.
5. Future work and challenges.

Slide 43: Multi-Query-Class MLR
Intuitions: different types of queries behave differently.
- They require different ranking features (time-sensitive queries → page_time_stamps).
- They expect different results (navigational queries → one official page at the top).
- Different types of queries can also share the same ranking features.
Multi-class learning can be done in a unified MLR by:
- introducing query classification and using the query class as an input ranking feature;
- adding page-level features for the corresponding classes.

Slide 44: Multi-Query-Class MLR: time-recency experiments
Feature implementation:
- Binary query feature: time-sensitive (0, 1).
- Binary page feature: discovered within the last three months.
Data:
- 300 time-sensitive queries (editorially selected); ~2000 ordinary queries.
- Time-sensitive queries overweighted by 3.
- 10-fold cross-validation for MLR training/testing.

Slide 45: Multi-Query-Class MLR: time-recency experiment results
MLR with vs. without the page_time feature:

Query set                DCG gain   p-value
Time-sensitive queries    2.31%     1.08e-6
All queries               0.52%     0.0017

Slide 46: Multi-Query-Class MLR: named-entity queries
Feature implementation:
- Binary query feature: named-entity query (0, 1).
- 11 new page features implemented, including:
  - path length
  - host length
  - number of host components (URL depth)
  - path contains "index"
  - path contains "cgi", "asp", "jsp", or "php"
  - path contains "search" or "srch"
  - …
Data: 142 place-name entity queries; ~2000 ordinary queries; 10-fold cross-validation for MLR training/testing.
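A few of the listed page features can be sketched with the standard library's URL parser; the exact feature definitions (e.g. what counts as a path component) are assumptions, and `url_features` is a hypothetical helper:

```python
from urllib.parse import urlparse

def url_features(url):
    """A subset of the slide's URL-based page features (definitions assumed)."""
    parts = urlparse(url)
    path = parts.path.lower()
    return {
        "path_length": len(path),
        "host_length": len(parts.netloc),
        "url_depth": len([s for s in path.split("/") if s]),
        "has_index": "index" in path,
        "is_dynamic": any(t in path for t in ("cgi", "asp", "jsp", "php")),
        "has_search": "search" in path or "srch" in path,
    }

features = url_features("http://example.com/search/index.php")
```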

Slide 47: Multi-Query-Class MLR: named-entity query experiment results
MLR compared with the base model without named-entity features:

Query set                    DCG gain   p-value
Named-entity queries (142)    0.82%      0.09
All queries                   0.28%      0.09

Slide 48: Multi-Query-Class MLR
Observations:
- Query classes combined with page-level features can help MLR relevance.
- More research is needed on query classification and page-level feature optimization.

Slide 49: Outline
1. Brief review: machine learning in web search ranking and multi-task learning.
2. MLR with adaptive target value transformation – each query is a task.
3. MLR for multiple languages – each language is a task.
4. MLR for multiple query classes – each type of query is a task.
5. Future work and challenges.

Slide 50: Future Work and Challenges
Extend multi-task learning to:
- different types of training data: editorial judgment data and user click-through data;
- different types of relevance judgments: absolute and relative;
- both labeled and unlabeled data;
- different types of search-user intentions.

Slide 51: Contributors from the Yahoo! International Search Relevance team
- Algorithm and model development: Zhaohui Zheng, Hongyuan Zha, Lukas Biewald, Haoying Fu
- Data exporting/processing/QA: Jianzhang He, Srihari Reddy
- Director: Gordon Sun

Slide 52: Thank you. Q&A.

