Learning to Rank with Ties


1 Learning to Rank with Ties
Authors: Ke Zhou, Gui-Rong Xue, Hongyuan Zha, and Yong Yu
Published in: SIGIR 2008
Presenter: Davidson
Date: 2009/12/29

2 Contents
Introduction
Pairwise learning to rank
Models for paired comparisons with ties
- General linear models
- Bradley-Terry model
- Thurstone-Mosteller model
Loss functions
Learning by functional gradient boosting
Experiments
Conclusions and future work

3 Introduction
Learning to rank: ranking objects in response to queries
- Applications: document retrieval, expert finding, anti web spam, product ratings, etc.
Learning to rank methods:
- Pointwise approach
- Pairwise approach
- Listwise approach

4 Existing approaches (1/2)
Vector space model methods:
- Represent the query and documents as vectors of features
- Compute the distance between vectors as a similarity measure
Language modeling based methods:
- Use a probabilistic framework for the relevance of a document with respect to a query
- Estimate the parameters of the probability models

5 Existing approaches (2/2)
Supervised machine learning framework:
- Learn a ranking function from pairwise preference data
- Minimize the number of contradicting pairs in the training data
Direct optimization of a loss function designed from a performance measure:
- Obtain a ranking function that is optimal with respect to some performance measure
- Requires absolute judgment data

6 Pairwise learning to rank
Use pairs of preference data: (d_i, d_j) means d_i is preferred to d_j
How to obtain preference judgments?
Clickthroughs:
- User click counts on search results
- Heuristic rules such as clickthrough rates
Absolute relevance judgments:
- Human labels, e.g. 4-level judgments: perfect, excellent, good, and bad
- Need to convert relevance judgments to preference data

7 Conversion from relevance judgments to preference data
E.g. 4 samples with a 2-level judgment:

Data  Judgment
A     Relevant
B     Relevant
C     Irrelevant
D     Irrelevant

Preference data: (A, C), (A, D), (B, C), (B, D)
Tie cases (A, B) and (C, D) are ignored!
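The conversion above can be sketched in a few lines of Python (an illustration, not the authors' code; the function name and the dict-based input are made up for this example). Unlike the usual practice, tie pairs are kept rather than discarded:

```python
# Sketch: convert graded relevance judgments into pairwise preference
# data, keeping tie pairs instead of discarding them.
from itertools import combinations

def judgments_to_pairs(judgments):
    """judgments: dict mapping doc id -> relevance grade (higher = better).
    Returns (preference_pairs, tie_pairs)."""
    prefs, ties = [], []
    for a, b in combinations(sorted(judgments), 2):
        if judgments[a] > judgments[b]:
            prefs.append((a, b))   # a preferred to b
        elif judgments[a] < judgments[b]:
            prefs.append((b, a))   # b preferred to a
        else:
            ties.append((a, b))    # equal grades: a tie pair
    return prefs, ties

prefs, ties = judgments_to_pairs({"A": 1, "B": 1, "C": 0, "D": 0})
# prefs = [("A","C"), ("A","D"), ("B","C"), ("B","D")]
# ties  = [("A","B"), ("C","D")]
```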

8 Models for paired comparisons with ties
Notation: d_i ≻ d_j means d_i is preferred to d_j; d_i ≡ d_j means d_i and d_j are preferred equally
Proposed framework: general linear models for paired comparisons (Oxford University Press, 1988)
Statistical models for paired comparisons:
- Bradley-Terry model (Biometrika, 1952)
- Thurstone-Mosteller model (Psychological Review, 1927)

9 General linear models (1/4)
For a non-decreasing function F, the probability that document d_i is preferred to document d_j is:
P(d_i ≻ d_j) = F(s(d_i) − s(d_j))
where s(·) is the scoring/ranking function

10 General linear models (2/4)
With ties, the model becomes:
P(d_i ≻ d_j) = F(s(d_i) − s(d_j) − ε)
P(d_i ≡ d_j) = F(s(d_i) − s(d_j) + ε) − F(s(d_i) − s(d_j) − ε)
where ε ≥ 0 is a threshold that controls the tie probability
When ε = 0, this model is identical to the original general linear model
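The tie-extended model can be sketched directly from the definitions (a minimal illustration, not the authors' code; `prefer_prob`, `tie_prob`, and the argument names are made up here). F is any non-decreasing CDF-like function and eps is the tie threshold:

```python
# Sketch of the tie-extended general linear model.
# F: non-decreasing CDF-like function; si, sj: document scores; eps >= 0.

def prefer_prob(F, si, sj, eps):
    # P(d_i preferred to d_j) = F(s_i - s_j - eps)
    return F(si - sj - eps)

def tie_prob(F, si, sj, eps):
    # P(d_i and d_j tied) = F(s_i - s_j + eps) - F(s_i - s_j - eps)
    return F(si - sj + eps) - F(si - sj - eps)
```

With eps = 0 the tie probability vanishes and prefer_prob reduces to F(s_i − s_j), recovering the original model; the three outcome probabilities sum to one whenever F satisfies F(x) + F(−x) = 1.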

11 General linear models (3/4)

12 General linear models (4/4)

13 Bradley-Terry model
The function F is set to the logistic function F(x) = e^x / (1 + e^x), so that
P(d_i ≻ d_j) = e^{s(d_i)} / (e^{s(d_i)} + e^{s(d_j)})
With ties:
P(d_i ≻ d_j) = e^{s(d_i)} / (e^{s(d_i)} + θ e^{s(d_j)})
P(d_i ≡ d_j) = (θ² − 1) e^{s(d_i)} e^{s(d_j)} / [(e^{s(d_i)} + θ e^{s(d_j)}) (e^{s(d_j)} + θ e^{s(d_i)})]
where θ = e^ε ≥ 1
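A numeric sketch of the Bradley-Terry tie probabilities (not the authors' code; `bt_probs` is a made-up name, and the tie extension is assumed to take the Rao-Kupper form with θ = e^ε, which is what the logistic choice of F yields):

```python
import math

def bt_probs(si, sj, eps):
    """Bradley-Terry with ties (Rao-Kupper form), theta = e^eps >= 1.
    Returns (P(d_i preferred), P(d_j preferred), P(tie))."""
    theta = math.exp(eps)
    gi, gj = math.exp(si), math.exp(sj)
    p_i = gi / (gi + theta * gj)
    p_j = gj / (gj + theta * gi)
    p_tie = (theta ** 2 - 1) * gi * gj / ((gi + theta * gj) * (gj + theta * gi))
    return p_i, p_j, p_tie
```

The three probabilities always sum to one, and eps = 0 (θ = 1) recovers the classical Bradley-Terry model with zero tie probability.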

14 Thurstone-Mosteller model
The function F is set to the Gaussian cumulative distribution function Φ
With ties:
P(d_i ≻ d_j) = Φ(s(d_i) − s(d_j) − ε)
P(d_i ≡ d_j) = Φ(s(d_i) − s(d_j) + ε) − Φ(s(d_i) − s(d_j) − ε)
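The Thurstone-Mosteller variant differs only in the choice of F; a short sketch (again illustrative, with made-up names), using the error function from the standard library to get the Gaussian CDF:

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tm_probs(si, sj, eps):
    """Thurstone-Mosteller with ties: the general linear model with F = Phi."""
    d = si - sj
    p_i = Phi(d - eps)                    # P(d_i preferred)
    p_tie = Phi(d + eps) - Phi(d - eps)   # P(tie)
    p_j = 1.0 - p_i - p_tie               # = Phi(-d - eps) by symmetry
    return p_i, p_j, p_tie
```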

15 Loss functions
Training data: preference pairs S = {(d_i, d_j) : d_i ≻ d_j} and tie pairs T = {(d_i, d_j) : d_i ≡ d_j}
Minimize the empirical risk (negative log-likelihood):
R(s) = − Σ_{(d_i, d_j) ∈ S} log P(d_i ≻ d_j) − Σ_{(d_i, d_j) ∈ T} log P(d_i ≡ d_j)
Loss function: the negative log-probability of each observed preference or tie
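The empirical risk can be sketched as follows (a self-contained illustration, not the authors' code; names are made up, and the pair probabilities are assumed to follow the Bradley-Terry tie model in Rao-Kupper form):

```python
import math

def bt_pair_probs(si, sj, eps):
    # Assumed Bradley-Terry tie model (Rao-Kupper form), theta = e^eps.
    theta, gi, gj = math.exp(eps), math.exp(si), math.exp(sj)
    p_win = gi / (gi + theta * gj)
    p_tie = (theta ** 2 - 1) * gi * gj / ((gi + theta * gj) * (gj + theta * gi))
    return p_win, p_tie

def empirical_risk(scores, prefs, ties, eps):
    """Summed negative log-likelihood over preference pairs (i, j),
    meaning i is preferred to j, and tie pairs (i, j)."""
    risk = 0.0
    for i, j in prefs:
        risk -= math.log(bt_pair_probs(scores[i], scores[j], eps)[0])
    for i, j in ties:
        risk -= math.log(bt_pair_probs(scores[i], scores[j], eps)[1])
    return risk
```

Scores that separate preferred documents more widely, while keeping tied documents close, receive a lower risk.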

16 Learning by functional gradient boosting
Obtain a function from a function space that minimizes the empirical loss
Apply the gradient boosting algorithm:
- Approximate the minimizer by iteratively constructing a sequence of base learners
- The base learners are regression trees
The number of iterations and the shrinkage factor of the boosting algorithm are chosen by cross validation
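The boosting loop can be sketched end to end (an illustration, not the paper's implementation: depth-1 stumps on a single feature stand in for regression trees, functional gradients are computed by finite differences rather than analytically, and all names and parameter values are made up):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_loss(scores, prefs, ties, eps):
    # Negative log-likelihood of the logistic (Bradley-Terry) tie model.
    loss = 0.0
    for i, j in prefs:
        loss -= math.log(logistic(scores[i] - scores[j] - eps))
    for i, j in ties:
        d = scores[i] - scores[j]
        loss -= math.log(logistic(d + eps) - logistic(d - eps))
    return loss

def fit_stump(x, residuals):
    # Least-squares depth-1 regression tree on a single feature.
    best = (float("inf"), x[0], 0.0, 0.0)
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t] or [0.0]
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lv) ** 2 for xi, r in zip(x, residuals) if xi <= t) \
            + sum((r - rv) ** 2 for xi, r in zip(x, residuals) if xi > t)
        if sse < best[0]:
            best = (sse, t, lv, rv)
    _, t, lv, rv = best
    return lambda xi: lv if xi <= t else rv

def boost(x, prefs, ties, eps=0.5, rounds=50, shrinkage=0.1, h=1e-5):
    """Functional gradient boosting: each round fits a base learner to the
    negative gradient of the pairwise loss and takes a shrunken step."""
    n = len(x)
    scores = [0.0] * n
    for _ in range(rounds):
        grad = []
        for k in range(n):  # negative gradient by central finite differences
            up = scores[:]; up[k] += h
            dn = scores[:]; dn[k] -= h
            grad.append(-(pairwise_loss(up, prefs, ties, eps)
                          - pairwise_loss(dn, prefs, ties, eps)) / (2 * h))
        tree = fit_stump(x, grad)
        scores = [s + shrinkage * tree(xi) for s, xi in zip(scores, x)]
    return scores
```

The iteration count and shrinkage factor are fixed here for brevity; the paper selects them by cross validation.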

17 Experiments
Learning to rank methods compared:
- BT (Bradley-Terry model) and TM (Thurstone-Mosteller model)
- BT-noties and TM-noties (BT and TM without ties)
- RankSVM, RankBoost, AdaRank, FRank
Datasets (LETOR data collection):
- OHSUMED (16,140 pairs, 3-level judgment)
- TREC2003 (49,171 pairs, binary judgment)
- TREC2004 (74,170 pairs, binary judgment)

18 Performance measures
Precision: binary judgment only
Mean Average Precision (MAP): sensitive to the entire ranking order
Normalized Discounted Cumulative Gain (NDCG): supports multi-level judgments
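The two ranking-sensitive measures can be sketched as follows (standard textbook definitions, not taken from the paper; one common NDCG gain/discount convention, 2^g − 1 gains with log2 discounts, is assumed here):

```python
import math

def average_precision(relevances):
    """relevances: binary relevance list in ranked order."""
    hits, s = 0, 0.0
    for k, r in enumerate(relevances, 1):
        if r:
            hits += 1
            s += hits / k          # precision at each relevant position
    return s / max(1, sum(relevances))

def ndcg(grades, k=None):
    """grades: graded relevance in ranked order; NDCG@k."""
    k = k or len(grades)
    def dcg(gs):
        return sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(gs[:k]))
    ideal = dcg(sorted(grades, reverse=True))
    return dcg(grades) / ideal if ideal > 0 else 0.0
```

MAP then averages average_precision over all queries, which is why it is sensitive to the entire ranking order.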

19 Performance comparisons over OHSUMED

20 Performance comparisons over TREC2003

21 Performance comparisons over TREC2004

22 Performance comparison: ties from different relevance levels

23 Learning curves of Bradley-Terry model

24 Conclusions and future work
Tie data improve ranking performance:
- Common features of relevant documents are extracted
- Irrelevant documents have more diverse features, so ties among them are less effective
BT and TM are comparable in most cases
Future work:
- Theoretical analysis of ties
- New methods/algorithms/loss functions for incorporating ties

