Ranking and Learning 293S UCSB, Tao Yang, 2017. Partially based on Manning, Raghavan, and Schütze's textbook.

Table of Contents: Weighted scoring for ranking; Learning to rank: a simple example; Learning to rank as classification.

Scoring. Similarity-based approach: score by the similarity of query features with document features. Weighted approach: scoring with weighted features returns, in order, the documents most likely to be useful to the searcher; consider each document as having a subscore for each feature or each subarea.

Simple Model of Ranking with Similarity

Similarity ranking: example [Croft, Metzler, Strohman's textbook slides]

Weighted scoring with linear combination. A simple weighted scoring method: use a linear combination of subscores, e.g., Score = 0.6*<Title score> + 0.3*<Abstract score> + 0.1*<Body score>. Example with binary per-term subscores: the query terms "ucsb" and "admission" both appear in the title, and "ucsb" also appears in the body. Document score: (0.6 · 2) + (0.1 · 1) = 1.3.
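As a rough illustration (not from the slides), here is a minimal Python sketch of this kind of weighted zone scoring; the zone names, the weights, and the per-term binary subscores summed over query terms are assumptions matching the example above.

    # Weighted zone scoring: for each query term, add the zone's weight
    # whenever the term appears in that zone; sum over all terms and zones.
    ZONE_WEIGHTS = {"title": 0.6, "abstract": 0.3, "body": 0.1}  # assumed weights

    def weighted_zone_score(query_terms, doc_zones):
        """doc_zones maps a zone name to the set of terms that zone contains."""
        score = 0.0
        for term in query_terms:
            for zone, weight in ZONE_WEIGHTS.items():
                if term in doc_zones.get(zone, set()):
                    score += weight
        return score

    # The slide's example: "ucsb" and "admission" in the title, "ucsb" also in the body.
    doc = {"title": {"ucsb", "admission"}, "body": {"ucsb", "campus"}}
    print(round(weighted_zone_score(["ucsb", "admission"], doc), 2))  # (0.6*2) + (0.1*1) = 1.3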

How to determine weights automatically: motivation. Modern systems – especially on the Web – use a great number of features: arbitrarily many useful features, not a single unified model. Log frequency of query word in anchor text? Query word highlighted on page? Span of query words on page? Number of (out)links on page? PageRank of page? URL length? URL contains "~"? Page edit recency? Page length? Major web search engines use "hundreds" of such features – and they keep changing.

Machine learning for computing weights (Sec. 15.4). How do we combine these signals into a good ranker? "Machine-learned relevance" or "learning to rank": learning from examples, and these examples are called training data. [Diagram: training examples are fed to a learner that produces a ranking formula; given a user query and its matched results, the formula produces the ranked results.]

Learning weights: methodology. Given a set of training examples, each containing (query q, document d, relevance score r), where r is the relevance judgment for d on q. Simplest scheme: relevant (1) or nonrelevant (0). More sophisticated: graded relevance judgments, e.g., 1 (Bad), 2 (Fair), 3 (Good), 4 (Excellent), 5 (Perfect). Learn weights from these examples so that the learned scores approximate the relevance judgments in the training examples.

Simple example. Each doc has two zones, Title and Body. For a chosen w ∈ [0,1], the score for doc d on query q is score(d, q) = w · sT(d, q) + (1 − w) · sB(d, q), where sT(d, q) ∈ {0,1} is a Boolean denoting whether q matches the Title and sB(d, q) ∈ {0,1} is a Boolean denoting whether q matches the Body.

Examples of training data: 7 example (query, document) pairs, each with its sT and sB values and a Relevant/Nonrelevant judgment. From these 7 examples, learn the best value of w.

How? For each example t we can compute the score score(dt, qt) = w · sT(dt, qt) + (1 − w) · sB(dt, qt). We quantify Relevant as 1 and Nonrelevant as 0, and would like the choice of w to be such that the computed scores are as close to these 1/0 judgments as possible. Denote by r(dt, qt) the judgment for t; then minimize the total squared error Σt (r(dt, qt) − score(dt, qt))².

Optimizing w. There are 4 kinds of training examples (the four combinations of sT and sB values), thus only four possible values for the score, and only 8 possible values for the error. Let n01r be the number of training examples for which sT(d, q)=0, sB(d, q)=1, judgment = Relevant; similarly define n00r, n10r, n11r, n00i, n01i, n10i, n11i. For instance, when sT(d, q)=0 and sB(d, q)=1 the score is 1−w, so judgment = 1 gives error w and judgment = 0 gives error 1−w; each case contributes the square of its error to the total.

Total error – then calculus. Adding up the contributions from the various cases gives the total error (n01r + n10i) w² + (n10r + n01i) (1−w)² + n00r + n11i. Differentiating with respect to w and setting the derivative to zero gives the optimal value w = (n10r + n01i) / (n10r + n10i + n01r + n01i). Very simple. How about more features? How about nonlinear ranking functions? Now let's go further…
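A small Python sketch of this computation (the function name and input format are illustrative): count the four relevant kinds of examples and apply the closed-form optimum derived above.

    def optimal_w(examples):
        """examples: list of (s_title, s_body, relevant) with s_* in {0, 1} and relevant a bool."""
        n10r = sum(1 for sT, sB, rel in examples if sT == 1 and sB == 0 and rel)
        n10i = sum(1 for sT, sB, rel in examples if sT == 1 and sB == 0 and not rel)
        n01r = sum(1 for sT, sB, rel in examples if sT == 0 and sB == 1 and rel)
        n01i = sum(1 for sT, sB, rel in examples if sT == 0 and sB == 1 and not rel)
        # Minimizer of the total squared error; examples with sT == sB do not affect w.
        return (n10r + n01i) / (n10r + n10i + n01r + n01i)

    # Hypothetical training examples as (sT, sB, Relevant?)
    examples = [(1, 0, True), (1, 0, True), (0, 1, False), (1, 1, True),
                (0, 0, False), (0, 1, True), (1, 0, False)]
    print(optimal_w(examples))  # (2 + 1) / (2 + 1 + 1 + 1) = 0.6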

Generalizing this simple example: more (than 2) features; non-Boolean features (what if the title contains some but not all query terms?); categorical features (query terms occur in plain, boldface, italics, etc.); scores that are nonlinear combinations of features; multilevel relevance judgments (Perfect, Good, Fair, Bad, etc.); complex error functions. There is not always a unique, easily computable setting of the score parameters.

Framework of Learning to Rank

Learning-based Web search. Given features x1, x2, …, xM for each document, learn a ranking function f(x1, x2, …, xM) that minimizes a loss function L under a query: f* = argmin_f L(f(x1, x2, …, xM), GroundTruth). Some related issues: the functional space (linear or non-linear? continuous? differentiable?), the search strategy, and the loss function.

A richer example (Sec. 15.4.1). Collect a training corpus of (q, d, r) triples; relevance r is still binary for now. A document is represented by a feature vector x = (α, ω), where α is the cosine similarity and ω is the minimum query window size: the shortest text span that includes all query words (query term proximity in the document). Train a machine learning model to predict the class r of a document-query pair.

Using classification for deciding relevance (Sec. 15.4.1). A linear score function is Score(d, q) = Score(α, ω) = aα + bω + c, and the linear classifier is: decide relevant if Score(d, q) > θ, otherwise irrelevant … just like when we were doing classification.

Using classification for deciding relevance (Sec. 15.4.1). [Figure: Relevant (R) and Nonrelevant (N) training points plotted with cosine score on the vertical axis and term proximity ω on the horizontal axis, separated by a linear decision surface.]

More complex example of using classification for search ranking [Nallapati SIGIR 2004] We can generalize this to classifier functions over more features We can use methods we have seen previously for learning the linear classifier weights

An SVM classifier for relevance [Nallapati SIGIR 2004]. Let g(r|d,q) = w · f(d,q) + b. Derive the weights from the training examples: we want g(r|d,q) ≤ −1 for nonrelevant documents and g(r|d,q) ≥ 1 for relevant documents. Testing: decide relevant iff g(r|d,q) ≥ 0. Train a classifier as the ranking function.
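As an illustration only (not the paper's actual setup), a sketch with scikit-learn that trains a linear SVM on the two features (α, ω) from the earlier example and uses the signed decision value as the ranking score; the training data here are made up.

    import numpy as np
    from sklearn.svm import LinearSVC

    # Hypothetical features: rows are (cosine similarity alpha, query window size omega);
    # labels 1/0 mean relevant/nonrelevant.
    X_train = np.array([[0.90, 2], [0.70, 3], [0.60, 8], [0.20, 6], [0.10, 9]])
    y_train = np.array([1, 1, 0, 0, 0])

    clf = LinearSVC(C=1.0).fit(X_train, y_train)   # learns g(r|d,q) = w·f(d,q) + b

    # Testing: decide relevant iff g(r|d,q) >= 0; the margin also serves as a ranking score.
    candidates = np.array([[0.85, 2], [0.30, 7]])
    g = clf.decision_function(candidates)
    print(g, g >= 0)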

Ranking vs. Classification. Classification: well studied for over 30 years (Bayesian, neural network, decision tree, SVM, boosting, …); training data are points, e.g. Pos: x1, x2, x3 and Neg: x4, x5. Ranking: less studied, with only a few works published in recent years; training data are pairs (a partial order), e.g. correct orders (x1, x2), (x1, x3), (x1, x4), (x1, x5), (x2, x3), (x2, x4), …, and any other order is incorrect. Very important: how do we reduce the ranking problem to a classification problem?

Learning to rank: classification vs. regression (Sec. 15.4.2). Classification probably isn't the right way to think about score learning: classification problems map to an unordered set of classes, regression problems map to a real value, and ordinal regression problems map to an ordered set of classes. This formulation gives extra power: relations between relevance levels are modeled, and documents are good relative to other documents for a query given the collection; there is no absolute scale of goodness.

"Learning to rank": assume a number of categories C of relevance exist, and that these are totally ordered: c1 < c2 < … < cJ. This is the ordinal regression setup. Assume training data is available consisting of document-query pairs represented as feature vectors ψi with relevance ranking ci.

Modified example (Sec. 15.4.1). Collect a training corpus of (q, d, r) triples, where the relevance label r now has 4 values: Perfect, Relevant, Weak, Nonrelevant. Train a machine learning model to predict the class r of a document-query pair.

"Learning to rank" approaches. Point-wise learning: given a query-document pair, predict a score (e.g. a relevance score); map f(x) to one of the relevance values 0, 1, 2, … Pair-wise learning: the input is a pair of results for a query, and the class is the relevance ordering relationship between them; order f(x1) > f(x2) if x1 is more relevant than x2. List-wise learning: directly optimize the ranking metric for each query.

Point-wise learning: Example Goal is to learn a threshold to separate each rank
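One possible point-wise sketch (purely illustrative): fit a regressor to graded relevance labels, then map the predicted score back to a rank using assumed thresholds.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Hypothetical feature vectors and graded labels (0 = Nonrelevant ... 3 = Perfect).
    X = np.array([[0.9, 0.8], [0.7, 0.5], [0.4, 0.3], [0.1, 0.2], [0.8, 0.9], [0.2, 0.1]])
    y = np.array([3, 2, 1, 0, 3, 0])

    model = Ridge(alpha=1.0).fit(X, y)
    thresholds = [0.5, 1.5, 2.5]  # assumed boundaries separating the four ranks

    def predict_rank(x):
        score = model.predict(np.array([x]))[0]
        return int(sum(score > t for t in thresholds))  # rank in 0..3

    print(predict_rank([0.85, 0.7]))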

The Ranking SVM: pairwise learning (Sec. 15.4.2) [Herbrich et al. 1999, 2000; Joachims et al. KDD 2002]. The aim is to classify instance pairs as correctly ranked or incorrectly ranked; this turns an ordinal regression problem back into a binary classification problem. We want a ranking function f such that ci is ranked before ck: ci < ck iff f(ψi) > f(ψk). Suppose that f is a linear function, f(ψi) = w · ψi; thus ci < ck iff w · (ψi − ψk) > 0.

Ranking SVM. Training set: for each query q, we have a ranked list of documents totally ordered by a person for relevance to the query. Features: a vector of features for each document/query pair, and feature differences for two documents di and dj. Classification: if di is judged more relevant than dj, denoted di ≺ dj, then assign the vector Φ(di, dj, q) the class yijq = +1; otherwise −1. This gives a linear classifier which can rank pairs of documents. The training set can be built from click-through data (RankSVM for web search optimization, Joachims, SIGKDD 2002).

The Ranking SVM optimization problem is equivalent to that of a classification SVM on pairwise difference vectors Φ(qk, di) − Φ(qk, dj).
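A minimal Ranking SVM sketch along these lines (the feature vectors and grades are made up): build the pairwise difference vectors Φ(q, di) − Φ(q, dj) for each correctly ordered pair and train an ordinary linear SVM on them.

    import numpy as np
    from sklearn.svm import LinearSVC

    def pairwise_differences(X, grades):
        """Difference vectors and +1/-1 labels from one query's graded documents."""
        diffs, labels = [], []
        for i in range(len(X)):
            for j in range(len(X)):
                if grades[i] > grades[j]:        # d_i judged more relevant than d_j
                    diffs.append(X[i] - X[j])
                    labels.append(+1)
                    diffs.append(X[j] - X[i])    # mirrored pair keeps the classes balanced
                    labels.append(-1)
        return np.array(diffs), np.array(labels)

    # Hypothetical features and relevance grades for the documents of one query.
    X = np.array([[0.9, 0.2], [0.6, 0.5], [0.3, 0.9], [0.1, 0.1]])
    grades = np.array([3, 2, 1, 0])

    D, y = pairwise_differences(X, grades)
    rank_svm = LinearSVC(C=1.0, fit_intercept=False).fit(D, y)

    # Rank documents by w·x: a larger value means ranked higher.
    scores = X @ rank_svm.coef_.ravel()
    print(np.argsort(-scores))   # document indices in ranked order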