Learning to Diversify Using Implicit Feedback
Karthik Raman, Pannaga Shivaswamy & Thorsten Joachims
Cornell University

Example: a single user with several distinct interests, e.g. the U.S. economy, soccer, and tech gadgets.

Relevance-based ranking? It becomes too redundant and ignores some of the user's interests: all about the economy, nothing about sports or tech.

Intrinsic diversity: the different interests of a single user are addressed [Radlinski et al.]. This needs to be balanced correctly with relevance.

Prior methods for learning diversity:
- El-Arini et al. propose a method for diversified scientific paper discovery, but assume noise-free feedback.
- Radlinski et al. propose a bandit learning method, but it does not generalize across queries.
- Yue et al. propose online learning methods to maximize submodular utilities, but these rely on cardinal utilities.
- Slivkins et al. learn diverse rankings, but with a hard-coded notion of diversity.

We propose:
- a utility function to model the relevance-diversity trade-off;
- an online learning method that is simple and easy to implement, fast and able to learn on the fly, uses implicit feedback, is robust to noise, and learns diverse rankings.

Key idea: for a given query and user intent, the marginal benefit of seeing additional relevant documents diminishes.

Utility model (intents can be replaced with terms for prediction): given a ranking θ = (d_1, d_2, ..., d_k) and a concave function g, the utility is

U(θ) = Σ_t P(t) · g( Σ_i U(d_i | t) )

Example: four documents d_1, ..., d_4 and three intents t_1, t_2, t_3 with P(t_1) = 1/2, P(t_2) = 1/3, P(t_3) = 1/6, where each document has a relevance value U(d_i | t) for each intent.
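As a rough illustration (not taken from the slides), here is a minimal Python sketch of this utility. The intent probabilities are the ones from the example above, while the per-document relevance values U(d|t) and the choice g = sqrt are assumptions made only for illustration.

```python
import math

# Intent probabilities from the example above.
P = {"t1": 1/2, "t2": 1/3, "t3": 1/6}

# Hypothetical per-document relevance U(d | t); these values are placeholders.
U = {
    "d1": {"t1": 1.0, "t2": 0.0, "t3": 0.0},
    "d2": {"t1": 0.0, "t2": 1.0, "t3": 0.0},
    "d3": {"t1": 1.0, "t2": 0.0, "t3": 0.0},
    "d4": {"t1": 0.0, "t2": 0.0, "t3": 1.0},
}

def g(x):
    """Concave function modelling diminishing returns (sqrt assumed here)."""
    return math.sqrt(x)

def utility(ranking):
    """U(theta) = sum_t P(t) * g( sum_{d in theta} U(d | t) )."""
    return sum(p * g(sum(U[d][t] for d in ranking)) for t, p in P.items())

# A ranking covering two intents beats one that keeps hitting the same intent.
print(utility(["d1", "d2"]))  # 0.5*g(1) + (1/3)*g(1) = 0.833...
print(utility(["d1", "d3"]))  # 0.5*g(2)              = 0.707...
```

The concavity of g is what makes the second relevant document for the same intent worth less than the first, which is exactly the diminishing-returns behaviour stated above.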

The utility is modelled as linear in a joint feature map, U(y) = w · Φ(y), where Φ(y) is an aggregation of (text) features over the documents of ranking y, using any submodular aggregation function. This allows the relevance-diversity trade-off to be modelled.

Worked example (features: Economy, USA, Soccer, Technology): Φ(y) is built up as documents are added to the ranking, starting from Φ(y) = (0, 0, 0, 0) and growing to (5, 4, 0, 0) after d_1, (5, 7, 4, 0) after d_2, and (8, 9, 4, 0) after d_3, with each added document contributing its feature counts.

A second worked example with a different aggregation (consistent with an element-wise max): Φ(y) goes from (0, 0, 0, 0) to (5, 4, 0, 0) after d_1, to (5, 4, 4, 0) after d_2, stays at (5, 4, 4, 0) after d_3, and reaches (5, 4, 4, 4) after d_4; a redundant document adds nothing to the aggregated features.
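A minimal sketch of how Φ(y) might be computed for examples like the two above. The per-document feature values here are hypothetical (the slides' tables are not reproduced), and both element-wise max and plain sum are shown as possible submodular aggregations.

```python
import numpy as np

FEATURES = ["Economy", "USA", "Soccer", "Technology"]

# Hypothetical per-document feature vectors (illustrative values only).
docs = {
    "d1": np.array([5.0, 4.0, 0.0, 0.0]),
    "d2": np.array([0.0, 3.0, 4.0, 0.0]),
    "d3": np.array([3.0, 2.0, 0.0, 0.0]),
    "d4": np.array([0.0, 1.0, 0.0, 4.0]),
}

def phi(ranking, aggregate=np.maximum):
    """Phi(y): element-wise aggregation of document features over ranking y.
    aggregate=np.maximum gives max-style coverage; np.add gives plain sums."""
    out = np.zeros(len(FEATURES))
    for d in ranking:
        out = aggregate(out, docs[d])
    return out

w = np.ones(len(FEATURES))              # weight vector (to be learned)
print(phi(["d1", "d2", "d4"]))          # max aggregation
print(phi(["d1", "d2", "d4"], np.add))  # sum aggregation
print(w @ phi(["d1", "d2", "d4"]))      # utility w · Phi(y)
```

With the max aggregation, adding a document whose features are already dominated leaves Φ(y) unchanged, which is the behaviour shown in the second worked example.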

Given the utility function, a ranking that optimizes it can be found with a greedy algorithm: at each iteration, choose the document with the maximum marginal benefit. Example documents: d_1 (economy:3, usa:4, finance:2, ...), d_2 (usa:3, soccer:2, world cup:2, ...), d_3 (usa:2, politics:3, president:5, ...), d_4 (gadgets:2, technology:4, usa:2, ...). With initial marginal benefits d_1: 2.2, d_2: 1.7, d_3: 0.4, d_4: 1.9, the algorithm picks d_1 first, then recomputes the marginal benefits of the remaining documents and adds the next best, and so on.
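A small sketch of this greedy construction, assuming a `utility` function such as the one sketched earlier; for monotone submodular set utilities, this greedy procedure gives the classic (1 - 1/e) approximation guarantee.

```python
def greedy_ranking(candidates, utility, k):
    """Greedily build a ranking of length k: at each step add the document
    with the largest marginal benefit utility(y + [d]) - utility(y)."""
    ranking = []
    remaining = list(candidates)
    for _ in range(min(k, len(remaining))):
        current = utility(ranking)
        best = max(remaining, key=lambda d: utility(ranking + [d]) - current)
        ranking.append(best)
        remaining.remove(best)
    return ranking

# Example with the toy utility from the earlier sketch:
# greedy_ranking(["d1", "d2", "d3", "d4"], utility, k=2)
```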

Hand-labeling document-intent pairs is difficult, and LETOR research has shown that large datasets are required to perform well. It is therefore imperative to be able to use weaker signals/information sources. Our approach: implicit feedback from users (i.e., clicks).


Feedback model (figure: presented ranking, feedback ranking constructed from clicks, and optimal ranking). We will assume the feedback is informative: informally, the feedback ranking improves on the presented ranking by at least an α fraction of the gap to the optimal ranking. The parameter α quantifies the quality of the feedback and how noisy it is.
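As a hedged restatement of this assumption, following the coactive-learning formulation of Shivaswamy & Joachims on which this work builds (not verbatim from the slide):

```latex
% Strictly alpha-informative feedback (coactive-learning form, stated here as an
% assumption): the feedback ranking \bar{y}_t closes at least an alpha fraction of
% the utility gap between the presented ranking y_t and the optimal ranking y_t^*,
% up to a slack term \xi_t that absorbs noise.
U(\bar{y}_t) \;\ge\; U(y_t) + \alpha \bigl( U(y_t^*) - U(y_t) \bigr) - \xi_t,
\qquad \alpha \in (0, 1].
```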

The learning algorithm:
1. Initialize the weight vector w.
2. Get a fresh set of documents/articles.
3. Compute a ranking with the greedy algorithm, using the current w.
4. Present it to the user and obtain feedback.
5. Update w, e.g. w += Φ(Feedback) - Φ(Presented); this update gives the Diversifying Perceptron (DP).
6. Repeat from step 2 for the next user interaction.
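A minimal sketch of this loop, assuming the `phi` and `greedy_ranking` helpers sketched earlier; `get_candidates` and `get_user_feedback` are hypothetical stand-ins for document retrieval and for turning clicks into a feedback ranking.

```python
import numpy as np

def diversifying_perceptron(T, dim, get_candidates, get_user_feedback, k=10):
    """Online loop from the slide: rank greedily with the current w, observe
    implicit feedback, and update w by the feature difference."""
    w = np.zeros(dim)
    for t in range(T):
        candidates = get_candidates(t)                       # step 2: fresh documents
        utility = lambda y: float(w @ phi(y))                # current linear utility
        presented = greedy_ranking(candidates, utility, k)   # step 3
        feedback = get_user_feedback(presented)              # step 4: from clicks
        w += phi(feedback) - phi(presented)                  # step 5: perceptron update
    return w
```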

We would like the obtained user utility to be as close to optimal as possible. Regret is defined as the average difference between the utility of the optimal ranking and that of the presented ranking. Despite not knowing the optimal ranking, we can show theoretically that the regret of the DP:
- converges to 0 as T → ∞, at a rate of 1/√T;
- is independent of the feature dimensionality;
- degrades gracefully as noise increases.

There is no labeled intrinsic-diversity dataset, so we create artificial datasets by simulating users on the RCV1 news corpus, where each document is relevant to at most one topic. Each intrinsically diverse user has 5 randomly chosen topics as interests, and results are averaged over 50 different users.

Can the algorithm learn to cover different interests (i.e., go beyond just relevance)? Consider a purely diversity-seeking user who would like as many intents covered as possible. In every iteration, the user returns feedback on at most 5 documents (with α = 1).
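A hedged sketch of how such a simulated, purely diversity-seeking user might be implemented; the topic assignments, the "click anything that covers a new interest" rule, and all names below are assumptions made for illustration, not the paper's exact protocol.

```python
import random

def simulate_feedback(presented, doc_topics, interests, max_docs=5):
    """Simulated diversity-seeking user: scan the presented ranking top-down and
    'click' documents that cover a not-yet-covered interest (up to max_docs)."""
    covered, clicked = set(), []
    for d in presented:
        newly_covered = (doc_topics[d] & interests) - covered
        if newly_covered and len(clicked) < max_docs:
            clicked.append(d)
            covered |= newly_covered
    return clicked

# Each simulated user gets 5 randomly chosen topics as interests.
topics = [f"topic_{i}" for i in range(50)]
user_interests = set(random.sample(topics, 5))
```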

Result: submodularity helps cover more intents.

The algorithm is able to find all intents in the top 10, compared to the top 20 required by the non-diversified algorithm.

Works well even with noisy feedback.

The method is able to outperform supervised learning, despite not being told the true labels and receiving only partial information. It is also able to learn the required amount of diversity by combining relevance and diversity features, working almost as well as knowing the true user utility.

We presented an online learning algorithm for learning diverse rankings using implicit feedback. The relevance-diversity balance is achieved by modeling utility as a submodular function, and the method is shown, both theoretically and empirically, to be robust to noisy feedback.


Different users want differing amounts of diversity. This can be learned at the per-user level by combining relevance and diversity features; the algorithm learns their relative weights.

Intrinsic vs. extrinsic diversity:
- Intrinsic: diversity among the interests of a single user; avoid redundancy and cover different aspects of an information need; less studied; applicable to personalized search/recommendation.
- Extrinsic: diversity among the interests/information needs of different users; balance the interests of different users and provide some information to all of them; well studied; applicable to general-purpose search/recommendation.
[Radlinski, Bennett, Carterette and Joachims, Redundancy, Diversity and Interdependent Document Relevance; SIGIR Forum '09]



Now, let's allow for noise in the feedback.

The previous algorithm can produce negative weights, which breaks the guarantees; the modified version achieves the same regret bound as before.

What if the feedback can be worse than the presented ranking?

The regret is comparable to the case where the user's true utility is known; the algorithm is able to learn the relative importance of the two feature sets.

Different users have different information needs. Here too, the balance with relevance is crucial.

This method favors sparsity (similar to L1-regularized methods), and the regret can be bounded in a similar way.

Our method significantly outperforms the supervised baseline despite using far less information (complete relevance labels vs. preference feedback), and training is orders of magnitude faster: roughly 1000 seconds vs. 0.1 seconds.