Click Chain Model in Web Search
Fan Guo, Carnegie Mellon University (WWW'09, Madrid, Spain)
Slides revised and presented by Xin Xin

1 Click Chain Model in Web Search. Fan Guo, Carnegie Mellon University. Slides revised and presented by Xin Xin.

2 Outline: background and motivation; designing a click model; algorithms; experiments.


4 How to utilize users’ feedback to improve search engine results?

5 Diverse User Feedback: click-through; browser actions; dwell time; explicit judgments; other page elements.

6 Web Search Click Log: auto-generated data recording important information about search activity.
Query: cikm    Session ID: f851c5af178384d12f3d

Position  URL                                                        Click
1         cikm2008.org                                               1
2         www.cikm.org                                               0
3         www.cikm.org/                                              -
4         www.fc.ul.pt/cikm                                          -
5         www.comp.polyu.edu.hk/conference/cikm                      -
6         cikmconference.org                                         0
7         ir.iit.edu/cikm                                            -
8         www.informatik.uni-trier.de/~ley/db/conf/cikm/index.html   0
9         www.tzi.de/CIKM                                            -
10        www.cikm.com                                               0

7 A real world example

8 How large is the click log? Search logs: 10+ TB/day. Data set sizes in existing publications:
[Craswell+08]: 108k sessions
[Dupret+08]: 4.5M sessions (21 subsets of 216k sessions each)
[Guo+09a]: 8.8M sessions from 110k unique queries
[Guo+09b]: 8.8M sessions from 110k unique queries
[Chapelle+09]: 58M sessions from 682k unique queries
[Liu+09a]: 0.26PB of data from 103M unique queries

9 Intuition to Utilize Clicks: adapt ranking to user clicks. (Chart: number of clicks received, by result position.)

10 Position Bias Problem. (Chart: number of clicks received, by result position.)

11 Problem Definition: given a click log data set, compute the user-perceived relevance of each query-document pair. The solution should be: aware of position bias and context dependency; scalable to terabyte-scale data; incremental, so that estimates stay up to date.

12 Outline: background and motivation; designing a click model; algorithms; experiments.

13 Examination Hypothesis: a document must be examined before it can be clicked, and the (conditional) probability of a click given examination depends on the document's relevance.
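The examination hypothesis can be written as two conditionals (notation assumed, following common click-model usage: E_i and C_i are the examination and click events at position i, and r_{d_i} is the perceived relevance of the document shown there):

```latex
P(C_i = 1 \mid E_i = 1) = r_{d_i}, \qquad P(C_i = 1 \mid E_i = 0) = 0
```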

14 Cascade Hypothesis: the first document is always examined, and examination follows a strict linear order with a first-order Markov property: examination at position (i+1) depends only on the examination and click at position i.
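In the same notation (again assumed), the cascade hypothesis amounts to:

```latex
P(E_1 = 1) = 1, \qquad
P(E_{i+1} = 1 \mid E_i = 0) = 0, \qquad
P(E_{i+1} \mid E_1, C_1, \ldots, E_i, C_i) = P(E_{i+1} \mid E_i, C_i)
```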

15 User Behavior Description (flowchart): examine the current document; decide whether to click it; then either stop or go on to examine the next document.

16 Click Chain Model. (Figure: graphical model combining the two hypotheses; a chain of examination nodes E_1…E_5, each with a click node C_i and a relevance node R_i. The examination hypothesis governs C_i given E_i and R_i; the cascade hypothesis governs the chain from E_i to E_{i+1}.)
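A generative sketch of the model in Python; the conditional probabilities below are my reading of the WWW'09 formulation, so the alpha parameters and exact forms should be treated as assumptions rather than the paper's definitive equations:

```python
import random

def ccm_generate(R, alpha1, alpha2, alpha3):
    """Generative sketch of the Click Chain Model (assumed conditionals):
      P(C_i=1 | E_i=1)            = R_i                        (examination)
      P(E_{i+1}=1 | E_i=1, C_i=0) = alpha1
      P(E_{i+1}=1 | E_i=1, C_i=1) = alpha2*(1-R_i) + alpha3*R_i
      P(E_{i+1}=1 | E_i=0)        = 0                           (cascade)
    R is the list of user-perceived relevance values by position."""
    clicks, examined = [], True
    for r in R:
        if not examined:
            clicks.append(0)       # unexamined documents are never clicked
            continue
        c = random.random() < r    # click with probability R_i
        clicks.append(int(c))
        p_next = alpha2 * (1 - r) + alpha3 * r if c else alpha1
        examined = random.random() < p_next
    return clicks
```

With relevance 1 everywhere and alpha3 = 1 this produces a click at every position; with relevance 0 and alpha1 = 0 the user clicks nothing and abandons after the first result.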

17 Outline: background and motivation; designing a click model; algorithms; experiments.

18 A Coin-Toss Example for the Bayesian Framework. The (unnormalized) density function evolves from the prior as tosses are observed: x^1(1-x)^0 → x^2(1-x)^0 → x^3(1-x)^0 → x^3(1-x)^1 → x^4(1-x)^1.
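A minimal sketch of this update: with a flat prior, after h heads and t tails the unnormalized posterior is x^h (1-x)^t, and the Beta posterior mean is (h+1)/(h+t+2). The toss sequence H,H,H,T,H matching the slide's exponents is an assumption read off the densities:

```python
def posterior_exponents(tosses):
    """Track (h, t) so the unnormalized posterior after each toss is
    x**h * (1 - x)**t, starting from a flat (uniform) prior."""
    h = t = 0
    history = []
    for toss in tosses:
        if toss == 'H':
            h += 1
        else:
            t += 1
        history.append((h, t))
    return history

def beta_mean(h, t):
    # Posterior mean of x under a uniform Beta(1, 1) prior.
    return (h + 1) / (h + t + 2)
```

`posterior_exponents('HHHTH')` reproduces the slide's progression from x^1(1-x)^0 up to x^4(1-x)^1.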

19 Click Data Example. The (unnormalized) density function evolves from the prior as click observations arrive:
x^1(1-x)^0(1-0.6x)^0(1+0.3x)^1(1-0.5x)^0(1-0.2x)^0 … →
x^1(1-x)^1(1-0.6x)^0(1+0.3x)^1(1-0.5x)^0(1-0.2x)^0 … →
x^2(1-x)^1(1-0.6x)^0(1+0.3x)^2(1-0.5x)^0(1-0.2x)^0 … →
x^3(1-x)^1(1-0.6x)^1(1+0.3x)^2(1-0.5x)^0(1-0.2x)^0 … →
x^3(1-x)^1(1-0.6x)^1(1+0.3x)^2(1-0.5x)^1(1-0.2x)^0 …
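Such a density has no convenient closed form, but it can be normalized numerically to get a posterior mean for the relevance x. A midpoint-rule sketch, using only the factors visible on the slide (the trailing "…" factors are omitted, and the grid size is arbitrary):

```python
def density(x):
    # Final unnormalized posterior from the slide, truncated to the
    # factors shown: x^3 (1-x)^1 (1-0.6x)^1 (1+0.3x)^2 (1-0.5x)^1
    return x**3 * (1 - x) * (1 - 0.6*x) * (1 + 0.3*x)**2 * (1 - 0.5*x)

def posterior_mean(f, n=100000):
    # Midpoint-rule integration on [0, 1]:
    # E[x] = (integral of x*f(x)) / (integral of f(x))
    xs = [(i + 0.5) / n for i in range(n)]
    z = sum(f(x) for x in xs)
    return sum(x * f(x) for x in xs) / z
```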

20 Estimating P(C | R_i)

21–25 (Figures: step-by-step inference walkthrough on the CCM graphical model; nodes R_1…R_5, E_1…E_5, C_1…C_5, with the observed click values 0/1 highlighted at each step.)

26 Putting them together

27 Alpha Estimation

28 Outline: background and motivation; designing a click model; algorithms; experiments.

29 Data Set: collected over 2 weeks in July. Preprocessing: discard no-click sessions for fair comparison; remove the 178 most frequent queries. Split into training/test sets according to time stamps.

30 Data Set. After preprocessing: 110,630 distinct queries; 4.8M/4.0M query sessions in the training/test sets.

31 Metrics. Efficiency: computational time. Effectiveness: perplexity; log-likelihood; click prediction.
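The two likelihood-based metrics can be sketched as follows; click perplexity is written here in the standard 2^(-average log2-likelihood) form used in click-model papers, which is assumed to match the paper's variant:

```python
import math

def log_likelihood(probs, clicks):
    """Log-likelihood of observed clicks (0/1) under predicted click
    probabilities; higher (closer to 0) is better."""
    return sum(math.log(q) if c else math.log(1 - q)
               for q, c in zip(probs, clicks))

def click_perplexity(probs, clicks):
    """Perplexity of click predictions: 2 ** (-average log2-likelihood).
    Ranges from 1 (perfect prediction) upward; lower is better."""
    ll = sum(math.log2(q) if c else math.log2(1 - q)
             for q, c in zip(probs, clicks))
    return 2 ** (-ll / len(clicks))
```

For example, predicting 0.5 everywhere gives perplexity 2 regardless of the clicks, while predicting each observed click with probability 1 gives the ideal value of 1.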

32 Competitors. UBM: User Browsing Model (Dupret et al., SIGIR'08). DCM: Dependent Click Model (Guo et al., WSDM'09).

33 Results: Running Time. Environment: Unix server, 2.8GHz cores, MATLAB R2008b.
Model  Time
CCM    9.8 min
UBM    333 min
DCM    5.4 min

34 Results: Perplexity. (Chart; lower perplexity is better.)

35 Results: Log-Likelihood. (Chart; higher log-likelihood is better.)

36 First Clicked Position

37 Last Clicked Position

38 The End