Interactive Information Retrieval

Similar presentations
Accurately Interpreting Clickthrough Data as Implicit Feedback Joachims, Granka, Pan, Hembrooke, Gay Paper Presentation: Vinay Goel 10/27/05.
CONTRIBUTIONS Ground-truth dataset Simulated search tasks environment Multiple everyday applications (MS Word, MS PowerPoint, Mozilla Browser) Implicit.
ENT4310 Business Economics and Marketing A six-step model for marketing research Arild Aspelund.
Haystack: Per-User Information Environment 1999 Conference on Information and Knowledge Management Eytan Adar et al Presented by Xiao Hu CS491CXZ.
Modelling Relevance and User Behaviour in Sponsored Search using Click-Data Adarsh Prasad, IIT Delhi Advisors: Dinesh Govindaraj SVN Vishwanathan* Group:
Bringing Order to the Web: Automatically Categorizing Search Results Hao Chen SIMS, UC Berkeley Susan Dumais Adaptive Systems & Interactions Microsoft.
Data Mining Methodology. 1. Why have a Methodology? Don't want to learn things that aren't true; results may not represent any underlying reality (spurious correlation).
Information Retrieval: Human-Computer Interfaces and Information Access Process.
Optimal Design Laboratory | University of Michigan, Ann Arbor 2011 Design Preference Elicitation Using Efficient Global Optimization Yi Ren Panos Y. Papalambros.
Overview of Collaborative Information Retrieval (CIR) at FIRE 2012 Debasis Ganguly, Johannes Leveling, Gareth Jones School of Computing, CNGL, Dublin City.
Learning User Interaction Models for Predicting Web Search Result Preferences Eugene Agichtein Eric Brill Susan Dumais Robert Ragno Microsoft Research.
From requirements to design
Click Evidence Signals and Tasks Vishwa Vinay Microsoft Research, Cambridge.
PROBLEM BEING ATTEMPTED Privacy-Enhancing Personalized Web Search Based on: User's Existing Private Data (Browsing History, Recent Documents).
Instructional Information in Adaptive Spatial Hypertext Luis Francisco-Revilla and Frank Shipman Presented By : Ananda Man Shrestha.
Retrieval Evaluation. Brief Review Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
An investigation of query expansion terms Gheorghe Muresan Rutgers University, School of Communication, Information and Library Science 4 Huntington St.,
Lecture 12 User Modeling. Topics: Basics, Example User Model, Construction of User Models, Updating of User Models, Applications.
Online Search Evaluation with Interleaving Filip Radlinski Microsoft.
Modern Retrieval Evaluations Hongning Wang
Personalization in Local Search Personalization of Content Ranking in the Context of Local Search Philip O’Brien, Xiao Luo, Tony Abou-Assaleh, Weizheng.
Personalization of the Digital Library Experience: Progress and Prospects Nicholas J. Belkin Rutgers University, USA
Information Filtering & Recommender Systems (Lecture for CS410 Text Info Systems) ChengXiang Zhai Department of Computer Science University of Illinois,
Philosophy of IR Evaluation Ellen Voorhees. NIST Evaluation: How well does system meet information need? System evaluation: how good are document rankings?
A Comparative Study of Search Result Diversification Methods Wei Zheng and Hui Fang University of Delaware, Newark DE 19716, USA
Improving Web Search Ranking by Incorporating User Behavior Information Eugene Agichtein Eric Brill Susan Dumais Microsoft Research.
Fan Guo (Carnegie Mellon University), Chao Liu and Yi-Min Wang (Microsoft Research). Feb 11, 2009.
CIKM'09 Date: 2010/8/24 Advisor: Dr. Koh, Jia-Ling Speaker: Lin, Yi-Jhen.
Karthik Raman, Pannaga Shivaswamy & Thorsten Joachims Cornell University.
Lecture 2 Jan 13, 2010 Social Search. What is Social Search? Social Information Access –a stream of research that explores methods for organizing users’
Implicit User Feedback Hongning Wang. Explicit relevance feedback: query → user judgment → feedback judgments (d1+, d2−, d3+, …, dk−) → updated query.
Context-Sensitive Information Retrieval Using Implicit Feedback Xuehua Shen, Department of Computer Science, University of Illinois at Urbana-Champaign.
Collaborative Information Retrieval - Collaborative Filtering systems - Recommender systems - Information Filtering Why do we need CIR? - IR system augmentation.
Lecture 2 Jan 15, 2008 Social Search. What is Social Search? Social Information Access –a stream of research that explores methods for organizing users’
Jun Li, Peng Zhang, Yanan Cao, Ping Liu, Li Guo Chinese Academy of Sciences State Grid Energy Institute, China Efficient Behavior Targeting Using SVM Ensemble.
Information Retrieval in Context of Digital Libraries - or DL in Context of IR Peter Ingwersen Royal School of LIS Denmark –
Personalized Social Search Based on the User’s Social Network David Carmel et al. IBM Research Lab in Haifa, Israel CIKM’09 16 February 2011 Presentation.
Similarity & Recommendation Arjen P. de Vries CWI Scientific Meeting September 27th 2013.
Xutao Li, Gao Cong, Xiao-Li Li
ASSIST: Adaptive Social Support for Information Space Traversal Jill Freyne and Rosta Farzan.
Context-Aware Query Classification Huanhuan Cao, Derek Hao Hu, Dou Shen, Daxin Jiang, Jian-Tao Sun, Enhong Chen, Qiang Yang Microsoft Research Asia SIGIR.
Click Chain Model in Web Search Fan Guo Carnegie Mellon University PPT Revised and Presented by Xin Xin.
Augmenting (personal) IR Readings Review Evaluation Papers returned & discussed Papers and Projects checkin time.
The Loquacious (愛說話) User: A Document-Independent Source of Terms for Query Expansion Diane Kelly et al. University of North Carolina at Chapel Hill.
Identifying “Best Bet” Web Search Results by Mining Past User Behavior Author: Eugene Agichtein, Zijian Zheng (Microsoft Research) Source: KDD2006 Reporter:
26134 Business Statistics Week 4 Tutorial Simple Linear Regression Key concepts in this tutorial are listed below 1. Detecting.
Text Information Management ChengXiang Zhai, Tao Tao, Xuehua Shen, Hui Fang, Azadeh Shakery, Jing Jiang.
Navigation Aided Retrieval Shashank Pandit & Christopher Olston Carnegie Mellon & Yahoo.
Thomas Grandell April 8 th, 2016 This work is licensed under the Creative Commons Attribution 4.0 International.
WHIM - Spring '10. By: Enza Desai. What is HCIR? The study of IR techniques that bring human intelligence into the search process. Coined by Gary Marchionini.
SEARCH AND CONTEXT Susan Dumais, Microsoft Research INFO 320.
Introduction to Survey Research
Presented by Archana Kumari | Supervised By Mr Vikram Singh
Recommender Systems & Collaborative Filtering
DATA COLLECTION METHODS IN NURSING RESEARCH
Accurately Interpreting Clickthrough Data as Implicit Feedback
Inference for Regression (Chapter 14) A.P. Stats Review Topic #3
Kyriaki Dimitriadou, Brandeis University
Adaptive, Personalized Diversity for Visual Discovery
26134 Business Statistics Week 5 Tutorial
Recommenders for Information Seeking Tasks: Lessons Learned
Introduction to IR Research
Lecture 8 Preview: Interval Estimates and Hypothesis Testing
Jamie Cummins Bryan Roche Aoife Cartwright Maynooth University
Information Retrieval Models: Probabilistic Models
CHAPTER 29: Multiple Regression*
Exploratory Search Beyond the Query–Response Paradigm
Evidence from Behavior
THE PROCESS OF INTERACTION DESIGN
Presentation transcript:

Interactive Information Retrieval: Exploratory Search in Interactive Information Retrieval. Dorota Glowacka.

Exploratory Search First, what is exploratory search? There are two broad categories of search: (1) lookup and (2) exploratory search. In lookup search, the search goals are well defined; in exploratory search, the goals are open-ended and evolve as the searcher learns. G. Marchionini. Exploratory search: from finding to understanding. Comm. ACM, 49(4):41–46, 2006.

Core research questions How to support the user in navigating the information space when search goals can be poorly defined? How to understand the user's search intent? How to adapt with the user as information needs change? These are the major challenges in supporting exploratory search.

Approach Build practical information retrieval systems for exploratory search using reinforcement learning. Draw insights from user studies to understand user behaviour and fine-tune the system. Draw insights from user behaviour to improve user modelling (and iterate…).

SCINET Glowacka et al. IUI 2013, Kangasraasio et al. IUI 2015, Kangasraasio et al. UMAP 2016

PULP Athukorala et al. CIKM15, Athukorala et al. IUI16, Medlar et al. SIGIR16, Medlar et al. IUI17

Why Reinforcement Learning? A generic framework for building a user model over a search session. Allows the system to balance exploration and exploitation: users are not trapped in a local context bubble based on the initial query, and are exposed to a more diverse set of data. A harder problem than expected…
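To make the exploration/exploitation balance concrete, here is a minimal sketch of a LinUCB-style linear bandit over document feature vectors. This illustrates the general idea only, not the exact algorithm behind SCINET or PULP; the class name, feature representation, and reward scale are all assumptions.

```python
import numpy as np

class LinUCB:
    """LinUCB-style linear bandit over document feature vectors.

    Score = estimated relevance + alpha * uncertainty.  The uncertainty
    bonus pulls the session towards documents the model knows little
    about (exploration), while the relevance estimate exploits the
    feedback collected so far.
    """

    def __init__(self, dim, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(dim)     # regularised Gram matrix of shown documents
        self.b = np.zeros(dim)   # feedback-weighted sum of shown documents

    def select(self, candidates):
        """candidates: (n_docs, dim) array; returns index of doc to show."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b   # current linear relevance model
        bonus = np.sqrt(np.einsum('ij,jk,ik->i', candidates, A_inv, candidates))
        return int(np.argmax(candidates @ theta + self.alpha * bonus))

    def update(self, x, reward):
        """x: features of the shown document; reward: user feedback in [0, 1]."""
        self.A += np.outer(x, x)
        self.b += reward * x
```

The alpha parameter plays the same role as the exploration rate 𝛄 in the slides that follow: raising it pushes the system towards documents it is uncertain about, away from the local context bubble.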

Balancing Exploration/Exploitation Implicit feedback (Athukorala et al. CIKM15): model the relationship between exploration rate and number of relevant documents. Offline – one exploration rate for the population. Explicit feedback (Medlar et al. IUI17): model the relationship between exploration rate and user experience. Online – personalised exploration rate based on interaction data.
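In the implicit-feedback, offline setting, the task reduces to estimating how the number of relevant documents found varies with the exploration rate and picking the rate that maximises it for the whole population. A minimal sketch with toy numbers (the actual model in Athukorala et al. CIKM15 is more involved):

```python
import numpy as np

# Exploration rates tried in an offline study and the average number of
# relevant documents users found at each rate (toy numbers, for illustration).
gammas = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
relevant = np.array([3.1, 4.5, 5.2, 5.0, 4.2, 3.0])

# Fit a quadratic to the relationship and take its maximiser as the single
# population-wide exploration rate.
a, b, c = np.polyfit(gammas, relevant, 2)
best_gamma = -b / (2 * a)   # vertex of the fitted parabola (a < 0 here)
print(best_gamma)
```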

Setting Exploration/Exploitation with Explicit Feedback Ordinary regression fits a model based on linear relationships between response and explanatory variables…

Setting Exploration/Exploitation with Explicit Feedback Ordinary regression fits a model based on linear relationships between response and explanatory variables, but 𝛄 is a parameter: we do not observe it!

Setting Exploration/Exploitation with Explicit Feedback Run experiments with random 𝛄 and collect user feedback. For some users, 𝛄 was too low; for others, 𝛄 was too high.

Setting Exploration/Exploitation with Explicit Feedback We don't know the "true" 𝛄, but if the 𝛄 we showed was too high, the "true" 𝛄 lies in the interval [0, 𝛄]; if it was too low, the "true" 𝛄 lies in [𝛄, ∞).

Setting Exploration/Exploitation with Explicit Feedback Fit the model over the censored intervals and then make predictions as normal!
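What these slides describe is interval (censored) regression: each user contributes the probability mass of their interval rather than a point likelihood. Below is a minimal sketch under a Gaussian error assumption; the function name and optimiser choice are illustrative, not the exact model of Medlar et al. IUI17.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_interval_regression(X, lo, hi):
    """Fit y = X @ beta + N(0, sigma^2) when each y is only known to lie
    in [lo, hi]; use -np.inf / np.inf for open-ended intervals."""
    _, d = X.shape

    def neg_log_lik(params):
        beta, sigma = params[:d], np.exp(params[d])   # exp keeps sigma > 0
        mu = X @ beta
        # Likelihood of an interval is the Gaussian mass it contains.
        p = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
        return -np.sum(np.log(np.clip(p, 1e-12, None)))

    res = minimize(neg_log_lik, x0=np.zeros(d + 1), method="Nelder-Mead")
    return res.x[:d], np.exp(res.x[d])
```

Prediction for a new user is then simply X_new @ beta, exactly as with ordinary regression, which is why the final step works "as normal".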

Balancing Exploration/Exploitation with User Experience Users state document diversity preferences relative to a random exploration rate. Three user interaction variables: knowledge level, interface time, clicked documents. Predictions consistent with user feedback in 80% of cases (Medlar et al. IUI17).

Analysing User Behaviour in Exploratory Search Building new IR systems will only get us so far… We need to develop a better understanding of user behaviour to improve user modelling. Is exploratory search different from lookup search? How reliable is relevance feedback (we assume users know what they want and are consistent)?

User Behaviour in Exploratory Search vs. Lookup Search People behave differently when performing exploratory and lookup searches (Athukorala et al. JASIST16): longer queries, more scrolling, longer search sessions, more clicks, etc. The differences are predictive: a simple classifier distinguishes exploratory from lookup sessions with 81% accuracy (Athukorala et al. IUI16).
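A minimal sketch of such a classifier over the behavioural signals listed above. The feature set, the toy numbers, and the choice of logistic regression are placeholders for illustration, not the setup actually used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per search session: [query length (terms), scroll events,
# session duration (s), result clicks].  Labels: 1 = exploratory, 0 = lookup.
# The numbers below are toy examples, not data from the study.
X = np.array([
    [7, 42, 610, 12],   # long, scroll-heavy session -> exploratory
    [2,  3,  35,  1],   # short, targeted session    -> lookup
    [6, 30, 480,  9],
    [3,  5,  60,  2],
])
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.predict([[5, 25, 400, 8]]))  # likely classified as exploratory
```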

Consistency in User Behaviour in Exploratory Search A permutation-based metric for scoring relevance feedback consistency (CIKM18). Significant difference between exploratory and lookup search (P = 0.00004). Relevance feedback in exploratory search is more likely to be inconsistent than in lookup search.
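The consistency metric itself is specific to the paper, but the permutation machinery behind a P-value like the one above can be illustrated with a generic two-sample permutation test over per-user consistency scores (a sketch; function and variable names are assumptions):

```python
import numpy as np

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in mean consistency
    score between two groups of users (e.g. exploratory vs. lookup)."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # random relabelling
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)            # add-one smoothing
```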

Questions?