Evaluating Associative Browsing by Simulation. Jin Y. Kim / W. Bruce Croft / David Smith.

Similar presentations
Relevance Feedback User tells system whether returned/disseminated documents are relevant to query/information need or not Feedback: usually positive sometimes.

Struggling or Exploring? Disambiguating Long Search Sessions
Query Chain Focused Summarization Tal Baumel, Rafi Cohen, Michael Elhadad Jan 2014.
1 Evaluation Rong Jin. 2 Evaluation  Evaluation is key to building effective and efficient search engines usually carried out in controlled experiments.
Modelling Relevance and User Behaviour in Sponsored Search using Click-Data Adarsh Prasad, IIT Delhi Advisors: Dinesh Govindaraj SVN Vishwanathan* Group:
Query Chains: Learning to Rank from Implicit Feedback Paper Authors: Filip Radlinski Thorsten Joachims Presented By: Steven Carr.
SEARCHING QUESTION AND ANSWER ARCHIVES Dr. Jiwoon Jeon Presented by CHARANYA VENKATESH KUMAR.
Developing and Evaluating a Query Recommendation Feature to Assist Users with Online Information Seeking & Retrieval With graduate students: Karl Gyllstrom,
1.Accuracy of Agree/Disagree relation classification. 2.Accuracy of user opinion prediction. 1.Task extraction performance on Bing web search log with.
FindAll: A Local Search Engine for Mobile Phones Aruna Balasubramanian University of Washington.
Information Retrieval: Human-Computer Interfaces and Information Access Process.
Context-aware Query Suggestion by Mining Click-through and Session Data Authors: H. Cao et.al KDD 08 Presented by Shize Su 1.
Evaluating Search Engine
The Research Project - Preliminary Proposal Presentation Contextual Suggestion Track: Travel Plan Recommendation System Based on Open-web Information Presenter:
A Web of Concepts Dalvi, et al. Presented by Andrew Zitzelberger.
1 CS 430 / INFO 430 Information Retrieval Lecture 8 Query Refinement: Relevance Feedback Information Filtering.
Modern Information Retrieval
Active Repository Systems Yunwen Ye Cleaver Retreat June 14, 2001.
Information Retrieval Concerned with the: Representation of Storage of Organization of, and Access to Information items.
INFO 624 Week 3 Retrieval System Evaluation
Retrieval Evaluation. Brief Review Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
Learning to Advertise. Introduction Advertising on the Internet = $$$ –Especially search advertising and web page advertising Problem: –Selecting ads.
Information Retrieval: Human-Computer Interfaces and Information Access Process.
Recall: Query Reformulation Approaches 1. Relevance feedback based vector model (Rocchio …) probabilistic model (Robertson & Sparck Jones, Croft…) 2. Cluster.
Web Mining Research: A Survey
Ed H. Chi IMA Digital Library Workshop Ed H. Chi U of Minnesota Ph.D.: Visualization Spreadsheets M.S.: Computational Biology.
1 CS 430 / INFO 430 Information Retrieval Lecture 24 Usability 2.
1 LM Approaches to Filtering Richard Schwartz, BBN LM/IR ARDA 2002 September 11-12, 2002 UMASS.
Web Projections Learning from Contextual Subgraphs of the Web Jure Leskovec, CMU Susan Dumais, MSR Eric Horvitz, MSR.
J. Chen, O. R. Zaiane and R. Goebel An Unsupervised Approach to Cluster Web Search Results based on Word Sense Communities.
Retrieval Evaluation. Introduction Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
Online Learning for Web Query Generation: Finding Documents Matching a Minority Concept on the Web Rayid Ghani Accenture Technology Labs, USA Rosie Jones.
Query session guided multi- document summarization THESIS PRESENTATION BY TAL BAUMEL ADVISOR: PROF. MICHAEL ELHADAD.
Retrieval and Evaluation Techniques for Personal Information Jin Young Kim 7/26 Ph.D Dissertation Seminar.
HUMANE INFORMATION SEEKING: GOING BEYOND THE IR WAY JIN YOUNG IBM RESEARCH 1.
Retrieval Model and Evaluation Jinyoung Kim UMass Amherst CS646 Lecture 1.
Search and Retrieval: Relevance and Evaluation Prof. Marti Hearst SIMS 202, Lecture 20.
Jaime Teevan Microsoft Research Finding and Re-Finding Personal Information.
CS598CXZ Course Summary ChengXiang Zhai Department of Computer Science University of Illinois, Urbana-Champaign.
IR Evaluation Evaluate what? –user satisfaction on specific task –speed –presentation (interface) issue –etc. My focus today: –comparative performance.
Understanding and Predicting Graded Search Satisfaction Tang Yuk Yu 1.
UOS 1 Ontology Based Personalized Search Zhang Tao The University of Seoul.
Personal Information Management Vitor R. Carvalho : Personalized Information Retrieval Carnegie Mellon University February 8 th 2005.
Hao Wu Nov Outline Introduction Related Work Experiment Methods Results Conclusions & Next Steps.
Mining the Web to Create Minority Language Corpora Rayid Ghani Accenture Technology Labs - Research Rosie Jones Carnegie Mellon University Dunja Mladenic.
1 Information Retrieval Acknowledgements: Dr Mounia Lalmas (QMW) Dr Joemon Jose (Glasgow)
Implicit Acquisition of Context for Personalization of Information Retrieval Systems Chang Liu, Nicholas J. Belkin School of Communication and Information.
Probabilistic Query Expansion Using Query Logs Hang Cui Tianjin University, China Ji-Rong Wen Microsoft Research Asia, China Jian-Yun Nie University of.
Implicit User Feedback Hongning Wang Explicit relevance feedback 2 Updated query Feedback Judgments: d 1 + d 2 - d 3 + … d k -... Query User judgment.
Toward A Session-Based Search Engine Smitha Sriram, Xuehua Shen, ChengXiang Zhai Department of Computer Science University of Illinois, Urbana-Champaign.
GUIDED BY DR. A. J. AGRAWAL Search Engine By Chetan R. Rathod.
Ben Carterette Paul Clough Evangelos Kanoulas Mark Sanderson.
WIRED Week 3 Syllabus Update (next week) Readings Overview - Quick Review of Last Week’s IR Models (if time) - Evaluating IR Systems - Understanding Queries.
LANGUAGE MODELS FOR RELEVANCE FEEDBACK Lee Won Hee.
Adish Singla, Microsoft Bing Ryen W. White, Microsoft Research Jeff Huang, University of Washington.
Retroactive Answering of Search Queries Beverly Yang Glen Jeh.
Implicit User Feedback Hongning Wang Explicit relevance feedback 2 Updated query Feedback Judgments: d 1 + d 2 - d 3 + … d k -... Query User judgment.
Information Retrieval
ASSIST: Adaptive Social Support for Information Space Traversal Jill Freyne and Rosta Farzan.
ASSOCIATIVE BROWSING Evaluating 1 Jinyoung Kim / W. Bruce Croft / David Smith for Personal Information.
Relevance Models and Answer Granularity for Question Answering W. Bruce Croft and James Allan CIIR University of Massachusetts, Amherst.
Augmenting (personal) IR Readings Review Evaluation Papers returned & discussed Papers and Projects checkin time.
Chapter. 3: Retrieval Evaluation 1/2/2016Dr. Almetwally Mostafa 1.
Predicting User Interests from Contextual Information R. W. White, P. Bailey, L. Chen Microsoft (SIGIR 2009) Presenter : Jae-won Lee.
1 CS 430 / INFO 430 Information Retrieval Lecture 12 Query Refinement and Relevance Feedback.
Navigation Aided Retrieval Shashank Pandit & Christopher Olston Carnegie Mellon & Yahoo.
1 © 2004 Cisco Systems, Inc. All rights reserved. Session Number Presentation_ID Cisco Technical Support Seminar Using the Cisco Technical Support Website.
Date: 2012/11/15 Author: Jin Young Kim, Kevyn Collins-Thompson,
Retrieval Performance Evaluation - Measures
Ranking using Multiple Document Types in Desktop Search
Presentation transcript:

Evaluating Associative Browsing by Simulation. Jin Y. Kim / W. Bruce Croft / David Smith

* What do you remember about your documents? Use search if you recall keywords! [Example document found via the recalled keywords 'Registration' and 'James']

* What if keyword search is not enough? Associative browsing to the rescue!

* Probabilistic User Modeling
- Query generation model: term selection from a target document [Kim & Croft 09]
- State transition model: use browsing when the result looks only marginally relevant
- Link selection model: click on browsing suggestions based on perceived relevance
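To make the three components concrete, here is a minimal Python sketch of one possible realization. The function names, the query length, and the rank thresholds are illustrative assumptions, not details taken from the slides; only the overall structure (term sampling from the target document, rank-based state transitions, relevance-based link clicks) follows the model described above.

```python
import random

def generate_query(target_doc_terms, term_weights, query_len=2):
    """Query generation model: sample query terms from the target document,
    weighted by their importance (e.g. tf-idf), as in [Kim & Croft 09]."""
    terms = list(target_doc_terms)
    weights = [term_weights[t] for t in terms]
    return random.choices(terms, weights=weights, k=query_len)

def next_action(target_rank):
    """State transition model: stop when the target is in the top 10,
    browse when the result list is only marginally relevant (rank 11-50),
    reformulate the query otherwise."""
    if target_rank <= 10:
        return "click_and_stop"
    if target_rank <= 50:
        return "browse"
    return "reformulate"

def select_link(suggestions, perceived_relevance):
    """Link selection model: click the browsing suggestion perceived as most
    relevant (refined by the user's knowledge level on a later slide)."""
    return max(suggestions, key=lambda d: perceived_relevance[d])
```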

* Simulating Interaction using the Probabilistic User Model
[Flow diagram: starting from an initial query (e.g. 'James Registration'), the simulated user checks the rank of the target document. Target doc in the top 10: click on the result (e.g. '1. Two Dollar Regist…') and the session ends. Marginally relevant (11 < rank < 50): switch to browsing. Not relevant (rank > 50): reformulate the query (e.g. 'Two Dollar Registration') and search again.]
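Read as code, the flow above could look like the following sketch. It assumes `search`, `browse`, and `reformulate_query` are supplied by the surrounding simulator, and it uses the rank bands from the diagram; `max_steps` and the helper signatures are hypothetical.

```python
def simulate_session(target_doc, search, browse, reformulate_query,
                     initial_query, max_steps=10):
    """One known-item finding session following the state diagram:
    success if the target reaches the top 10, browse on a marginally
    relevant list (rank 11-50), reformulate the query otherwise."""
    query = initial_query
    for _ in range(max_steps):
        ranked_docs = search(query)  # ranked list of document ids
        rank = (ranked_docs.index(target_doc) + 1
                if target_doc in ranked_docs else 10**6)
        if rank <= 10:
            return True                               # click the result, end
        if rank <= 50:
            if browse(ranked_docs, target_doc):       # follow associative links
                return True
        query = reformulate_query(query, target_doc)  # e.g. add a recalled term
    return False
```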

* A User Model for Link Selection
User's browsing behavior [Smucker & Allan 06]
- Fan-out 1~3: the number of clicks per ranked list
- BFS vs. DFS: the order in which documents are visited
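One possible rendering of the fan-out and visiting-order parameters as a traversal routine. This is a sketch: `linked_docs` and `is_target` are hypothetical callables, and only the BFS/DFS and fan-out ideas come from the slide.

```python
from collections import deque

def browse_links(start_doc, linked_docs, is_target, fan_out=3, order="BFS"):
    """Traverse browsing suggestions with a fixed fan-out (1-3 clicks per
    ranked list), visiting documents breadth-first or depth-first."""
    frontier = deque([start_doc])
    visited = set()
    while frontier:
        doc = frontier.popleft() if order == "BFS" else frontier.pop()
        if doc in visited:
            continue
        visited.add(doc)
        if is_target(doc):
            return True
        clicked = linked_docs(doc)[:fan_out]  # top-ranked suggestions only
        # In DFS, push in reverse so the top-ranked suggestion is visited first.
        frontier.extend(clicked if order == "BFS" else reversed(clicked))
    return False
```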

* A User Model for Link Selection
User's level of knowledge
- Random: randomly click on an item in the ranked list
- Informed: more likely to click on a more relevant item
- Oracle: always click on the most relevant item
Relevance is estimated using the position of the target item.
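The three knowledge levels differ only in how the click probability depends on estimated relevance. Here is a sketch under that assumption; how `relevance` is computed from the target item's position is left abstract, and the exact weighting is not taken from the slides.

```python
import random

def click_distribution(ranked_list, relevance, knowledge="informed"):
    """Click probabilities over a ranked list of suggestions.
    random: uniform; informed: proportional to estimated relevance;
    oracle: all probability mass on the most relevant item."""
    if knowledge == "random":
        weights = [1.0] * len(ranked_list)
    elif knowledge == "informed":
        weights = [relevance(item) for item in ranked_list]
    else:  # "oracle"
        best = max(ranked_list, key=relevance)
        weights = [1.0 if item is best else 0.0 for item in ranked_list]
    total = sum(weights) or 1.0
    return [w / total for w in weights]

def choose_click(ranked_list, relevance, knowledge="informed"):
    """Sample one clicked item according to the knowledge level."""
    probs = click_distribution(ranked_list, relevance, knowledge)
    return random.choices(ranked_list, weights=probs, k=1)[0]
```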

* Evaluation Results
Simulated interaction was generated using the CS collection: 63,260 known-item finding sessions in total.
The value of browsing:
- Browsing was used in 15% of all sessions
- Browsing saved 42% of sessions when used
Comparison with user study results: roughly matches in terms of overall usage and success ratio.

Evaluation Type   Total    Browsing used    Successful
Simulation        63,260   9,410 (14.8%)    3,957 (42.0%)
User Study        290      42 (14.5%)       15 (35.7%)

* Evaluation Results
[Chart: success ratio of browsing as the degree of exploration increases]

* Summary
Associative Browsing Model / Evaluation by Simulation
- Simulated evaluation showed statistics very similar to the user study in terms of when, and how successfully, associative browsing is used
- Simulated evaluation reveals a subtle interaction between the user's level of knowledge and the degree of exploration
Any questions? Jin Y. Kim / W. Bruce Croft / David Smith

* Simulation of Known-item Finding using a Memory Model
- Build a model of the user's memory: model how the memory degrades over time
- Generate search and browsing behavior from the model: query-term selection from the memory model; use information scent to guide browsing choices [Pirolli, Fu, Chi]
- Update the memory model during the interaction: new terms and associations are learned
[Diagram: terms t1-t5 and the associations among them in the memory model]
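Since this slide describes proposed work, any implementation is speculative. A hypothetical sketch of an exponentially decaying memory over terms, with reinforcement during interaction, might look like this; the decay rate and sampling scheme are assumptions, not values from the slides.

```python
import math
import random

class MemoryModel:
    """User memory over terms: strengths decay over time and are reinforced
    when terms are re-encountered during the interaction."""

    def __init__(self, decay_rate=0.1):
        self.strength = {}            # term -> memory strength
        self.decay_rate = decay_rate

    def decay(self, elapsed_days):
        """Model how the memory degrades over time (exponential forgetting)."""
        for term in self.strength:
            self.strength[term] *= math.exp(-self.decay_rate * elapsed_days)

    def reinforce(self, terms, amount=1.0):
        """New terms and associations are learned during the interaction."""
        for term in terms:
            self.strength[term] = self.strength.get(term, 0.0) + amount

    def sample_query(self, k=2):
        """Query-term selection: sample terms in proportion to how well
        they are remembered."""
        terms = list(self.strength)
        weights = [self.strength[t] for t in terms]
        return random.choices(terms, weights=weights, k=k)
```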

OPTIONAL SLIDES

* Evaluation Results
[Chart: lengths of successful sessions]

* Summary of Previous Evaluation
User study via the DocTrack Game [Kim & Croft 11]
- Collect public documents in the UMass CS department
- Build a web interface by which participants can find documents
- Department members were asked to join and compete
Limitations
- Fixed collection, with a small set of target tasks
- Hard to evaluate with varying system parameters
Simulated evaluation as a solution
- Build a model of user behavior
- Generate simulated interaction logs
- Questions it can address: If search accuracy improves by X%, how will it affect user behavior? How would its effectiveness vary for diverse groups of users?

* Building the Associative Browsing Model
1. Document Collection
2. Concept Extraction
3. Link Extraction (term similarity, temporal similarity, co-occurrence)
4. Link Refinement
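One plausible way to score candidate links from the three features named above is a weighted combination; the cosine formula, the time-decay scale, and the feature weights below are illustrative assumptions, not the method from the paper.

```python
import math

def term_similarity(tf_a, tf_b):
    """Cosine similarity between two term-frequency dicts."""
    common = set(tf_a) & set(tf_b)
    dot = sum(tf_a[t] * tf_b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in tf_a.values()))
            * math.sqrt(sum(v * v for v in tf_b.values())))
    return dot / norm if norm else 0.0

def temporal_similarity(time_a, time_b, scale_days=7.0):
    """Documents created or accessed close in time get a higher score."""
    return math.exp(-abs(time_a - time_b) / scale_days)

def link_score(tf_a, tf_b, time_a, time_b, cooccurrence_count,
               weights=(0.5, 0.3, 0.2)):
    """Combine the three features into one link score; low-scoring links
    can then be pruned in the refinement step."""
    w_term, w_time, w_cooc = weights
    return (w_term * term_similarity(tf_a, tf_b)
            + w_time * temporal_similarity(time_a, time_b)
            + w_cooc * math.log1p(cooccurrence_count))
```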

* DocTrack Game

* Community Efforts based on the Datasets