Active Feedback: UIUC TREC 2003 HARD Track Experiments
Xuehua Shen, ChengXiang Zhai
Department of Computer Science, University of Illinois at Urbana-Champaign

Goal of Participation
Our general goal is to test and extend language modeling approaches for a variety of different tasks. Language modeling retrieval methods were applied across four tracks:
- HARD: active feedback (this talk)
- Robust: robust feedback (notebook paper)
- Genomics: semi-structured query model (notebook paper)
- Web: relevance propagation model (notebook paper)

Outline
- Active Feedback
- Three Methods
- HARD Track Experiment Design
- Results
- Conclusions & Future Work

What is Active Feedback?
- An IR system actively selects documents for obtaining relevance judgments
- If a user is willing to judge k documents, which k documents should we present in order to maximize learning effectiveness?
- Aim at minimizing the user's effort…

Normal Relevance Feedback
[Diagram] The user issues a query; the retrieval engine runs it against the document collection and returns the top-k results d_1, …, d_k; the user judges them (d_1 +, d_2 -, …, d_k -) and the feedback judgments are returned to the engine.

Active Feedback
[Diagram] The same loop, except that the engine must now decide which k documents to present to the user for judging. Can we do better than just presenting the top-k? (Consider redundancy…)

Active Feedback Methods
- Top-K (normal feedback)
- Gapped Top-K
- K-cluster centroid
The latter two aim at high diversity… (sketched below)
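A minimal sketch of the three selection strategies, assuming the top-ranked documents are given as a list of ids plus a matrix of document vectors; the function names and the scikit-learn clustering are illustrative choices, not the Lemur implementation used in the actual runs:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_top_k(ranked_ids, k):
    # Top-K (normal feedback): present the k highest-ranked documents.
    return ranked_ids[:k]

def select_gapped_top_k(ranked_ids, k, gap=1):
    # Gapped Top-K: skip `gap` documents between picks to reduce redundancy.
    return ranked_ids[::gap + 1][:k]

def select_cluster_centroids(ranked_ids, doc_vectors, k):
    # K-cluster centroid: cluster the top-ranked documents and present the
    # document closest to each cluster centroid, maximizing diversity.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(doc_vectors)
    picks = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(doc_vectors[members] - km.cluster_centers_[c], axis=1)
        picks.append(ranked_ids[members[np.argmin(dists)]])
    return picks
```

For example, with k = 6 and gap = 1, gapped top-k would present the documents at ranks 1, 3, 5, 7, 9, and 11.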

Evaluating Active Feedback in HARD Track
[Diagram] For each query, initial results are retrieved and 6 passages are selected (by top-k, gapped top-k, or clustering) for the clarification form; the user completes the form; the completed judgments drive feedback (doc-based or passage-based), whose results are compared against the no-feedback baseline.

Retrieval Methods (Lemur toolkit)
[Diagram] Query Q is scored against each document D with Kullback-Leibler divergence scoring to produce the results. The feedback docs F = {d_1, …, d_n} chosen by active feedback are fed into mixture model feedback, which updates the query model; we only learn from relevant docs and use default parameter settings.
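For reference, a sketch of the standard formulation behind this pipeline (KL-divergence scoring and mixture-model feedback in the style of Zhai and Lafferty); the notation follows the usual language-modeling literature, and the parameters alpha and lambda are left at the toolkit defaults mentioned on the slide:

```latex
% KL-divergence retrieval: rank D by the (negative) divergence between
% the query model \theta_Q and the smoothed document model \theta_D
\mathrm{score}(Q, D) \;=\; -\,D(\theta_Q \,\|\, \theta_D)
  \;\stackrel{\mathrm{rank}}{=}\; \sum_{w} p(w \mid \theta_Q)\, \log p(w \mid \theta_D)

% Mixture-model feedback: the (relevant) feedback docs F = \{d_1, \dots, d_n\}
% are assumed to be generated by a topic model \theta_F mixed with the
% collection model C; \theta_F is estimated with EM by maximizing
\log p(F \mid \theta_F) \;=\; \sum_{i=1}^{n} \sum_{w} c(w, d_i)\,
  \log\!\big[(1-\lambda)\, p(w \mid \theta_F) + \lambda\, p(w \mid C)\big]

% The query model is then updated by interpolation
\theta_Q' \;=\; (1-\alpha)\,\theta_Q + \alpha\,\theta_F
```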

Results
- Top-k is always worse than gapped top-k and the clustering method
- Clustering generates fewer, but higher quality examples
- Passage-based query model updating performs better than document-based updating

Comparison of Three Active Feedback Methods
[Table: number of relevant feedback docs (#Rel) and retrieval performance of Top-K, Gapped Top-K, and Clustering on TREC2003 (official) and AP88-89, evaluated both including and excluding judged docs; bold font = worst, * = best]
- Top-K is the worst!
- Clustering uses fewest relevant docs

Appropriate Evaluation of Active Feedback
- Original DB with judged docs: can't tell if the ranking of un-judged documents is improved
- Original DB without judged docs: different methods have different test documents
- New DB: see the learning effect more explicitly, but the docs must be similar to the original docs
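A minimal sketch of the "exclude judged docs" option (residual-style evaluation); the function names and set-based inputs are illustrative assumptions, not the actual evaluation scripts used:

```python
def average_precision(ranking, relevant):
    # Mean of the precision values at the ranks where relevant documents occur.
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def ap_excluding_judged(ranking, relevant, judged):
    # Drop everything the user already judged during feedback, so the score
    # reflects only how well the system ranks the remaining (un-judged) docs.
    residual_ranking = [d for d in ranking if d not in judged]
    residual_relevant = set(relevant) - set(judged)
    return average_precision(residual_ranking, residual_relevant)
```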

Comparison of Different Test Data (Learning on AP88-89)
[Table: performance of Top-K, Gapped Top-K, and Clustering on three test sets: AP88-89 including judged docs, AP88-89 excluding judged docs, and AP90; * = best]
- Top-K is consistently the worst!
- Clustering generates fewer, but higher quality examples

Effectiveness of Query Model Updating: Doc-based vs. Passage-based
[Table: baseline (no updating) vs. doc-based and passage-based query model updating under gapped and clustering judgments; the improvement rows report +5.7% / +2.7% (gapped) and +5.4% / +4.0% (clustering)]
- Mixture model query updating methods are effective
- Passage-based is consistently better than doc-based
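A small illustrative sketch of the only difference between the two updating variants, namely which text is handed to the feedback-model estimator; the JudgedItem fields are hypothetical names, not the actual data structures used:

```python
from dataclasses import dataclass

@dataclass
class JudgedItem:
    full_text: str   # whole document text
    passage: str     # the passage shown on the clarification form
    relevant: bool   # the user's judgment

def feedback_texts(judged, passage_based=True):
    # Only relevant items contribute ("only learn from relevant docs").
    # Passage-based updating estimates theta_F from the judged passages;
    # doc-based updating estimates it from the full documents.
    return [j.passage if passage_based else j.full_text
            for j in judged if j.relevant]
```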

Conclusions
- Introduced the active feedback problem
- Proposed and tested three methods for active feedback (top-k, gapped top-k, clustering)
- Studied the issue of evaluating active feedback methods
- Results show that:
  - Presenting the top-k is not the best strategy
  - Clustering can generate fewer, higher quality feedback examples

Future Work
- Explore other methods for active feedback (e.g., negative feedback, MMR method)
- Develop a general framework that:
  - Combines all the utility factors (e.g., being informative and best for learning)
  - Can model different questions (e.g., model both term selection and relevance judgments)
- Further study how to evaluate active feedback methods