Zhixiang Chen & Xiannong Meng U.Texas-PanAm & Bucknell Univ.


MARS: Applying Multiplicative Adaptive User Preference Retrieval to Web Search
Zhixiang Chen & Xiannong Meng, U.Texas-PanAm & Bucknell Univ.

Outline of Presentation
- Introduction: the vector model over R+
- Multiplicative adaptive query expansion algorithm
- MARS: a meta-search engine
- Initial empirical results
- Conclusions

Introduction
- Vector model: a document is represented by the vector d = (d1, …, dn), where di is the relevance value of the i-th index term
- A user query is represented by q = (q1, …, qn), where qi is the weight of the i-th query term
- Document d' is preferred over document d iff q•d < q•d'
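As an illustrative sketch of the preference relation above (not the authors' code; the vectors are made-up examples), the comparison reduces to two dot products:

```python
def dot(q, d):
    """Inner product of a query vector q and a document vector d."""
    return sum(qi * di for qi, di in zip(q, d))

def preferred(q, d, d_prime):
    """True iff d_prime is preferred over d under query q, i.e. q.d < q.d_prime."""
    return dot(q, d) < dot(q, d_prime)

q  = [1.0, 0.5, 0.0]   # hypothetical query term weights
d  = [0.2, 0.1, 0.9]   # hypothetical document vectors
d2 = [0.8, 0.4, 0.0]

print(preferred(q, d, d2))  # → True: d2 scores 1.0 vs. 0.25 for d
```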

Introduction -- continued
- Relevance feedback improves search accuracy
- In general: take the user's feedback and update the query vector so it moves closer to the target: q(k+1) = q(k) + a1•d1 + … + as•ds
- Example: relevance feedback based on similarity
- Problem with linear adaptive query updating: it converges too slowly
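The linear (additive) update above can be sketched as follows; this is an illustrative reconstruction, with made-up coefficients and vectors, not the authors' implementation:

```python
def additive_update(q, feedback):
    """One linear feedback step: q(k+1) = q(k) + a1*d1 + ... + as*ds.

    feedback is a list of (a, d) pairs: a coefficient (positive for relevant
    documents, negative for irrelevant ones) and a document vector.
    """
    new_q = list(q)
    for a, d in feedback:
        for i, di in enumerate(d):
            new_q[i] += a * di
    return new_q

q = [1.0, 0.0]                                  # hypothetical starting query
fb = [(0.5, [1.0, 1.0]), (-0.25, [0.0, 1.0])]   # one positive, one negative example
print(additive_update(q, fb))                   # → [1.5, 0.25]
```

Each round moves q only by an additive step, which is why many feedback rounds are needed before q approaches an unknown target.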

Multiplicative Adaptive Query Expansion Algorithm
- Linear adaptive updating yields some improvement, but it converges to an initially unknown target too slowly
- Multiplicative adaptive query expansion promotes or demotes each query term by a constant factor in the i-th round of feedback
- Promote: q(i,k+1) = (1 + f(di)) • q(i,k)
- Demote: q(i,k+1) = q(i,k) / (1 + f(di))

MA Algorithm -- continued

while (the user judges a document d) {
    for each query term i in q(k) {
        if (d is judged relevant)          // promote the term
            q(i,k+1) = (1 + f(di)) • q(i,k)
        else if (d is judged irrelevant)   // demote the term
            q(i,k+1) = q(i,k) / (1 + f(di))
        else                               // no opinion expressed; keep the term
            q(i,k+1) = q(i,k)
    }
}

MA Algorithm -- continued
- f(di) can be any positive function; in our experiments we used f(x) = 2.71828 • weight(x), where x is a term appearing in di
- A detailed analysis of the MA algorithm's performance appears in another paper
- Overall, MA outperformed linear additive query updating, such as Rocchio's similarity-based relevance feedback, in both time complexity and search accuracy
- In this paper we present some experimental results
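One round of the MA update, with f(x) = e · weight(x) as in the experiments, can be sketched in Python. This is an illustrative reconstruction of the pseudocode, not the authors' implementation; the query, document weights, and dict-based representation are assumptions:

```python
import math

def f(weight):
    """Promotion/demotion factor from the experiments: f(x) = e * weight(x)."""
    return math.e * weight

def ma_update(q, doc_weights, judgment):
    """One round of the multiplicative adaptive (MA) update.

    q           : dict mapping query terms to current weights, q(k)
    doc_weights : dict mapping the judged document's terms to weight(x)
    judgment    : 'relevant', 'irrelevant', or None (no opinion)
    """
    new_q = dict(q)
    for term in q:
        if term not in doc_weights:
            continue                       # term absent from d: leave it unchanged
        factor = 1.0 + f(doc_weights[term])
        if judgment == 'relevant':         # promote the term
            new_q[term] = q[term] * factor
        elif judgment == 'irrelevant':     # demote the term
            new_q[term] = q[term] / factor
        # no opinion expressed: keep the term as-is
    return new_q

q = {'web': 1.0, 'search': 1.0}            # hypothetical query weights
d = {'web': 0.5}                           # hypothetical judged-document weights
print(ma_update(q, d, 'relevant'))         # 'web' promoted; 'search' unchanged
```

Because each round multiplies or divides a term's weight by (1 + f(di)) rather than adding a fixed step, relevant terms grow (and irrelevant ones shrink) geometrically, which is the source of the faster convergence claimed above.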

The Meta-search Engine MARS
- We implemented the MA algorithm in our experimental meta-search engine MARS
- The meta-search engine consists of a number of components, each implemented as a module
- This makes it easy to add or remove a component

The Meta-search Engine MARS -- continued

The Meta-search Engine MARS -- continued
- The user types a query into the browser
- The QueryParser sends the query to the Dispatcher
- The Dispatcher determines whether this is an original query or a refined one
- If it is original, the query is sent to one of the search engines according to the user's choice
- If it is refined, the MA algorithm is applied

The Meta-search Engine MARS -- continued
- The results, whether from MA or directly from the other search engines, are ranked by scores based on similarity
- The user can mark a document relevant or irrelevant by clicking the corresponding radio button in the MARS interface
- The MA algorithm then refines the document ranking by promoting or demoting the query terms

Initial Empirical Results
- We conducted two types of experiments to examine the performance of MARS
- The first measures the response time of MARS: the initial time to retrieve results from the external search engines, and the refine time MARS needs to produce re-ranked results
- Tested on a SPARC Ultra-10 with 128 MB of memory

Initial Empirical Results -- continued
Initial retrieval time:
- mean: 3.86 seconds
- standard deviation: 1.15 seconds
- 95% confidence interval: 0.635
- maximum: 5.29 seconds
Refine time:
- mean: 0.986 seconds
- standard deviation: 0.427 seconds
- 95% confidence interval: 0.236
- maximum: 1.44 seconds

Initial Empirical Results -- continued
- The second experiment measures the improvement in search accuracy
- Define: A = the total set of documents returned; R = the set of relevant documents returned; Rm = the set of relevant documents among the top-m ranked, where m is an integer between 1 and |A|
- recall rate = |Rm| / |R|
- precision = |Rm| / m
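The two measures defined above can be computed directly; a minimal sketch (the document ids and relevance judgments are made-up examples):

```python
def recall_precision(ranked, relevant, m):
    """Recall rate and precision at rank m.

    ranked   : list of document ids in ranked order (the returned set A)
    relevant : set of ids judged relevant (the set R)
    m        : cutoff, an integer between 1 and len(ranked)
    """
    top_m_relevant = sum(1 for doc in ranked[:m] if doc in relevant)  # |Rm|
    recall = top_m_relevant / len(relevant)      # |Rm| / |R|
    precision = top_m_relevant / m               # |Rm| / m
    return recall, precision

ranked = ['d1', 'd2', 'd3', 'd4']    # hypothetical ranked results
relevant = {'d1', 'd3'}              # hypothetical relevance judgments
print(recall_precision(ranked, relevant, 2))   # → (0.5, 0.5)
```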

Initial Empirical Results -- continued
- We randomly selected 70+ words or phrases and sent each one to AltaVista, retrieving the first 200 results for each query
- We manually examined the results to mark each document as relevant or irrelevant
- We computed the precision and recall
- The same set of documents was used for MARS

Initial Empirical Results -- continued

Initial Empirical Results -- continued
- The results show that the extra processing time of MARS is insignificant relative to the overall search response time
- The results show that search accuracy improves in both recall and precision
- General search terms improve more; specific terms improve less

Conclusions
- Linear adaptive query updating converges too slowly
- Multiplicative adaptive updating converges faster
- User input is limited to a few iterations of feedback
- The extra processing time required is not significant
- Search accuracy improves in both precision and recall