
Introduction to Information Retrieval
Information Retrieval and Web Search
Lecture 9: Relevance Feedback & Query Expansion

Introduction
- In most collections, the same concept may be referred to using different words.
- This issue, known as synonymy, has an impact on the recall of most information retrieval systems.
- For example, you would want a search for aircraft to match plane.

Introduction
- Users often attempt to address this problem themselves by manually refining a query.
- In this chapter we discuss ways in which a system can help with query refinement, either fully automatically or with the user in the loop.


Relevance Feedback
- Relevance feedback: user feedback on the relevance of documents in an initial set of results.
  - The user issues a (short, simple) query.
  - The user marks some results as relevant or non-relevant.
  - The system computes a better representation of the information need based on the feedback.
  - Relevance feedback can go through one or more iterations.

Relevance Feedback
- Idea: it may be difficult to formulate a good query when you don't know the collection well, so iterate.
- We will use ad hoc retrieval to refer to regular retrieval without relevance feedback.


Relevance Feedback: Example
[Figures: results for the initial query, the user's relevance feedback, and the results after relevance feedback.]

Initial query/results
- Initial query: New space satellite applications
- Results:
  - 08/13/91, NASA Hasn't Scrapped Imaging Spectrometer
  - 07/09/91, NASA Scratches Environment Gear From Satellite Plan
  - 04/04/90, Science Panel Backs NASA Satellite Plan, But Urges Launches of Smaller Probes
  - 09/09/91, A NASA Satellite Project Accomplishes Incredible Feat: Staying Within Budget
  - 07/24/90, Scientist Who Exposed Global Warming Proposes Satellites for Climate Research
  - 08/22/90, Report Provides Support for the Critics Of Using Big Satellites to Study Climate
  - 04/13/87, Arianespace Receives Satellite Launch Pact From Telesat Canada
  - 12/02/87, Telecommunications Tale of Two Companies
- The user then marks relevant documents with "+".

Expanded query after relevance feedback
- new, space
- satellite, application
- nasa, eos
- launch, aster
- instrument, arianespace
- bundespost, ss
- rocket, scientist
- broadcast, earth
- oil, measure

Results for expanded query
- 07/09/91, NASA Scratches Environment Gear From Satellite Plan
- 08/13/91, NASA Hasn't Scrapped Imaging Spectrometer
- 08/07/89, When the Pentagon Launches a Secret Satellite, Space Sleuths Do Some Spy Work of Their Own
- 07/31/89, NASA Uses 'Warm' Superconductors For Fast Circuit
- 12/02/87, Telecommunications Tale of Two Companies
- 07/09/91, Soviets May Adapt Parts of SS-20 Missile For Commercial Use
- 07/12/88, Gaping Gap: Pentagon Lags in Race To Match the Soviets In Rocket Launchers
- 06/14/90, Rescue of Satellite By Space Agency To Cost $90 Million

Key concept: Centroid
- The centroid is the center of mass of a set of points.
- Definition: $\vec{\mu}(C) = \frac{1}{|C|} \sum_{\vec{d} \in C} \vec{d}$, where C is a set of documents and $\vec{d}$ is the vector representation of document d.
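The centroid is just the component-wise mean of the document vectors. A minimal sketch in Python/NumPy (the vectors are made up for illustration):

```python
import numpy as np

# Rows are illustrative tf-idf vectors, one per document in C.
docs = np.array([
    [0.5, 0.8, 0.0],
    [0.6, 0.0, 0.8],
    [0.0, 0.6, 0.8],
])

# Centroid: (1/|C|) * sum of the document vectors.
centroid = docs.mean(axis=0)
print(centroid)  # approx. [0.367 0.467 0.533]
```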

Rocchio Algorithm
- The SMART (System for the Mechanical Analysis and Retrieval of Text) information retrieval system was developed at Cornell University in the 1960s.
- The Rocchio algorithm stemmed from the SMART IR system.

Rocchio Algorithm
- The Rocchio algorithm uses the vector space model to pick a relevance-feedback query.
- It seeks the query that maximizes similarity with the relevant documents while minimizing similarity with the non-relevant documents:
  $\vec{q}_{opt} = \frac{1}{|C_r|} \sum_{\vec{d}_j \in C_r} \vec{d}_j - \frac{1}{|C_{nr}|} \sum_{\vec{d}_j \in C_{nr}} \vec{d}_j$
  where $C_r$ and $C_{nr}$ are the sets of relevant and non-relevant documents.

The Theoretically Best Query
[Figure: relevant documents (o) and non-relevant documents (x) in the vector space; the optimal query points toward the relevant documents and away from the non-relevant ones.]

Rocchio Algorithm
- Problem: we don't know the truly relevant docs.
- Used in practice:
  - $D_r$ = set of known relevant document vectors
  - $D_{nr}$ = set of known non-relevant document vectors
  - These differ from $C_r$ and $C_{nr}$, the complete (unknown) sets.
  - $\vec{q}_m$ = modified query vector; $\vec{q}_0$ = original query vector; $\alpha$, $\beta$, $\gamma$: weights (hand-chosen or set empirically)
- The new query moves toward the relevant documents and away from the non-relevant documents.

Rocchio Algorithm
- $\vec{q}_m = \alpha \vec{q}_0 + \beta \frac{1}{|D_r|} \sum_{\vec{d}_j \in D_r} \vec{d}_j - \gamma \frac{1}{|D_{nr}|} \sum_{\vec{d}_j \in D_{nr}} \vec{d}_j$
- Term weights in $\vec{q}_m$ can go negative; negative weights are ignored (set to 0).
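A minimal sketch of this update in Python/NumPy. The default weights below are commonly used values, and the function name is ours, not from the SMART system:

```python
import numpy as np

def rocchio(q0, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """Compute q_m = alpha*q0 + beta*centroid(D_r) - gamma*centroid(D_nr)."""
    qm = alpha * np.asarray(q0, dtype=float)
    if len(rel_docs) > 0:
        qm = qm + beta * np.mean(rel_docs, axis=0)
    if len(nonrel_docs) > 0:
        qm = qm - gamma * np.mean(nonrel_docs, axis=0)
    # Negative term weights are ignored (clipped to 0).
    return np.maximum(qm, 0.0)

# Toy example: one known relevant and one known non-relevant document.
qm = rocchio([1.0, 0.0, 0.0],
             rel_docs=[[0.8, 0.6, 0.0]],
             nonrel_docs=[[0.0, 0.0, 1.0]])
print(qm)  # approx. [1.6 0.45 0.]: the query gains weight on term 1
```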

Relevance feedback on initial query
[Figure: the initial query vector is revised toward the known relevant documents (o) and away from the known non-relevant documents (x).]

Relevance Feedback in vector spaces
- Relevance feedback can improve both recall and precision.
- But, in practice, it has been shown to be most useful for increasing recall in situations where recall is important.

Positive vs Negative Feedback
- Positive feedback is more valuable than negative feedback, so set $\gamma < \beta$ (e.g., $\gamma = 0.25$, $\beta = 0.75$).
- Many systems only allow positive feedback ($\gamma = 0$).

When does relevance feedback work?
- A1: The user has sufficient knowledge to formulate an initial query that is at least somewhat close to the documents they desire.
- A2: Relevant documents are similar to each other.
  - The term distribution in relevant documents will be similar.
  - The term distribution in non-relevant documents will be different from that in relevant documents.
  - Either: all relevant documents are tightly clustered around a single prototype.
  - Or: there are different prototypes, but they have significant vocabulary overlap.
  - Similarities between relevant and non-relevant documents are small.

Violation of A1
- The user does not have sufficient initial knowledge.
- Example: mismatch between the searcher's vocabulary and the collection's vocabulary.

Violation of A2
- There are several relevance prototypes.
- Often: instances of a general concept.

Relevance Feedback: Problems
- Long queries are inefficient for a typical IR engine.
  - Long response times for the user.
  - High cost for the retrieval system.
  - Partial solution: only reweight certain prominent terms, perhaps the top 20 by term frequency (see the sketch below).
- Relevance feedback is not necessarily popular with users.
  - Users are often reluctant to provide explicit feedback, or in general do not wish to prolong the search interaction.
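A hedged sketch of the partial solution; the helper name and the cutoff of 20 terms are illustrative, not prescribed:

```python
import numpy as np

def truncate_query(qm, vocab, k=20):
    """Keep only the k highest-weighted terms of the modified query vector."""
    top = np.argsort(qm)[::-1][:k]
    return {vocab[i]: float(qm[i]) for i in top if qm[i] > 0}
```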

Relevance Feedback on the Web
- Some search engines offer a similar/related pages feature (a trivial form of relevance feedback):
  - Google (link-based)
  - Altavista
  - Stanford WebBase
- But some don't, because it's hard to explain to the average user:
  - Alltheweb
  - Bing
  - Yahoo
- Excite initially had true relevance feedback, but abandoned it due to lack of use.

Excite Relevance Feedback
- Study by Spink et al. of Excite's relevance feedback:
  - Only about 4% of query sessions used the relevance feedback option, which was expressed as a "More like this" link next to each result.
  - About 70% of users only looked at the first page of results and didn't pursue things further.
  - Relevance feedback improved results about 2/3 of the time.

Pseudo relevance feedback
- Pseudo-relevance feedback automates the "manual" part of true relevance feedback.
- Pseudo-relevance algorithm (sketched below):
  - Retrieve a ranked list of hits for the user's query.
  - Assume that the top k documents are relevant.
  - Do relevance feedback (e.g., Rocchio).
- Works very well on average.
- But it can go horribly wrong for some queries: for example, if the query is about copper mines and the top several documents are all about mines in Chile, then there may be query drift in the direction of documents on Chile.
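A minimal sketch of one round of pseudo relevance feedback, reusing the rocchio function from the earlier sketch; `search` is an assumed function that returns document vectors ranked by score:

```python
def pseudo_relevance_feedback(q0, search, k=10):
    """Assume the top-k hits are relevant and run Rocchio
    with an empty non-relevant set."""
    top_docs = search(q0)[:k]
    return rocchio(q0, rel_docs=top_docs, nonrel_docs=[])
```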

Indirect relevance feedback
- On the web, DirectHit introduced a form of indirect relevance feedback.
- DirectHit ranked documents higher that users looked at more often.
  - Clicked-on links are assumed likely to be relevant (assuming the displayed summaries are good, etc.).
  - Globally: not necessarily user- or query-specific.
- This is the general area of clickstream mining.
- Today it is handled as part of machine-learned ranking.

Query Expansion
- In relevance feedback, users give additional input (relevant/non-relevant) on documents, which is used to reweight terms in the documents.
- In query expansion, users give additional input (good/bad search term) on words or phrases, possibly suggesting additional query terms.


How do we augment the user query?
- Manual thesaurus
  - Human editors have built up sets of synonymous names for concepts, without designating a canonical term.
  - Can be query rather than just synonyms.
- Global analysis (static; over all documents in the collection):
  - Automatically derived thesaurus (co-occurrence statistics)
  - Refinements based on query-log mining (common on the web)
- Local analysis (dynamic):
  - Analysis of documents in the result set

Example of manual thesaurus
[Figure: an example entry from a manual thesaurus.]

Thesaurus-based query expansion
- For each term t in a query, expand the query with synonyms and related words of t from the thesaurus.
  - feline → feline cat
- May weight added terms less than the original query terms.
- Generally increases recall.
- Widely used in many science/engineering fields.
- May significantly decrease precision, particularly with ambiguous terms.
- There is a high cost to manually producing a thesaurus, and to updating it as science changes.
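A toy sketch of this expansion; the thesaurus entries and the 0.5 down-weight for added terms are illustrative assumptions:

```python
# Toy manual thesaurus (illustrative entries, not a real resource).
THESAURUS = {
    "feline": ["cat"],
    "aircraft": ["plane", "airplane"],
}

def expand_query(terms, added_weight=0.5):
    """Add related words for each term, weighted below the originals."""
    weights = {t: 1.0 for t in terms}
    for t in terms:
        for related in THESAURUS.get(t, []):
            weights.setdefault(related, added_weight)
    return weights

print(expand_query(["feline"]))  # {'feline': 1.0, 'cat': 0.5}
```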

Automatic Thesaurus Generation
- Attempt to generate a thesaurus automatically by analyzing the collection of documents.
- Fundamental notion: similarity between two words.
- Approach 1: two words are similar if they co-occur in a document or paragraph.
- Approach 2: two words are similar if they occur in a given grammatical relation.
  - You can harvest, peel, eat, prepare, etc. apples and pears, so apples and pears must be similar.

Co-occurrence Thesaurus
- The simplest way to compute one is based on term-term similarities in $C = AA^T$, where A is the term-document matrix (M terms × N documents).
  - $w_{i,j}$ = weight for term $t_i$ in document $d_j$
  - A has length-normalized rows.
- For each term $t_i$, pick the terms with the highest values in row i of C.
- What does C contain if A is a term-document incidence (0/1) matrix?
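A small NumPy sketch of this computation with a made-up 3-term, 3-document matrix:

```python
import numpy as np

# A: term-document weight matrix (M terms x N docs).
A = np.array([
    [0.8, 0.6, 0.0],   # t0
    [0.6, 0.8, 0.0],   # t1
    [0.0, 0.0, 1.0],   # t2
])
A = A / np.linalg.norm(A, axis=1, keepdims=True)  # length-normalize rows

C = A @ A.T  # C[i, j] = similarity between terms i and j

# For each term, report its most similar *other* term.
for i in range(C.shape[0]):
    order = np.argsort(C[i])[::-1]
    j = order[1]  # order[0] is the term itself
    print(f"t{i} -> t{j} (similarity {C[i, j]:.2f})")
```

If A were a 0/1 incidence matrix instead, C[i, j] would simply count the documents in which terms i and j co-occur.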

Automatic Thesaurus Generation Example
[Table: sample terms and their automatically discovered nearest neighbors.]

Automatic Thesaurus Generation Discussion
- Term ambiguity may introduce irrelevant statistically correlated terms.
  - "Apple computer" → "Apple red fruit computer"
- Problems:
  - False positives: words deemed similar that are not.
  - False negatives: words deemed dissimilar that are similar.
- Since terms are highly correlated anyway, expansion may not retrieve many additional documents.