1
CS276A Text Information Retrieval, Mining, and Exploitation. Lecture 8, 31 Oct 2002.
2
Recap: IR based on Language Models. [Figure: each document d1, d2, ..., dn in the collection induces its own language model; the query, representing the information need, is viewed as generated from a document's model.] One night in a hotel, I saw this late-night talk show where Sergey Brin popped on suggesting the web search tip that you should think of some words that would likely appear on pages that would answer your question and use those as your search terms. Let's exploit that idea!
3
Recap: Query generation probability (unigrams). Ranking formula: $P(Q, d) = P(d)\,P(Q \mid d) \approx P(d)\,\hat{P}(Q \mid M_d)$. The probability of producing the query given the language model of document $d$ using MLE is $\hat{P}(Q \mid M_d) = \prod_{t \in Q} \hat{P}_{mle}(t \mid M_d) = \prod_{t \in Q} \frac{tf_{t,d}}{dl_d}$ (unigram assumption: given a particular language model, the query terms occur independently), where $M_d$ is the language model of document $d$, $tf_{t,d}$ is the raw tf of term $t$ in document $d$, and $dl_d$ is the total number of tokens in document $d$.
4
Recap: Query generation probability (mixture model). $P(w \mid d) = \lambda P_{mle}(w \mid M_d) + (1 - \lambda) P_{mle}(w \mid M_c)$. Mixes the probability from the document with the general collection frequency of the word. Correctly setting $\lambda$ is very important. A high value of $\lambda$ makes the search "conjunctive-like", suitable for short queries; a low value is more suitable for long queries. Can tune $\lambda$ to optimize performance. Perhaps make it dependent on document size (cf. Dirichlet prior or Witten-Bell smoothing).
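A minimal sketch of this mixture scoring in Python, assuming raw count dictionaries for the document and the collection; the function and variable names are illustrative, not from the lecture:

```python
from math import log

def query_likelihood(query_terms, doc_tf, doc_len, coll_tf, coll_len, lam=0.5):
    """Score a document by log P(q|d) under the unigram mixture model:
    P(w|d) = lam * P_mle(w|M_d) + (1 - lam) * P_mle(w|M_c)."""
    score = 0.0
    for t in query_terms:
        p_doc = doc_tf.get(t, 0) / doc_len      # P_mle(w|M_d) = tf_{t,d} / dl_d
        p_coll = coll_tf.get(t, 0) / coll_len   # P_mle(w|M_c), collection frequency
        p_mix = lam * p_doc + (1 - lam) * p_coll
        if p_mix == 0.0:                        # term unseen everywhere: skip it
            continue
        score += log(p_mix)                     # sum of logs = log of product (independence)
    return score

# Illustrative call: a short query with a fairly high lambda ("conjunctive-like")
doc = {"hotel": 3, "night": 1}
print(query_likelihood(["hotel", "talk"], doc, doc_len=120,
                       coll_tf={"hotel": 500, "talk": 900},
                       coll_len=10_000_000, lam=0.7))
```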
5
Today’s topics Relevance Feedback Query Expansion
6
Relevance Feedback Relevance feedback = user feedback on relevance of initial set of results The user marks returned documents as relevant or non-relevant. The system computes a better representation of the information need based on feedback. Relevance feedback can go through one or more iterations.
7
Relevance Feedback: Example. Image search engine (couldn't find a relevance feedback engine for text!). URL: http://nayana.ece.ucsb.edu/imsearch/imsearch.html
8
Initial Query
9
Results for Initial Query
10
Relevance Feedback
11
Results after Relevance Feedback
12
Rocchio Algorithm. The Rocchio algorithm incorporates relevance feedback information into the vector space model. The optimal query vector for separating relevant and non-relevant documents: $\vec{q}_{opt} = \frac{1}{|C_r|} \sum_{\vec{d}_j \in C_r} \vec{d}_j - \frac{1}{|C_{nr}|} \sum_{\vec{d}_j \in C_{nr}} \vec{d}_j$, where $C_r$ is the set of relevant documents and $C_{nr}$ the set of non-relevant documents. Unrealistic: we don't know all relevant documents.
13
Rocchio Algorithm. Used in practice: $\vec{q}_m = \alpha \vec{q}_0 + \beta \frac{1}{|D_r|} \sum_{\vec{d}_j \in D_r} \vec{d}_j - \gamma \frac{1}{|D_{nr}|} \sum_{\vec{d}_j \in D_{nr}} \vec{d}_j$, where $D_r$ and $D_{nr}$ are the sets of judged relevant and non-relevant documents. Typical weights: $\alpha = 8$, $\beta = 64$, $\gamma = 64$. Tradeoff $\alpha$ vs $\beta/\gamma$: if we have a lot of judged documents, we want a higher $\beta/\gamma$. But we usually don't.
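A minimal Rocchio sketch, assuming queries and documents are represented as {term: weight} dictionaries; clipping negative weights to zero is a common convention, not something stated on the slide:

```python
def rocchio(query_vec, relevant, nonrelevant, alpha=8.0, beta=64.0, gamma=64.0):
    """Rocchio update: q_m = alpha*q0 + beta*centroid(rel) - gamma*centroid(nonrel).
    Vectors are {term: weight} dicts; negative components are dropped."""
    def centroid(docs):
        if not docs:
            return {}
        c = {}
        for d in docs:
            for term, w in d.items():
                c[term] = c.get(term, 0.0) + w / len(docs)
        return c

    new_q = {t: alpha * w for t, w in query_vec.items()}
    for t, w in centroid(relevant).items():
        new_q[t] = new_q.get(t, 0.0) + beta * w
    for t, w in centroid(nonrelevant).items():
        new_q[t] = new_q.get(t, 0.0) - gamma * w
    return {t: w for t, w in new_q.items() if w > 0}

# Illustrative call with one judged relevant and one judged non-relevant document
modified = rocchio({"jaguar": 1.0},
                   relevant=[{"jaguar": 0.5, "car": 0.8}],
                   nonrelevant=[{"jaguar": 0.4, "cat": 0.9}])
print(modified)
```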
14
Relevance Feedback in Probabilistic Information Retrieval How?
15
Relevance Feedback in Probabilistic Information Retrieval We can modify the query based on relevance feedback and apply standard model. Examples: Binary independence model Language model
16
Binary Independence Model. Since $x_i$ is either 0 or 1: $O(R \mid \vec{q}, \vec{x}) = O(R \mid \vec{q}) \cdot \prod_{x_i = 1} \frac{P(x_i = 1 \mid R, \vec{q})}{P(x_i = 1 \mid NR, \vec{q})} \cdot \prod_{x_i = 0} \frac{P(x_i = 0 \mid R, \vec{q})}{P(x_i = 0 \mid NR, \vec{q})}$
17
Binary Independence Model. The first product (over terms that occur) is used as before. The second product, over the probabilities of non-occurrence, we assume is $= 1$ in simple retrieval. In relevance feedback, we have data to estimate the probability of non-occurrence.
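A sketch of how those probabilities might be estimated from judged documents, in the style of the standard Robertson/Sparck Jones estimates; the add-0.5 smoothing and the approximation of non-relevant documents by "all documents minus the judged relevant ones" are assumptions, not from the slide:

```python
from math import log

def rsj_term_weight(n_rel_with_term, n_rel, n_docs_with_term, n_docs):
    """Relevance-feedback estimate of the BIM term weight
    c_i = log[ p_i (1 - r_i) / ( r_i (1 - p_i) ) ],
    where p_i = P(term | relevant) and r_i = P(term | non-relevant).
    Add-0.5 smoothing (an assumption here) avoids zeros for small counts."""
    p = (n_rel_with_term + 0.5) / (n_rel + 1.0)
    r = (n_docs_with_term - n_rel_with_term + 0.5) / (n_docs - n_rel + 1.0)
    return log(p * (1 - r) / (r * (1 - p)))

# 10 judged relevant docs, 7 contain the term; the term occurs in 100 of 10,000 docs
print(rsj_term_weight(7, 10, 100, 10_000))
```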
18
Binary Independence Model Note that we have 3 relevance “states” now: relevant, non-relevant and unjudged.
19
Positive vs Negative Feedback Positive feedback is more valuable than negative feedback. Many systems only allow positive feedback. Why?
20
Relevance Feedback: Assumptions A1: User has sufficient knowledge for initial query. A2: Relevance prototypes are “well-behaved”. Either: All relevant documents are similar to a single prototype. Or: There are different prototypes, but they have significant vocabulary overlap.
21
Violation of A1. User does not have sufficient initial knowledge. Examples: misspellings ("Brittany Speers"); cross-language information retrieval (hígado); mismatch of searcher's vocabulary vs collection vocabulary (e.g., regional differences, different fields of scientific study: genetics vs medicine, Bernoulli naive Bayes vs binary independence model).
22
Violation of A2. There are several relevance prototypes. Examples: Burma/Myanmar; contradictory government policies; pop stars that worked at Burger King. Often these are instances of a general concept. Good editorial content can address the problem, e.g., a report on contradictory government policies.
23
Relevance Feedback on the Web. Some search engines offer a similar/related pages feature (the simplest form of relevance feedback): Google (link-based), Altavista, Stanford web. But some don't, because it's hard to explain to the average user: Alltheweb, MSN, Yahoo. Excite initially had true relevance feedback, but abandoned it due to lack of use.
24
Relevance Feedback: Cost. Long queries are inefficient for a typical IR engine: long response times for the user, high cost for the retrieval system. Why?
25
Other Uses of Relevance Feedback: following a changing information need; maintaining an information filter (e.g., for a news feed); active learning. Topics for next quarter.
26
Active Learning. Goal: create a training set for a text classifier (or some other classification problem). One approach: uncertainty sampling. At any point in learning, present the document with the highest uncertainty to the user and get a relevant/non-relevant judgment (the document closest to P(R) = 0.5). The scarce resource is the user's time: maximize the benefit of each decision. Active learning significantly reduces the number of labeled documents needed to learn a category.
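A minimal uncertainty-sampling sketch; the classifier interface (predict_proba returning P(relevant), fit taking labeled pairs) and the ask_user callback are assumed, not from the lecture:

```python
def most_uncertain(classifier, unlabeled_docs):
    """Pick the document whose predicted P(relevant) is closest to 0.5."""
    return min(unlabeled_docs,
               key=lambda d: abs(classifier.predict_proba(d) - 0.5))

def active_learning_loop(classifier, unlabeled_docs, ask_user, rounds=20):
    """Repeatedly query the user on the most uncertain document and retrain.
    ask_user(doc) returns a relevant / non-relevant judgment."""
    labeled = []
    pool = list(unlabeled_docs)
    for _ in range(min(rounds, len(pool))):
        doc = most_uncertain(classifier, pool)   # classifier selects most uncertain doc
        pool.remove(doc)
        labeled.append((doc, ask_user(doc)))     # user labels the document
        classifier.fit(labeled)                  # retrain on all labels gathered so far
    return classifier, labeled
```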
27
Active Learning. [Figure: the active learning loop. Build an initial classifier somehow; the classifier selects the most uncertain document; the user labels that document; the classifier is retrained; repeat.]
28
Active Learning Could we use active learning for relevance feedback?
29
Relevance Feedback Summary Relevance feedback has been shown to be effective at improving relevance of results. Full relevance feedback is painful for the user. Full relevance feedback is not very efficient in most IR systems. Other types of interactive retrieval may improve relevance by as much with less work.
30
Forward Pointer: DirectHit. DirectHit uses indirect relevance feedback: it ranks documents that users look at more often higher. Not user- or query-specific.
31
Pseudo Feedback. Pseudo feedback attempts to automate the manual part of relevance feedback: retrieve an initial set of documents, then assume that the top-ranked documents are relevant.
32
Pseudo Feedback. [Figure: the pseudo-feedback loop. Start from the initial query; retrieve documents; label the top k ranked documents as relevant; apply relevance feedback; retrieve again with the modified query.]
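A sketch of that loop, reusing the hypothetical rocchio function from the earlier sketch; search is an assumed retrieval function returning document vectors best first, and gamma is set to 0 because pseudo feedback yields no negative judgments (an assumption, not from the slide):

```python
def pseudo_feedback(query_vec, search, k=10, iterations=1,
                    alpha=8.0, beta=64.0, gamma=0.0):
    """Pseudo relevance feedback: retrieve, assume the top-k results are relevant,
    and expand the query with Rocchio (no negative evidence, so gamma = 0)."""
    q = query_vec
    for _ in range(iterations):
        ranked = search(q)                 # document vectors, best first (assumed helper)
        assumed_relevant = ranked[:k]      # the pseudo-feedback assumption
        q = rocchio(q, assumed_relevant, nonrelevant=[],
                    alpha=alpha, beta=beta, gamma=gamma)
    return q
```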
33
Pseudo Feedback. Not surprisingly, the success of pseudo feedback depends on whether the relevance assumption is true. If it is true, pseudo feedback can improve precision and recall dramatically. If it is not true, the results become less relevant with each iteration (concept drift). Example: ambiguous words ("jaguar"). Bimodal distribution: depending on the query, performance is dramatically better or dramatically worse. Unfortunately, it is hard to predict which will be the case.
34
Pseudo-Feedback: Performance
35
Query Expansion In relevance feedback, users give additional input (relevant/non-relevant) on documents. In query expansion, users give additional input (good/bad search term) on words or phrases.
36
Query Expansion: Example. Also see Altavista, Teoma.
37
Types of Query Expansion. Refinements based on query log mining (common on the web). Global analysis: thesaurus-based, either a controlled vocabulary maintained by editors (e.g., MEDLINE) or an automatically derived thesaurus (co-occurrence statistics). Local analysis: analysis of documents in the result set.
38
Controlled Vocabulary
39
Automatic Thesaurus Generation. The high cost of manually producing a thesaurus motivates attempts to generate one automatically by analyzing a collection of documents. Two main approaches: co-occurrence based (co-occurring words are more likely to be similar) and shallow analysis of grammatical relations (entities that are grown, cooked, eaten, and digested are more likely to be food items). Co-occurrence based is more robust; grammatical relations are more accurate. Why?
40
Co-occurrence Thesaurus. The simplest way to compute one is based on the term-term similarities in $C = AA^T$, where $A$ is the $m \times n$ term-document matrix and $A_{i,j} = w_{i,j}$ = (normalized) weighted count of term $t_i$ in document $d_j$.
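A toy numpy sketch of $C = AA^T$ on a hypothetical 3-term, 3-document matrix; the terms and weights are purely illustrative:

```python
import numpy as np

# Toy m x n term-document matrix A: rows = terms, columns = documents,
# entries = (normalized) weighted counts w_{i,j}.
terms = ["jaguar", "car", "cat"]
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])

C = A @ A.T                      # term-term similarity matrix C = A A^T

# For each term, report the most similar other term (ignore self-similarity)
np.fill_diagonal(C, -np.inf)
for i, t in enumerate(terms):
    print(t, "->", terms[int(np.argmax(C[i]))])
```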
41
Automatic Thesaurus Generation Example
42
Automatic Thesaurus Generation: Discussion. Quality of associations is usually a problem. Very similar to LSI (why?). Problems: "false positives" (words deemed similar that are not) and "false negatives" (words deemed dissimilar that are actually similar).
43
Query Expansion: Summary Query expansion is very effective in increasing recall. In most cases, precision is decreased, often significantly.
44
Sense-Based Retrieval. In query expansion, new words are added to the query (disjunctively), increasing the number of matches. In sense-based retrieval, term matches are only counted if the same sense is used in query and document, decreasing the number of matches. Example: in sense-based retrieval, "jaguar" is only a match if it's used in the "animal" sense in both query and document.
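A toy sketch of sense-restricted matching, assuming both the query and the document index store (word, sense) pairs; the sense tags are purely illustrative:

```python
def sense_matches(query, doc_index):
    """Count matches only when query and document agree on the sense.
    Both sides are sets of (word, sense) pairs, e.g. ("jaguar", "animal")."""
    return len(set(query) & set(doc_index))

doc = {("jaguar", "car"), ("dealer", "commerce")}
print(sense_matches({("jaguar", "animal")}, doc))   # 0: word matches, sense does not
print(sense_matches({("jaguar", "car")}, doc))      # 1: word and sense both match
```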
45
Sense-Based Retrieval: Results
46
Expansion vs. Sense-Based Retrieval. The same type of information is used in pseudo relevance feedback and sense-based retrieval. But: disambiguation is expensive; indexing with senses is complicated; automatic sense-based retrieval only makes sense for long queries; and if senses are supplied in an interactive loop, it's easier to add words than senses. Why?
47
Exercise. By what factor does the index size increase? How could you integrate confidence numbers for senses?
48
Interactive Retrieval Query expansion and relevance feedback are examples of interactive retrieval. Others: Query “editing” (adding and removing terms) Visualization-based, e.g., interaction with visual map Parametric search
49
Resources. MG Ch. 4.7. MIR Ch. 5.2–5.4. Singhal, Mitra, Buckley: Learning Routing Queries in a Query Zone. ACM SIGIR, 1997. Yonggang Qiu, Hans-Peter Frei: Concept Based Query Expansion. SIGIR, 1993. Schuetze: Automatic Word Sense Discrimination. Computational Linguistics, 1998. Buckley, Singhal, Mitra, Salton: New Retrieval Approaches Using SMART: TREC-4. NIST, 1996. Gerard Salton and Chris Buckley: Improving Retrieval Performance by Relevance Feedback. Journal of the American Society for Information Science, 41(4):288–297, 1990.