
1 Boolean and Vector Space Retrieval Models
Many slides in this section are adapted from Prof. Joydeep Ghosh (UT ECE), who in turn adapted them from Prof. Dik Lee (Univ. of Science and Tech, Hong Kong).

2 Retrieval Models
A retrieval model specifies the details of:
– Document representation
– Query representation
– Retrieval function
It determines a notion of relevance. The notion of relevance can be binary or continuous (i.e., ranked retrieval).

3 Classes of Retrieval Models
Boolean models (set theoretic)
– Extended Boolean
Vector space models (statistical/algebraic)
– Generalized VS
– Latent Semantic Indexing
Probabilistic models

4 Other Model Dimensions
Logical view of documents:
– Index terms
– Full text
– Full text + structure (e.g., hypertext)
User task:
– Retrieval
– Browsing

5 Retrieval Tasks
Ad hoc retrieval: fixed document corpus, varied queries.
Filtering: fixed query, continuous document stream.
– User profile: a model of relatively static preferences.
– Binary decision of relevant/not relevant.
Routing: same as filtering, but continuously supplies ranked lists rather than binary decisions.

6 Common Preprocessing Steps
Strip unwanted characters/markup (e.g., HTML tags, punctuation, numbers).
Break into tokens (keywords) on whitespace.
Stem tokens to “root” words:
– computational → comput
Remove common stopwords (e.g., a, the, it).
Detect common phrases (possibly using a domain-specific dictionary).
Build inverted index (keyword → list of docs containing it).
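
A minimal, illustrative Python version of this pipeline (the crude suffix stripper stands in for a real stemmer such as Porter's, and the stopword list is a toy one):

```python
import re
from collections import defaultdict

STOPWORDS = {"a", "the", "it", "and", "of", "to", "in"}  # tiny illustrative list

def stem(token):
    # Stand-in for a real stemmer (e.g., Porter): naive suffix stripping only.
    for suffix in ("ational", "ation", "ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    text = re.sub(r"<[^>]+>", " ", text)          # strip markup
    tokens = re.findall(r"[a-z]+", text.lower())  # drop punctuation and numbers
    return [stem(t) for t in tokens if t not in STOPWORDS]

def build_inverted_index(docs):
    index = defaultdict(set)                      # keyword -> docs containing it
    for doc_id, text in docs.items():
        for term in preprocess(text):
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "Computational models of retrieval.", 2: "The retrieval model ranks documents."}
print(build_inverted_index(docs))  # note 'computational' -> 'comput'
```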

7 Boolean Model
A document is represented as a set of keywords.
Queries are Boolean expressions of keywords, connected by AND, OR, and NOT, including the use of brackets to indicate scope:
– [[Rio & Brazil] | [Hilo & Hawaii]] & hotel & !Hilton
Output: a document is relevant or not. No partial matches or ranking.
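
Because documents are keyword sets, evaluating such a query is plain set algebra over the inverted index; a sketch with made-up index contents:

```python
# Toy inverted index: each keyword maps to the set of documents containing it.
index = {
    "rio": {1, 4}, "brazil": {1, 2, 4}, "hilo": {3}, "hawaii": {3, 5},
    "hotel": {1, 3, 4}, "hilton": {4},
}
all_docs = {1, 2, 3, 4, 5}

def docs(term):
    return index.get(term, set())

# [[Rio & Brazil] | [Hilo & Hawaii]] & hotel & !Hilton
result = ((docs("rio") & docs("brazil")) | (docs("hilo") & docs("hawaii"))) \
         & docs("hotel") & (all_docs - docs("hilton"))
print(result)  # {1, 3}: each document simply matches or not; no ranking
```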

8 Boolean Retrieval Model
A popular retrieval model because:
– Easy to understand for simple queries.
– Clean formalism.
Boolean models can be extended to include ranking.
Reasonably efficient implementations are possible for normal queries.

9 Boolean Models – Problems
Very rigid: AND means all; OR means any.
Difficult to express complex user requests.
Difficult to control the number of documents retrieved.
– All matched documents will be returned.
Difficult to rank output.
– All matched documents logically satisfy the query.
Difficult to perform relevance feedback.
– If a document is identified by the user as relevant or irrelevant, how should the query be modified?

10 Statistical Models
A document is typically represented by a bag of words (unordered words with frequencies).
Bag = set that allows multiple occurrences of the same element.
The user specifies a set of desired terms with optional weights:
– Weighted query terms: Q = ⟨ t1 w1, t2 w2, …, tn wn ⟩
– Unweighted query terms: Q = ⟨ t1, t2, …, tn ⟩
– No Boolean conditions are specified in the query.

11 Statistical Retrieval
Retrieval is based on similarity between query and documents.
Output documents are ranked according to similarity to the query.
Similarity is based on occurrence frequencies of keywords in query and document.
Automatic relevance feedback can be supported:
– Relevant documents “added” to query.
– Irrelevant documents “subtracted” from query.

12 Issues for Vector Space Model
How to determine important words in a document?
– Word sense?
– Word n-grams (and phrases, idioms, …) → terms
How to determine the degree of importance of a term within a document and within the entire collection?
How to determine the degree of similarity between a document and the query?
In the case of the web, what is a collection, and what are the effects of links, formatting information, etc.?

13 The Vector-Space Model
Assume t distinct terms remain after preprocessing; call them index terms or the vocabulary.
These “orthogonal” terms form a vector space: dimension = t = |vocabulary|.
Each term i in a document or query j is given a real-valued weight w_ij.
Both documents and queries are expressed as t-dimensional vectors:
d_j = (w_1j, w_2j, …, w_tj)

14 Graphic Representation
Example:
D1 = 2T1 + 3T2 + 5T3
D2 = 3T1 + 7T2 + T3
Q = 0T1 + 0T2 + 2T3
[Figure: D1, D2, and Q plotted as vectors in the three-dimensional space spanned by T1, T2, T3.]
Is D1 or D2 more similar to Q?
How to measure the degree of similarity? Distance? Angle? Projection?

15 Document Collection
A collection of n documents can be represented in the vector space model by a term-document matrix.
An entry in the matrix corresponds to the “weight” of a term in the document; zero means the term has no significance in the document or simply doesn’t occur in it.

        T1    T2   …   Tt
  D1    w11   w21  …   wt1
  D2    w12   w22  …   wt2
  :     :     :        :
  Dn    w1n   w2n  …   wtn
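
As a data structure, the term-document matrix is just a 2-D array; a small sketch with the D1/D2 weights from the previous slide (raw counts assumed as weights):

```python
terms = ["T1", "T2", "T3"]
doc_names = ["D1", "D2"]
# Rows are documents, columns are terms; entries are term weights.
W = [
    [2, 3, 5],  # D1: weights of T1, T2, T3
    [3, 7, 1],  # D2
]
# Look up the weight of term T3 in document D1:
print(W[doc_names.index("D1")][terms.index("T3")])  # 5
```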

16 Term Weights: Term Frequency
More frequent terms in a document are more important, i.e., more indicative of the topic.
f_ij = frequency of term i in document j
May want to normalize term frequency (tf) by the raw frequency of the most frequent term in the document:
tf_ij = f_ij / max_i {f_ij}

17 Term Weights: Inverse Document Frequency
Terms that appear in many different documents are less indicative of overall topic.
df_i = document frequency of term i = number of documents containing term i
idf_i = inverse document frequency of term i = log2(N / df_i), where N is the total number of documents.
An indication of a term’s discrimination power.
The log is used to dampen the effect relative to tf.

18 TF-IDF Weighting
A typical combined term importance indicator is tf-idf weighting:
w_ij = tf_ij · idf_i = tf_ij · log2(N / df_i)
A term occurring frequently in the document but rarely in the rest of the collection is given high weight.
Many other ways of determining term weights have been proposed.
Experimentally, tf-idf has been found to work well.

19 Computing TF-IDF – An Example
Given a document containing terms with frequencies A(3), B(2), C(1).
Assume the collection contains 10,000 documents, and the document frequencies of these terms are A(50), B(1300), C(250).
Then (using the natural log in this example, rather than the log2 of the previous slides):
A: tf = 3/3; idf = ln(10000/50) = 5.3; tf-idf = 5.3
B: tf = 2/3; idf = ln(10000/1300) = 2.0; tf-idf = 1.3
C: tf = 1/3; idf = ln(10000/250) = 3.7; tf-idf = 1.2
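
The arithmetic checks out in code; a small sketch using the natural log and max-frequency tf normalization, as in this example:

```python
import math

N = 10_000                        # documents in the collection
freqs = {"A": 3, "B": 2, "C": 1}  # raw frequencies in the document
dfs   = {"A": 50, "B": 1300, "C": 250}

max_f = max(freqs.values())
for term in freqs:
    tf  = freqs[term] / max_f      # normalized term frequency
    idf = math.log(N / dfs[term])  # natural log, as in this example
    print(term, round(tf, 2), round(idf, 1), round(tf * idf, 1))
# A 1.0 5.3 5.3
# B 0.67 2.0 1.4  (the slide's 1.3 comes from multiplying the rounded idf 2.0 by 2/3)
# C 0.33 3.7 1.2
```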

20 Query Vector
The query vector is typically treated as a document and also tf-idf weighted.
An alternative is for the user to supply weights for the given query terms.

21 Similarity Measure
A similarity measure is a function that computes the degree of similarity between two vectors.
Using a similarity measure between the query and each document:
– It is possible to rank the retrieved documents in order of presumed relevance.
– It is possible to enforce a threshold so that the size of the retrieved set can be controlled.

22 Similarity Measure – Inner Product
Similarity between the vectors for document d_j and query q can be computed as the vector inner product:
sim(d_j, q) = d_j · q = Σ (i = 1..t) w_ij · w_iq
where w_ij is the weight of term i in document j and w_iq is the weight of term i in the query.
For binary vectors, the inner product is the number of matched query terms in the document (size of intersection).
For weighted term vectors, it is the sum of the products of the weights of the matched terms.

23 Properties of Inner Product
The inner product is unbounded.
Favors long documents with a large number of unique terms.
Measures how many terms matched, but not how many terms are not matched.

24 Inner Product – Examples
Binary (vector size = vocabulary size = 7; the terms are retrieval, database, architecture, computer, text, management, information; 0 means the corresponding term is not found in the document or query):
– D = 1, 1, 1, 0, 1, 1, 0
– Q = 1, 0, 1, 0, 0, 1, 1
sim(D, Q) = 3
Weighted:
D1 = 2T1 + 3T2 + 5T3
D2 = 3T1 + 7T2 + 1T3
Q = 0T1 + 0T2 + 2T3
sim(D1, Q) = 2·0 + 3·0 + 5·2 = 10
sim(D2, Q) = 3·0 + 7·0 + 1·2 = 2
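
Both cases reduce to a dot product; a quick check of the numbers above:

```python
# Binary case: 1 = term present, 0 = term absent (7-term vocabulary).
D = [1, 1, 1, 0, 1, 1, 0]
Q = [1, 0, 1, 0, 0, 1, 1]
print(sum(d * q for d, q in zip(D, Q)))  # 3 matched query terms

# Weighted case: coordinates of D1, D2, Q in the (T1, T2, T3) space.
D1, D2, Qw = [2, 3, 5], [3, 7, 1], [0, 0, 2]
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
print(dot(D1, Qw), dot(D2, Qw))  # 10 2
```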

25 Cosine Similarity Measure
Cosine similarity measures the cosine of the angle between two vectors: the inner product normalized by the vector lengths.
CosSim(d_j, q) = (d_j · q) / (|d_j| · |q|) = Σ w_ij · w_iq / (√(Σ w_ij²) · √(Σ w_iq²))
[Figure: D1, D2, and Q as vectors, with the angles θ1 and θ2 between each document and the query.]
D1 = 2T1 + 3T2 + 5T3: CosSim(D1, Q) = 10 / √((4+9+25)(0+0+4)) = 0.81
D2 = 3T1 + 7T2 + 1T3: CosSim(D2, Q) = 2 / √((9+49+1)(0+0+4)) = 0.13
Q = 0T1 + 0T2 + 2T3
D1 is 6 times better than D2 using cosine similarity, but only 5 times better using inner product.
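
Normalizing the same inner products by the vector lengths reproduces these values; a minimal check:

```python
import math

def cos_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return dot / (norm(u) * norm(v))

D1, D2, Q = [2, 3, 5], [3, 7, 1], [0, 0, 2]
print(round(cos_sim(D1, Q), 2))  # 0.81
print(round(cos_sim(D2, Q), 2))  # 0.13
```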

26 Naïve Implementation
1. Convert all documents in collection D to tf-idf weighted vectors d_j over keyword vocabulary V.
2. Convert the query to a tf-idf-weighted vector q.
3. For each d_j in D, compute the score s_j = cosSim(d_j, q).
4. Sort documents by decreasing score.
5. Present the top-ranked documents to the user.
Time complexity: O(|V|·|D|). Bad for large V and D!
|V| = 10,000; |D| = 100,000; |V|·|D| = 1,000,000,000
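
The naive algorithm written out as a sketch, with documents and the query as sparse term-to-weight dicts (data reused from the earlier example):

```python
import math

def cos_sim(u, v):
    # u, v: dicts mapping term -> tf-idf weight
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

def retrieve(docs, query, k=10):
    # Score every document against the query: the O(|V| * |D|) loop.
    scored = [(cos_sim(vec, query), doc_id) for doc_id, vec in docs.items()]
    scored.sort(reverse=True)   # decreasing score
    return scored[:k]           # present the top-ranked documents

docs = {"D1": {"T1": 2, "T2": 3, "T3": 5}, "D2": {"T1": 3, "T2": 7, "T3": 1}}
print(retrieve(docs, {"T3": 2}))  # D1 (0.81) ranked above D2 (0.13)
```

An inverted index avoids touching documents that share no terms with the query, which is how real systems escape this full scan.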

27 Comments on Vector Space Models
A simple, mathematically based approach.
Considers both local (tf) and global (idf) word occurrence frequencies.
Provides partial matching and ranked results.
Tends to work quite well in practice despite obvious weaknesses.
Allows efficient implementation for large document collections.

28 Problems with Vector Space Model
Missing semantic information (e.g., word sense).
Missing syntactic information (e.g., phrase structure, word order, proximity information).
Assumption of term independence (e.g., ignores synonymy).
Lacks the control of a Boolean model (e.g., requiring a term to appear in a document).
– Given a two-term query “A B”, it may prefer a document containing A frequently but not B over a document that contains both A and B, but both less frequently.

29 Why probabilities in IR?
[Diagram: the user’s information need gives rise to a query representation, and the documents to document representations; the system must decide how to match them. Understanding of the user need is uncertain, and whether a document has relevant content is an uncertain guess.]
In traditional IR systems, matching between each document and query is attempted in a semantically imprecise space of index terms.
Probabilities provide a principled foundation for uncertain reasoning.
Can we use probabilities to quantify our uncertainties?

30 Probabilistic IR topics
Classical probabilistic retrieval model
– Probability ranking principle, etc.
– Binary independence model (≈ Naïve Bayes text categorization)
– (Okapi) BM25
Bayesian networks for text retrieval
Language model approach to IR
– An important emphasis in recent work
Probabilistic methods are one of the oldest but also one of the currently hottest topics in IR.
– Traditionally: neat ideas, but they didn’t win on performance.
– It may be different now.

31 Rethink the document ranking problem
We have a collection of documents. A user issues a query, and a list of documents needs to be returned.
The ranking method is the core of an IR system:
– In what order do we present documents to the user?
– We want the “best” document to be first, the second best second, etc.
Idea: rank by the probability of relevance of the document w.r.t. the information need:
– P(R=1 | document_i, query)

32 Recall a few probability basics
For events A and B:
– Bayes’ Rule: p(A|B) = p(B|A) · p(A) / p(B), where p(A) is the prior and p(A|B) the posterior probability.
– Odds: O(A) = p(A) / p(Ā) = p(A) / (1 − p(A))
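
A quick numeric check of the two identities, with made-up probabilities:

```python
# Illustrative numbers only: p(A), p(B), and p(B|A) are chosen arbitrarily.
p_A, p_B, p_B_given_A = 0.3, 0.4, 0.8

p_A_given_B = p_B_given_A * p_A / p_B   # Bayes' Rule: posterior from prior
odds_A = p_A / (1 - p_A)                # O(A) = p(A) / p(not A)

print(round(p_A_given_B, 2), round(odds_A, 2))  # 0.6 0.43
```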

33 The Probability Ranking Principle
“If a reference retrieval system’s response to each request is a ranking of the documents in the collection in order of decreasing probability of relevance to the user who submitted the request, where the probabilities are estimated as accurately as possible on the basis of whatever data have been made available to the system for this purpose, the overall effectiveness of the system to its user will be the best that is obtainable on the basis of those data.” [1960s/1970s]
S. Robertson, W. S. Cooper, M. E. Maron; van Rijsbergen (1979: 113); Manning & Schütze (1999: 538)

34 Probability Ranking Principle
Let x represent a document in the collection.
Let R represent relevance of a document w.r.t. a given (fixed) query: R=1 means relevant, R=0 not relevant.
p(x|R=1), p(x|R=0): the probability that if a relevant (non-relevant) document is retrieved, it is x.
We need to find p(R=1|x): the probability that a document x is relevant.
p(R=1), p(R=0): the prior probability of retrieving a relevant or non-relevant document.
By Bayes’ rule: p(R=1|x) = p(x|R=1) · p(R=1) / p(x).

35 Probability Ranking Principle (PRP)
Simple case: no selection costs or other utility concerns that would differentially weight errors.
PRP in action: rank all documents by p(R=1|x).
Theorem: using the PRP is optimal, in that it minimizes the loss (Bayes risk) under 1/0 loss.
– Provable if all probabilities are correct, etc. [e.g., Ripley 1996]

36 Probability Ranking Principle
More complex case: retrieval costs.
– Let d be a document.
– C: cost of retrieving a relevant document.
– C′: cost of retrieving a non-relevant document.
Probability Ranking Principle: if
C · p(R=1|d) + C′ · (1 − p(R=1|d)) ≤ C · p(R=1|d′) + C′ · (1 − p(R=1|d′))
for all d′ not yet retrieved, then d is the next document to be retrieved.
We won’t further consider cost/utility from now on.

37 Probability Ranking Principle
How do we compute all those probabilities?
– We do not know the exact probabilities and have to use estimates.
– The Binary Independence Model (BIM), which we discuss next, is the simplest model.
Questionable assumptions:
– “Relevance” of each document is independent of the relevance of other documents.
Really, it’s bad to keep on returning duplicates.
– Boolean model of relevance.
– A single-step information need.
Seeing a range of results might let the user refine the query.

38 Probabilistic Retrieval Strategy
Estimate how terms contribute to relevance:
– How do things like tf, df, and document length influence your judgments about document relevance? A more nuanced answer is the Okapi formulae (Spärck Jones / Robertson).
Combine these to find the document relevance probability.
Order documents by decreasing probability.

39 Probabilistic Ranking
Basic concept: “For a given query, if we know some documents that are relevant, terms that occur in those documents should be given greater weighting in searching for other relevant documents. By making assumptions about the distribution of terms and applying Bayes Theorem, it is possible to derive weights theoretically.” (van Rijsbergen)

40 Binary Independence Model
Traditionally used in conjunction with the PRP.
“Binary” = Boolean: documents are represented as binary incidence vectors of terms:
– x = (x_1, …, x_t)
– x_i = 1 iff term i is present in document x.
“Independence”: terms occur in documents independently.
Different documents can be modeled as the same vector.

41 Binary Independence Model
Queries: binary term incidence vectors.
Given query q:
– For each document d, we need to compute p(R=1|q,d).
– Replace this with computing p(R=1|q,x), where x is the binary term incidence vector representing d.
– We are interested only in ranking.
We will use odds and Bayes’ Rule:
O(R|q,x) = p(R=1|q,x) / p(R=0|q,x)

42 Binary Independence Model
Expanding each term with Bayes’ rule, the query-dependent factor separates out:
O(R|q,x) = [p(R=1|q) · p(x|R=1,q)] / [p(R=0|q) · p(x|R=0,q)] = O(R|q) · p(x|R=1,q) / p(x|R=0,q)

43 Binary Independence Model
Using the independence assumption:
p(x|R=1,q) / p(x|R=0,q) = ∏ (i = 1..t) [ p(x_i|R=1,q) / p(x_i|R=0,q) ]
So:
O(R|q,x) = O(R|q) · ∏ (i = 1..t) [ p(x_i|R=1,q) / p(x_i|R=0,q) ]
O(R|q) is constant for a given query; the product is what needs estimation.

44 Binary Independence Model
Since x_i is either 0 or 1:
O(R|q,x) = O(R|q) · ∏ (x_i=1) [ p(x_i=1|R=1,q) / p(x_i=1|R=0,q) ] · ∏ (x_i=0) [ p(x_i=0|R=1,q) / p(x_i=0|R=0,q) ]
Let p_i = p(x_i=1|R=1,q) and r_i = p(x_i=1|R=0,q).
Assume, for all terms not occurring in the query (q_i=0), that p_i = r_i; such terms then contribute nothing to the product.

45 Binary Independence Model

  document                  relevant (R=1)   not relevant (R=0)
  term present (x_i = 1)        p_i              r_i
  term absent  (x_i = 0)        1 − p_i          1 − r_i

46 Binary Independence Model
Splitting the product over query terms by whether the term matches the document (x_i = 1) or not:
O(R|q,x) = O(R|q) · ∏ (x_i=1, q_i=1) (p_i / r_i) · ∏ (x_i=0, q_i=1) [ (1 − p_i) / (1 − r_i) ]
(first product: all matching terms; second: non-matching query terms)
= O(R|q) · ∏ (x_i=1, q_i=1) [ p_i (1 − r_i) / (r_i (1 − p_i)) ] · ∏ (q_i=1) [ (1 − p_i) / (1 − r_i) ]
(first product: all matching terms; second: all query terms)

47 Binary Independence Model
The second product is constant for each query; the only quantity to be estimated for ranking is the Retrieval Status Value:
RSV = log ∏ (x_i=1, q_i=1) [ p_i (1 − r_i) / (r_i (1 − p_i)) ] = Σ (x_i=1, q_i=1) log [ p_i (1 − r_i) / (r_i (1 − p_i)) ]

48 Binary Independence Model
Everything boils down to computing the RSV:
RSV = Σ (x_i=1, q_i=1) c_i, where c_i = log [ p_i (1 − r_i) / (r_i (1 − p_i)) ]
The c_i are log odds ratios; they function as the term weights in this model.
So, how do we compute the c_i’s from our data?

49 Binary Independence Model
Estimating RSV coefficients in theory. For each term i, look at this table of document counts:

            relevant     non-relevant          total
  x_i = 1      s             n − s               n
  x_i = 0    S − s     (N − n) − (S − s)       N − n
  total        S             N − S               N

Estimates: p_i ≈ s / S and r_i ≈ (n − s) / (N − S), so
c_i ≈ log [ (s / (S − s)) / ((n − s) / ((N − n) − (S − s))) ]
For now, assume no zero counts. If a count is zero, adjust it to be non-zero (see slide 54).
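
Given the counts in the table, the estimates are direct to compute. A sketch with made-up counts, using the add-0.5 adjustment mentioned above so that no cell is zero:

```python
import math

def term_weight(s, S, n, N):
    """c_i (log odds ratio) from document counts:
    s = relevant docs containing the term, S = relevant docs,
    n = docs containing the term,          N = docs in the collection.
    0.5 is added to each cell so no estimate is exactly 0 or 1."""
    p_i = (s + 0.5) / (S + 1.0)            # estimate of p_i
    r_i = (n - s + 0.5) / (N - S + 1.0)    # estimate of r_i
    return math.log((p_i * (1 - r_i)) / (r_i * (1 - p_i)))

# Term in 8 of 10 known-relevant docs, 100 of 10,000 docs overall (made up):
print(round(term_weight(8, 10, 100, 10_000), 2))  # ~5.9: strong evidence of relevance
```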

50 Estimation – key challenge
If non-relevant documents are approximated by the whole collection, then r_i (the probability of occurrence in non-relevant documents for a query) is n/N, and
log [ (1 − r_i) / r_i ] = log [ (N − n) / n ] ≈ log (N / n) = IDF

51 Estimation – key challenge
p_i (the probability of occurrence in relevant documents) cannot be approximated as easily.
p_i can be estimated in various ways:
– From relevant documents, if we know some.
Relevance weighting can be used in a feedback loop.
– As a constant (Croft and Harper combination match): then we just get idf weighting of terms (with p_i = 0.5).
– Proportional to the probability of occurrence in the collection.
Greiff (SIGIR 1998) argues for 1/3 + (2/3) · df_i / N.

52 Probabilistic Relevance Feedback
1. Guess a preliminary probabilistic description of the R=1 documents and use it to retrieve a first set of documents.
2. Interact with the user to refine the description: learn some definite members with R=1 and R=0.
3. Re-estimate p_i and r_i on the basis of these.
– Or combine the new information with the original guess (using a Bayesian prior):
p_i^(2) = (|V_i| + κ · p_i^(1)) / (|V| + κ), where κ is the prior weight.
4. Repeat, thus generating a succession of approximations to the relevant documents.

53 Iteratively estimating p_i and r_i (= Pseudo-relevance feedback)
1. Assume that p_i is constant over all x_i in the query, and r_i as before:
– p_i = 0.5 (even odds) for any given doc.
2. Determine a guess of the relevant document set:
– V is a fixed-size set of the highest-ranked documents on this model.
3. We need to improve our guesses for p_i and r_i, so:
– Use the distribution of x_i in the docs in V. Let V_i be the set of documents containing x_i:
p_i = |V_i| / |V|
– Assume that documents not retrieved are not relevant:
r_i = (n_i − |V_i|) / (N − |V|)
4. Go to step 2 until the ranking converges, then return it.
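
The loop above, sketched as code. The ranking function and document statistics are placeholder inputs assumed for illustration, and the smoothed estimates from the next slide are used so no probability becomes 0 or 1:

```python
def pseudo_relevance_feedback(rank, docs_with_term, n, N, query_terms,
                              v_size=10, max_iters=20):
    """rank(p, r) -> list of doc ids ordered by RSV under the current estimates
    (placeholder signature). docs_with_term[i]: set of doc ids containing term i;
    n[i]: number of docs containing term i; N: total number of documents."""
    p = {i: 0.5 for i in query_terms}          # step 1: even odds
    r = {i: n[i] / N for i in query_terms}     # r_i from the whole collection
    ranking = None
    for _ in range(max_iters):
        V = set(rank(p, r)[:v_size])           # step 2: guess the relevant set
        for i in query_terms:                  # step 3: re-estimate p_i, r_i
            Vi = len(V & docs_with_term[i])
            p[i] = (Vi + 0.5) / (len(V) + 1)   # smoothed |V_i| / |V|
            r[i] = (n[i] - Vi + 0.5) / (N - len(V) + 1)
        new_ranking = rank(p, r)
        if new_ranking == ranking:             # step 4: stop on convergence
            break
        ranking = new_ranking
    return ranking
```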

54 Avoiding zeros in estimating p_i and r_i
1. In the above iterative process, to avoid zeros for V or V_i, we can adjust the estimates, e.g.:
p_i = (|V_i| + 0.5) / (|V| + 1),  r_i = (n_i − |V_i| + 0.5) / (N − |V| + 1)
2. We may also adjust the constant 0.5 to something variable, such as κ · p_i.

55 PRP and BIM
Getting reasonable approximations of probabilities is possible, but it requires restrictive assumptions:
– Term independence
– Terms not in the query don’t affect the outcome
– Boolean representation of documents/queries/relevance
– Document relevance values are independent
Some of these assumptions can be removed.
Problem: we either require partial relevance information or can only derive somewhat inferior term weights.

56 Removing term independence
In general, index terms aren’t independent, and dependencies can be complex.
van Rijsbergen (1979) proposed a model of simple tree dependencies:
– Exactly Friedman and Goldszmidt’s Tree Augmented Naive Bayes (AAAI 13, 1996)
– Each term is dependent on one other term.
In the 1970s, estimation problems held back the success of this model.

57 Resources
S. E. Robertson and K. Spärck Jones. 1976. Relevance Weighting of Search Terms. Journal of the American Society for Information Sciences 27(3): 129–146.
C. J. van Rijsbergen. 1979. Information Retrieval. 2nd ed. London: Butterworths, chapter 6. [Most details of the math] http://www.dcs.gla.ac.uk/Keith/Preface.html
N. Fuhr. 1992. Probabilistic Models in Information Retrieval. The Computer Journal 35(3): 243–255. [Easiest read, with BNs]
F. Crestani, M. Lalmas, C. J. van Rijsbergen, and I. Campbell. 1998. Is This Document Relevant?... Probably: A Survey of Probabilistic Models in Information Retrieval. ACM Computing Surveys 30(4): 528–552. http://www.acm.org/pubs/citations/journals/surveys/1998-30-4/p528-crestani/ [Adds very little material that isn’t in van Rijsbergen or Fuhr]

