1
Probabilistic Retrieval LBSC 708A/CMSC 838L Session 4, October 2, 2001 Philip Resnik
2
Agenda Questions Adjustments to syllabus Probability basics Probabilistic retrieval Comparison with vector space model
3
Muddiest Points The math! Two views of an idea: formulae and matrices Pivoted document length normalization Latent Semantic Indexing
4
Why Similarity-Based Ranking? Similarity is a useful predictor of relevance (13) –Users can then recognize documents with utility Ranked lists avoid all-or-nothing retrieval (3) –More nuanced than presence or absence of words Easy to implement (2)
5
Probability Basics What is probability? –Statistical: relative frequency as n → ∞ –Subjective: degree of belief Notion of a probability “space” –Elementary outcomes, Ω –Events, F –Probability measure, p Every probabilistic model has an algebraic foundation underlying it
6
Notion of “probability mass” Imagine a finite amount of “stuff” Associate the number 1 with the total amount Distribute that mass over the possible outcomes
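A minimal sketch of the idea in code (the outcome labels are made up purely for illustration): distribute a total mass of 1 over a finite set of elementary outcomes, and the probability of an event is the mass of the outcomes it contains.

```python
# Distribute a total probability mass of 1 over elementary outcomes
# (illustrative outcome labels; any finite set would do).
mass = {"heads": 0.5, "tails": 0.5}
assert abs(sum(mass.values()) - 1.0) < 1e-9  # total mass is 1

# The probability of an event (a set of outcomes) is the mass it contains.
def p(event):
    return sum(mass[outcome] for outcome in event)

print(p({"heads"}))           # 0.5
print(p({"heads", "tails"}))  # 1.0
```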
7
Independence A and B are independent iff P(A and B) = P(A) P(B) Ex: –P(“being brown eyed”) = 85/100 –P(“being a doctor”) = 1/1000 –P(“being a brown eyed doctor”) = 85/100,000
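A quick numeric check of the definition, using the figures from the slide:

```python
p_brown_eyed = 85 / 100
p_doctor = 1 / 1000

# Under independence, the joint probability is just the product.
p_brown_eyed_doctor = p_brown_eyed * p_doctor
print(p_brown_eyed_doctor)  # 0.00085 = 85/100,000
```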
8
More on independence Suppose –P(“having a B.S. degree”) = 2/10 –P(“being a doctor”) = 1/1000 Would you expect –P(“having a B.S. degree and being a doctor”) = 2/10,000 ??? Extreme example: –P(“being a doctor”) = 1/1000 –P(“having studied anatomy”) = 12/1000
9
Conditional Probability P(A | B) = P(A and B) / P(B) [Venn diagram: regions A, B, and their overlap "A and B"] P(A) = prob of A relative to the whole space P(A|B) = prob of A considering only the cases where B is known to be true
10
More on Conditional Probability Suppose –P(“having studied anatomy”) = 12/1000 –P(“being a doctor and having studied anatomy”) = 1/1000 Consider –P(“being a doctor” | “having studied anatomy”) = 1/12 But if you assume all doctors have studied anatomy –P(“having studied anatomy” | “being a doctor”) = 1 Useful restatement of definition: P(A and B) = P(A|B) x P(B)
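The definition and its restatement can be checked directly with the numbers above:

```python
p_anatomy = 12 / 1000             # P(having studied anatomy)
p_doctor_and_anatomy = 1 / 1000   # P(being a doctor and having studied anatomy)

# P(doctor | anatomy) = P(doctor and anatomy) / P(anatomy)
p_doctor_given_anatomy = p_doctor_and_anatomy / p_anatomy
print(p_doctor_given_anatomy)     # 1/12 ≈ 0.083

# Restated definition: P(A and B) = P(A | B) x P(B)
assert abs(p_doctor_given_anatomy * p_anatomy - p_doctor_and_anatomy) < 1e-12
```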
11
Bayes’s Theorem: Notation Consider a set of hypotheses: H1, H2, H3 Consider some observable evidence, O P(O|H1) = probability of O being observed if we knew H1 were true P(O|H2) = probability of O being observed if we knew H2 were true P(O|H3) = probability of O being observed if we knew H3 were true
12
Bayes’s Theorem: example Let –O = “Joe earns more than $70,000/year” –H1 = “Joe is a doctor” –H2 = “Joe is a college professor” –H3 = “Joe works in food services” Suppose we do a survey and we find out –P(O|H1) = 0.6 –P(O|H2) = 0.07 –P(O|H3) = 0.001 What should be our guess about Joe’s profession?
13
Bayes’s Theorem (finally!) What’s P(H1|O)? P(H2|O)? P(H3|O)? Theorem: P(H | O) = P(O | H) x P(H) / P(O) P(H | O) is the posterior probability; P(H) is the prior probability Notice that the prior is very important!
14
Example, cont’d Suppose we also have good data about priors: –P(O|H1) = 0.6, P(H1) = 0.0001 (doctor) –P(O|H2) = 0.07, P(H2) = 0.001 (professor) –P(O|H3) = 0.001, P(H3) = 0.2 (food services) We can calculate –P(H1|O) = 0.00006 / P(“earning > $70K/year”) –P(H2|O) = 0.00007 / P(“earning > $70K/year”) –P(H3|O) = 0.0002 / P(“earning > $70K/year”)
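A small sketch of this calculation. Treating the three hypotheses as exhaustive so that P(O) can be obtained by summing is an assumption beyond the slide, made only so the posteriors can be normalized:

```python
# Likelihoods P(O | H) and priors P(H) from the slides.
hypotheses = {
    "doctor":    (0.6,   0.0001),
    "professor": (0.07,  0.001),
    "food":      (0.001, 0.2),
}

# Unnormalized posteriors: P(O | H) * P(H).
unnormalized = {h: like * prior for h, (like, prior) in hypotheses.items()}
print(unnormalized)  # {'doctor': 6e-05, 'professor': 7e-05, 'food': 0.0002}

# If these hypotheses were exhaustive, P(O) would be their sum,
# and dividing through gives P(H | O) for each hypothesis.
p_o = sum(unnormalized.values())
posteriors = {h: v / p_o for h, v in unnormalized.items()}
print(posteriors)  # "food services" comes out most probable
```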
15
Summary of Probability Concepts Interpretations of probability Independence Conditional Probability Bayes’s theorem
16
Agenda Questions Probability basics Probabilistic retrieval –Language modeling –Inference networks Comparison with vector space model
17
Probability Ranking Principle A useful ranking criterion –Maximize probability that relevant docs precede others Binary relevance & independence assumptions –Each document is either relevant or it is not –Relevance of one doc reveals nothing about another Theorem (provable from assumptions): –Documents should be ranked in order of decreasing probability of relevance to the query, P(d relevant-to q)
18
Probabilistic Retrieval Strategy Estimate how terms contribute to relevance –How do TF, DF, and length influence your judgments about document relevance? (Okapi) Combine to find document relevance probability Order documents by decreasing probability
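As a concrete illustration of combining TF, DF, and length evidence, here is a sketch of an Okapi BM25-style term weight. The function name is mine, and the constants k1 and b are conventional default tuning values, not figures from the lecture:

```python
import math

def bm25_weight(tf, df, doc_len, avg_doc_len, n_docs, k1=1.2, b=0.75):
    """Okapi BM25-style term weight: TF saturates, DF becomes an IDF factor,
    and document length is normalized against the collection average."""
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
    tf_component = (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * tf_component

# Example: a term occurring 3 times in a slightly longer-than-average document.
print(bm25_weight(tf=3, df=50, doc_len=120, avg_doc_len=100, n_docs=10000))
```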
19
Binary Independence Model Basis for computing probability of relevance –Simple computation based on term weights Depends on two new assumptions –Presence of one term tells nothing about another “Term independence” –No prior knowledge about any document “Uniform prior”: P(d) is the same for all d
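Under these assumptions the retrieval status value reduces to a sum of per-term weights. Below is a sketch of one common form of that weight (the Robertson/Sparck Jones formulation, not spelled out on the slide), with the usual 0.5 smoothing; with no known relevant documents it falls back to an IDF-like quantity:

```python
import math

def bim_weight(r, R, df, N):
    """Binary Independence Model term weight.
    r  = known relevant documents containing the term
    R  = known relevant documents
    df = documents containing the term
    N  = total documents
    The 0.5 terms are the usual smoothing."""
    p = (r + 0.5) / (R + 1)            # P(term present | relevant)
    q = (df - r + 0.5) / (N - R + 1)   # P(term present | not relevant)
    return math.log((p * (1 - q)) / (q * (1 - p)))

# With no relevance information (r = R = 0) this behaves like an IDF weight.
print(bim_weight(r=0, R=0, df=50, N=10000))
```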
20
Where do the probabilities fit? [Diagram: the standard retrieval pipeline — an information need passes through query formulation and a representation function to become a query representation; a document passes through document processing and its own representation function to become a document representation; a comparison function produces a retrieval status value, which the user judges for utility. In the probabilistic model, the comparison sim(d, q) is replaced by P(d is Rel | q).]
21
Language Modeling Traditional generative model: generates strings Example: a model that generates “I wish”, “I wish I wish”, “I wish I wish I wish”, … but not *“wish I wish”
22
Stochastic Language Models Models the probability of generating any string Model M: the 0.2, a 0.1, man 0.01, woman 0.01, said 0.03, likes 0.02, … For s = “the man likes the woman”: multiply the per-word probabilities, so P(s | M) = 0.2 × 0.01 × 0.02 × 0.2 × 0.01
23
Language Models, cont’d Models the probability of generating any string Model M1: the 0.2, a 0.1, man 0.01, woman 0.01, said 0.03, likes 0.02, … Model M2: the 0.2, yon 0.1, class 0.001, maiden 0.01, sayst 0.03, pleaseth 0.02, … For a string built from words like “the class pleaseth yon maiden”, multiplying the per-word probabilities under each model gives P(s|M2) > P(s|M1)
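A minimal sketch of the comparison: each model is a table of word probabilities, and the probability of a string is the product of its word probabilities under that model. The small floor probability for unseen words is an assumption added here purely so the product is well defined; the example string reuses the words shown on the slide:

```python
m1 = {"the": 0.2, "a": 0.1, "man": 0.01, "woman": 0.01, "said": 0.03, "likes": 0.02}
m2 = {"the": 0.2, "yon": 0.1, "class": 0.001, "maiden": 0.01, "sayst": 0.03, "pleaseth": 0.02}

def p_string(words, model, floor=1e-4):
    """Probability of a word sequence under a unigram model: multiply the
    per-word probabilities (words missing from the model get a small floor)."""
    p = 1.0
    for w in words:
        p *= model.get(w, floor)
    return p

s = "the class pleaseth yon maiden".split()
print(p_string(s, m1), p_string(s, m2))  # P(s|M2) > P(s|M1)
```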
24
Using Language Models in IR Treat each document as the basis for a model Rank document d based on P(d | q) P(d | q) = P(q | d) x P(d) / P(q) –P(q) is the same for all documents, so ignore –P(d) [the prior] is often treated as the same for all d But we could use criteria like authority, length, genre –P(q | d) is the probability of q given d’s model Very general formal approach based on HMMs
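A sketch of query-likelihood ranking along these lines: each document induces a unigram model, and documents are ranked by P(q | d). The linear-interpolation smoothing against the collection and the toy documents are assumptions added here for illustration, not part of the slide:

```python
from collections import Counter

def doc_model(doc_tokens):
    counts = Counter(doc_tokens)
    total = len(doc_tokens)
    return {w: c / total for w, c in counts.items()}

def query_likelihood(query_tokens, doc_tokens, collection_model, lam=0.5):
    """P(q | d) under a unigram document model, smoothed with the collection."""
    model = doc_model(doc_tokens)
    p = 1.0
    for w in query_tokens:
        p *= lam * model.get(w, 0.0) + (1 - lam) * collection_model.get(w, 1e-6)
    return p

docs = {"d1": "the cat sat on the mat".split(),
        "d2": "the dog likes the cat".split()}
all_tokens = [w for d in docs.values() for w in d]
collection = {w: c / len(all_tokens) for w, c in Counter(all_tokens).items()}

query = "cat likes".split()
ranked = sorted(docs, key=lambda d: query_likelihood(query, docs[d], collection), reverse=True)
print(ranked)  # d2 ranks above d1
```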
25
Inference Networks A flexible way of combining term weights –Boolean model –Binary independence model –Probabilistic models with weaker assumptions Key concept: rank based on P(d | q) –P(d | q) = P(q | d) x P(d) / P(q) Efficient large-scale implementation –InQuery text retrieval system from U Mass
26
A Boolean Inference Net [Diagram: document nodes d1–d4 linked to term nodes bat, cat, fat, hat, mat, pat, rat, sat, vat; the terms feed AND and OR operator nodes that combine into the information need node I.]
27
A Binary Independence Network [Diagram: document nodes d1–d4 linked to term nodes bat, cat, fat, hat, mat, pat, rat, sat, vat, which connect directly to the query node.]
28
Probability Computation Turn on exactly one document at a time –Boolean: Every connected term turns on –Binary Ind: Connected terms gain their weight Compute the query value –Boolean: AND and OR nodes use truth tables –Binary Ind: Fraction of the possible weight
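A toy sketch of the "one document on at a time" computation for both flavors; the documents, term weights, and query structure below are made up purely for illustration:

```python
# Toy data: which terms each document contains, plus a weight per term.
docs = {"d1": {"cat", "hat"}, "d2": {"cat", "rat"}, "d3": {"hat"}}
weights = {"cat": 0.7, "hat": 0.5, "rat": 0.6}
query_terms = ["cat", "hat"]

for d, terms in docs.items():
    # Turn on exactly one document: its connected term nodes activate.
    on = {t: (t in terms) for t in query_terms}

    # Boolean: operator nodes use truth tables (here the query is cat AND hat).
    boolean_value = all(on.values())

    # Binary independence: each connected term contributes its weight;
    # the query value is the fraction of the possible weight obtained.
    gained = sum(weights[t] for t in query_terms if on[t])
    possible = sum(weights[t] for t in query_terms)
    print(d, boolean_value, gained / possible)
```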
29
A Critique Most of the assumptions are not satisfied! –Searchers want utility, not relevance –Relevance is not binary –Terms are clearly not independent –Documents are often not independent The best known term weights are quite ad hoc –Unless some relevant documents are known
30
But It Works! Ranked retrieval paradigm is powerful –Well suited to human search strategies Probability theory has explanatory power –At least we know where the weak spots are –Probabilities are good for combining evidence Inference networks are extremely flexible –Easily accommodates newly developed models Good implementations exist –Effective, efficient, and large-scale
31
Comparison With Vector Space Similar in some ways –Term weights can be based on frequency –Terms often used as if they were independent Different in others –Based on probability rather than similarity –Intuitions are probabilistic rather than geometric
32
Two Minute Paper Which assumption underlying the probabilistic retrieval model causes you the most concern, and why? What was the muddiest point in today’s lecture? Have you started Homework 2?