1
Probabilistic Information Retrieval
[ ProbIR ] Suman K Mitra DAIICT, Gandhinagar
2
Acknowledgment: Alexander Dekhtyar, University of Maryland; Mandar Mitra, ISI, Kolkata; Prasenjit Majumder, DAIICT, Gandhinagar
3
Why use Probabilities?
Information Retrieval deals with uncertain information.
Probability is a measure of uncertainty.
Probabilistic Ranking Principle: provable minimization of risk.
Probabilistic Inference: to justify your decision.
4
Document Representation
How good is the representation? How exact is the representation? How well is the query matched? How relevant is the result to the query?
(Diagram: Basic IR System. A Document Collection feeds (1) Document Representation; a Query feeds (2) Query Representation; (3) the two representations are matched; (4) results are returned. The four questions above correspond to these four steps.)
5
Approaches and main Contributors
Probability Ranking Principle – Robertson, 1970 onwards
Information Retrieval as Probabilistic Inference – Van Rijsbergen et al.
Probabilistic Indexing – Fuhr et al.
Bayesian Nets in Information Retrieval – Turtle and Croft, 1990 onwards
Probabilistic Logic Programming in Information Retrieval – Fuhr et al.
6
Probability Ranking Principle
Setting: a collection of documents, a representation of the documents, a user query, and a representation of the query; the system returns a set of documents.
Question: in what order should documents be presented to the user?
Logically: the best document first, then the next best, and so on.
Requirement: a formal way to judge the goodness of documents with respect to the query.
Possibility: the probability of relevance of the document with respect to the query.
7
Probability Ranking Principle
If a retrieval system’s response to each request is a ranking of the documents in the collection in order of decreasing probability of usefulness to the user who submitted the request ... where the probabilities are estimated as accurately as possible on the basis of whatever data have been made available to the system for this purpose ... then the overall effectiveness of the system to its users will be the best that is obtainable on the basis of that data. (W. S. Cooper)
8
Probability Basics
Bayes' Rule: let a and b be two events. Then
p(a|b) = p(b|a) p(a) / p(b).
The odds of an event a are defined as
O(a) = p(a) / p(ā) = p(a) / (1 - p(a)).
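As a quick numeric illustration of both formulas, here is a small Python check (all numbers are invented for illustration):

```python
# Toy illustration of Bayes' rule and odds for document relevance.
# All probabilities below are made-up illustrative values.
p_R = 0.01            # prior p(R): document is relevant
p_x_given_R = 0.6     # p(x|R): feature x seen in relevant documents
p_x_given_NR = 0.1    # p(x|NR): feature x seen in non-relevant documents

# Bayes' rule: p(R|x) = p(x|R) p(R) / p(x)
p_x = p_x_given_R * p_R + p_x_given_NR * (1 - p_R)
p_R_given_x = p_x_given_R * p_R / p_x
print(f"p(R|x) = {p_R_given_x:.4f}")

# Odds: O(R|x) = O(R) * p(x|R) / p(x|NR)
posterior_odds = (p_R / (1 - p_R)) * p_x_given_R / p_x_given_NR
print(f"O(R|x) = {posterior_odds:.4f}")   # = p(R|x) / (1 - p(R|x))
```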
9
Conditional probability satisfies all axioms of probability:
(i) p(a|b) = p(a ∩ b) / p(b) ≥ 0, since p(a ∩ b) ≥ 0 and p(b) > 0. Hence (i).
(ii) p(Ω|b) = p(Ω ∩ b) / p(b) = p(b) / p(b) = 1. Hence (ii).
(iii) If a_1, a_2, … are mutually exclusive events, then p(∪_i a_i | b) = Σ_i p(a_i | b); see the next slide.
10
p(∪_i a_i | b) = p((∪_i a_i) ∩ b) / p(b) = p(∪_i (a_i ∩ b)) / p(b) = Σ_i p(a_i ∩ b) / p(b) = Σ_i p(a_i | b)
[the events a_i ∩ b are all mutually exclusive]
Hence (iii).
11
Probability Ranking Principle
Let x be a document in the collection. Let R represent relevance of a document w.r.t. a given (fixed) query, and let NR represent non-relevance.
p(R|x): probability that a retrieved document x is relevant.
p(NR|x): probability that a retrieved document x is non-relevant.
p(R), p(NR): prior probabilities of retrieving a relevant and a non-relevant document, respectively.
p(x|R), p(x|NR): probability that if a relevant (non-relevant) document is retrieved, it is x.
12
Probability Ranking Principle
Ranking Principle (Bayes’ Decision Rule): if p(R|x) > p(NR|x) then x is relevant, otherwise x is not relevant, where by Bayes' rule p(R|x) = p(x|R) p(R) / p(x) and p(NR|x) = p(x|NR) p(NR) / p(x).
13
Probability Ranking Principle
Does the PRP minimize the average probability of error?

Decision \ Actual:   x is R       x is NR
decide R             correct      error (1)
decide NR            error (2)    correct

If we decide NR, p(error|x) = p(R|x); if we decide R, p(error|x) = p(NR|x).
p(error) is minimal when all p(error|x) are minimal. Bayes’ decision rule minimizes each p(error|x).
14
p(error) = Σ_x p(error|x) p(x), where each x is either in R or NR, and p(x) does not depend on the decision (constant).
15
Minimization of p(error) is the same as minimization of 2 p(error), and p(x) is the same for any x, so it suffices to minimize each p(error|x). For any x the two possible decisions, x in R and x in NR, are nothing but the choices giving p(error|x) = p(NR|x) and p(error|x) = p(R|x) respectively; the Bayes rule chooses the smaller, so p(error|x) = min{p(R|x), p(NR|x)}. Using min(a, b) = (a + b - |a - b|)/2 and p(R|x) + p(NR|x) = 1,
2 p(error|x) = 1 - |p(R|x) - p(NR|x)|,
which no other decision can improve on. Hence the Bayes decision rule minimizes p(error).
16
Probability Ranking Principle
Issues:
How do we compute all those probabilities? Exact probabilities cannot be computed; estimates from the (ground truth) data must be used (Binary Independence Retrieval; Bayesian Networks?).
Assumptions:
"Relevance" of each document is independent of the relevance of other documents.
Most applications are for the Boolean model.
17
Probability Ranking Principle
Simple case: no selection costs. x is relevant iff p(R|x) > p(NR|x) (Bayes’ Decision Rule). PRP: rank all documents by p(R|x).
18
Probability Ranking Principle
More complex case: retrieval costs. Let C be the cost of retrieval of a relevant document, C′ the cost of retrieval of a non-relevant document, and let d be a document. Probability Ranking Principle: if
C · p(R|d) + C′ · p(NR|d) ≤ C · p(R|d′) + C′ · p(NR|d′)
for all documents d′ not yet retrieved, then d is the next document to be retrieved.
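A minimal Python sketch of this selection rule (costs and posterior probabilities are hypothetical); note that with C = 0 and C′ = 1 it reduces to plain PRP ranking by p(R|d):

```python
# Choose the next document by minimal expected retrieval cost
# C * p(R|d) + C' * p(NR|d).  All numbers here are hypothetical.
C, C_prime = 0.0, 1.0                          # costs: relevant / non-relevant
p_rel = {"d1": 0.8, "d2": 0.3, "d3": 0.55}     # p(R|d) for unretrieved docs

def expected_cost(d):
    return C * p_rel[d] + C_prime * (1 - p_rel[d])

print(min(p_rel, key=expected_cost))   # d1: lowest expected cost goes first
```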
19
Binary Independence Model
20
Binary Independence Model
Traditionally used in conjunction with the PRP.
"Binary" = Boolean: documents are represented as binary vectors of terms: x_i = 1 iff term i is present in document x.
"Independence": terms occur in documents independently.
Different documents may be modeled by the same vector.
21
Binary Independence Model
Queries: binary vectors of terms. Given query q, for each document d we need to compute p(R|q,d); replace this with computing p(R|q,x), where x is the vector representing d. Since we are interested only in ranking, use odds:
O(R|q,x) = p(R|q,x) / p(NR|q,x).
22
Binary Independence Model
O(R|q,x) = [p(R|q) / p(NR|q)] · [p(x|R,q) / p(x|NR,q)] = O(R|q) · p(x|R,q) / p(x|NR,q).
The first factor is constant for each query; the second needs estimation. Using the Independence Assumption:
p(x|R,q) / p(x|NR,q) = Π_i p(x_i|R,q) / p(x_i|NR,q).
So:
O(R|q,x) = O(R|q) · Π_i p(x_i|R,q) / p(x_i|NR,q).
23
Binary Independence Model
Since each x_i is either 0 or 1:
O(R|q,x) = O(R|q) · Π_{x_i=1} p(x_i=1|R,q)/p(x_i=1|NR,q) · Π_{x_i=0} p(x_i=0|R,q)/p(x_i=0|NR,q).
Let p_i = p(x_i=1|R,q) and r_i = p(x_i=1|NR,q).
Assume p_i = r_i for all terms not occurring in the query (q_i = 0).
24
Binary Independence Model
O(R|q,x) = O(R|q) · Π_{x_i=q_i=1} (p_i / r_i) · Π_{x_i=0, q_i=1} (1 - p_i)/(1 - r_i)
[all matching terms] [non-matching query terms]
= O(R|q) · Π_{x_i=q_i=1} [p_i (1 - r_i)] / [r_i (1 - p_i)] · Π_{q_i=1} (1 - p_i)/(1 - r_i)
[all matching terms] [all query terms]
25
Binary Independence Model
Retrieval Status Value:
RSV = log Π_{x_i=q_i=1} [p_i (1 - r_i)] / [r_i (1 - p_i)] = Σ_{x_i=q_i=1} log [p_i (1 - r_i)] / [r_i (1 - p_i)].
The product over matching terms is the only quantity to be estimated for rankings; the remaining factor, Π_{q_i=1} (1 - p_i)/(1 - r_i), is constant for each query.
26
Binary Independence Model
All boils down to computing the RSV. Define c_i = log [p_i (1 - r_i)] / [r_i (1 - p_i)], so that RSV = Σ_{x_i=q_i=1} c_i. So, how do we compute the c_i's from our data?
27
Binary Independence Model
Estimating RSV coefficients. For each term i, look at the following table of document counts (N documents in total, S of them relevant; n documents contain term i, s of those are relevant):

              relevant    non-relevant     total
x_i = 1       s           n - s            n
x_i = 0       S - s       N - n - S + s    N - n
total         S           N - S            N

Estimates: p_i ≈ s / S, r_i ≈ (n - s) / (N - S), and hence
c_i ≈ log [ s / (S - s) ] / [ (n - s) / (N - n - S + s) ].
28
PRP and BIR: The lessons
Getting reasonable approximations of probabilities is possible. Simple methods work only with restrictive assumptions:
term independence;
terms not in the query do not affect the outcome;
Boolean representation of documents/queries;
document relevance values are independent.
Some of these assumptions can be removed.
29
Probabilistic weighting scheme
The log of a ratio of probabilities may go to positive or negative infinity when any count is zero, so add 0.5 to each term of the estimate:
c_i ≈ log [ (s + 0.5) / (S - s + 0.5) ] / [ (n - s + 0.5) / (N - n - S + s + 0.5) ].
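A minimal Python sketch of the smoothed estimator and the resulting RSV; the counts (N, n, S, s) follow the table two slides back, while the function names and corpus statistics here are hypothetical:

```python
import math

def term_weight(N, n, S, s):
    """Smoothed relevance weight c_i (0.5 added to each count).

    N: total documents, n: documents containing term i,
    S: relevant documents, s: relevant documents containing term i.
    """
    p_i = (s + 0.5) / (S + 1.0)           # estimate of p(x_i = 1 | R)
    r_i = (n - s + 0.5) / (N - S + 1.0)   # estimate of p(x_i = 1 | NR)
    return math.log((p_i * (1 - r_i)) / (r_i * (1 - p_i)))

def rsv(doc_terms, query_terms, stats):
    """Retrieval Status Value: sum of c_i over matching query terms."""
    return sum(term_weight(*stats[t]) for t in query_terms & doc_terms)

# hypothetical per-term statistics (N, n, S, s)
stats = {"probabilistic": (1000, 50, 10, 6), "retrieval": (1000, 200, 10, 7)}
print(rsv({"probabilistic", "retrieval", "model"},
          {"probabilistic", "retrieval"}, stats))
```

Expanding p_i / (1 - p_i) = (s + 0.5) / (S - s + 0.5) and r_i / (1 - r_i) = (n - s + 0.5) / (N - n - S + s + 0.5) recovers exactly the smoothed formula above.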
30
Probabilistic weighting scheme [S.Robertson]
In general form, the weighting function is
w(t) = [ (k1 + 1) f / (K + f) ] · log{ [ (s + 0.5) / (S - s + 0.5) ] / [ (n - s + 0.5) / (N - n - S + s + 0.5) ] } · [ (k3 + 1) q / (k3 + q) ],
where k1 and k3 are constants and K is a length-dependent quantity fixed by the particular BM variant below.
q : within-query frequency (wqf)
f : within-document frequency (wdf)
n : number of documents in the collection indexed by this term
N : total number of documents in the collection
s : number of relevant documents indexed by this term
S : total number of relevant documents
L : normalised document length (i.e. the length of this document divided by the average length of documents in the collection)
31
Probabilistic weighting scheme [S.Robertson] BM11
Stephen Robertson's BM11 uses the general form for weights with K = k1 · L, but adds an extra item to the sum of term weights to give the overall document score:
k2 · nq · (1 - L) / (1 + L).
This term is 0 when L = 1. Here nq is the number of terms in the query (the query length) and k2 is another constant.
32
Probabilistic weighting scheme
BM15: the same as BM11, with the term k1 · L in the denominator of the general form replaced by k1 (i.e. K = k1).
33
Probabilistic weighting scheme
BM25: combines BM11 and BM15 with a scaling factor b, which turns BM15 into BM11 as it moves from 0 to 1. The general form is used with
K = k1 · ((1 - b) + b · L),
so b = 1 gives BM11 and b = 0 gives BM15. Default values for the constants are chosen empirically.
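A compact sketch of one BM25 term weight in the notation above. It assumes no relevance information (s = S = 0), which reduces the log factor to a pure IDF-like weight; the constants and statistics are illustrative, not prescribed by the slides:

```python
import math

def bm25_weight(f, q, n, N, L, k1=1.2, k3=7.0, b=0.75):
    """One term's BM25 contribution to a document score.

    f: within-document frequency, q: within-query frequency,
    n: documents containing the term, N: collection size,
    L: normalised document length.  Constants are illustrative.
    """
    idf = math.log((N - n + 0.5) / (n + 0.5))   # log factor with s = S = 0
    K = k1 * ((1 - b) + b * L)                  # b = 1 -> BM11, b = 0 -> BM15
    return idf * (k1 + 1) * f / (K + f) * (k3 + 1) * q / (k3 + q)

# two-term query against one document (hypothetical statistics)
score = (bm25_weight(f=3, q=1, n=50, N=10_000, L=0.9)
         + bm25_weight(f=1, q=1, n=400, N=10_000, L=0.9))
print(round(score, 3))
```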
34
Bayesian Networks
35
A possible way could be Bayesian Networks
Two assumptions of the BIM can be questioned:
1. Independence assumption: the term variables x_i are assumed independent. Can they be dependent?
2. Binary assumption: x_i = 0 or 1. Can it be 0, 1, 2, ….., n?
A possible way to handle both could be Bayesian Networks.
36
Bayesian Network (BN) Basics
A Bayesian Network (BN) is a hybrid system of probability theory and graph theory. Graph theory gives the user an interface for modeling highly interacting variables; probability theory ensures that the system as a whole is consistent, and provides ways to interface models to data. A BN models the JPD (joint probability distribution) in a compact way, using a graph to reflect conditional independence relationships.
BN : Representation. Nodes are random variables; arcs encode conditional independence (or causality). Undirected arcs give MRFs; directed arcs give BNs.
BN : Advantages. An arc from node A to B implies that A causes B; compact representation of the JPD of the nodes; easier to learn (fit to data).
37
The joint probability distribution over U can be decomposed by the global structure G as
P(U | B) = Π_i P(x_i | Par(x_i), θ_i, M_i, G)
G = global structure (DAG, Directed Acyclic Graph) that contains a node for each variable x_i ∈ U, i = 1, 2, ….., n; edges represent the probabilistic dependencies between the variables x_i ∈ U.
M = set of local structures {M_1, M_2, …., M_n}: n mappings for n variables. M_i maps each value of {x_i, Par(x_i)} to a parameter θ_i. Here Par(x_i) denotes the set of parent nodes of x_i in G.
(Figure: an example DAG over x_1, x_2, x_3, x_4.)
38
BN : An Example (by Kevin Murphy): the cloudy/sprinkler/rain/wet-grass network.

Cloudy:       p(C=T) = p(C=F) = 1/2
Sprinkler:    C = T: p(S=On) = 0.1, p(S=Off) = 0.9
              C = F: p(S=On) = 0.5, p(S=Off) = 0.5
Rain:         C = T: p(R=Yes) = 0.8, p(R=No) = 0.2
              C = F: p(R=Yes) = 0.2, p(R=No) = 0.8
Wet Grass:    S=On,  R=Yes: p(W=Yes) = 0.99, p(W=No) = 0.01
              S=On,  R=No : p(W=Yes) = 0.9,  p(W=No) = 0.1
              S=Off, R=Yes: p(W=Yes) = 0.9,  p(W=No) = 0.1
              S=Off, R=No : p(W=Yes) = 0.0,  p(W=No) = 1.0

By the chain rule, p(C,S,R,W) = p(C) p(S|C) p(R|C,S) p(W|C,S,R); using the conditional independences encoded by the graph, this reduces to p(C,S,R,W) = p(C) p(S|C) p(R|C) p(W|S,R).
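A short Python sketch that encodes these CPTs and answers a query by brute-force enumeration over all 16 assignments; the variable names are mine, and the computed value agrees with Murphy's published result for this network:

```python
from itertools import product

# CPTs of the sprinkler network (True stands for T / On / Yes)
p_C = {True: 0.5, False: 0.5}
p_S = {True: 0.1, False: 0.5}                    # p(S=On  | C)
p_R = {True: 0.8, False: 0.2}                    # p(R=Yes | C)
p_W = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.9, (False, False): 0.0}  # p(W=Yes | S, R)

def joint(c, s, r, w):
    """p(C,S,R,W) = p(C) p(S|C) p(R|C) p(W|S,R)."""
    pc = p_C[c]
    ps = p_S[c] if s else 1 - p_S[c]
    pr = p_R[c] if r else 1 - p_R[c]
    pw = p_W[(s, r)] if w else 1 - p_W[(s, r)]
    return pc * ps * pr * pw

# sanity check: the factorised joint sums to 1
assert abs(sum(joint(*v) for v in product([True, False], repeat=4)) - 1) < 1e-9

# inference by enumeration: p(R=Yes | W=Yes)
num = sum(joint(c, s, True, True) for c, s in product([True, False], repeat=2))
den = sum(joint(c, s, r, True) for c, s, r in product([True, False], repeat=3))
print(round(num / den, 3))   # ~0.708: rain is the likelier cause of wet grass
```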
39
BN : As Model
A BN is a hybrid system of probability theory and graph theory: graph theory gives the user an interface to model the highly interacting variables, and probability theory provides ways to interface the model to data.
BN : Notations
B = (S, Θ): S is the structure of the network, and Θ is the set of parameters that encode the local probability distributions. S again has two parts, G and M: G is the global structure (DAG), and M is the set of mappings for the n variables (arcs).
40
BN : Learning
Two targets: the structure (a DAG) and the parameters (CPDs, conditional probability distributions). The structure may be known or unknown; parameters can only be learnt if the structure is either known or has been learnt earlier.
41
BN : Learning
Structure known, full data observed (nothing missing): parameter learning by MLE or MAP.
Structure known, full data NOT observed (data missing): parameter learning by EM.
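A minimal sketch of the fully observed case: with a known structure, the MLE of each CPT entry is just a normalised count of the child's value per parent configuration (toy data and variable names are hypothetical):

```python
from collections import Counter

# complete observations of (C, S), e.g. cloudy -> sprinkler
data = [(True, False), (True, False), (True, True), (False, True),
        (False, True), (False, False), (True, False), (False, True)]

counts = Counter(data)
for c in (True, False):
    n_c = counts[(c, True)] + counts[(c, False)]   # observations with C = c
    print(f"p(S=T | C={c}) = {counts[(c, True)] / n_c:.3f}")   # MLE estimate
```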
42
BN : Learning Structure
Goal: find the best DAG that fits the data. Objective function: P(G | D) ∝ P(D | G) P(G), since P(D) is constant and independent of G. Searching over DAGs is NP-hard. Performance criteria used: AIC, BIC.
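To make the scoring idea concrete, here is a toy BIC comparison of two candidate structures over two binary variables, the empty graph versus C -> S, with BIC = log-likelihood - (d/2) · log N for d free parameters (data and structures are hypothetical):

```python
import math
from collections import Counter

data = [(True, False), (True, False), (True, True), (False, True),
        (False, True), (False, False), (True, False), (False, True)]
N = len(data)
cnt = Counter(data)
f_c = sum(c for c, _ in data) / N      # empirical p(C=T)
f_s = sum(s for _, s in data) / N      # empirical p(S=T)

def bic(loglik, d):
    return loglik - 0.5 * d * math.log(N)

# Structure 1: C and S independent (d = 2 parameters)
ll1 = sum(math.log(f_c if c else 1 - f_c) + math.log(f_s if s else 1 - f_s)
          for c, s in data)

# Structure 2: C -> S (d = 3 parameters: p(C), p(S|C=T), p(S|C=F))
def p_s_given_c(c):
    return cnt[(c, True)] / (cnt[(c, True)] + cnt[(c, False)])

ll2 = sum(math.log(f_c if c else 1 - f_c)
          + math.log(p_s_given_c(c) if s else 1 - p_s_given_c(c))
          for c, s in data)

print(bic(ll1, 2), bic(ll2, 3))   # higher BIC score = preferred DAG
```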
43
References (basics)
S. E. Robertson, "The probability ranking principle in IR", Journal of Documentation, 33, 1977.
K. Sparck Jones, S. Walker and S. E. Robertson, "A probabilistic model of information retrieval: development and comparative experiments. Part 1", Information Processing and Management, 36, 2000.
K. Sparck Jones, S. Walker and S. E. Robertson, "A probabilistic model of information retrieval: development and comparative experiments. Part 2", Information Processing and Management, 36, 2000.
S. E. Robertson and H. Zaragoza, "The probabilistic relevance framework: BM25 and beyond", Foundations and Trends in Information Retrieval, 3, 2009.
S. E. Robertson, C. J. van Rijsbergen and M. F. Porter, "Probabilistic models of indexing and searching", in Information Retrieval Research, Oddy et al. (Eds.), 35-56, 1981.
N. Fuhr and C. Buckley, "A probabilistic learning approach for document indexing", ACM Transactions on Information Systems, 9, 1991.
H. R. Turtle and W. B. Croft, "Evaluation of an inference network-based retrieval model", ACM Transactions on Information Systems, 9, 1991.
44
Thank you. Discussions?