What is coming…

Today:
- Probabilistic models
- Improving classical models
  - Latent Semantic Indexing
  - Relevance feedback (Chapter 5)

Monday Feb 5:
- Chapter 5 continued

Wednesday Feb 7:
- Web Search Engines
  - Chapter 13 & Google paper

Announcement: Free Food Event

Where & When: GWC 487, Tuesday Feb 6th, 12:15-1:30
What: Pizza, soft drinks
Catch: A pitch on going to graduate school at ASU CSE…
- Meet the admissions committee, some graduate students, and some faculty members
Silver lining: Did we mention pizza?
- Also not as boring as time-share condo presentations. Make sure to come!

Problems with the Vector Model

No semantic basis!
- Keywords are plotted as axes
  - But are they really independent?
  - Are they really orthogonal?
- No support for Boolean queries
  - How do you ask for papers that don't contain a keyword?

Probabilistic Model

Objective: to capture the IR problem using a probabilistic framework.
- Given a user query, there is an ideal answer set
- Querying is the specification of the properties of this ideal answer set (clustering)
- But what are these properties?
  - Guess at the beginning what they could be (i.e., guess an initial description of the ideal answer set)
  - Improve by iteration

Probabilistic Model

- An initial set of documents is retrieved somehow
- The user inspects these docs looking for the relevant ones (in practice, only the top-ranked ones need to be inspected)
- The IR system uses this information to refine the description of the ideal answer set
- By repeating this process, the description of the ideal answer set is expected to improve
  - Keep in mind the need to guess the description of the ideal answer set at the very beginning
  - The description of the ideal answer set is modeled in probabilistic terms

Probabilistic Ranking Principle

Given a user query q and a document dj,
- estimate the probability that the user will find the document dj interesting (i.e., relevant)
  - The model assumes that this probability of relevance depends only on the query and the document representations
  - The ideal answer set is referred to as R and should maximize the probability of relevance; documents in the set R are predicted to be relevant

But,
- how do we compute these probabilities?
- what is the sample space?

The Ranking

Probabilistic ranking is computed as:
- sim(q,dj) = P(dj relevant-to q) / P(dj non-relevant-to q)
  - This is the odds of the document dj being relevant
  - Taking the odds minimizes the probability of an erroneous judgement

Definitions:
- wij ∈ {0,1} (binary index term weights)
- P(R | vec(dj)) : probability that the given doc is relevant
- P(¬R | vec(dj)) : probability that the doc is not relevant

The Ranking

sim(dj,q) = P(R | vec(dj)) / P(¬R | vec(dj))
          = [P(vec(dj) | R) × P(R)] / [P(vec(dj) | ¬R) × P(¬R)]    (by Bayes' rule)
          ≈ P(vec(dj) | R) / P(vec(dj) | ¬R)                       (P(R) and P(¬R) are the same for all docs)

- P(vec(dj) | R) : probability of randomly selecting the document dj from the set R of relevant documents

Bayesian Inference

Schools of thought in probability:
- frequentist
- epistemological

Basic Probability

Basic axioms:
- 0 ≤ P(A) ≤ 1
- P(certain) = 1
- P(A ∨ B) = P(A) + P(B) if A and B are mutually exclusive

Basic Probability

Conditioning:
- P(A) = P(A ∧ B) + P(A ∧ ¬B)
- P(A) = Σi P(A ∧ Bi), where the Bi form an exhaustive and mutually exclusive set of events
- P(A) + P(¬A) = 1

Independence:
- P(A|K) is the belief in A given the knowledge K
- if P(A|B) = P(A), we say A and B are independent
- if P(A|B ∧ C) = P(A|C), we say A and B are conditionally independent given C
- P(A ∧ B) = P(A|B) P(B)
- P(A) = Σi P(A|Bi) P(Bi)
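
To make these identities concrete, here is a minimal Python check that the total-probability decomposition agrees with direct marginalization; the joint distribution over the hypothetical events A and B is made up for illustration:

```python
# Minimal sketch: checking the law of total probability on a toy joint
# distribution; all event names and numbers are made up for illustration.

# Joint distribution over A in {0,1} and B in {0,1,2}; the values of B are
# exhaustive and mutually exclusive, as the slide requires.
joint = {
    (1, 0): 0.10, (1, 1): 0.15, (1, 2): 0.05,
    (0, 0): 0.20, (0, 1): 0.25, (0, 2): 0.25,
}

def p_b(b):
    return sum(p for (a, bi), p in joint.items() if bi == b)

def p_a_given_b(b):
    return joint[(1, b)] / p_b(b)

# P(A) by direct marginalization ...
p_a_direct = sum(p for (a, b), p in joint.items() if a == 1)
# ... and via total probability: P(A) = sum_i P(A|Bi) P(Bi)
p_a_total = sum(p_a_given_b(b) * p_b(b) for b in (0, 1, 2))

assert abs(p_a_direct - p_a_total) < 1e-12
print(p_a_direct)  # 0.30
```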

Bayesian Inference

Bayes' rule, the heart of Bayesian techniques:

P(H|e) = P(e|H) P(H) / P(e)

where H is a hypothesis and e is the evidence:
- P(H) : prior probability
- P(H|e) : posterior probability
- P(e|H) : probability of e if H is true
- P(e) : a normalizing constant; hence we often write P(H|e) ∝ P(e|H) P(H)
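
A tiny worked example may help. The sketch below plugs made-up numbers into Bayes' rule, reading H as "the document is relevant" and e as "the document contains the query term":

```python
# A tiny numeric illustration of Bayes' rule; all numbers are hypothetical.

p_h = 0.2            # prior P(H)
p_e_given_h = 0.7    # P(e|H)
p_e_given_not_h = 0.1

# Normalizing constant P(e) via total probability
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior P(H|e) = P(e|H) P(H) / P(e)
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))  # 0.636: the evidence raised the belief in H
```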

The Ranking

sim(dj,q) ≈ P(vec(dj) | R) / P(vec(dj) | ¬R)

where vec(dj) is of the form (k1, ¬k2, k3, ..., kt). Using the pairwise independence assumption among keywords:

sim(dj,q) ≈ [ Π P(ki | R) × Π P(¬ki | R) ] / [ Π P(ki | ¬R) × Π P(¬ki | ¬R) ]

where the products over P(ki | ·) run over keywords present in dj and the products over P(¬ki | ·) run over keywords NOT present in dj.

- P(ki | R) : probability that the index term ki is present in a document randomly selected from the set R of relevant documents

The Ranking

Taking the logarithm and dropping factors that are constant for all documents:

sim(dj,q) ≈ log { [ Π P(ki | R) × Π P(¬ki | R) ] / [ Π P(ki | ¬R) × Π P(¬ki | ¬R) ] }
          ≈ Σi wiq × wij × ( log [ P(ki | R) / P(¬ki | R) ] + log [ P(¬ki | ¬R) / P(ki | ¬R) ] )

where P(¬ki | R) = 1 − P(ki | R) and P(¬ki | ¬R) = 1 − P(ki | ¬R).

The Initial Ranking

sim(dj,q) ≈ Σi wiq × wij × ( log [ P(ki | R) / P(¬ki | R) ] + log [ P(¬ki | ¬R) / P(ki | ¬R) ] )

How do we obtain the probabilities P(ki | R) and P(ki | ¬R)? Estimates based on assumptions:
- P(ki | R) = 0.5
- P(ki | ¬R) = ni / N, where ni is the number of docs that contain ki
- Use this initial guess to retrieve an initial ranking
- Improve upon this initial ranking
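
The following is a minimal sketch of this initial ranking over a toy collection; the documents, the query, and the binary weights are all invented for illustration:

```python
import math

# Minimal sketch of the initial probabilistic ranking under the starting
# estimates P(ki|R) = 0.5 and P(ki|not R) = ni/N. Toy data, binary weights.

docs = {
    "d1": {"bayes", "network", "retrieval"},
    "d2": {"vector", "model", "retrieval"},
    "d3": {"bayes", "inference"},
    "d4": {"boolean", "model"},
    "d5": {"vector", "space"},
}
query = {"bayes", "retrieval"}
N = len(docs)

# ni = number of documents containing index term ki
n = {}
for terms in docs.values():
    for t in terms:
        n[t] = n.get(t, 0) + 1

def term_weight(p_rel, p_nonrel):
    # log-odds contribution of one index term (the formula above)
    return (math.log(p_rel / (1 - p_rel))
            + math.log((1 - p_nonrel) / p_nonrel))

def initial_sim(doc_terms):
    # wiq * wij = 1 only for terms shared by the query and the document
    return sum(term_weight(0.5, n[ki] / N) for ki in query & doc_terms)

for d in sorted(docs, key=lambda d: -initial_sim(docs[d])):
    print(d, round(initial_sim(docs[d]), 3))
```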

Improving the Initial Ranking n sim(dj,q)~ ~  wiq * wij * (log P(ki | R) + log P(ki |  R) ) P(  ki | R) P(  ki |  R) n Let u V : set of docs initially retrieved u Vi : subset of docs retrieved that contain ki n Reevaluate estimates: u P(ki | R) = Vi V u P(ki |  R) = ni - Vi N - V n Repeat recursively Relevance Feedback..

Improving the Initial Ranking n sim(dj,q)~ ~  wiq * wij * (log P(ki | R) + log P(ki |  R) ) P(  ki | R) P(  ki |  R) n To avoid problems with V=1 and Vi=0: u P(ki | R) = Vi V + 1 u P(ki |  R) = ni - Vi N - V + 1 n Also, u P(ki | R) = Vi + ni/N V + 1 u P(ki |  R) = ni - Vi + ni/N N - V + 1 Relevance Feedback..

Pluses and Minuses

Advantages:
- Docs are ranked in decreasing order of their probability of relevance

Disadvantages:
- need to guess the initial estimates for P(ki | R)
- the method does not take tf and idf factors into account

Brief Comparison of Classic Models

- The Boolean model does not provide for partial matches and is considered the weakest classic model
- Salton and Buckley ran a series of experiments indicating that, in general, the vector model outperforms the probabilistic model on general collections
- This also seems to be the view of the research community

Alternative Probabilistic Models

Probability theory:
- Semantically clear
- Computationally clumsy

Why Bayesian networks?
- A clear formalism for combining evidence
- Modularize the world (dependencies)

Bayesian network models for IR:
- Inference Network (Turtle & Croft, 1991)
- Belief Network (Ribeiro-Neto & Muntz, 1996)

Bayesian Networks

Definition: Bayesian networks are directed acyclic graphs (DAGs) in which the nodes represent random variables, the arcs portray causal relationships between these variables, and the strengths of these causal influences are expressed by conditional probabilities.

Bayesian Networks

- yi : parent nodes (in this case, root nodes)
- x : child node; the yi cause x
- Y : the set of parents of x

The influence of Y on x can be quantified by any function F(x,Y) such that Σx F(x,Y) = 1 and 0 ≤ F(x,Y) ≤ 1. For example, F(x,Y) = P(x|Y).

Bayesian Networks

Given the dependencies declared in a Bayesian network, the joint probability can be computed as a product of local conditional probabilities. For example,

P(x1, x2, x3, x4, x5) = P(x1) P(x2 | x1) P(x3 | x1) P(x4 | x2, x3) P(x5 | x3)

where P(x1) is the prior probability of the root node.
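
As an illustration, the sketch below evaluates this factorized joint for the five-node network; the conditional probability tables are made-up numbers, and the sanity check confirms the product defines a proper distribution:

```python
from itertools import product

# Minimal sketch: evaluating the factorized joint
#   P(x1,...,x5) = P(x1) P(x2|x1) P(x3|x1) P(x4|x2,x3) P(x5|x3)
# for binary variables. All CPT numbers are made up for illustration.

def bern(p1, x):
    # probability of outcome x for a binary variable with P(x=1) = p1
    return p1 if x == 1 else 1 - p1

def joint(x1, x2, x3, x4, x5):
    p = bern(0.6, x1)                                    # P(x1)
    p *= bern({1: 0.7, 0: 0.2}[x1], x2)                  # P(x2 | x1)
    p *= bern({1: 0.4, 0: 0.5}[x1], x3)                  # P(x3 | x1)
    p *= bern({(1, 1): 0.9, (1, 0): 0.6,
               (0, 1): 0.5, (0, 0): 0.1}[(x2, x3)], x4)  # P(x4 | x2, x3)
    p *= bern({1: 0.8, 0: 0.3}[x3], x5)                  # P(x5 | x3)
    return p

# Sanity check: the joint sums to 1 over all 2^5 assignments
total = sum(joint(*xs) for xs in product((0, 1), repeat=5))
assert abs(total - 1.0) < 1e-12
print(joint(1, 1, 0, 1, 0))  # probability of one particular assignment
```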

Bayesian Networks

In a Bayesian network, each variable x is conditionally independent of all its non-descendants given its parents. For example:

P(x4, x5 | x2, x3) = P(x4 | x2, x3) P(x5 | x3)
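
A quick numeric check of this claim, reusing the factorized joint (and made-up CPTs) from the previous sketch: the conditional P(x4, x5 | x2, x3) computed from the full joint equals the product P(x4 | x2, x3) P(x5 | x3):

```python
from itertools import product

def bern(p1, x):
    return p1 if x == 1 else 1 - p1

def joint(x1, x2, x3, x4, x5):
    p = bern(0.6, x1)
    p *= bern({1: 0.7, 0: 0.2}[x1], x2)
    p *= bern({1: 0.4, 0: 0.5}[x1], x3)
    p *= bern({(1, 1): 0.9, (1, 0): 0.6,
               (0, 1): 0.5, (0, 0): 0.1}[(x2, x3)], x4)
    p *= bern({1: 0.8, 0: 0.3}[x3], x5)
    return p

VARS = ("x1", "x2", "x3", "x4", "x5")

def marg(fixed):
    # Sum the joint over every variable not pinned down in `fixed`
    total = 0.0
    for xs in product((0, 1), repeat=5):
        asg = dict(zip(VARS, xs))
        if all(asg[v] == val for v, val in fixed.items()):
            total += joint(*xs)
    return total

cond = {"x2": 1, "x3": 0}
lhs = marg({**cond, "x4": 1, "x5": 1}) / marg(cond)
rhs = ((marg({**cond, "x4": 1}) / marg(cond))
       * (marg({"x3": 0, "x5": 1}) / marg({"x3": 0})))
assert abs(lhs - rhs) < 1e-9
print(round(lhs, 6))  # 0.18 with these CPTs
```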

An Example Bayes Net

Typically, networks written in the causal direction wind up being the most compact: they need the fewest probabilities to be specified.

Two Models

- The Inference Network model
- The Belief Network model

Comparison

Inference Network model:
- the first and best known of the two
- used in the INQUERY system

Belief Network model:
- adopts a set-theoretic view
- has a clearly defined sample space
- separates the query and document portions of the network
- is able to reproduce any ranking produced by the Inference Network, while the converse is not true (for example, the ranking of the standard vector model)

Belief Network Model

Like the Inference Network model:
- Epistemological view of the IR problem
- Random variables associated with documents, index terms, and queries

Unlike the Inference Network model:
- Clearly defined sample space
- Set-theoretic view
- Different network topology

Belief Network Model: The Probability Space

Define:
- K = {k1, k2, ..., kt} : the sample space (a concept space)
- u ⊆ K : a subset of K (a concept)
- ki : an index term (an elementary concept)
- k = (k1, k2, ..., kt) : a vector associated with each u such that gi(k) = 1 ⟺ ki ∈ u
- ki : a binary random variable associated with the index term ki (ki = 1 ⟺ gi(k) = 1 ⟺ ki ∈ u)

Belief Network Model: A Set-Theoretic View

Define:
- a document dj and a query q as concepts in K
- a generic concept c in K
- a probability distribution P over K: P(c) = Σu P(c|u) P(u), with P(u) = (1/2)^t

P(c) is the degree of coverage of the space K by c.

Belief Network Model: Network Topology

(Figure: the network topology, with a query side and a document side.)

Belief Network Model: Assumption

P(dj|q) is adopted as the rank of the document dj with respect to the query q. It reflects the degree of coverage provided to the concept dj by the concept q.

Belief Network Model: The Rank of dj

P(dj|q) = P(dj ∧ q) / P(q)
        ∝ P(dj ∧ q)
        ∝ Σu P(dj ∧ q | u) P(u)
        ∝ Σu P(dj | u) P(q | u) P(u)
        ∝ Σk P(dj | k) P(q | k) P(k)

Belief Network Model: For the Vector Model

Define a vector ki given by

ki = k | ((gi(k) = 1) ∧ (∀ j≠i : gj(k) = 0))

i.e., in the state ki only the node ki is active and all the others are inactive.

Belief Network Model: For the Vector Model

Define:

P(q | k) = wi,q / |q|   if k = ki ∧ gi(q) = 1
P(q | k) = 0            if k ≠ ki ∨ gi(q) = 0
P(¬q | k) = 1 − P(q | k)

(wi,q / |q|) is a normalized version of the weight of the index term ki in the query q.

Belief Network Model: For the Vector Model

Define:

P(dj | k) = wi,j / |dj|   if k = ki ∧ gi(dj) = 1
P(dj | k) = 0             if k ≠ ki ∨ gi(dj) = 0
P(¬dj | k) = 1 − P(dj | k)

(wi,j / |dj|) is a normalized version of the weight of the index term ki in the document dj.
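
With these definitions in place, a minimal sketch of the belief-network ranking restricted to the single-term states ki follows. Dropping the constant P(ki), the rank Σi P(dj | ki) P(q | ki) reduces to the cosine of the angle between dj and q, which is how the model reproduces the vector model's ranking. The tf-idf weights below are hypothetical:

```python
import math

# Minimal sketch of the belief-network rank over single-term states ki:
#   P(dj|q) ~ sum_i P(dj|ki) P(q|ki) P(ki)
# with P(q|ki) = w_iq/|q|, P(dj|ki) = w_ij/|dj|, and P(ki) constant
# (dropped). The term weights are made up for illustration.

terms = ["bayes", "network", "retrieval", "vector"]
q_w = {"bayes": 1.0, "retrieval": 0.5}                    # w_iq
d_w = {
    "d1": {"bayes": 0.8, "network": 0.6, "retrieval": 0.4},
    "d2": {"vector": 0.9, "retrieval": 0.7},
}

def norm(w):
    return math.sqrt(sum(v * v for v in w.values()))

def rank(dj):
    nq, nd = norm(q_w), norm(d_w[dj])
    return sum((q_w.get(t, 0.0) / nq) * (d_w[dj].get(t, 0.0) / nd)
               for t in terms)

for dj in d_w:
    print(dj, round(rank(dj), 4))  # proportional to cosine(q, dj)
```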

Inference Network Model

- Epistemological view of the IR problem
- Random variables associated with documents, index terms, and queries
- A random variable associated with a document dj represents the event of observing that document

Inference Network Model

Nodes:
- documents (dj)
- index terms (ki)
- queries (q, q1, and q2)
- user information need (I)

Edges:
- edges from dj to its index term nodes ki indicate that observing dj increases the belief in the variables ki

Inference Network Model

In the example network:
- dj has index terms k2, ki, and kt
- q has index terms k1, k2, and ki
- q1 and q2 model a Boolean formulation: q1 = ((k1 ∧ k2) ∨ ki); I = (q ∨ q1)

Inference Network Model

Definitions:
- ki, dj, and q are random variables
- k = (k1, k2, ..., kt) is a t-dimensional vector; each ki ∈ {0, 1}, so k has 2^t possible states
- dj ∈ {0, 1}; q ∈ {0, 1}
- The rank of a document dj is computed as P(q ∧ dj), where q and dj are short representations for q = 1 and dj = 1
- dj stands for a state where dj = 1 and ∀ l≠j : dl = 0, because we observe one document at a time

Inference Network Model

P(q ∧ dj) = Σk P(q ∧ dj | k) P(k)
          = Σk P(q ∧ dj ∧ k)
          = Σk P(q | dj ∧ k) P(dj ∧ k)
          = Σk P(q | k) P(k | dj) P(dj)

P(¬(q ∧ dj)) = 1 − P(q ∧ dj)

Inference Network Model

Since the instantiation of dj makes all index term nodes mutually independent, P(k | dj) factors into a product:

P(q ∧ dj) = Σk [ P(q | k) ( Π_{i | gi(k)=1} P(ki | dj) ) ( Π_{i | gi(k)=0} P(¬ki | dj) ) P(dj) ]

Recall that gi(k) = 1 if ki = 1 in the vector k, and 0 otherwise.

Inference Network Model

The prior probability P(dj) reflects the probability associated with the event of observing a given document dj:
- Uniform over N documents: P(dj) = 1/N, P(¬dj) = 1 − 1/N
- Based on the norm of the vector dj: P(dj) = 1/|dj|, P(¬dj) = 1 − 1/|dj|

Inference Network Model: For the Boolean Model

P(dj) = 1/N

P(ki | dj) = 1 if gi(dj) = 1, 0 otherwise
P(¬ki | dj) = 1 − P(ki | dj)

Only the nodes associated with the index terms of the document dj are activated.

Inference Network Model: For the Boolean Model

P(q | k) = 1 if ∃ qcc | (qcc ∈ qdnf) ∧ (∀ ki : gi(k) = gi(qcc)), 0 otherwise
P(¬q | k) = 1 − P(q | k)

That is, one of the conjunctive components of the query must be matched by the active index terms in k.
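
A minimal sketch of this Boolean evaluation follows. Since P(ki | dj) ∈ {0, 1}, the only state k with nonzero P(k | dj) is the one activating exactly the index terms of dj, so P(q ∧ dj) = P(q | k_dj) · (1/N). The documents and the DNF query below are hypothetical:

```python
# Minimal sketch of the Boolean model inside the inference network.
# P(q ∧ dj) = P(q | k_dj) * (1/N), where k_dj activates exactly dj's terms
# and P(q | k_dj) = 1 iff some conjunctive component of q's DNF is matched.
# Documents and query are made up for illustration.

docs = {
    "d1": {"k1", "k2", "ki"},
    "d2": {"k2", "kt"},
    "d3": {"ki"},
}
N = len(docs)

# q1 = (k1 AND k2) OR ki, in disjunctive normal form: a list of
# conjunctive components, each a set of required terms
q_dnf = [{"k1", "k2"}, {"ki"}]

def p_q_and_dj(doc_terms):
    matched = any(cc <= doc_terms for cc in q_dnf)  # subset test
    return (1.0 if matched else 0.0) * (1.0 / N)

for d, terms in docs.items():
    print(d, p_q_and_dj(terms))  # d1 and d3 match; d2 does not
```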

Inference Network Model: For a tf-idf Ranking Strategy

P(dj) = 1 / |dj|
P(¬dj) = 1 − 1 / |dj|

The prior probability reflects the importance of document normalization.

Inference Network Model: For a tf-idf Ranking Strategy

P(ki | dj) = fi,j
P(¬ki | dj) = 1 − fi,j

The relevance of an index term ki is determined by its normalized term-frequency factor fi,j = freqi,j / maxl freql,j.

Inference Network Model: For a tf-idf Ranking Strategy

Define a vector ki given by

ki = k | ((gi(k) = 1) ∧ (∀ j≠i : gj(k) = 0))

i.e., in the state ki only the node ki is active and all the others are inactive.

Inference Network Model: For a tf-idf Ranking Strategy

P(q | k) = idfi   if k = ki ∧ gi(q) = 1
P(q | k) = 0      if k ≠ ki ∨ gi(q) = 0
P(¬q | k) = 1 − P(q | k)

We can thus sum up the individual contributions of each index term, weighted by its normalized idf.

Inference Network Model: For a tf-idf Ranking Strategy

Since P(q|k) = 0 for all k ≠ ki, we can rewrite P(q ∧ dj) as

P(q ∧ dj) = Σ_ki [ P(q | ki) P(ki | dj) ( Π_{l≠i} P(¬kl | dj) ) P(dj) ]
          = ( Π_l P(¬kl | dj) ) P(dj) Σ_ki [ P(ki | dj) P(q | ki) / P(¬ki | dj) ]

Inference Network Model: For a tf-idf Ranking Strategy

Applying the previous probabilities, we have

P(q ∧ dj) = Cj × (1/|dj|) × Σi [ fi,j × idfi × (1/(1 − fi,j)) ]

- Cj varies from document to document
- the ranking is distinct from the one provided by the vector model
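
The sketch below computes this rank over a toy corpus. Two choices are assumptions of the sketch, not of the slides: fi,j is damped as freqi,j / (max freq + 1) so that 1 − fi,j never reaches zero, and |dj| is taken as the Euclidean norm of the raw term-frequency vector:

```python
import math

# Minimal sketch of the inference-network tf-idf rank:
#   P(q ∧ dj) = Cj * (1/|dj|) * sum_i f_ij * idf_i / (1 - f_ij)
# with Cj = prod_i P(¬ki|dj). Corpus and frequencies are hypothetical.

corpus = {
    "d1": {"bayes": 3, "network": 1, "retrieval": 2},
    "d2": {"vector": 4, "retrieval": 1},
    "d3": {"bayes": 1, "inference": 2},
}
query = {"bayes", "retrieval"}
N = len(corpus)

df = {}
for freqs in corpus.values():
    for t in freqs:
        df[t] = df.get(t, 0) + 1
idf = {t: math.log(N / df[t]) for t in df}

def f(term, freqs):
    # damped normalized tf; the +1 keeps 1 - f nonzero (an assumption
    # of this sketch, to avoid division by zero for the max-freq term)
    return freqs.get(term, 0) / (max(freqs.values()) + 1)

def rank(dj):
    freqs = corpus[dj]
    doc_norm = math.sqrt(sum(v * v for v in freqs.values()))
    c_j = 1.0
    for t in freqs:
        c_j *= (1 - f(t, freqs))                 # Cj = prod P(¬ki | dj)
    s = sum(f(t, freqs) * idf[t] / (1 - f(t, freqs))
            for t in query if t in freqs)
    return c_j * (1.0 / doc_norm) * s

for dj in corpus:
    print(dj, round(rank(dj), 4))
```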

Inference Network Model: Combining Evidential Sources

Let I = q ∨ q1. Then

P(I ∧ dj) = Σk P(I | k) P(k | dj) P(dj)
          = Σk [1 − P(¬q | k) P(¬q1 | k)] P(k | dj) P(dj)

This may yield retrieval performance that surpasses the retrieval performance of the query nodes in isolation (Turtle & Croft).
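
A one-line illustration of the disjunctive combination for a single state k, with made-up beliefs for the two query nodes:

```python
# Combining two query nodes with I = q ∨ q1 for one state k:
#   P(I | k) = 1 - P(¬q | k) * P(¬q1 | k)
# The per-state beliefs are hypothetical numbers.

p_q_given_k = 0.6    # belief in q for this state k
p_q1_given_k = 0.3   # belief in q1 for the same state

p_i_given_k = 1 - (1 - p_q_given_k) * (1 - p_q1_given_k)
print(p_i_given_k)   # 0.72; never below either source alone
```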

Bayesian Network Models: Computational Costs

- Inference Network model: only one document node is instantiated at a time, so the cost is linear in the number of documents
- Belief Network model: only the states that activate each query term are considered
- The networks impose no additional cost because they contain no cycles

Bayesian Network Models: Impact

The major strength is the combination of distinct evidential sources to support the rank of a given document.