4. Boolean and Vector Space Retrieval Models


4. Boolean and Vector Space Retrieval Models
Many slides in this section are adapted from Prof. Joydeep Ghosh (UT ECE), who in turn adapted them from Prof. Dik Lee (Univ. of Science and Tech, Hong Kong). These notes are based, in part, on notes by Dr. Raymond J. Mooney at the University of Texas at Austin.

Retrieval Models
A retrieval model specifies the details of:
- Document representation
- Query representation
- Retrieval function
It also determines a notion of relevance, which can be binary or continuous (i.e., ranked retrieval).

Classes of Retrieval Models
- Boolean models (set theoretic)
- Vector space models (statistical/algebraic)
  - Latent Semantic Indexing
- Probabilistic models

Other Model Dimensions
- Logical view of documents: index terms, full text, full text + structure (e.g., hypertext)
- User task: retrieval, browsing

Common Preprocessing Steps
- Strip unwanted characters/markup (e.g., HTML tags, punctuation, numbers).
- Break the text into tokens (keywords) on whitespace.
- Stem tokens to "root" words (computational -> comput).
- Remove common stopwords (e.g., a, the, it).
- Detect common phrases (possibly using a domain-specific dictionary).
- Build an inverted index (keyword -> list of documents containing it), as sketched below.
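
As a rough illustration of this pipeline, here is a minimal Python sketch. The stopword list, the toy stem() rule, and the two sample documents are invented for illustration; a real system would use a proper stemmer (e.g., Porter's) and a fuller stopword list.

import re
from collections import defaultdict

STOPWORDS = {"a", "the", "it", "and", "or", "not", "is", "in", "of"}

def stem(token):
    # Toy suffix-stripping rule, for illustration only.
    for suffix in ("ational", "ation", "ing", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[:-len(suffix)]
    return token

def preprocess(text):
    text = re.sub(r"<[^>]+>", " ", text)           # strip HTML-like markup
    tokens = re.findall(r"[a-z]+", text.lower())   # drop punctuation/numbers, tokenize
    return [stem(t) for t in tokens if t not in STOPWORDS]

def build_inverted_index(docs):
    index = defaultdict(set)                       # keyword -> ids of docs containing it
    for doc_id, text in docs.items():
        for term in preprocess(text):
            index[term].add(doc_id)
    return index

docs = {"d1": "The <b>computational</b> cat chased a dog.",
        "d2": "Dogs and cats wear collars."}
print(dict(build_inverted_index(docs)))            # e.g. 'comput' -> {'d1'}, 'cat' -> {'d1', 'd2'}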

1. Boolean Model
- A document is represented as a set of keywords.
- Queries are Boolean expressions of keywords, connected by AND, OR, and NOT, including the use of brackets to indicate scope, e.g. "Brutus AND Caesar AND NOT Calpurnia".
- Output: a document is either relevant or not; there are no partial matches and no ranking.
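
This kind of query maps directly onto set operations over an inverted index. A minimal sketch, assuming an index that maps each term to the set of ids of the documents containing it (the toy index below is invented):

index = {
    "brutus":    {"d1", "d2", "d4"},
    "caesar":    {"d1", "d2", "d3", "d5"},
    "calpurnia": {"d2"},
}
all_docs = {"d1", "d2", "d3", "d4", "d5"}

# "Brutus AND Caesar AND NOT Calpurnia":
# AND -> set intersection, NOT -> difference against the whole collection.
result = index["brutus"] & index["caesar"] & (all_docs - index["calpurnia"])
print(sorted(result))   # ['d1'] -- each document is simply in or out; no ranking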

Boolean Model - Pros
A popular retrieval model because:
- Easy to understand for simple queries.
- Clean formalism.
- It was the primary commercial retrieval tool for over three decades.
- Many professional searchers (e.g., lawyers) still like Boolean queries: you know exactly what you are getting.
- Boolean models can be extended to include ranking (e.g., by chronological order).
- Reasonably efficient implementations are possible for normal queries (via query optimization).

Boolean Model - Cons
- Very rigid: AND means all; OR means any.
- Difficult to express complex user requests.
- Difficult to control the number of documents retrieved: all matched documents will be returned.
- Difficult to rank the output: all matched documents logically satisfy the query equally.
- Difficult to perform relevance feedback: if a document is identified by the user as relevant or irrelevant, how should the query be modified?

Example: (Cat OR Dog) AND (Collar OR Leash). Each of the following term combinations satisfies the query: Cat and Collar, Cat and Leash, Dog and Collar, Dog and Leash (with or without any of the other terms).

Statistical Models
- A document is typically represented as a bag of words (unordered words with their frequencies). A bag is a set that allows multiple occurrences of the same element.
- The user specifies a set of desired terms with optional weights:
  - Weighted query terms: Q = < database 0.5; text 0.8; information 0.2 >
  - Unweighted query terms: Q = < database; text; information >
  - No Boolean conditions are specified in the query.
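
In code, a bag of words is just a multiset that keeps counts but not order; a one-line sketch with an invented sentence:

from collections import Counter

bag = Counter("the dog chased the cat the dog barked".split())
print(bag["the"], bag["dog"])   # 3 2 -- frequencies survive, word order does not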

Statistical Retrieval
- Retrieval is based on the similarity between the query and the documents.
- Output documents are ranked according to their similarity to the query.
- Similarity is based on the occurrence frequencies of keywords in the query and the documents.
- Automatic relevance feedback can be supported:
  - Relevant documents are "added" to the query.
  - Irrelevant documents are "subtracted" from the query.

2. Vector Space Model
- Vocabulary V = the set of terms left after pre-processing the text (tokenization, stop-word removal, stemming, ...).
- Each document or query is represented as a |V| = n dimensional vector dj = [w1j, w2j, ..., wnj], where wij is the weight of term i in document j.
- The terms in V form the orthogonal dimensions of a vector space.
- Document = bag of words: the vector representation does not consider the ordering of words ("John is quicker than Mary" vs. "Mary is quicker than John").

Document Collection
A collection of n documents can be represented in the vector space model by a term-document matrix. An entry in the matrix corresponds to the "weight" of a term in the document; zero means the term has no significance in the document or simply does not occur in it.

        T1    T2   ...  Tt
  D1    w11   w21  ...  wt1
  D2    w12   w22  ...  wt2
  :     :     :         :
  Dn    w1n   w2n  ...  wtn
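
A small sketch of building such a term-document matrix from raw counts; the three toy documents are invented, and in practice the entries would be tf-idf weights rather than raw frequencies:

docs = {
    "D1": "cat collar cat",
    "D2": "dog leash",
    "D3": "cat dog collar leash leash",
}
vocab = sorted({t for text in docs.values() for t in text.split()})

matrix = {d: [text.split().count(term) for term in vocab]
          for d, text in docs.items()}

print(vocab)                  # ['cat', 'collar', 'dog', 'leash']
for d, row in matrix.items():
    print(d, row)             # D1 [2, 1, 0, 0], D2 [0, 0, 1, 1], D3 [1, 1, 1, 2]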

Document Vectors and Indexes
- Conceptually, the index can be viewed as a document-term matrix.
- Each document is represented as an n-dimensional vector (n = number of terms in the dictionary).
- Term weights represent the scalar value of each dimension in a document.
- The inverted file structure is an "implementation model" used in practice to store the information captured in this conceptual representation.
[Slide figure: an example document-term matrix with document ids A-I as rows (each row a document vector) and the dictionary terms nova, galaxy, heat, hollywood, film, role, diet, fur as columns; the entries are normalized term weights, and empty cells mean the term does not occur in that document.]

Issues for the Vector Space Model
- How do we determine the important words in a document? Word sense? Word n-grams (and phrases, idioms, ...) -> terms?
- How do we determine the degree of importance of a term within a document and within the entire collection?
- How do we determine the degree of similarity between a document and the query?
- In the case of the web, what is the collection, and what are the effects of links, formatting information, etc.?

The Vector-Space Model
- Assume t distinct terms remain after preprocessing; call them index terms or the vocabulary.
- These "orthogonal" terms form a vector space with dimensionality t = |vocabulary|.
- Each term i in a document or query j is given a real-valued weight wij.
- Both documents and queries are expressed as t-dimensional vectors: dj = (w1j, w2j, ..., wtj).

Graphic Representation
Example:
  D1 = 2T1 + 3T2 + 5T3
  D2 = 3T1 + 7T2 + T3
  Q  = 0T1 + 0T2 + 2T3
[Slide figure: the three vectors plotted in the 3-dimensional space spanned by axes T1, T2, T3.]
Is D1 or D2 more similar to Q? How do we measure the degree of similarity: distance, angle, projection?

Term Weights: Term Frequency
- More frequent terms in a document are more important, i.e., more indicative of its topic.
- fij = frequency of term i in document j.
- We may want to normalize term frequency (tf) by dividing by the frequency of the most common term in the document: tfij = fij / max_i{fij}.

Term Weights: Inverse Document Frequency
- Terms that appear in many different documents are less indicative of the overall topic.
- dfi = document frequency of term i = number of documents containing term i.
- idfi = inverse document frequency of term i = log2(N / dfi), where N is the total number of documents.
- idf is an indication of a term's discrimination power; the log is used to dampen the effect relative to tf.

Inverse Document Frequency
IDF gives high values for rare words and low values for common words. (Note: log10, rather than log2, is used in the examples here.)
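
To see the trend, a quick sketch computing idf with base-10 logs for an assumed collection of N = 10,000 documents; the document frequencies below are invented to span very common to very rare terms:

import math

N = 10_000
for df in (10_000, 5_000, 20, 1):
    print(df, round(math.log10(N / df), 3))
# 10000 0.0    -> a term occurring in every document has no discrimination power
# 5000  0.301
# 20    2.699
# 1     4.0    -> a term occurring in a single document is maximally discriminative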

TF-IDF Weighting
- A typical combined term importance indicator is tf-idf weighting: wij = tfij * idfi = tfij * log2(N / dfi).
- A term occurring frequently in the document but rarely in the rest of the collection is given a high weight.
- Many other ways of determining term weights have been proposed; experimentally, tf-idf has been found to work well.

Computing TF-IDF -- An Example
Given a document containing terms with the following frequencies: A(3), B(2), C(1).
Assume the collection contains 10,000 documents, and the document frequencies of these terms are: A(50), B(1300), C(250).
Then:
  A: tf = 3/3; idf = log2(10000/50) = 7.6;   tf-idf = 7.6
  B: tf = 2/3; idf = log2(10000/1300) = 2.9; tf-idf = 2.0
  C: tf = 1/3; idf = log2(10000/250) = 5.3;  tf-idf = 1.8
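
The same numbers can be checked with a few lines of Python (tf normalized by the most frequent term in the document, idf with log base 2, as above):

import math

N = 10_000
freq = {"A": 3, "B": 2, "C": 1}        # term frequencies within the document
df   = {"A": 50, "B": 1300, "C": 250}  # document frequencies in the collection

max_f = max(freq.values())
for term in freq:
    tf = freq[term] / max_f
    idf = math.log2(N / df[term])
    print(term, round(tf, 2), round(idf, 1), round(tf * idf, 1))
# A 1.0  7.6 7.6
# B 0.67 2.9 2.0
# C 0.33 5.3 1.8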

Another tf x idf Example
[Slide figure: two term x document matrices over terms T1-T8 and documents Doc 1-Doc 6: the initial term x doc matrix of raw counts (the inverted index), together with a df column and idf = log2(N/df) for each term, and the resulting tf x idf term x doc matrix.]

tf x idf Normalization
- Normalize the term weights (so that longer documents are not unfairly given more weight).
- Here, "normalize" usually means forcing all values to fall within a certain range, usually between 0 and 1 inclusive.
- This is more ad hoc than normalization based on vector norms, but the basic idea is the same: divide each document's term weights by a quantity that reflects the overall size of its weight vector.
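
As a hedged sketch of the idea, here is plain vector-length normalization, which forces every weight into [0, 1]; the slide's own normalization formula may differ in detail, and the raw tf-idf weights below are taken from the earlier A/B/C example:

import math

raw = {"A": 7.6, "B": 2.0, "C": 1.8}             # tf-idf weights from the example above

length = math.sqrt(sum(w * w for w in raw.values()))
normalized = {term: round(w / length, 2) for term, w in raw.items()}
print(normalized)   # {'A': 0.94, 'B': 0.25, 'C': 0.22} -- all values now fall in [0, 1]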

Alternative TF-IDF Weighting Schemes
Many search engines allow different weightings for queries than for documents. A very standard weighting scheme is:
- Document: logarithmic tf, no idf, and cosine normalization.
- Query: logarithmic tf, idf, no normalization.
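
A hedged sketch of this document/query split, assuming base-10 logarithmic tf (1 + log10 f) and a collection of N = 10,000 documents; the counts and document frequencies below are invented:

import math

N = 10_000

def doc_weights(tf_counts):
    # Document side: logarithmic tf, no idf, cosine (length) normalization.
    w = {t: 1 + math.log10(f) for t, f in tf_counts.items() if f > 0}
    length = math.sqrt(sum(v * v for v in w.values()))
    return {t: v / length for t, v in w.items()}

def query_weights(tf_counts, df):
    # Query side: logarithmic tf, idf, no normalization.
    return {t: (1 + math.log10(f)) * math.log10(N / df[t])
            for t, f in tf_counts.items() if f > 0}

d = doc_weights({"database": 3, "text": 1})
q = query_weights({"database": 1, "information": 1}, {"database": 50, "information": 2500})
score = sum(d.get(t, 0.0) * w for t, w in q.items())   # dot product = retrieval score
print(round(score, 2))                                 # 1.91 for these made-up numbers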

Query Vector
- The query vector is typically treated as a document and is also tf-idf weighted.
- An alternative is for the user to supply weights for the given query terms.

Similarity Measure
A similarity measure is a function that computes the degree of similarity between two vectors. Using a similarity measure between the query and each document:
- It is possible to rank the retrieved documents in order of presumed relevance.
- It is possible to enforce a threshold so that the size of the retrieved set can be controlled.

Similarity Measure - Inner Product
- Similarity between the vector for document dj and the query q can be computed as the vector inner product (a.k.a. dot product): sim(dj, q) = dj • q = sum over i of (wij * wiq), where wij is the weight of term i in document j and wiq is the weight of term i in the query.
- For binary vectors, the inner product is the number of matched query terms in the document (the size of the intersection).
- For weighted term vectors, it is the sum of the products of the weights of the matched terms.

Properties of the Inner Product
- The inner product is unbounded.
- It favors long documents with a large number of unique terms.
- It measures how many terms matched, but not how many terms did not match.

Inner Product -- Examples
Binary (vocabulary: architecture, management, information, retrieval, database, computer, text; size of vector = size of vocabulary = 7; a 0 means the corresponding term is not found in the document or query):
  D = 1, 1, 1, 0, 1, 1, 0
  Q = 1, 0, 1, 0, 0, 1, 1
  sim(D, Q) = 3
Weighted:
  D1 = 2T1 + 3T2 + 5T3
  D2 = 3T1 + 7T2 + 1T3
  Q  = 0T1 + 0T2 + 2T3
  sim(D1, Q) = 2*0 + 3*0 + 5*2 = 10
  sim(D2, Q) = 3*0 + 7*0 + 1*2 = 2
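
The weighted example can be verified directly; vectors are written in the order T1, T2, T3, and the binary case is the same computation with 0/1 weights:

def inner(d, q):
    return sum(wd * wq for wd, wq in zip(d, q))

D1 = [2, 3, 5]                  # 2T1 + 3T2 + 5T3
D2 = [3, 7, 1]                  # 3T1 + 7T2 + 1T3
Q  = [0, 0, 2]                  # 0T1 + 0T2 + 2T3
print(inner(D1, Q), inner(D2, Q))   # 10 2

D  = [1, 1, 1, 0, 1, 1, 0]      # binary document vector from the example
Qb = [1, 0, 1, 0, 0, 1, 1]      # binary query vector
print(inner(D, Qb))             # 3 = number of matched query terms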

Cosine Similarity Measure
[Slide figure: D1, D2, and Q plotted in the space spanned by t1, t2, t3, with the angle between Q and each document vector marked.]
- Cosine similarity measures the cosine of the angle between two vectors: the inner product normalized by the vector lengths.
  CosSim(dj, q) = (dj • q) / (|dj| |q|) = (sum over i of wij * wiq) / (sqrt(sum over i of wij^2) * sqrt(sum over i of wiq^2))
- Example:
  D1 = 2T1 + 3T2 + 5T3;  CosSim(D1, Q) = 10 / sqrt((4+9+25)(0+0+4)) = 0.81
  D2 = 3T1 + 7T2 + 1T3;  CosSim(D2, Q) = 2 / sqrt((9+49+1)(0+0+4)) = 0.13
  Q  = 0T1 + 0T2 + 2T3
- D1 is 6 times better than D2 using cosine similarity, but only 5 times better using the inner product.
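
And the cosine figures check out with a direct computation:

import math

def cosine(d, q):
    dot = sum(wd * wq for wd, wq in zip(d, q))
    return dot / (math.sqrt(sum(w * w for w in d)) * math.sqrt(sum(w * w for w in q)))

D1, D2, Q = [2, 3, 5], [3, 7, 1], [0, 0, 2]
print(round(cosine(D1, Q), 2), round(cosine(D2, Q), 2))   # 0.81 0.13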