Indexing Implementation and Indexing Models

Presentation on theme: "Indexing Implementation and Indexing Models"— Presentation transcript:

1 Indexing Implementation and Indexing Models
CSC 575 Intelligent Information Retrieval

2 Lexical analysis and stop words
[Diagram of the IR system pipeline: an information need is parsed into a query; the collections are pre-processed into an index; the query is matched against the index and results are ranked into result sets. Key question for this lecture: how is the index constructed?]

3 Indexing Implementation
Bitmaps: for each term, allocate a vector with 1 bit per document; if the feature is present in document n, set the nth bit to 1, otherwise 0. Boolean operations are very fast. Space efficient for common terms, but inefficient for rare terms (why?). Difficult to add/delete documents (why?). Not widely used. Signature files (also called superimposed coding): for each term, allocate a fixed-size s-bit vector (signature). Define a hash function mapping each word to an integer in 1..2^s; each term then has an s-bit signature (which may not be unique). OR the term signatures together to form the document signature. To look up a query term, check its signature against the document signature: if all corresponding 1-bits are on in the document signature, the document probably contains that term. Inverted files … Intelligent Information Retrieval
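As a rough illustration of the signature-file idea (not the lecture's code): the slide hashes each word to an s-bit signature, and the sketch below follows the common superimposed-coding variant that sets a few hash-selected bits per term; the constants and function names are invented, and false positives are expected by design.

```python
import hashlib

S_BITS = 64          # signature width s (illustrative choice)
BITS_PER_TERM = 3    # how many bits each term sets

def term_signature(term: str) -> int:
    """Superimpose BITS_PER_TERM hash-selected bits into an s-bit signature."""
    sig = 0
    for i in range(BITS_PER_TERM):
        h = int(hashlib.md5(f"{term}:{i}".encode()).hexdigest(), 16)
        sig |= 1 << (h % S_BITS)
    return sig

def doc_signature(tokens):
    """OR together the signatures of all terms in the document."""
    sig = 0
    for t in tokens:
        sig |= term_signature(t)
    return sig

def probably_contains(doc_sig: int, term: str) -> bool:
    """True if all bits of the term signature are on (may be a false positive)."""
    ts = term_signature(term)
    return doc_sig & ts == ts

d = doc_signature("now is the time for all good men".split())
print(probably_contains(d, "time"), probably_contains(d, "midnight"))
```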

4 Indexing Implementation
Inverted files: the primary data structure for text indexes. Source file: the collection, organized by document. Inverted file: the collection organized by term (one record per term, listing the locations where the term occurs). Query evaluation: traverse the lists for each query term (OR: the union of the component lists; AND: the intersection of the component lists). The Vector-Space Model for IR is based on the view of documents as vectors in n-dimensional space, where n is the number of index terms used for indexing; each document is a bag of words (a vector) with a direction and a magnitude. Intelligent Information Retrieval
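A minimal sketch (an assumed in-memory structure, not the lecture's implementation) of an inverted file as a term-to-docID mapping, with OR as set union and AND as set intersection over the postings:

```python
from collections import defaultdict

def build_inverted_file(docs):
    """docs: {doc_id: text}. Returns {term: set of doc_ids}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def boolean_or(index, terms):
    """OR query: union of the component postings lists."""
    return set().union(*(index.get(t, set()) for t in terms))

def boolean_and(index, terms):
    """AND query: intersection of the component postings lists."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {1: "now is the time", 2: "it was a dark and stormy night", 3: "the stormy time"}
idx = build_inverted_file(docs)
print(boolean_or(idx, ["stormy", "time"]))   # {1, 2, 3}
print(boolean_and(idx, ["stormy", "time"]))  # {3}
```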

5 The Vector Space Model Vocabulary V = the set of terms left after pre-processing the text (tokenization, stop-word removal, stemming, ...). Each document or query is represented as a |V| = n dimensional vector: d_j = [w_1j, w_2j, ..., w_nj], where w_ij is the weight of term i in document j. The terms in V form the orthogonal dimensions of a vector space. Document = bag of words: the vector representation doesn't consider the ordering of words: "John is quicker than Mary" vs. "Mary is quicker than John".

6 Document Vectors and Indexes
Conceptually, the index can be viewed as a document-term matrix. Each document is represented as an n-dimensional vector (n = number of terms in the dictionary). Term weights represent the scalar value of each dimension in a document. The inverted file structure is an "implementation model" used in practice to store the information captured in this conceptual representation. [Figure: a document-term matrix whose dictionary columns are the terms nova, galaxy, heat, hollywood, film, role, diet, fur and whose rows are document IDs A through I; the cells hold (in this case normalized) term weights, and each row is a document vector.] Intelligent Information Retrieval

7 Example: Documents and Query in 3D Space
Documents in term space Terms are usually stems Documents (and the query) are represented as vectors of terms Query and Document weights based on length and direction of their vector Why use this representation? A vector distance measure between the query and documents can be used to rank retrieved documents Intelligent Information Retrieval

8 Recall: Inverted Index Construction
Invert documents into a big index: the vector file is "inverted" so that rows become columns and columns become rows. Basic idea: list all the tokens in the collection; for each token, list all the docs it occurs in (together with frequency information). Sparse matrix representation: in practice this data is very sparse, so we do not need to store all the 0's. Hence, the sorted array implementation … Intelligent Information Retrieval

9 How Are Inverted Files Created
Sorted Array Implementation: documents are parsed to extract tokens, which are saved with the document ID. Example: Doc 1: "Now is the time for all good men to come to the aid of their country." Doc 2: "It was a dark and stormy night in the country manor. The time was past midnight." Intelligent Information Retrieval

10 How Inverted Files are Created
After all documents have been parsed, the inverted file is sorted (with duplicates retained for within-document frequency statistics). If frequency information is not needed, the inverted file can instead be sorted with duplicates removed. Intelligent Information Retrieval

11 How Inverted Files are Created
Multiple term entries for a single document are merged, and within-document term frequency information is compiled. If proximity operators are needed, then the location of each occurrence of the term must also be stored. Terms are usually represented by unique integers (term IDs) to keep entries fixed-width and to minimize storage space. Intelligent Information Retrieval

12 How Inverted Files are Created
Then the file can be split into a Dictionary and a Postings file. Notes: the links between the postings for a term are usually implemented as a linked list, and the dictionary is enhanced with per-term statistics such as the document frequency and the total frequency in the collection. Intelligent Information Retrieval
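Pulling slides 9–12 together, a small illustrative sketch (assumed code, not from the slides) of the sorted-array construction: parse documents into (term, docID) pairs, sort, merge duplicates into within-document frequencies, then split into a dictionary and a postings file.

```python
from itertools import groupby

docs = {
    1: "now is the time for all good men to come to the aid of their country",
    2: "it was a dark and stormy night in the country manor the time was past midnight",
}

# 1. Parse: emit (term, doc_id) pairs, duplicates retained
pairs = [(term, doc_id) for doc_id, text in docs.items() for term in text.lower().split()]

# 2. Sort by term, then by doc_id within each term
pairs.sort()

# 3. Merge duplicates into within-document term frequencies
postings = {}   # term -> list of (doc_id, tf)
for (term, doc_id), group in groupby(pairs):
    postings.setdefault(term, []).append((doc_id, len(list(group))))

# 4. Split into a dictionary (term statistics) and the postings lists
dictionary = {t: {"df": len(pl), "cf": sum(tf for _, tf in pl)} for t, pl in postings.items()}

print(dictionary["time"], postings["time"])   # {'df': 2, 'cf': 2} [(1, 1), (2, 1)]
```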

13 Inverted Indexes and Queries
Inverted indexes permit fast search for individual terms. For each term, you get a hit list consisting of: document ID, frequency of the term in the doc (optional), and positions of the term in the doc (optional). These lists can be used to quickly solve Boolean queries: country ==> {d1, d2}; manor ==> {d2}; country AND manor ==> {d2}. Full advantage of this structure can be taken by statistical ranking algorithms such as the vector space model; in the case of Boolean queries, term or document frequency information is not used (only set operations are performed on the hit lists). We will look at the vector model later; for now let's examine Boolean queries more closely. Intelligent Information Retrieval
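A hedged sketch of the classic merge-style intersection of two sorted postings lists, the set operation behind an AND query (list contents mirror the country/manor example above):

```python
def intersect(p1, p2):
    """Merge-style intersection of two sorted docID lists (AND query)."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

# country -> [1, 2], manor -> [2]  (docIDs as sorted integers)
print(intersect([1, 2], [2]))   # country AND manor -> [2]
```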

14 Scalability Issues: Number of Postings
An Example: Number of docs = m = 1M Each doc has 1K terms Number of distinct terms = n = 500K 600 million postings entries Intelligent Information Retrieval

15 Bottleneck Parse and build postings entries one doc at a time
Sort postings entries by term (then by doc within each term). Doing this with random disk seeks would be too slow: we must sort N = 600M records. If every comparison took 2 disk seeks (10 milliseconds each), and N items could be sorted with N log2 N comparisons, how long would this take? Intelligent Information Retrieval
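Working the slide's back-of-the-envelope numbers (a rough estimate that assumes exactly N log2 N comparisons and 2 × 10 ms of seek time per comparison):

```python
import math

N = 600_000_000                      # postings records to sort
comparisons = N * math.log2(N)       # ~ 1.75e10 comparisons
seconds = comparisons * 2 * 0.010    # 2 disk seeks of 10 ms per comparison
print(f"{seconds:.2e} s  ~= {seconds / 86400:.0f} days  ~= {seconds / (365 * 86400):.1f} years")
```

The answer is on the order of a decade, which is why disk-seek-bound sorting is a non-starter.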

16 Sorting with fewer disk seeks
12-byte (4+4+4) records (term, doc, freq) are generated as we parse docs. We must now sort 600M such 12-byte records by term. Define blocks of, e.g., ~10M records each; sort within blocks first, then merge the blocks into one long sorted order. This is Blocked Sort-Based Indexing (BSBI). Intelligent Information Retrieval
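A minimal BSBI-style sketch, assuming pairs arrive as a stream: sort one block at a time in memory, spill each sorted run to a temporary file, then k-way merge the runs. The file format (pickle) and helper names are illustrative choices, not part of the algorithm.

```python
import heapq, os, pickle, tempfile

def _write_run(pairs):
    f = tempfile.NamedTemporaryFile(delete=False)
    pickle.dump(pairs, f); f.close()
    return f.name

def _read_run(path):
    with open(path, "rb") as f:
        yield from pickle.load(f)
    os.remove(path)

def bsbi_sort(pair_stream, block_size=10_000_000):
    """Blocked Sort-Based Indexing sketch: sort (term, doc_id) pairs block by
    block, spill each sorted run to disk, then k-way merge the runs."""
    run_files, block = [], []
    for pair in pair_stream:
        block.append(pair)
        if len(block) >= block_size:
            run_files.append(_write_run(sorted(block))); block = []
    if block:
        run_files.append(_write_run(sorted(block)))
    # Merge the sorted runs into one long sorted order of (term, doc_id) pairs
    yield from heapq.merge(*(_read_run(path) for path in run_files))

pairs = [("time", 2), ("now", 1), ("time", 1), ("country", 2), ("country", 1)]
print(list(bsbi_sort(iter(pairs), block_size=2)))
```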

17 Problem with sort-based algorithm
(Sec. 4.3) Assumption: we can keep the dictionary in memory. We need the dictionary (which grows dynamically) in order to implement a term-to-termID mapping. Actually, we could work with (term, docID) postings instead of (termID, docID) postings, but then the intermediate files become very large. (We would end up with a scalable, but very slow, index construction method.)

18 SPIMI: Single-pass in-memory indexing
(Sec. 4.3) Key idea 1: generate separate dictionaries for each block – no need to maintain a term-termID mapping across blocks. Key idea 2: don't sort; accumulate postings in postings lists as they occur. With these two ideas we can generate a complete inverted index for each block; these separate indexes can then be merged into one big index.
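A hedged SPIMI sketch following the two key ideas: postings are appended as they occur (no global termID mapping, no sorting of postings), the block's terms are sorted only when the block is emitted, and the per-block indexes are merged afterwards. Block size and names are illustrative.

```python
from collections import defaultdict

def spimi_invert(pair_stream, max_postings=10_000_000):
    """Accumulate postings lists per block as (term, doc_id) pairs arrive;
    emit one complete in-memory index per block."""
    block, count = defaultdict(list), 0
    for term, doc_id in pair_stream:
        block[term].append(doc_id)              # append posting as it occurs
        count += 1
        if count >= max_postings:
            yield dict(sorted(block.items()))   # sort terms only when writing the block
            block, count = defaultdict(list), 0
    if block:
        yield dict(sorted(block.items()))

def merge_blocks(blocks):
    """Merge the per-block indexes into one big index."""
    merged = defaultdict(list)
    for block in blocks:
        for term, postings in block.items():
            merged[term].extend(postings)
    return dict(merged)

pairs = [("time", 1), ("now", 1), ("time", 2), ("country", 2)]
print(merge_blocks(spimi_invert(iter(pairs), max_postings=2)))
```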

19 Distributed indexing For web-scale indexing
must use a distributed computing cluster Individual machines are fault-prone Can unpredictably slow down or fail How do we exploit such a pool of machines? Maintain a master machine directing the indexing job – considered “safe”. Break up indexing into sets of (parallel) tasks. Master machine assigns each task to an idle machine from a pool. Intelligent Information Retrieval

20 Parallel tasks Use two sets of parallel tasks
Parsers and Inverters. Break the input document corpus into splits; each split is a subset of documents (e.g., corresponding to blocks in BSBI). The master assigns a split to an idle parser machine. A parser reads a document at a time, emits (term, doc) pairs, and writes the pairs into j partitions; each partition covers a range of terms' first letters (e.g., a-f, g-p, q-z) – here j = 3. An inverter collects all (term, doc) pairs for one partition, sorts them, and writes the postings lists. Intelligent Information Retrieval
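A single-process simulation (purely illustrative; real deployments distribute these tasks across machines) of the parser/inverter split with j = 3 term-range partitions:

```python
from collections import defaultdict

PARTITIONS = [("a", "f"), ("g", "p"), ("q", "z")]   # j = 3 term ranges

def partition_of(term):
    for i, (lo, hi) in enumerate(PARTITIONS):
        if lo <= term[0] <= hi:
            return i
    return len(PARTITIONS) - 1

def parser(split):
    """Map task: read one split of documents, emit (term, doc) pairs into j partitions."""
    segments = defaultdict(list)
    for doc_id, text in split:
        for term in text.lower().split():
            segments[partition_of(term)].append((term, doc_id))
    return segments

def inverter(pairs):
    """Reduce task: collect all pairs for one partition, sort, build postings lists."""
    postings = defaultdict(list)
    for term, doc_id in sorted(pairs):
        postings[term].append(doc_id)
    return dict(postings)

splits = [[(1, "now is the time")], [(2, "a stormy night in the country")]]
segment_files = [parser(s) for s in splits]            # map phase
index = {}
for j in range(len(PARTITIONS)):                       # reduce phase, one inverter per partition
    index.update(inverter([p for seg in segment_files for p in seg.get(j, [])]))
print(index)
```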

21 Data flow (Sec. 4.4)
[Diagram: the master assigns splits to parser machines (map phase); each parser writes (term, doc) pairs into segment files partitioned by term range (a-f, g-p, q-z); in the reduce phase, one inverter per partition collects its segment files and produces the postings.] Intelligent Information Retrieval

22 Dynamic indexing
Problem: docs come in over time (postings updates for terms already in the dictionary, new terms added to the dictionary) and docs get deleted. Simplest approach: maintain a "big" main index; new docs go into a "small" auxiliary index; search across both and merge the results. Deletions: keep an invalidation bit-vector for deleted docs and filter the docs returned by a search through this bit-vector. Periodically, re-index into one main index. Intelligent Information Retrieval

23 Index on disk vs. memory Most retrieval systems keep the dictionary in memory and the postings on disk. Web search engines frequently keep both in memory: a massive memory requirement that is feasible for large web service installations, less so for commercial usage where query loads are lighter. Intelligent Information Retrieval

24 Retrieval From Indexes
Given the large indexes in IR applications, searching for keys in the dictionaries becomes a dominant cost. There are two main choices for the dictionary data structure: hashtables or trees. Using hashing requires deriving a hash function that maps terms to locations, and may require collision detection and resolution for non-unique hash values. Using trees: binary search trees have nice properties, are easy to implement, and effective enhancements such as B+ trees can improve search effectiveness, but they require storing keys in each internal node. Intelligent Information Retrieval

25 Hashtables (Sec. 3.1)
Each vocabulary term is hashed to an integer (we assume you've seen hashtables before). Pros: lookup is faster than for a tree: O(1). Cons: no easy way to find minor variants (judgment/judgement); no prefix search [tolerant retrieval]; if the vocabulary keeps growing, you need to occasionally do the expensive operation of rehashing everything.

26 Trees (Sec. 3.1)
Simplest: binary tree. More usual: B-trees. Trees require a standard ordering of characters and hence strings … but we typically have one. Pros: solves the prefix problem (e.g., terms starting with hyp). Cons: slower: O(log M) [and this requires a balanced tree]; rebalancing binary trees is expensive, but B-trees mitigate the rebalancing problem.

27 Tree: binary tree (Sec. 3.1)
[Diagram: a binary tree over the sorted vocabulary; the root splits a-m / n-z, the next level splits a-hu / hy-m and n-sh / si-z, and the leaves hold terms such as aardvark, huygens, sickle, zygot.]

28 Tree: B-tree (Sec. 3.1) [Diagram: a B-tree node splitting the vocabulary into the ranges a-hu, hy-m, n-z.] Definition: every internal node has a number of children in the interval [a, b], where a and b are appropriate natural numbers, e.g., [2, 4].

29 Recall: Steps in Basic Automatic Indexing
Parse documents to recognize structure Scan for word tokens Stopword removal Stem words Weight words Intelligent Information Retrieval

30 Indexing Models (aka “Term Weighting”)
Basic issue: which terms should be used to index a document, and how much should each count? Some approaches: binary weights (terms either appear or they don't; no frequency information used); term frequency (either raw term counts or, more often, term counts divided by the total frequency of the term across all documents); TF.IDF (the inverse document frequency model); the term discrimination model; signal-to-noise ratio (based on information theory); probabilistic term weights. Intelligent Information Retrieval

31 Binary Weights Only the presence (1) or absence (0) of a term is recorded in the vector. This representation can be particularly useful, since the documents (and the query) can be viewed as simple bit strings, which allows query operations to be performed using logical bit operations. Intelligent Information Retrieval

32 Binary Weights: Matching of Documents & Queries
In the case of binary weights, matching between documents and queries can be seen as the size of the intersection of two sets (of terms): |Q ∩ D|. This in turn can be used to rank the relevance of documents to a query. [Table: binary weights for terms t1, t2, t3 across documents D1–D11.] Intelligent Information Retrieval

33 Beyond Binary Weight More generally, similarity between the query and the document can be seen as the dot product of the two vectors, Q · D (this is also called simple matching). Note that if both Q and D are binary, this is the same as |Q ∩ D|. Given two vectors X and Y, simple matching measures the similarity between X and Y as their dot product: sim(X, Y) = X · Y = Σ_i x_i y_i. Intelligent Information Retrieval
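A tiny sketch of simple matching as a dot product (toy vectors over three terms; with binary vectors it reduces to the intersection size |Q ∩ D|):

```python
def dot(x, y):
    """Simple matching: dot product of two equal-length term-weight vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))

q_binary = [1, 1, 0]        # query uses terms t1, t2
d_binary = [1, 1, 1]        # document contains t1, t2, t3
print(dot(q_binary, d_binary))        # 2 == |Q ∩ D| for binary vectors

q_weighted = [2, 1, 0]
d_weighted = [3, 0, 4]
print(dot(q_weighted, d_weighted))    # 6: weights from both query and document count
```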

34 Raw Term Weights The frequency of occurrence for the term in each document is included in the vector Now the notion of simple matching (dot product) incorporates the term weights from both the query and the documents. Using raw term weights provides the ability to better distinguish among retrieved documents Note: Although “term frequency” is commonly used to mean raw occurrence count, technically it implies that raw count is divided by the document length (total no. of term occurrences in the document). Intelligent Information Retrieval

35 Term Weights: TF More frequent terms in a document are more important, i.e., more indicative of the topic. Let f_ij = frequency of term i in document j. We may want to normalize term frequency (tf) by dividing by the frequency of the most common term in the document: tf_ij = f_ij / max_i{f_ij}. Or use sublinear tf scaling: tf_ij = 1 + log f_ij.
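A short sketch of the two tf variants above (the choice of log base 2 and the toy counts are illustrative):

```python
import math

def max_tf_normalize(freqs):
    """tf_ij = f_ij / max_i{f_ij}: divide by the most frequent term in the document."""
    max_f = max(freqs.values())
    return {t: f / max_f for t, f in freqs.items()}

def sublinear_tf(freqs):
    """tf_ij = 1 + log2(f_ij) for f_ij > 0 (log base is a free choice)."""
    return {t: 1 + math.log2(f) for t, f in freqs.items() if f > 0}

doc_freqs = {"nova": 4, "galaxy": 2, "heat": 1}
print(max_tf_normalize(doc_freqs))   # {'nova': 1.0, 'galaxy': 0.5, 'heat': 0.25}
print(sublinear_tf(doc_freqs))       # {'nova': 3.0, 'galaxy': 2.0, 'heat': 1.0}
```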

36 Normalized Similarity Measures
With or without normalized weights, it is possible to incorporate normalization into various similarity measures. Example (vector space model): in simple matching, the dot product of two vectors measures the similarity of these vectors; normalization can be achieved by dividing the dot product by the product of the norms of the two vectors. Given a vector X = [x_1, ..., x_n], its norm is ||X|| = sqrt(Σ_i x_i^2), and the similarity of vectors X and Y is sim(X, Y) = (X · Y) / (||X|| · ||Y||). Note: this measures the cosine of the angle between the two vectors; it is thus called the normalized cosine similarity measure. Intelligent Information Retrieval
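A minimal sketch of the normalized cosine similarity as defined above (toy vectors only):

```python
import math

def cosine_similarity(x, y):
    """sim(X, Y) = (X . Y) / (||X|| * ||Y||): the cosine of the angle between X and Y."""
    dot = sum(xi * yi for xi, yi in zip(x, y))
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    norm_y = math.sqrt(sum(yi * yi for yi in y))
    if norm_x == 0 or norm_y == 0:
        return 0.0
    return dot / (norm_x * norm_y)

print(cosine_similarity([2, 1, 0], [4, 2, 0]))   # 1.0: same direction, different length
print(cosine_similarity([1, 0, 0], [0, 1, 0]))   # 0.0: orthogonal vectors
```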

37 Normalized Similarity Measures
Using normalized cosine similarity Note that the relative ranking among documents has changed! Intelligent Information Retrieval

38 tf x idf Weighting tf x idf measure:
term frequency (tf) inverse document frequency (idf) -- a way to deal with the problems of the Zipf distribution Recall the Zipf distribution Want to weight terms highly if they are frequent in relevant documents … BUT infrequent in the collection as a whole Goal: assign a tf x idf weight to each term in each document Intelligent Information Retrieval

39 tf x idf The tf x idf weight of term i in document j combines the two factors: w_ij = tf_ij × idf_i = tf_ij × log2(N / df_i), where N is the number of documents in the collection and df_i is the number of documents containing term i. Intelligent Information Retrieval

40 Inverse Document Frequency
IDF provides high values for rare words and low values for common words Intelligent Information Retrieval

41 tf x idf normalization Normalize the term weights (so longer documents are not unfairly given more weight). To normalize usually means to force all values to fall within a certain range, usually between 0 and 1 inclusive; this is more ad hoc than normalization based on vector norms, but the basic idea is the same. Intelligent Information Retrieval

42 tf x idf Example
The initial Term x Doc matrix (inverted index): documents Doc 1–Doc 6 are represented as vectors of raw term counts over terms T1–T8, and each term row also records its document frequency (df) and idf = log2(N/df). Multiplying each raw count by the term's idf yields the tf x idf Term x Doc matrix. [Table of example counts and weights for T1–T8 across Doc 1–Doc 6.]

43 Alternative TF.IDF Weighting Schemes
Many search engines allow for different weightings for queries vs. documents: A very standard weighting scheme is: Document: logarithmic tf, no idf, and cosine normalization Query: logarithmic tf, idf, no normalization

44 Keyword Discrimination Model
The vector representation of documents can be used as the source of another approach to term weighting. Question: what happens if we remove one of the words used as dimensions in the vector space? If the average similarity among documents changes significantly, then the word was a good discriminator; if there is little change, the word is not as helpful and should be weighted less. Note that the goal is to have a representation that makes it easier for a query to discriminate among documents. Average similarity can be measured after removing each word from the matrix; any of the similarity measures can be used (we will look at a variety of other similarity measures later). Intelligent Information Retrieval

45 Keyword Discrimination
Measuring average similarity (assume there are N documents), where sim(D_i, D_j) is the similarity score for the pair of documents D_i and D_j: AVG-SIM is the average of sim(D_i, D_j) over all pairs of distinct documents, i.e., (1 / (N(N−1))) Σ_{i≠j} sim(D_i, D_j). This pairwise computation is computationally expensive. A better way to calculate AVG-SIM: first compute the centroid D* (the average document vector = the sum of the document vectors / N); then AVG-SIM = (1/N) Σ_i sim(D*, D_i). Intelligent Information Retrieval

46 Keyword Discrimination
Discrimination value (discriminant) and term weights: the discriminant of term k is the change in average similarity when term k is removed from the representation, disc_k = AVG-SIM_k − AVG-SIM (where AVG-SIM_k is computed with term k removed). Computing term weights: the new weight for a term k in a document i is the original term frequency of k in i times the discriminant value: w_ik = tf_ik × disc_k. disc_k > 0 ==> term k is a good discriminant; disc_k < 0 ==> term k is a poor discriminant; disc_k = 0 ==> term k is indifferent. Intelligent Information Retrieval
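A hedged sketch of this computation (the exact formulas on the slides are images, so this follows the standard term-discrimination formulation using the centroid shortcut and cosine similarity; the document vectors are made up):

```python
import math

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx, ny = math.sqrt(sum(a * a for a in x)), math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny) if nx and ny else 0.0

def avg_sim(doc_vectors):
    """Average similarity of all documents to the centroid D* (cheaper than all pairs)."""
    n = len(doc_vectors)
    centroid = [sum(col) / n for col in zip(*doc_vectors)]
    return sum(cosine(centroid, d) for d in doc_vectors) / n

def discrimination_values(doc_vectors):
    """disc_k = AVG-SIM_k (term k removed) - AVG-SIM; positive = good discriminator."""
    base = avg_sim(doc_vectors)
    discs = []
    for k in range(len(doc_vectors[0])):
        reduced = [[w for i, w in enumerate(d) if i != k] for d in doc_vectors]
        discs.append(avg_sim(reduced) - base)
    return discs

docs = [[2, 0, 1], [1, 1, 0], [2, 1, 4]]      # tf vectors over terms t1, t2, t3
discs = discrimination_values(docs)
weights = [[tf * discs[k] for k, tf in enumerate(d)] for d in docs]   # w_ik = tf_ik * disc_k
print([round(v, 3) for v in discs])
```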

47 Keyword Discrimination - Example
Using Normalized Cosine Note: D* for each of the SIMk is now computed with only two terms Intelligent Information Retrieval

48 Keyword Discrimination - Example
This shows that t1 tends to be a poor discriminator, while t3 is a good discriminator. The new term weight will now reflect the discrimination value for these terms. Note that further normalization can be done to make all term weights positive. Intelligent Information Retrieval

49 Signal-To-Noise Ratio
Based on Shannon's work on information theory in the 1940s, which developed a model of communication of messages across a noisy channel; the goal is to devise an encoding of messages that is most robust in the face of channel noise. In IR, messages describe the content of documents. The amount of information a word contributes about a document is inversely proportional to its probability of occurrence. The least informative words are those that occur approximately uniformly across the corpus of documents: a word that occurs with similar frequency across many documents (e.g., "the", "and", etc.) is less informative than one that occurs with high frequency in one or two documents. Shannon used entropy (a logarithmic measure) to measure average information gain, with noise defined as its inverse. Intelligent Information Retrieval

50 Signal-To-Noise Ratio
p_ik = Prob(term k occurs in document i) = tf_ik / tf_k. Info_ik = −p_ik log2(p_ik) and Noise_ik = −p_ik log2(1 / p_ik); summing these over the documents gives the average information (AVG-INFO_k) and the noise (NOISE_k) of term k. Note: here we always take logs to be base 2. Note: NOISE is the negation of AVG-INFO, so only one of these needs to be computed in practice. The weight of term k in document i then combines tf_ik with the term's information value. Intelligent Information Retrieval
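Since the weighting formula on the slide is an image that did not survive the transcript, the sketch below uses the classic signal-to-noise formulation as an assumption: NOISE_k is the entropy of the term's distribution over documents, SIGNAL_k = log2(tf_k) − NOISE_k, and w_ik = tf_ik × SIGNAL_k. The toy frequencies are invented.

```python
import math

# tf[term][doc] = raw frequency of the term in that document
tf = {"the": {d: 1 for d in range(1, 9)},   # spread uniformly over 8 docs
      "manor": {2: 8}}                       # same total frequency, one doc

def noise(postings):
    """NOISE_k = sum_i p_ik * log2(1 / p_ik), with p_ik = tf_ik / tf_k.
    Highest for terms spread uniformly across documents."""
    total = sum(postings.values())
    return sum((f / total) * math.log2(total / f) for f in postings.values())

def signal(postings):
    """SIGNAL_k = log2(tf_k) - NOISE_k: high for terms concentrated in few documents."""
    return math.log2(sum(postings.values())) - noise(postings)

# Assumed document weighting: w_ik = tf_ik * SIGNAL_k
weights = {term: {doc: f * signal(p) for doc, f in p.items()} for term, p in tf.items()}
print({t: round(signal(p), 3) for t, p in tf.items()})   # 'the' -> 0.0, 'manor' -> 3.0
print(weights)
```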

51 Signal-To-Noise Ratio - Example
p_ik = tf_ik / tf_k. Note: by definition, if term k does not appear in the document, we assume Info(k) = 0 for that doc. This is the "entropy" of term k in the collection. Intelligent Information Retrieval

52 Signal-To-Noise Ratio - Example
The weight of term k in document i combines tf_ik with the term's signal/information value; additional normalization can be performed to bring the values into the range [0, 1]. Intelligent Information Retrieval

53 Probabilistic Term Weights
The probabilistic model makes explicit distinctions between occurrences of terms in relevant and non-relevant documents. If we know p_i, the probability that term x_i appears in a relevant doc, and q_i, the probability that term x_i appears in a non-relevant doc, then with binary weights and an independence assumption the weight of term x_i in document D_k is: w_i = log [ p_i (1 − q_i) / ( q_i (1 − p_i) ) ]. Estimating p_i and q_i requires relevance information: using test queries and test collections to "train" the values of p_i and q_i, or other AI/machine-learning techniques. Intelligent Information Retrieval
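A hedged sketch of this binary-independence weight; the add-0.5 smoothing used to estimate p_i and q_i from relevance judgements is a common convention shown here as an assumption, and all counts are hypothetical.

```python
import math

def bim_weight(p_i, q_i):
    """Binary-independence term weight: log( p_i (1 - q_i) / ( q_i (1 - p_i) ) )."""
    return math.log((p_i * (1 - q_i)) / (q_i * (1 - p_i)))

# Estimated from relevance judgements (training queries), e.g. with add-0.5 smoothing:
R, N = 10, 1000        # relevant docs, total docs (hypothetical)
r_i, n_i = 8, 50       # relevant docs containing the term, all docs containing the term
p_i = (r_i + 0.5) / (R + 1)
q_i = (n_i - r_i + 0.5) / (N - R + 1)
print(round(bim_weight(p_i, q_i), 3))   # large positive weight: the term indicates relevance
```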

54 Phrase Indexing Both statistical and syntactic methods have been used to identify “good” phrases Proven techniques include finding all word pairs that occur more than n times in the corpus or using a part-of-speech tagger to identify simple noun phrases Phrases can have an impact on effectiveness and efficiency phrase indexing will speed up phrase queries improve precision by disambiguating the word senses: e.g, “grass field” v. “magnetic field” effectiveness not straightforward and depends on retrieval model e.g. for “information retrieval”, how much do individual words count? Intelligent Information Retrieval

55 Associating Weights with Phrases
Typical approach (Salton and McGill, 1983): compute pairwise co-occurrence for high-frequency words; if the co-occurrence value is less than some threshold a, do not consider the pair any further. For qualifying pairs of terms (t_i, t_j), compute a cohesion value, e.g., cohesion(t_i, t_j) = s · co-occurrence(t_i, t_j) / (freq(t_i) · freq(t_j)), where s is a size factor determined by the size of the vocabulary (Salton and McGill, 1983); an alternative cohesion formula is given by Rada (1986). If the cohesion is above a threshold b, retain the phrase as a valid index phrase; its weight in the index will be a function of the cohesion value. Intelligent Information Retrieval
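A sketch of this filtering pipeline under stated assumptions: the cohesion formula is the Salton/McGill-style ratio shown above, and the frequencies, size factor s, and thresholds a and b are all invented values.

```python
# Hypothetical corpus statistics
freq = {"information": 1200, "retrieval": 400, "grass": 150, "field": 500}
cooccur = {("information", "retrieval"): 350, ("grass", "field"): 40}

s = 10_000        # size factor tied to vocabulary size (assumed value)
a, b = 20, 3.0    # co-occurrence and cohesion thresholds (assumed values)

def candidate_phrases(freq, cooccur, s, a, b):
    phrases = {}
    for (ti, tj), co in cooccur.items():
        if co < a:                                  # drop low co-occurrence pairs early
            continue
        cohesion = s * co / (freq[ti] * freq[tj])   # assumed Salton/McGill-style cohesion
        if cohesion > b:                            # retain as a valid index phrase
            phrases[(ti, tj)] = cohesion            # index weight as a function of cohesion
    return phrases

print(candidate_phrases(freq, cooccur, s, a, b))
```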

56 Concept Indexing More complex indexing could include concept or thesaurus classes One approach is to use a controlled vocabulary (or subject codes) and map specific terms to “concept classes” Automatic concept generation can use classification or clustering to determine concept classes Automatic Concept Indexing Words, phrases, synonyms, linguistic relations can all be evidence used to infer presence of the concept e.g. the concept “automobile” can be inferred based on the presence of the words “vehicle”, “transportation”, “driving”, etc. One approach is to represent each word as a “concept vector” each dimension represents a weight for a concept associated with the term phrases or index items can be represented as weighted averages of concept vectors for the terms in them Another approach: Latent Semantic Indexing (LSI) Intelligent Information Retrieval
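A tiny sketch of the concept-vector idea mentioned above: each word carries weights over concept classes, and a phrase or index item is represented as the average of its words' concept vectors. All concept names and weights here are hypothetical.

```python
# Each word maps to weights over hypothetical concept classes
concept_vectors = {
    "vehicle":        {"automobile": 0.8, "transportation": 0.6},
    "driving":        {"automobile": 0.7, "recreation": 0.2},
    "transportation": {"transportation": 0.9, "automobile": 0.4},
}

def average_concept_vector(words):
    """Represent a phrase or index item as the average of its words' concept vectors."""
    avg = {}
    for w in words:
        for concept, weight in concept_vectors.get(w, {}).items():
            avg[concept] = avg.get(concept, 0.0) + weight / len(words)
    return avg

print(average_concept_vector(["vehicle", "driving"]))   # evidence for the concept "automobile"
```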

57 Next Retrieval Models and Ranking Algorithms
Boolean Matching and Boolean Queries Vector Space Model and Similarity Ranking Extended Boolean Models Basic Probabilistic Models Implementation Issues for Ranking Systems Intelligent Information Retrieval

