
1 Information Retrieval Christopher Manning and Prabhakar Raghavan
Lecture 3: Dictionaries and tolerant retrieval

2 Recap of the previous lecture
Ch. 2 Recap of the previous lecture
- The type/token distinction: terms are normalized types put in the dictionary
- Tokenization problems: hyphens, apostrophes, compounds, Chinese
- Term equivalence classing: numbers, case folding, stemming, lemmatization
- Skip pointers: encoding a tree-like structure in a postings list
- Biword indexes for phrases
- Positional indexes for phrases/proximity queries

3 This lecture
Ch. 3 This lecture
- Dictionary data structures
- "Tolerant" retrieval
  - Wild-card queries
  - Spelling correction
  - Soundex

4 Dictionary data structures for inverted indexes
Sec. 3.1 Dictionary data structures for inverted indexes
The dictionary data structure stores the term vocabulary, document frequency, and pointers to each postings list ... in what data structure?

5 Dictionary data structures
Given an inverted index and a query, our first task is to determine whether each query term exists in the vocabulary and, if so, identify the pointer to the corresponding postings. This vocabulary lookup operation uses a classical data structure called the dictionary and has two broad classes of solutions: hashing and search trees. Some IR systems use hashes, some trees.

6 A naïve dictionary
Sec. 3.1 A naïve dictionary
An array of struct:
- term: char[20] (20 bytes)
- document frequency: int (4/8 bytes)
- Postings * (4/8 bytes)
How do we store a dictionary in memory efficiently? How do we quickly look up elements at query time?

7 Hashes
Sec. 3.1 Hashes
Each vocabulary term is hashed to an integer. (We assume you've seen hashtables before.)
Pros:
- Lookup is faster than for a tree: O(1)
Cons:
- No easy way to find minor variants: judgment/judgement
- No prefix search [tolerant retrieval]
- If the vocabulary keeps growing, we occasionally need the expensive operation of rehashing everything
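A minimal sketch (not from the original deck) of the hash option, with Python's built-in dict playing the hashtable; the terms and postings are made-up toy data. Lookup is constant time, but prefix search degenerates to a full scan:

```python
# Toy data; the built-in dict plays the role of the hashtable.
dictionary = {
    "judgment":   (2, [1, 7]),   # term -> (document frequency, postings)
    "judgement":  (1, [4]),
    "hypothesis": (1, [2]),
}

df, postings = dictionary["judgment"]        # O(1) expected-time lookup

# The prefix-search weakness: finding all terms starting with "judg"
# means scanning every key, O(M) in the vocabulary size M.
prefix_hits = [t for t in dictionary if t.startswith("judg")]
```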

8 Tree: binary tree
Sec. 3.1 [Figure: a binary search tree over the dictionary. The root splits a-m / n-z, the next level splits a-hu / hy-m and n-sh / si-z, and the leaves hold terms such as aardvark, huygens, sickle, and zygot.]

9 Tree: B-tree
Sec. 3.1 [Figure: a B-tree whose root's children cover the ranges a-hu, hy-m, and n-z.] Definition: every internal node has a number of children in the interval [a,b], where a and b are appropriate natural numbers, e.g., [2,4].

10 Trees
Sec. 3.1 Trees
Simplest: binary tree. More usual: B-trees.
Trees require a standard ordering of characters and hence strings ... but we standardly have one.
Pros:
- Solves the prefix problem (terms starting with hyp)
Cons:
- Slower: O(log M) [and this requires a balanced tree]
- Rebalancing binary trees is expensive
- But B-trees mitigate the rebalancing problem

11 Wild-card queries

12 Wild-card queries
Wildcard queries are used in any of the following situations:
- The user is uncertain of the spelling of a query term (e.g., Sydney vs. Sidney, which leads to the wildcard query S*dney).
- The user is aware of multiple variants of spelling a term and (consciously) seeks documents containing any of the variants (e.g., color vs. colour).
- The user seeks documents containing variants of a term that would be caught by stemming, but is unsure whether the search engine performs stemming (e.g., judicial vs. judiciary, leading to the wildcard query judicia*).
- The user is uncertain of the correct rendition of a foreign word or phrase (e.g., the query Universit* Stuttgart).

13 A TRAILING WILDCARD
A query such as mon* is known as a trailing wildcard query, because the * symbol occurs only once, at the end of the search string. A search tree on the dictionary is a convenient way of handling trailing wildcard queries: we walk down the tree following the symbols m, o, and n in turn, at which point we can enumerate the set W of terms in the dictionary with the prefix mon. Finally, we use |W| lookups on the standard inverted index to retrieve all documents containing any term in W.

14 LEADING WILDCARD QUERIES
First, consider leading wildcard queries, or queries of the form *mon. Consider a reverse B-tree on the dictionary - one in which each root-to-leaf path of the B-tree corresponds to a term in the dictionary written backwards: thus, the term lemon would, in the B-tree, be represented by the path root-n-o-m-e-l. A walk down the reverse B-tree then enumerates all terms R in the vocabulary with a given suffix.

15 MORE GENERAL CASE: WILDCARD QUERIES
In fact, using a regular B-tree together with a reverse B-tree, we can handle an even more general case: wildcard queries in which there is a single * symbol, such as se*mon. To do this, we use the regular B-tree to enumerate the set W of dictionary terms beginning with the prefix se, then the reverse B-tree to enumerate the set R of terms ending with the suffix mon. Next, we take the intersection W ∩ R of these two sets, to arrive at the set of terms that begin with the prefix se and end with the suffix mon.

16 MORE GENERAL CASE: WILDCARD QUERIES
Finally, we use the standard inverted index to retrieve all documents containing any term in this intersection. We can thus handle wildcard queries that contain a single * symbol using two B-trees, the normal B-tree and a reverse B-tree.

17 General wildcard queries
We now study two techniques for handling general wildcard queries. Both techniques share a common strategy: express the given wildcard query qw as a Boolean query Q on a specially constructed index, such that the answer to Q is a superset of the set of vocabulary terms matching qw. Then, we check each term in the answer to Q against qw, discarding those vocabulary terms that do not match qw. At this point we have the vocabulary terms matching qw and can resort to the standard inverted index.

18 Wild-card queries: *
Sec. 3.2 Wild-card queries: *
- mon*: find all docs containing any word beginning with "mon". Easy with a binary tree (or B-tree) lexicon: retrieve all words w in the range mon ≤ w < moo.
- *mon: find words ending in "mon" - harder. Maintain an additional B-tree for terms written backwards; retrieve all words w in the range nom ≤ w < non.
Exercise: from this, how can we enumerate all terms meeting the wild-card query pro*cent? (Cf. the sketch below.)
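A sketch of both range lookups in Python (not from the slides): a sorted list with bisect stands in for the tree's ordered dictionary, the vocabulary is toy data, and the successor trick assumes the prefix doesn't end in the last code point:

```python
import bisect

# Toy vocabulary; a sorted list stands in for the tree's ordered leaves.
vocab = sorted(["moat", "mon", "monday", "money", "monsoon", "moo", "moon"])

def prefix_range(words, prefix):
    """All w with prefix <= w < successor(prefix), one contiguous slice."""
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)   # e.g. mon -> moo
    return words[bisect.bisect_left(words, prefix):bisect.bisect_left(words, upper)]

print(prefix_range(vocab, "mon"))   # mon*: ['mon', 'monday', 'money', 'monsoon']

# *mon: keep a second sorted list of reversed terms, look up nom <= w < non.
rev_vocab = sorted(w[::-1] for w in vocab)
print([w[::-1] for w in prefix_range(rev_vocab, "nom")])   # ['mon']

# pro*cent: intersect the pro* prefix set with the *cent suffix set.
```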

19 Query processing
Sec. 3.2 Query processing
At this point, we have an enumeration of all terms in the dictionary that match the wild-card query. We still have to look up the postings for each enumerated term. E.g., consider the query se*ate AND fil*er. This may result in the execution of many Boolean AND queries.

20 B-trees handle *'s at the end of a query term
Sec. 3.2 B-trees handle *'s at the end of a query term
How can we handle *'s in the middle of a query term, e.g., co*tion? We could look up co* AND *tion in a B-tree and intersect the two term sets, but that is expensive. The solution: transform wild-card queries so that the *'s occur at the end. This gives rise to the Permuterm Index.

21 PERMUTERM INDEX
The first special index for general wildcard queries is the permuterm index, a form of inverted index. First, we introduce a special symbol $ into our character set, to mark the end of a term (hello$). Next, we construct a permuterm index, in which the various rotations of each term (augmented with $) all link to the original vocabulary term. We refer to the set of rotated terms in the permuterm index as the permuterm vocabulary.
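A sketch of building the permuterm vocabulary in Python (toy terms; not the book's code):

```python
def rotations(term: str) -> list[str]:
    """All rotations of term + '$'; each links back to the original term."""
    t = term + "$"
    return [t[i:] + t[:i] for i in range(len(t))]

# Build the permuterm index: rotation -> vocabulary term (toy vocabulary).
permuterm = {}
for term in ["hello", "lemon"]:
    for rot in rotations(term):
        permuterm[rot] = term

print(rotations("hello"))
# ['hello$', 'ello$h', 'llo$he', 'lo$hel', 'o$hell', '$hello']
```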

22 PERMUTERM INDEX

23 PERMUTERM INDEX
But what about a query such as fi*mo*er? In this case we first enumerate the terms in the dictionary that are in the permuterm index of er$fi*. Not all such dictionary terms will have the string mo in the middle - we filter these out by exhaustive enumeration, checking each candidate to see if it contains mo. In this example, the term fishmonger would survive this filtering but filibuster would not. We then run the surviving terms through the standard inverted index for document retrieval.

24 PERMUTERM INDEX
One disadvantage of the permuterm index is that its dictionary becomes quite large, including as it does all rotations of each term. Notice the close interplay between the B-tree and the permuterm index above. Indeed, it suggests that the structure should perhaps be viewed as a permuterm B-tree. However, we follow traditional terminology here in describing the permuterm index as distinct from the B-tree that allows us to select the rotations with a given prefix.

25 Permuterm index
Sec. 3.2.1 Permuterm index
For the term hello, index under: hello$, ello$h, llo$he, lo$hel, o$hell, where $ is a special symbol.
Queries:
- X      lookup on X$
- X*     lookup on $X*
- *X     lookup on X$*
- *X*    lookup on X*
- X*Y    lookup on Y$X*
- X*Y*Z  ??? Exercise!
Example: query = hel*o. With X = hel and Y = o, look up o$hel*.

26 Permuterm query processing
Sec. 3.2.1 Permuterm query processing
Rotate the query wild-card to the right, then use B-tree lookup as before (see the sketch below).
Permuterm problem: it roughly quadruples the lexicon size (an empirical observation for English).
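A sketch of the rotation step in Python; the toy permuterm dict below is made up, and a real system would do the trailing prefix lookup in the B-tree rather than scanning:

```python
def rotate_query(q: str) -> str:
    """Rotate a single-* wildcard query so the * lands at the end."""
    t = q + "$"
    star = t.index("*")
    return t[star + 1:] + t[:star] + "*"

print(rotate_query("hel*o"))    # 'o$hel*'
print(rotate_query("se*mon"))   # 'mon$se*'

# The trailing * then becomes an ordinary prefix lookup in the permuterm
# vocabulary (toy dict of rotation -> term).
permuterm = {"o$hell": "hello", "o$hipp": "hippo"}
prefix = rotate_query("hel*o").rstrip("*")
print({t for rot, t in permuterm.items() if rot.startswith(prefix)})  # {'hello'}
```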

27 k-GRAM INDEX
A k-gram is a sequence of k characters. Thus cas, ast and stl are all 3-grams occurring in the term castle. We use a special character $ to denote the beginning or end of a term, so the full set of 3-grams generated for castle is: $ca, cas, ast, stl, tle, le$. In a k-gram index, the dictionary contains all k-grams that occur in any term in the vocabulary. Each postings list points from a k-gram to all vocabulary terms containing that k-gram.

28 k-GRAM INDEX
For instance, the 3-gram etr would point to vocabulary terms such as metric and retrieval. How does such an index help us with wildcard queries? Consider the wildcard query re*ve. We are seeking documents containing any term that begins with re and ends with ve. Accordingly, we run the Boolean query $re AND ve$. This is looked up in the 3-gram index and yields a list of matching terms such as relive, remove and retrieve. Each of these matching terms is then looked up in the standard inverted index to yield documents matching the query.
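A sketch of the 3-gram index and the re*ve lookup in Python (toy vocabulary; sets stand in for postings lists):

```python
from collections import defaultdict

def kgrams(term: str, k: int = 3) -> set[str]:
    """k-grams of a term padded with $ at both ends, as on the slides."""
    padded = "$" + term + "$"
    return {padded[i:i + k] for i in range(len(padded) - k + 1)}

# Build the 3-gram index over a toy vocabulary; sets stand in for postings.
index = defaultdict(set)
for term in ["castle", "relive", "remove", "retrieve", "retired"]:
    for gram in kgrams(term):
        index[gram].add(term)

# The wildcard re*ve becomes the Boolean query $re AND ve$:
print(sorted(index["$re"] & index["ve$"]))   # ['relive', 'remove', 'retrieve']
```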

29 A DIFFICULTY WITH THE USE OF K-GRAM INDEXES
There is, however, a difficulty with the use of k-gram indexes that demands one further step of processing. Consider using the 3-gram index described above for the query red*. We first issue the Boolean query $re AND red to the 3-gram index. This leads to a match on terms such as retired, which contain the conjunction of the two 3-grams $re and red, yet do not match the original wildcard query red*. To cope with this, we introduce a post-filtering step, in which the terms enumerated by the Boolean query on the 3-gram index are checked individually against the original query red*. This is a simple string-matching operation and weeds out terms such as retired that do not match the original query. Terms that survive are then searched in the standard inverted index as usual.
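A sketch of the post-filtering step, using Python's fnmatch for the string match (the candidate list is illustrative):

```python
import fnmatch

# The Boolean query $re AND red for red* also matches 'retired';
# post-filter each enumerated term against the original wildcard.
candidates = ["red", "redo", "retired"]
survivors = [t for t in candidates if fnmatch.fnmatchcase(t, "red*")]
print(survivors)   # ['red', 'redo'] -- 'retired' is weeded out
```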

30 A DIFFICULTY WITH THE USE OF K-GRAM INDEXES
We have seen that a wildcard query can result in multiple terms being enumerated, each of which becomes a single-term query on the standard inverted index. Search engines do allow the combination of wildcard queries using Boolean operators, for example, re*d AND fe*ri. What is the appropriate semantics for such a query? Since each wildcard query turns into a disjunction of single-term queries, the appropriate interpretation of this example is that we have a conjunction of disjunctions: we seek all documents that contain any term matching re*d and any term matching fe*ri.

31 A DIFFICULTY WITH THE USE OF K-GRAM INDEXES
Even without Boolean combinations of wildcard queries, the processing of a wildcard query can be quite expensive, because of the added lookup in the special index, the filtering, and finally the standard inverted index.

32 Bigram (k-gram) indexes
Sec. 3.2.2 Bigram (k-gram) indexes
Enumerate all k-grams (sequences of k chars) occurring in any term. E.g., from the text "April is the cruelest month" we get the 2-grams (bigrams):
$a, ap, pr, ri, il, l$, $i, is, s$, $t, th, he, e$, $c, cr, ru, ue, el, le, es, st, t$, $m, mo, on, nt, h$
where $ is a special word boundary symbol. Maintain a second inverted index from bigrams to dictionary terms that match each bigram.

33 Bigram index example
Sec. 3.2.2 Bigram index example
The k-gram index finds terms based on a query consisting of k-grams (here k = 2).
[Figure: bigram postings, e.g. $m → mace, madden; mo → among, amortize; on → among, moon]

34 Processing wild-cards
Sec. 3.2.2 Processing wild-cards
The query mon* can now be run as: $m AND mo AND on
This gets the terms that match the AND version of our wildcard query, but we'd also enumerate moon. We must post-filter these terms against the query; the surviving enumerated terms are then looked up in the term-document inverted index. Fast and space-efficient (compared to permuterm).

35 Processing wild-card queries
Sec. 3.2.2 Processing wild-card queries
As before, we must execute a Boolean query for each enumerated, filtered term. Wild-cards can result in expensive query execution (very large disjunctions...), e.g., pyth* AND prog*
If you encourage "laziness", people will respond! Which web search engines allow wildcard queries?
[Mock search box: "Type your search terms, use '*' if you need to. E.g., Alex* will match Alexander."]

36 Spelling correction

37 Spell correction
Sec. 3.3 Spell correction
Two principal uses:
- Correcting document(s) being indexed
- Correcting user queries to retrieve the "right" answers
Two main flavors:
- Isolated word: check each word on its own for misspelling. Will not catch typos resulting in correctly spelled words, e.g., from → form
- Context-sensitive: look at the surrounding words, e.g., I flew form Heathrow to Narita.

38 Document correction
Sec. 3.3 Document correction
Especially needed for OCR'ed documents. Correction algorithms are tuned for this, e.g., rn/m confusions. Can use domain-specific knowledge: OCR can confuse O and D more often than it would confuse O and I (the latter pair being adjacent on the QWERTY keyboard, so more likely interchanged in typing). But web pages and even printed material also have typos.
Goal: a dictionary that contains fewer misspellings. But often we don't change the documents; we aim instead to fix the query-document mapping.

39 Query mis-spellings
Sec. 3.3 Query mis-spellings
Our principal focus here, e.g., the query Alanis Morisett. We can either:
- Retrieve documents indexed by the correct spelling, OR
- Return several suggested alternative queries with the correct spelling: Did you mean ... ?

40 Isolated word correction
Sec. 3.3.2 Isolated word correction
Given a lexicon and a character sequence Q, return the words in the lexicon closest to Q. What's "closest"? We'll study several alternatives:
- Edit distance (Levenshtein distance)
- Weighted edit distance
- n-gram overlap

41 Edit distance
Sec. 3.3.3 Edit distance
Given two strings S1 and S2, the edit distance is the minimum number of operations to convert one to the other. Operations are typically character-level: insert, delete, replace, (transposition). E.g., the edit distance from dof to dog is 1; from cat to act it is 2 (just 1 with transpose); from cat to dog it is 3. Generally found by dynamic programming.

42 Weighted edit distance
Sec. 3.3.3 Weighted edit distance
As above, but the weight of an operation depends on the character(s) involved. Meant to capture OCR or keyboard errors, e.g., m is more likely to be mis-typed as n than as q; therefore, replacing m by n is a smaller edit distance than replacing it by q. This may be formulated as a probability model. It requires a weight matrix as input, and we modify the dynamic programming to handle weights.

43 Using edit distances
Sec. 3.3.4 Using edit distances
Given a query, first enumerate all character sequences within a preset (weighted) edit distance (e.g., 2). Intersect this set with the list of "correct" words, and show the terms you found to the user as suggestions.
Alternatively:
- Look up all possible corrections in our inverted index and return all docs ... slow
- Run with a single most likely correction
These alternatives disempower the user, but save a round of interaction with the user.

44 Weighted edit distance algorithm
It is well known how to compute the (weighted) edit distance between two strings in time O(|s1| × |s2|), where |si| denotes the length of string si. The idea is to use the dynamic programming algorithm in Figure 3.5, where the characters in s1 and s2 are given in array form. The algorithm fills the (integer) entries in a matrix m whose two dimensions equal the lengths of the two strings whose edit distance is being computed; the (i, j) entry of the matrix will hold (after the algorithm is executed) the edit distance between the strings consisting of the first i characters of s1 and the first j characters of s2.

45 Weighted edit distance algorithm
The central dynamic programming step is depicted in lines 8-10 of Figure 3.5, where the three quantities whose minimum is taken correspond to substituting a character in s1 (m[i-1, j-1], plus 1 if s1[i] ≠ s2[j]), inserting a character in s1 (m[i-1, j] + 1), and inserting a character in s2 (m[i, j-1] + 1). The cells with numbers in italics depict the path by which we determine the Levenshtein distance. The spelling correction problem, however, demands more than computing edit distance: given a set V of strings (corresponding to terms in the vocabulary) and a query string q, we seek the string(s) in V of least edit distance from q.
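A sketch of the unweighted (Levenshtein) version of this dynamic program in Python; Figure 3.5 itself is not reproduced here, and a weighted variant would replace the unit costs below with entries from a weight matrix:

```python
def edit_distance(s1: str, s2: str) -> int:
    """Levenshtein distance via dynamic programming; O(|s1| * |s2|)."""
    m = [[0] * (len(s2) + 1) for _ in range(len(s1) + 1)]
    for i in range(len(s1) + 1):
        m[i][0] = i                       # delete all of s1[:i]
    for j in range(len(s2) + 1):
        m[0][j] = j                       # insert all of s2[:j]
    for i in range(1, len(s1) + 1):
        for j in range(1, len(s2) + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            m[i][j] = min(m[i - 1][j - 1] + cost,   # substitute (unit cost)
                          m[i - 1][j] + 1,          # insert a character in s2
                          m[i][j - 1] + 1)          # insert a character in s1
    return m[len(s1)][len(s2)]

print(edit_distance("dof", "dog"), edit_distance("cat", "dog"))   # 1 3
```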

46 Weighted edit distance algorithm
The simplest approach is to compute the edit distance from q to each string in V, before selecting the string(s) of minimum edit distance.

47 Edit distance to all dictionary terms?
Sec. 3.3.4 Edit distance to all dictionary terms?
Given a (mis-spelled) query, do we compute its edit distance to every dictionary term? Expensive and slow. Alternative: how do we cut down the set of candidate dictionary terms?
- One possibility is to use n-gram overlap for this. This can also be used by itself for spelling correction.
- The alternative is to generate everything up to edit distance k and then intersect with the vocabulary (sketched below). Fine for distance 1; okay for distance 2. This is generally enough (Norvig).
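A sketch of the distance-1 generation step in Python (Norvig-style; the vocabulary here is made-up toy data, and the final set intersection is the "intersect" mentioned above):

```python
import string

def edits1(word: str) -> set[str]:
    """All strings at edit distance 1 from word (Norvig-style sketch)."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = {L + R[1:] for L, R in splits if R}
    transposes = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
    replaces   = {L + c + R[1:] for L, R in splits if R for c in letters}
    inserts    = {L + c + R for L, R in splits for c in letters}
    return deletes | transposes | replaces | inserts

vocabulary = {"form", "from", "fort", "fore"}   # toy vocabulary
print(edits1("form") & vocabulary)
# {'form', 'from', 'fort', 'fore'} -- the identity replace keeps 'form' itself
```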

48 n-gram overlap
Sec. 3.3.4 n-gram overlap
Enumerate all the n-grams in the query string as well as in the lexicon. Use the n-gram index (recall wild-card search) to retrieve all lexicon terms matching any of the query n-grams. Threshold by the number of matching n-grams; variants - weight by keyboard layout, etc.

49 Example with trigrams
Sec. 3.3.4 Example with trigrams
Suppose the text is november: its trigrams are nov, ove, vem, emb, mbe, ber. The query is december: its trigrams are dec, ece, cem, emb, mbe, ber. So 3 trigrams overlap (of 6 in each term). How can we turn this into a normalized measure of overlap?

50 One option – Jaccard coefficient
Sec. 3.3.4 One option – Jaccard coefficient
A commonly used measure of overlap. Let X and Y be two sets; then the Jaccard coefficient is |X ∩ Y| / |X ∪ Y|. It equals 1 when X and Y have the same elements and zero when they are disjoint; X and Y don't have to be of the same size, and it always assigns a number between 0 and 1. Now threshold to decide if you have a match, e.g., if J.C. > 0.8, declare a match.
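A sketch in Python, applied to the november/december trigram example from the previous slide:

```python
def trigrams(term: str) -> set[str]:
    return {term[i:i + 3] for i in range(len(term) - 2)}

def jaccard(x: set, y: set) -> float:
    return len(x & y) / len(x | y)

nov, dec = trigrams("november"), trigrams("december")
print(jaccard(nov, dec))   # 3 shared of 9 distinct trigrams = 0.333...
# With a 0.8 threshold, december would NOT be declared a match for november.
```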

51 Matching trigrams
Sec. 3.3.4 Matching trigrams
Consider the query lord - we wish to identify words matching 2 of its 3 bigrams (lo, or, rd):
- lo: alone, lord, sloth
- or: border, lord, morbid
- rd: ardent, border, card
A standard postings "merge" will enumerate the matches; adapt this to use the Jaccard (or another) measure, as sketched below.
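A sketch of the counting merge in Python; a Counter stands in for a true sorted-postings merge, and the toy index mirrors the postings above. Dividing each count by the size of the union of the two terms' bigram sets would give the Jaccard variant:

```python
from collections import Counter

# Toy bigram index mirroring the postings above: bigram -> list of terms.
bigram_index = {
    "lo": ["alone", "lord", "sloth"],
    "or": ["border", "lord", "morbid"],
    "rd": ["ardent", "border", "card"],
}

def match_by_bigrams(query_bigrams, index, threshold):
    """Count, per term, how many query bigrams its postings cover."""
    counts = Counter()
    for bg in query_bigrams:
        for term in index.get(bg, []):
            counts[term] += 1
    return [t for t, c in counts.items() if c >= threshold]

print(match_by_bigrams(["lo", "or", "rd"], bigram_index, threshold=2))
# ['lord', 'border'] -- each matches at least 2 of the 3 bigrams
```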

52 Context-sensitive spell correction
Sec. 3.3.5 Context-sensitive spell correction
Text: I flew from Heathrow to Narita. Consider the phrase query "flew form Heathrow". We'd like to respond Did you mean "flew from Heathrow"? because no docs matched the query phrase.

53 Context-sensitive correction
Sec. 3.3.5 Context-sensitive correction
We need the surrounding context to catch this. First idea: retrieve dictionary terms close (in weighted edit distance) to each query term. Then try all possible resulting phrases with one word "fixed" at a time (see the sketch below):
- flew from heathrow
- fled form heathrow
- flea form heathrow
Hit-based spelling correction: suggest the alternative that has lots of hits.
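A sketch of the enumeration step in Python; the alternative sets are made-up stand-ins for the edit-distance retrieval:

```python
# Made-up candidate sets standing in for edit-distance retrieval.
alternatives = {
    "flew": ["fled", "flea"],
    "form": ["from", "fort"],
    "heathrow": [],
}
query = ["flew", "form", "heathrow"]

# All phrases with exactly one word "fixed" at a time.
candidates = [query[:i] + [alt] + query[i + 1:]
              for i, word in enumerate(query)
              for alt in alternatives[word]]
for phrase in candidates:
    print(" ".join(phrase))
# fled form heathrow / flea form heathrow / flew from heathrow / flew fort heathrow
# Rank the candidate phrases by hit count and suggest the top one.
```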

54 Exercise
Sec. 3.3.5 Exercise
Suppose that for "flew form Heathrow" we have 7 alternatives for flew, 19 for form and 3 for heathrow. How many "corrected" phrases will we enumerate in this scheme?

55 Another approach
Sec. 3.3.5 Another approach
Break the phrase query into a conjunction of biwords (Lecture 2). Look for biwords that need only one term corrected. Enumerate phrase matches and ... rank them!

56 General issues in spell correction
Sec. 3.3.5 General issues in spell correction
We enumerate multiple alternatives for "Did you mean?" and need to figure out which to present to the user. Use heuristics:
- The alternative hitting the most docs
- Query log analysis + tweaking, for especially popular, topical queries
Spell-correction is computationally expensive, so avoid running it routinely on every query; run it only on queries that matched few docs.

57 Soundex

58 Soundex
Sec. 3.4 Soundex
A class of heuristics to expand a query into phonetic equivalents. Language specific - mainly for names. E.g., chebyshev → tchebycheff. Invented for the U.S. census ... in 1918.

59 Soundex – typical algorithm
Sec. 3.4 Soundex – typical algorithm Turn every token to be indexed into a 4-character reduced form Do the same with query terms Build and search an index on the reduced forms (when the query calls for a soundex match)

60 Soundex – typical algorithm
Sec. 3.4 Soundex – typical algorithm
1. Retain the first letter of the word.
2. Change all occurrences of the following letters to '0' (zero): 'A', 'E', 'I', 'O', 'U', 'H', 'W', 'Y'.
3. Change letters to digits as follows:
   B, F, P, V → 1
   C, G, J, K, Q, S, X, Z → 2
   D, T → 3
   L → 4
   M, N → 5
   R → 6

61 Soundex continued
Sec. 3.4 Soundex continued
4. Remove one out of each pair of consecutive identical digits.
5. Remove all zeros from the resulting string.
6. Pad the resulting string with trailing zeros and return the first four positions, which will be of the form <uppercase letter> <digit> <digit> <digit>.
E.g., Herman becomes H655. Will hermann generate the same code? (Check with the sketch below.)
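A sketch of the whole algorithm in Python (assumes a non-empty, purely alphabetic term):

```python
def soundex(term: str) -> str:
    """4-character Soundex code per the slides' steps 1-6."""
    codes = {}
    for letters, digit in [("AEIOUHWY", "0"), ("BFPV", "1"),
                           ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            codes[ch] = digit
    word = term.upper()
    digits = [codes[ch] for ch in word]
    # Step 4: remove one out of each pair of consecutive identical digits.
    collapsed = [digits[0]]
    for d in digits[1:]:
        if d != collapsed[-1]:
            collapsed.append(d)
    # Steps 1, 5, 6: retain the first letter, drop zeros, pad with zeros.
    tail = [d for d in collapsed[1:] if d != "0"]
    return (word[0] + "".join(tail))[:4].ljust(4, "0")

print(soundex("Herman"), soundex("hermann"))   # H655 H655 -- same code
```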

62 What queries can we process?
We have:
- A positional inverted index with skip pointers
- A wild-card index
- Spell-correction
- Soundex
Queries such as: (SPELL(moriset) /3 toron*to) OR SOUNDEX(chaikofski)

63 Resources
Sec. 3.5 Resources
- IIR 3, MG 4.2
- Efficient spell retrieval:
  - K. Kukich. Techniques for automatically correcting words in text. ACM Computing Surveys 24(4), Dec 1992.
  - J. Zobel and P. Dart. Finding approximate matches in large lexicons. Software - Practice and Experience 25(3), March 1995.
  - Mikael Tillenius. Efficient Generation and Ranking of Spelling Error Corrections. Master's thesis, Royal Institute of Technology, Sweden.
- Nice, easy reading on spell correction: Peter Norvig, How to write a spelling corrector

