Information Retrieval in Practice
Search Engines: Information Retrieval in Practice. All slides © Addison Wesley, 2008
Information Needs
An information need is the underlying cause of the query that a person submits to a search engine
- sometimes called an information problem, to emphasize that an information need is generally related to a task
Information needs are categorized using a variety of dimensions, e.g.:
- the number of relevant documents being sought
- the type of information that is needed
- the type of task that led to the requirement for information
Queries and Information Needs
A query can represent very different information needs
- may require different search techniques and ranking algorithms to produce the best rankings
A query can be a poor representation of the information need
- users may find it difficult to express the information need
- users are encouraged to enter short queries, both by the search engine interface and by the fact that long queries often fail to retrieve good results
Interaction
Interaction with the system occurs
- during query formulation and reformulation
- while browsing the results
Interaction is a key aspect of effective retrieval
- users can't change the ranking algorithm, but can change the results through interaction
- interaction helps refine the description of the information need
- e.g., the same initial query can reflect different information needs
- how does a user describe what they don't know?
ASK Hypothesis
Belkin et al. (1982) proposed a model called Anomalous State of Knowledge (ASK)
ASK hypothesis:
- it is difficult for people to define exactly what their information need is, because that information is a gap in their knowledge
- a search engine should look for information that fills those gaps
Interesting ideas, but little practical impact (yet)
Keyword Queries
Query languages in the past were designed for professional searchers (intermediaries)
Keyword Queries
Simple, natural language queries were designed to enable everyone to search
Current search engines do not (in general) perform well with natural language queries
People have been trained (in effect) to use keywords
- compare an average of about 2.3 words per web query to an average of 30 words per CQA (community question answering) query
Keyword selection is not always easy
- query refinement techniques can help
Query-Based Stemming
Make the stemming decision at query time rather than during indexing
- improved flexibility and effectiveness
The query is expanded using word variants
- documents are not stemmed
- e.g., "rock climbing" is expanded with "climb"; it is not stemmed to "climb"
Stem Classes
A stem class is the group of words that will be transformed into the same stem by the stemming algorithm
- generated by running the stemmer on a large corpus
- e.g., the Porter stemmer on TREC News
Stem Classes
Stem classes are often too big and inaccurate
Modify them using an analysis of word co-occurrence
- assumption: word variants that could substitute for each other should co-occur often in documents
Modifying Stem Classes
(the procedure shown on the original slide is not reproduced)
Modifying Stem Classes
Dice's coefficient is an example of a term association measure
- where n_x is the number of windows (or documents) containing x
Two vertices are in the same connected component of a graph if there is a path between them
- this forms word clusters
Example output of the modification process is shown on the original slide (not reproduced); a code sketch follows
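The formula image on the slide is not reproduced; Dice's coefficient for words a and b is 2·n_ab / (n_a + n_b). A minimal Python sketch of the refinement under these assumptions: document-level co-occurrence is available as sets of document ids per word, and the 0.1 threshold and all helper names are illustrative, not from the source.

```python
from itertools import combinations

def dice(n_a, n_b, n_ab):
    # Dice's coefficient: 2 * n_ab / (n_a + n_b)
    return 2 * n_ab / (n_a + n_b) if (n_a + n_b) else 0.0

def refine_stem_class(stem_class, doc_sets, threshold=0.1):
    """Split one stem class into connected components of its
    co-occurrence graph. doc_sets maps word -> set of doc ids."""
    words = sorted(stem_class)
    adj = {w: set() for w in words}
    for a, b in combinations(words, 2):
        n_ab = len(doc_sets[a] & doc_sets[b])
        if dice(len(doc_sets[a]), len(doc_sets[b]), n_ab) >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    # Each connected component (found by DFS) becomes a new stem class
    seen, clusters = set(), []
    for w in words:
        if w in seen:
            continue
        stack, comp = [w], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        clusters.append(sorted(comp))
    return clusters
```

Weakly associated variants end up in different components, so a class that wrongly mixes unrelated words sharing a stem gets split apart.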
Spell Checking
An important part of query processing
- 10-15% of all web queries have spelling errors
Errors include typical word processing errors, but also many other types (the examples on the original slide are not reproduced)
Spell Checking
Basic approach: suggest corrections for words not found in the spelling dictionary
Suggestions are found by comparing the word to words in the dictionary using a similarity measure
The most common similarity measure is edit distance
- the number of operations required to transform one word into the other
Edit Distance
Damerau-Levenshtein distance
- counts the minimum number of insertions, deletions, substitutions, or transpositions of single characters required
- the original slide's examples of word pairs at distance 1 and distance 2 are not reproduced; an implementation sketch follows
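A minimal dynamic-programming sketch of the restricted Damerau-Levenshtein distance ("restricted" means a transposed pair is not edited further, which is the common textbook formulation):

```python
def damerau_levenshtein(s, t):
    """Minimum number of insertions, deletions, substitutions,
    and adjacent transpositions turning s into t."""
    m, n = len(s), len(t)
    # d[i][j] = distance between s[:i] and t[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and s[i - 1] == t[j - 2]
                    and s[i - 2] == t[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

# e.g., damerau_levenshtein("fihs", "fish") == 1 (one transposition)
```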
Edit Distance
A number of techniques are used to speed up the calculation of edit distances
- restrict to words starting with the same character
- restrict to words of the same or similar length
- restrict to words that sound the same
The last option uses a phonetic code to group words
- e.g., Soundex
Soundex Code
(the table of Soundex rules on the original slide is not reproduced; a sketch of the algorithm follows)
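A minimal sketch of the classic Soundex algorithm: keep the first letter, map the remaining consonants to digit codes, skip vowels and repeated codes, and pad the result to four characters.

```python
SOUNDEX_CODES = {
    **dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
    **dict.fromkeys("dt", "3"), "l": "4",
    **dict.fromkeys("mn", "5"), "r": "6",
}

def soundex(word):
    """Classic Soundex: first letter plus three digits."""
    word = word.lower()
    first = word[0].upper()
    code, prev = "", SOUNDEX_CODES.get(word[0], "")
    for ch in word[1:]:
        digit = SOUNDEX_CODES.get(ch, "")
        if digit and digit != prev:      # skip repeats of the same code
            code += digit
        # vowels reset the repeat check; h and w do not
        prev = prev if ch in "hw" else digit
        if len(code) == 3:
            break
    return first + code.ljust(3, "0")

# e.g., soundex("robert") == soundex("rupert") == "R163"
```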
Spelling Correction Issues
Ranking corrections
- the "Did you mean..." feature requires accurate ranking of possible corrections
Context
- choosing the right suggestion depends on context (the other words)
- e.g., lawers → lowers, lawyers, layers, lasers, lagers, but trial lawers → trial lawyers
Run-on errors
- e.g., "mainscourcebank"
- missing spaces can be treated as just another single-character error in the right framework
Noisy Channel Model
The user chooses word w based on a probability distribution P(w)
- called the language model
- can capture context information, e.g. P(w1|w2)
The user writes word w, but the noisy channel causes word e to be written instead, with probability P(e|w)
- called the error model
- represents information about the frequency of spelling errors
Noisy Channel Model
Need to estimate the probability of a correction
- by Bayes' rule, P(w|e) ∝ P(e|w)P(w)
Estimate the language model using context, interpolating unigram and bigram probabilities
- P(w) = λP(w) + (1 − λ)P(w|wp), where wp is the previous word
- e.g., for "fish tink", both "tank" and "think" are likely corrections, but P(tank|fish) > P(think|fish)
A ranking sketch follows
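A minimal sketch of ranking candidate corrections under these assumptions: candidates are dictionary words within Damerau-Levenshtein distance 2 (reusing the function sketched earlier), the error model is uniform per edit distance (the 0.90/0.09/0.01 values are illustrative, not from the source), and the probability tables are hypothetical inputs.

```python
from math import log

LAMBDA = 0.5   # interpolation weight between unigram and bigram models

def language_model(word, prev_word, unigram_p, bigram_p):
    # P(w) = lambda * P(w) + (1 - lambda) * P(w | w_p)
    return (LAMBDA * unigram_p.get(word, 1e-9)
            + (1 - LAMBDA) * bigram_p.get((prev_word, word), 1e-9))

def correct(word, prev_word, vocab, unigram_p, bigram_p):
    """Rank candidate corrections by log P(e|w) + log P(w)."""
    # Uniform error model per edit distance (values are assumptions)
    error_p = {0: 0.90, 1: 0.09, 2: 0.01}
    scored = []
    for cand in vocab:
        d = damerau_levenshtein(word, cand)  # from the earlier sketch
        if d <= 2:
            score = log(error_p[d]) + log(language_model(
                cand, prev_word, unigram_p, bigram_p))
            scored.append((score, cand))
    return max(scored)[1] if scored else word

# e.g., correct("tink", "fish", vocab, unigram_p, bigram_p) should prefer
# "tank" over "think" when P(tank|fish) > P(think|fish)
```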
Noisy Channel Model
Language model probabilities are estimated using the corpus and the query log
Both simple and complex methods have been used for estimating the error model
- simple approach: assume all words within the same edit distance have the same probability; only edit distances 1 and 2 are considered
- more complex approach: incorporate estimates based on common typing errors
Example Spellcheck Process
(the worked example on the original slide is not reproduced)
The Thesaurus
Used in early search engines as a tool for indexing and query formulation
- specified preferred terms and the relationships between them
- also called a controlled vocabulary
Particularly useful for query expansion
- adding synonyms or more specific terms using query operators based on the thesaurus can improve search effectiveness
MeSH Thesaurus
(the example MeSH entry on the original slide is not reproduced)
Query Expansion
A variety of automatic or semi-automatic query expansion techniques have been developed
- the goal is to improve effectiveness by matching related terms
- semi-automatic techniques require user interaction to select the best expansion terms
Query suggestion is a related technique
- it presents alternative queries, not necessarily queries with more terms
Query Expansion
Approaches are usually based on an analysis of term co-occurrence
- either in the entire document collection, a large collection of queries, or the top-ranked documents in a result list
- query-based stemming is also an expansion technique
Automatic expansion based on a general thesaurus is not effective
- it does not take context into account
Term Association Measures
Dice's Coefficient
Mutual Information
(the formulas on the original slide are reconstructed below)
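Reconstructed from the standard definitions that accompany these slides (n_a and n_b count the windows or documents containing a and b, n_ab counts those containing both, N is the total; the rank-equivalent forms drop monotonic transformations):

```latex
\text{Dice's coefficient:}\quad \frac{2\,n_{ab}}{n_a + n_b}
\;\;\overset{\text{rank}}{=}\;\; \frac{n_{ab}}{n_a + n_b}
\qquad
\text{MIM:}\quad \log\frac{P(a,b)}{P(a)\,P(b)} = \log\!\left(N\,\frac{n_{ab}}{n_a\,n_b}\right)
\;\;\overset{\text{rank}}{=}\;\; \frac{n_{ab}}{n_a\,n_b}
```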
Term Association Measures
The mutual information measure favors low-frequency terms
Expected Mutual Information Measure (EMIM) compensates for this
- actually only one part of the full EMIM, focused on word occurrence (formula reconstructed below)
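Reconstructed from the standard definition, keeping only the word-occurrence part the slide refers to:

```latex
\text{EMIM:}\quad P(a,b)\,\log\frac{P(a,b)}{P(a)\,P(b)}
\;\;\overset{\text{rank}}{=}\;\; n_{ab}\,\log\!\left(N\,\frac{n_{ab}}{n_a\,n_b}\right)
```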
Term Association Measures
Pearson's Chi-squared (χ²) measure
- compares the number of co-occurrences of two words with the expected number of co-occurrences if the two words were independent
- normalizes this comparison by the expected number
- also a limited form, focused on word co-occurrence (formula reconstructed below)
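Reconstructed from the standard definition, where the expected co-occurrence count under independence is N·(n_a/N)·(n_b/N):

```latex
\chi^2 \;=\; \frac{\left(n_{ab} - N\cdot\frac{n_a}{N}\cdot\frac{n_b}{N}\right)^{2}}{N\cdot\frac{n_a}{N}\cdot\frac{n_b}{N}}
```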
Association Measure Summary
(the summary table on the original slide is not reproduced; see the formulas above)
Association Measure Example
Most strongly associated words for "tropical" in a collection of TREC news stories; co-occurrence counts are measured at the document level (table not reproduced)
Association Measure Example
Most strongly associated words for "fish" in a collection of TREC news stories (table not reproduced)
Association Measure Example
Most strongly associated words for "fish" in a collection of TREC news stories; co-occurrence counts are measured in windows of 5 words (table not reproduced)
Association Measures
Associated words found this way are of little use for expanding the query "tropical fish"
Expansion based on the whole query takes context into account
- e.g., using Dice with the term "tropical fish" gives the following highly associated words: goldfish, reptile, aquarium, coral, frog, exotic, stripe, regent, pet, wet
This is impractical to precompute for all possible queries; other approaches are used to achieve the same effect
Other Approaches
Pseudo-relevance feedback
- expansion terms are based on the top retrieved documents for the initial query
Context vectors
- represent words by the words that co-occur with them
- e.g., the top 35 most strongly associated words for "aquarium" (using Dice's coefficient; list not reproduced)
- rank words for a query by ranking their context vectors
Other Approaches
Query logs
- the best source of information about queries and related terms
- short pieces of text plus click data
- e.g., the most frequent words in queries containing "tropical fish" from the MSN log: stores, pictures, live, sale, types, clipart, blue, freshwater, aquarium, supplies
Query suggestion is based on finding similar queries
- queries are grouped based on click data
Relevance Feedback
The user identifies relevant (and maybe non-relevant) documents in the initial result list
The system modifies the query using terms from those documents and re-ranks the documents
- an example of a simple machine learning algorithm using training data
- but with very little training data
Pseudo-relevance feedback simply assumes the top-ranked documents are relevant; no user input is required
Relevance Feedback Example
Top 10 documents for "tropical fish" (result list not reproduced)
Relevance Feedback Example
If we assume the top 10 documents are relevant, the most frequent terms are (with frequencies): a (926), td (535), href (495), http (357), width (345), com (343), nbsp (316), www (260), tr (239), htm (233), class (225), jpg (221)
- too many stopwords and HTML expressions
Using only the snippets and removing stopwords: tropical (26), fish (28), aquarium (8), freshwater (5), breeding (4), information (3), species (3), tank (2), Badman's (2), page (2), hobby (2), forums (2)
Relevance Feedback Example
If document 7 ("Breeding tropical fish") is explicitly indicated to be relevant, the most frequent terms are: breeding (4), fish (4), tropical (4), marine (2), pond (2), coldwater (2), keeping (1), interested (1)
The specific weights and scoring methods used for relevance feedback depend on the retrieval model; a classic example follows
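The slides do not name a specific weighting scheme; Rocchio's algorithm for the vector space model is the classic instance, sketched here with its conventional parameter values (the α, β, γ defaults and the dict-based vectors are illustrative):

```python
def rocchio(query_vec, rel_docs, nonrel_docs,
            alpha=1.0, beta=0.75, gamma=0.15):
    """New query = alpha*q + beta*centroid(rel) - gamma*centroid(nonrel).
    Vectors are dicts mapping term -> weight."""
    new_q = {t: alpha * w for t, w in query_vec.items()}
    for docs, coeff in ((rel_docs, beta), (nonrel_docs, -gamma)):
        if not docs:
            continue
        for doc in docs:
            for t, w in doc.items():
                new_q[t] = new_q.get(t, 0.0) + coeff * w / len(docs)
    # Negative weights are usually dropped
    return {t: w for t, w in new_q.items() if w > 0}

# e.g., feeding back document 7's terms (breeding, fish, tropical, ...)
# raises their weights in the "tropical fish" query vector
```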
Relevance Feedback
Both relevance feedback and pseudo-relevance feedback are effective, but they are not used in many applications
- pseudo-relevance feedback has reliability issues, especially with queries that don't retrieve many relevant documents
Some applications do use relevance feedback
- e.g., filtering and "more like this"
Query suggestion is more popular
- it may be less accurate, but it can work even when the initial query fails
Context and Personalization
If a query has the same words as another query, the results will be the same regardless of
- who submitted the query
- why the query was submitted
- where the query was submitted
- what other queries were submitted in the same session
These other factors (the context) could have a significant impact on relevance
- but they are difficult to incorporate into ranking
User Models
Generate user profiles based on the documents that the person looks at
- such as web pages visited, messages, or word processing documents on the desktop
Modify queries using words from the profile
Generally not effective
- profiles are imprecise, and information needs can change significantly
Query Logs
Query logs provide important contextual information that can be used effectively
Context in this case means
- previous queries that are the same
- previous queries that are similar
- query sessions that include the same query
Query history for individuals could be used for caching
Local Search
Location is context
Local search uses geographic information to modify the ranking of search results
- location derived from the query text
- location of the device where the query originated
Examples
- "underworld 3 cape cod"
- "underworld 3" submitted from a mobile device in Hyannis
Local Search
Identify the geographic region associated with web pages
- use location metadata that has been manually added to the document, or
- identify locations such as place names, city names, or country names in the text
Identify the geographic region associated with the query
- 10-15% of queries contain some location reference
Rank web pages using location information in addition to text- and link-based features
Extracting Location Information
A type of information extraction
- the ambiguity and significance of locations are issues
Location names are mapped to specific regions and coordinates
Matching is done by inclusion and by distance
Snippet Generation
A snippet is a query-dependent document summary
A simple summarization approach
- rank each sentence in a document using a significance factor
- select the top sentences for the summary
- first proposed by Luhn in the 1950s
Sentence Selection
The significance factor for a sentence is calculated based on the occurrence of significant words
If f_{d,w} is the frequency of word w in document d, then w is a significant word if it is not a stopword and its frequency exceeds a threshold based on s_d, the number of sentences in document d (the slide's formula is reconstructed below)
Text is bracketed by significant words (with a limit on the number of non-significant words in a bracket)
Sentence Selection
The significance factor for a bracketed text span is computed by dividing the square of the number of significant words in the span by the total number of words in the span
- e.g., for a span with 4 significant words among 7 total words, significance factor = 4²/7 ≈ 2.3
A code sketch follows
Snippet Generation
Involves more features than just the significance factor; e.g., for a news story, one could use
- whether the sentence is a heading
- whether it is the first or second line of the document
- the total number of query terms occurring in the sentence
- the number of unique query terms in the sentence
- the longest contiguous run of query words in the sentence
- a density measure of query words (the significance factor)
A weighted combination of features is used to rank sentences
Snippet Generation
Web pages are less structured than news stories
- it can be difficult to find good summary sentences
Snippet sentences are often selected from other sources
- metadata associated with the web page, e.g., <meta name="description" content=...>
- external sources such as web directories, e.g., the Open Directory Project
Snippets can be generated from the text of pages such as Wikipedia
Snippet Guidelines
All query terms should appear in the summary, showing their relationship to the retrieved page
When query terms are present in the title, they need not be repeated in the snippet
- this allows snippets that do not contain query terms
Query terms should be highlighted in URLs
Snippets should be readable text, not lists of keywords
Advertising
Sponsored search: advertising presented with search results
Contextual advertising: advertising presented when browsing web pages
Both involve finding the most relevant advertisements in a database
An advertisement usually consists of a short text description and a link to a web page describing the product or service in more detail
Searching Advertisements
Factors involved in ranking advertisements
- similarity of the text content to the query
- bids for keywords in the query
- popularity of the advertisement
Because there is only a small amount of text in an advertisement
- dealing with vocabulary mismatch is important
- expansion techniques are effective
Example Advertisements
Advertisements retrieved for the query "fish tank" (not reproduced)
Searching Advertisements
Pseudo-relevance feedback
- expand the query and/or the document using the Web
- use the ad text or the query for pseudo-relevance feedback
- rank exact matches first, followed by stem matches, followed by expansion matches
Query reformulation based on search sessions
- learn associations between words and phrases based on co-occurrence in search sessions
Clustering Results
Result lists often contain documents related to different aspects of the query topic
Clustering is used to group related documents to simplify browsing
Example clusters for the query "tropical fish" (not reproduced)
Result List Example
Top 10 documents for "tropical fish" (not reproduced)
Clustering Results
Requirements
- Efficiency: clusters must be specific to each query and are based on the top-ranked documents for that query; typically based on snippets
- Easy to understand: it can be difficult to assign good labels to groups; monothetic vs. polythetic classification
Types of Classification
Monothetic
- every member of a class has the property that defines the class
- the typical assumption made by users
- easy to understand
Polythetic
- members of a class share many properties, but there is no single defining property
- most clustering algorithms (e.g., K-means) produce this type of output
Classification Example
(the table of documents D1-D4 and their properties is not reproduced)
Possible monothetic classification: {D1,D2} (labeled using property a) and {D2,D3} (labeled using property e)
Possible polythetic classification: {D2,D3,D4} and {D1}; what labels describe these clusters?
Result Clusters
A simple algorithm
- group results based on the words in their snippets (sketched below)
Refinements
- use phrases
- use more features: whether phrases occurred in titles or snippets, the length of the phrase, the collection frequency of the phrase, and the overlap of the resulting clusters
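A minimal sketch of the simple algorithm, producing monothetic (and possibly overlapping) clusters labeled by a shared snippet word; the function name, size limits, and stopword handling are illustrative:

```python
from collections import defaultdict

def cluster_results(snippets, stopwords, min_size=2, max_clusters=10):
    """snippets: list of (doc_id, snippet_text) pairs. Groups documents
    that share a frequent snippet word; the word labels the cluster."""
    groups = defaultdict(set)
    for doc_id, text in snippets:
        for word in set(text.lower().split()):
            if word not in stopwords:
                groups[word].add(doc_id)
    # Keep the largest groups, labeled by their shared word
    ranked = sorted(groups.items(), key=lambda kv: -len(kv[1]))
    return [(label, docs) for label, docs in ranked
            if len(docs) >= min_size][:max_clusters]

# e.g., for "tropical fish" results this might produce overlapping clusters
# labeled "aquarium", "breeding", "supplies", ...
```

Because every document in a cluster contains the label word, the grouping is monothetic, which matches the user expectation noted on the earlier slide.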
Faceted Classification
A set of categories, usually organized into a hierarchy, together with a set of facets that describe the important properties associated with the category
Manually defined
- potentially less adaptable than dynamic classification
Easy to understand
- commonly used in e-commerce
Example Faceted Classification
Categories for "tropical fish" (not reproduced)
Example Faceted Classification
Subcategories and facets for "Home & Garden" (not reproduced)
Cross-Language Search
Query in one language, retrieve documents in multiple other languages
Involves query translation, and probably document translation
Query translation can be done using bilingual dictionaries
Document translation requires more sophisticated statistical translation models
- similar to some retrieval models
Cross-Language Search
(the architecture diagram on the original slide is not reproduced)
Statistical Translation Models
The models require parallel corpora for training
- probability estimates are based on aligned sentences
Translation of unusual words and phrases is a problem
- transliteration techniques are also used
- e.g., Qathafi, Kaddafi, Qadafi, Gadafi, Gaddafi, Kathafi, Kadhafi, Qadhafi, Qazzafi, Kazafi, Qaddafy, Qadafy, Quadhaffi, Gadhdhafi, al-Qaddafi, Al-Qaddafi
Translation
Web search engines also use translation
- e.g., for the query "pecheur france", a translation link translates the web page
- this uses statistical machine translation models