1
15-381 Artificial Intelligence
Information Retrieval (How to Power a Search Engine)
Jaime Carbonell
20 September 2001

Topics Covered:
"Bag of Words" Hypothesis
Vector Space Model & Cosine Similarity
Query Expansion Methods
2
Information Retrieval: The Challenge (1)
Text DB includes:
(1) Rainfall measurements in the Sahara continue to show a steady decline starting from the first measurements in 1961. In 1996 only 12mm of rain were recorded in upper Sudan, and 1mm in Southern Algiers...
(2) Dan Marino states that professional football risks losing the number one position in the hearts of fans across this land. Declines in TV audience ratings are cited...
(3) Alarming reductions in precipitation in desert regions are blamed for desert encroachment of previously fertile farmland in Northern Africa. Scientists measured both yearly precipitation and groundwater levels...
3
Information Retrieval: The Challenge (2)
User query states: "Decline in rainfall and impact on farms near Sahara"
Challenges:
How to retrieve (1) and (3) and not (2)?
How to rank (3) as best?
How to cope with no shared words?
4
Information Retrieval Assumptions (1)
Basic IR task:
There exists a document collection {D_j}
User enters an ad hoc query Q
Q correctly states the user's interest
User wants the subset {D_i} ⊆ {D_j} most relevant to Q
5
"Shared Bag of Words" assumption Every query = {w i } Every document = {w k }...where w i & w k in same Σ All syntax is irrelevant (e.g. word order) All document structure is irrelevant All meta-information is irrelevant (e.g. author, source, genre) => Words suffice for relevance assessment Information Retrieval Assumption (2)
6
Information Retrieval Assumptions (3)
Retrieval by shared words:
If Q and D_j share some w_i, then Relevant(Q, D_j)
If Q and D_j share all w_i, then Relevant(Q, D_j)
If Q and D_j share over K% of the w_i, then Relevant(Q, D_j)
7
Boolean Queries (1)
Industrial use of Silver
Q: silver
R: "The Count's silver anniversary..."
"Even the crash of '87 had a silver lining..."
"The Lone Ranger lived on in syndication..."
"Silver dropped to a new low in London..."
...
Q: silver AND photography
R: "Posters of Tonto and the Lone Ranger..."
"The Queen's Silver Anniversary photos..."
...
8
Boolean Queries (2)
Q: ((silver AND (NOT anniversary) AND (NOT lining) AND emulsion) OR (AgI AND crystal AND photography))
R: "Silver Iodide Crystals in Photography..."
"The emulsion was worth its weight in silver..."
...
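Such a query is pure set algebra over an inverted index. Below is a minimal sketch (not from the lecture) of that evaluation; the index contents and document IDs are invented for illustration.

```python
# Minimal sketch: evaluating a boolean query as set operations over an
# inverted index (term -> set of document IDs). All data here is invented.
index = {
    "silver":      {1, 2, 3},
    "anniversary": {1},
    "lining":      {2},
    "emulsion":    {3},
    "agi":         {4},
    "crystal":     {4},
    "photography": {4},
}
all_docs = {1, 2, 3, 4}

def AND(*sets): return set.intersection(*sets)
def OR(*sets):  return set.union(*sets)
def NOT(s):     return all_docs - s

t = index  # shorthand
result = OR(
    AND(t["silver"], NOT(t["anniversary"]), NOT(t["lining"]), t["emulsion"]),
    AND(t["agi"], t["crystal"], t["photography"]),
)
print(result)  # {3, 4}
```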
9
Boolean Queries (3)
Boolean queries are:
a) easy to implement
b) confusing to compose
c) seldom used (except by librarians)
d) prone to low recall
e) all of the above
10
Beyond the Boolean Boondoggle (1)
Desiderata (1):
Query must be natural for all users
Sentence, phrase, or word(s)
No AND's, OR's, NOT's, ...
No parentheses (no structure)
System focuses on important words
Q: I want laser printers now
11
Beyond the Boolean Boondoggle (2)
Desiderata (2):
Find what I mean, not just what I say
Q: cheap car insurance
(pAND (pOR "cheap" [1.0] "inexpensive" [0.9] "discount" [0.5])
      (pOR "car" [1.0] "auto" [0.8] "automobile" [0.9] "vehicle" [0.5])
      (pOR "insurance" [1.0] "policy" [0.3]))
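The slide does not define pAND/pOR precisely; one plausible reading scores a pOR as the weight of the best-matching alternative and a pAND as the product of its sub-scores. A sketch under that assumption, not *the* semantics:

```python
# Hedged sketch of one plausible semantics for the weighted query above:
# pOR scores a document by the best-matching synonym's weight; pAND
# combines sub-scores by multiplication. The example document is invented.
def p_or(doc_words, alternatives):
    # alternatives: list of (term, weight); score = best matched weight
    return max((w for term, w in alternatives if term in doc_words), default=0.0)

def p_and(*subscores):
    score = 1.0
    for s in subscores:
        score *= s
    return score

doc = {"inexpensive", "automobile", "insurance", "quotes"}
score = p_and(
    p_or(doc, [("cheap", 1.0), ("inexpensive", 0.9), ("discount", 0.5)]),
    p_or(doc, [("car", 1.0), ("auto", 0.8), ("automobile", 0.9), ("vehicle", 0.5)]),
    p_or(doc, [("insurance", 1.0), ("policy", 0.3)]),
)
print(score)  # 0.9 * 0.9 * 1.0 = 0.81
```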
12
The Vector Space Model (1)
Let Σ = [w_1, w_2, ..., w_n]
Let D_j = [c(w_1, D_j), c(w_2, D_j), ..., c(w_n, D_j)]
Let Q = [c(w_1, Q), c(w_2, Q), ..., c(w_n, Q)]
13
The Vector Space Model (2)
Initial Definition of Similarity:
S_I(Q, D_j) = Q · D_j
Normalized Definition of Similarity:
S_N(Q, D_j) = (Q · D_j) / (|Q| × |D_j|) = cos(Q, D_j)
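A minimal sketch of these two definitions in Python, using word-count vectors; the texts are abbreviated stand-ins for the Sahara/football documents from the challenge slides.

```python
# Minimal sketch of dot-product and cosine similarity over bags of words.
import math
from collections import Counter

def dot(q, d):
    # S_I(Q, D) = Q . D  (only shared words contribute)
    return sum(q[w] * d[w] for w in q)

def cosine(q, d):
    # S_N(Q, D) = (Q . D) / (|Q| x |D|)
    nq = math.sqrt(sum(c * c for c in q.values()))
    nd = math.sqrt(sum(c * c for c in d.values()))
    return dot(q, d) / (nq * nd) if nq and nd else 0.0

d1 = Counter("rainfall decline sahara rain measurements".split())
d2 = Counter("football fans tv ratings decline".split())
q = Counter("decline rainfall farms sahara".split())

print(round(cosine(q, d1), 3), round(cosine(q, d2), 3))  # d1 ranks above d2
```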
14
The Vector Space Model (3)
Relevance Ranking:
If S_N(Q, D_i) > S_N(Q, D_j)
Then D_i is more relevant than D_j to Q
Retrieve(k, Q, {D_j}) = Argmax_k over D_j in {D_j} of cos(Q, D_j), i.e. the k documents with highest cosine
15
Refinements to VSM (2)
Stop-Word Elimination:
Discard articles, auxiliaries, prepositions, ... typically the 100-300 most frequent small words
Reduces document length by 30-40%
Retrieval accuracy improves slightly (5-10%)
16
Refinements to VSM (3)
Proximity Phrases:
E.g.: "air force" => airforce
Found by high mutual information:
p(w_1 w_2) >> p(w_1) p(w_2)
p(w_1 & w_2 in k-window) >> p(w_1 in k-window) p(w_2 in same k-window)
Retrieval accuracy improves slightly (5-10%)
Too many phrases => inefficiency
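A sketch of the mutual-information test for adjacent bigrams (pointwise mutual information, the log2 of the ratio above); the toy corpus is invented.

```python
# Hedged sketch: scoring candidate proximity phrases by pointwise mutual
# information, PMI(w1, w2) = log2( p(w1 w2) / (p(w1) p(w2)) ). High PMI
# means the bigram occurs far more often than chance would predict.
import math
from collections import Counter

corpus = ("the air force base reported that the air force "
          "will expand the base near the air field").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(w1, w2):
    p12 = bigrams[(w1, w2)] / n_bi
    p1, p2 = unigrams[w1] / n_uni, unigrams[w2] / n_uni
    return math.log2(p12 / (p1 * p2)) if p12 else float("-inf")

print(round(pmi("air", "force"), 2))  # well above 0 => phrase candidate
```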
17
Refinements to VSM (4)
Words => Terms:
term = word | stemmed word | phrase
Use exactly the same VSM method on terms (vs. words)
18
Evaluating Information Retrieval (1)
Contingency table:

                 relevant   not relevant
retrieved            a            b
not retrieved        c            d
19
Evaluating Information Retrieval (2)
Precision: P = a/(a+b)
Recall: R = a/(a+c)
Accuracy = (a+d)/(a+b+c+d)
F1 = 2PR/(P+R)
Miss = c/(a+c) = 1 - R (false negative)
F/A = b/(a+b+c+d) (false positive)
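These measures follow directly from the four contingency counts; a small sketch with invented counts:

```python
# Minimal sketch computing the evaluation measures from the contingency
# counts a, b, c, d defined on the previous slide. The counts are invented.
def ir_metrics(a, b, c, d):
    total = a + b + c + d
    precision = a / (a + b)
    recall = a / (a + c)
    return {
        "P": precision,
        "R": recall,
        "Accuracy": (a + d) / total,
        "F1": 2 * precision * recall / (precision + recall),
        "Miss": c / (a + c),   # = 1 - R
        "F/A": b / total,      # false-alarm rate as defined above
    }

print(ir_metrics(a=30, b=10, c=20, d=940))
# P=0.75, R=0.6, Accuracy=0.97, F1~0.667, Miss=0.4, F/A=0.01
```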
20
Query Expansion (1)
Observations:
Longer queries often yield better results
User's vocabulary may differ from document vocabulary
Q: how to avoid heart disease
D: "Factors in minimizing stroke and cardiac arrest: Recommended dietary and exercise regimens"
Longer queries give the query and document vocabularies more chances to overlap, which helps recall.
21
Query Expansion (2)
Bridging the Gap:
Human query expansion (user or expert)
Thesaurus-based expansion
  Seldom works in practice (unfocused)
Relevance feedback
  Widens a thin bridge over the vocabulary gap
  Adds words from document space to query
Pseudo-relevance feedback
Local context analysis
22
Relevance Feedback
Rocchio Formula:
Q' = F[Q, D_ret]
F = weighted vector sum, such as:
W(t, Q') = α·W(t, Q) + β·W(t, D_rel) - γ·W(t, D_irr)
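A minimal sketch of this update over term-weight vectors; the weights and example vectors are invented (α=1.0, β=0.75, γ=0.15 is a common starting point in the IR literature, not a value from the slide).

```python
# Hedged sketch of the Rocchio update: boost terms from relevant docs,
# penalize terms from irrelevant docs. All vectors here are invented.
from collections import Counter, defaultdict

def rocchio(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    new_q = defaultdict(float)
    for t, w in query.items():
        new_q[t] += alpha * w
    for d in relevant:                       # centroid of relevant docs
        for t, w in d.items():
            new_q[t] += beta * w / len(relevant)
    for d in irrelevant:                     # centroid of irrelevant docs
        for t, w in d.items():
            new_q[t] -= gamma * w / len(irrelevant)
    return {t: w for t, w in new_q.items() if w > 0}  # drop negative weights

q = Counter({"heart": 1.0, "disease": 1.0})
rel = [Counter({"cardiac": 2, "arrest": 1, "heart": 1})]
irr = [Counter({"football": 2, "heart": 1})]
print(rocchio(q, rel, irr))
# 'cardiac' and 'arrest' enter the query; 'football' is pushed away
```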
23
Term Weighting Methods (1)
Salton's Tf*IDf:
Tf = term frequency in a document
Df = document frequency of term = # documents in collection with this term
IDf = Df^(-1)
24
Term Weighting Methods (2)
Salton's Tf*IDf:
TfIDf = f_1(Tf) · f_2(IDf)
E.g. f_1(Tf) = Tf · ave(|D_j|) / |D|
E.g. f_2(IDf) = log_2(IDf)
f_1 and f_2 can differ for Q and D
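A sketch in the spirit of these examples; exact choices of f_1 and f_2 vary, and the IDf here is smoothed to log2(N/Df) rather than the raw reciprocal (a common variant, since log2(1/Df) of a raw count would be negative). The collection is invented.

```python
# Hedged sketch of Tf*IDf: length-normalized term frequency times a
# log-scaled inverse document frequency.
import math
from collections import Counter

docs = [
    "rainfall decline sahara rainfall".split(),
    "football fans tv ratings".split(),
    "precipitation decline farmland".split(),
]
N = len(docs)
avg_len = sum(len(d) for d in docs) / N
df = Counter(t for d in docs for t in set(d))   # document frequency per term

def tfidf(term, doc):
    tf = doc.count(term)
    if tf == 0 or df[term] == 0:
        return 0.0
    f1 = tf * avg_len / len(doc)        # f_1: Tf scaled by ave(|D_j|)/|D|
    f2 = math.log2(N / df[term])        # f_2: log of (smoothed) inverse Df
    return f1 * f2

print(round(tfidf("rainfall", docs[0]), 3))  # rare, repeated term: high weight
print(round(tfidf("decline", docs[0]), 3))   # appears in 2 of 3 docs: lower
```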
25
Efficient Implementations of VSM (1)
Build an Inverted Index (next slide)
Filter all 0-product terms
Precompute IDF, per-document TF
...but remove stopwords first.
26
Efficient Implementations of VSM (3)
[term_i, IDF(term_i), <(doc_i, freq(term_i, doc_i)), (doc_j, freq(term_i, doc_j)), ...>]
or, with term positions:
[term_i, IDF(term_i), <(doc_i, freq(term_i, doc_i), [pos_1,i, pos_2,i, ...]), (doc_j, freq(term_i, doc_j), [pos_1,j, pos_2,j, ...]), ...>]
where pos_1,j indicates the first position of term_i in document j, and so on.
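A minimal sketch of building such a positional index; the documents are invented for illustration.

```python
# Minimal sketch of a positional inverted index: for each term, its
# document frequency plus postings of (doc_id, freq, positions).
from collections import defaultdict

docs = {
    1: "silver emulsion silver halide".split(),
    2: "silver anniversary photos".split(),
    3: "crystal photography".split(),
}

index = defaultdict(dict)               # term -> {doc_id: [positions]}
for doc_id, words in docs.items():
    for pos, w in enumerate(words):
        index[w].setdefault(doc_id, []).append(pos)

def postings(term):
    # returns (document frequency, [(doc_id, freq, positions), ...])
    entry = index.get(term, {})
    return len(entry), [(d, len(p), p) for d, p in sorted(entry.items())]

print(postings("silver"))
# (2, [(1, 2, [0, 2]), (2, 1, [0])])
```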
27
Generalized Vector Space Model (1)
Principles:
Define terms by their occurrence patterns in documents
Define query terms in the same way
Compute similarity by document-pattern overlap for terms in D and Q
Use standard cosine similarity and either binary or TfIDf weights
28
Generalized Vector Space Model (2)
Advantages:
Automatically calculates partial similarity
If "heart disease", "stroke", and "ventricular" co-occur in many documents, then a query containing only one of these terms gives partial credit to documents containing the others, in proportion to their document co-occurrence ratio.
No need to do query expansion or relevance feedback
29
GVSM, How it Works (1)
Represent the collection as a vector of documents:
Let C = [D_1, D_2, ..., D_m]
Represent each term by its distributional frequency:
Let t_i = [Tf(t_i, D_1), Tf(t_i, D_2), ..., Tf(t_i, D_m)]
Term-to-term similarity is computed as:
Sim(t_i, t_j) = cos(vec(t_i), vec(t_j))
Hence, highly co-occurring terms like "Arafat" and "PLO" will be treated as near-synonyms for retrieval.
30
GVSM, How it Works (2)
Query-document similarity is computed as before, Sim(Q, D) = cos(vec(Q), vec(D)), except that instead of the direct dot product we use a function of the term-to-term similarities above. For instance:
Sim(Q, D) = Σ_i [max_j sim(q_i, d_j)]
or, normalizing for document and query length:
Sim_norm(Q, D) = Σ_i [max_j sim(q_i, d_j)] / (|Q| × |D|)
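A sketch of GVSM retrieval under these formulas: each term is represented by its row of the Tf matrix, and query-document similarity sums the best term-to-term cosine per query term. The tiny collection is invented.

```python
# Hedged sketch of GVSM: term vectors are rows of the term-document Tf
# matrix, so terms that co-occur across documents become similar.
import math

docs = [
    "arafat met plo leaders".split(),
    "plo statement on talks".split(),
    "football season ratings".split(),
]
vocab = sorted({w for d in docs for w in d})
tf = {t: [d.count(t) for d in docs] for t in vocab}  # term -> doc-pattern vector

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def gvsm_sim(query_terms, doc_terms):
    # Sim(Q, D) = sum over query terms of the best-matching doc term
    return sum(max(cos(tf[q], tf[d]) for d in doc_terms)
               for q in query_terms if q in tf)

print(round(gvsm_sim(["arafat"], docs[1]), 3))
# > 0 although "arafat" never appears in docs[1]: it co-occurs with "plo"
```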
31
A Critique of Pure Relevance (1)
IR Maximizes Relevance:
Precision and recall are relevance measures
Quality of documents retrieved is ignored
32
A Critique of Pure Relevance (2)
Other Important Factors:
What about information novelty, timeliness, appropriateness, validity, comprehensibility, density, medium, ...?
In IR, we really want to maximize:
P(U(f_1, ..., f_n) | Q & {C} & U & H)
where Q = query, {C} = collection set, U = user profile, H = interaction history
...but we don't yet know how. Darn.
33
Maximal Marginal Relevance (1)
A crude first approximation: novelty => minimal redundancy
Weighted linear combination: (redundancy = cost, relevance = benefit)
Free parameters: k and λ
34
Maximal Marginal Relevance (2)
MMR(Q, C, R) = Argmax_k over d_i in C [ λ·S(Q, d_i) - (1-λ)·max over d_j in R of S(d_i, d_j) ]
35
Maximal Marginal Relevance (MMR) (3)
Computation of MMR reranking:
1. Standard IR retrieval of top-N docs: Let D_r = IR(D, Q, N)
2. Rank the d_i in D_r with max sim(d_i, Q) as top doc, i.e. Let Ranked = {d_i}
3. Let D_r = D_r \ {d_i}
4. While D_r is not empty, do:
   a. Find d_i with max MMR(D_r, Q, Ranked)
   b. Let Ranked = Ranked . d_i
   c. Let D_r = D_r \ {d_i}
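A runnable sketch of this loop; sim() is a stub cosine over bags of words standing in for S(Q, d), and λ=0.5 is an arbitrary illustrative choice. All documents are invented.

```python
# Minimal sketch of MMR reranking: relevance to the query minus
# redundancy with the documents already ranked.
import math
from collections import Counter

def sim(a, b):
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def mmr_rerank(query, docs, lam=0.5):
    remaining = list(docs)
    # Step 2: the most query-similar document is ranked first
    ranked = [max(remaining, key=lambda d: sim(query, d))]
    remaining.remove(ranked[0])          # Step 3
    # Step 4: repeatedly take the doc maximizing relevance minus redundancy
    while remaining:
        best = max(remaining,
                   key=lambda d: lam * sim(query, d)
                             - (1 - lam) * max(sim(d, r) for r in ranked))
        ranked.append(best)
        remaining.remove(best)
    return ranked

docs = ["rainfall decline sahara".split(),
        "sahara rainfall decline measured".split(),  # near-duplicate of doc 0
        "farmland loss northern africa".split()]
print(mmr_rerank("sahara rainfall".split(), docs))
# with lam=0.5 the near-duplicate drops below the novel third document
```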
36
Maximal Marginal Relevance (MMR) (4)
Applications:
Ranking retrieved documents from an IR engine
Ranking passages for inclusion in summaries
37
Document Summarization in a Nutshell (1)
Types of Summaries:

Task                                | Query-relevant (focused)        | Query-free (generic)
INDICATIVE, for filtering           | To filter search engine results | Short abstracts
(Do I read further?)                |                                 |
CONTENTFUL, for reading in lieu of  | To solve problems for busy      | Executive summaries
the full doc.                       | professionals                   |
38
Summarization as Passage Retrieval (1)
For Query-Driven Summaries:
1. Divide document into passages (e.g., sentences, paragraphs, FAQ-pairs, ...)
2. Use query to retrieve most relevant passages, or better, use MMR to avoid redundancy.
3. Assemble retrieved passages into a summary.
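As a usage example, these three steps can run on top of the sim() and mmr_rerank() sketches from the MMR slide above (paste them together to run); the document text and parameters are invented.

```python
# Usage sketch of summarization as passage retrieval, reusing the
# sim() and mmr_rerank() sketches defined on the MMR slide above.
def summarize(query, document, n_passages=2, lam=0.5):
    # 1. Divide the document into passages (here: a naive sentence split)
    passages = [s.lower().split() for s in document.split(".") if s.strip()]
    # 2. Rerank passages against the query with MMR to avoid redundancy
    ranked = mmr_rerank(query.lower().split(), passages, lam=lam)
    # 3. Assemble the top passages into a summary
    return ". ".join(" ".join(p) for p in ranked[:n_passages]) + "."

doc = ("Rainfall in the Sahara has declined since 1961. "
       "The decline in rainfall was confirmed by new measurements. "
       "Farmland near the desert is shrinking as a result.")
print(summarize("rainfall decline sahara", doc))
```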