Scoring and Ranking 198:541

Scoring: Thus far, our queries have all been Boolean: docs either match or not. Good for expert users with a precise understanding of their needs and the corpus, and for applications that can consume 1000's of results. Not good for (the majority of) users with poor Boolean formulation of their needs; most users don't want to wade through 1000's of results – cf. the use of web search engines.

Scoring: We wish to return, in order, the documents most likely to be useful to the searcher. How can we rank-order the docs in the corpus with respect to a query? Assign a score – say in [0,1] – to each doc for each query. Begin with a perfect world – no spammers, nobody stuffing keywords into a doc to make it match queries. More on "adversarial IR" under web search.

Linear zone combinations: First generation of scoring methods: use a linear combination of Booleans, e.g., Score = 0.6*<Boolean match in zone 1> + 0.3*<Boolean match in zone 2> + … Each expression such as <Boolean match in zone i> takes on a value in {0,1}, and the weights sum to 1, so the overall score is in [0,1]. For this example the scores can only take on a finite set of values – what are they?

Linear zone combinations: In fact, the expressions between <> on the last slide could be any Boolean query. Who generates the Score expression (with weights such as 0.6 etc.)? In uncommon cases, the user, in the UI; most commonly, a query parser that takes the user's Boolean query and runs it on the indexes for each zone.

General idea: We are given a weight vector whose components sum up to 1; there is a weight for each zone/field. Given a Boolean query, we assign a score to each doc by adding up the weighted contributions of the zones/fields. Typically, users want to see the K highest-scoring docs.
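
As a concrete illustration of this general idea, here is a minimal Python sketch of weighted zone scoring; the zone names, the weights, and the AND-style match inside each zone are illustrative assumptions, not details from the slides.

```python
# Minimal sketch (assumptions throughout): weighted zone scoring for one document.
def weighted_zone_score(doc_zones, query_terms, weights):
    """doc_zones: dict zone -> set of terms in that zone of the doc.
    weights: dict zone -> weight, assumed to sum to 1.
    Returns a score in [0, 1]: the sum of the weights of the zones that match."""
    score = 0.0
    for zone, weight in weights.items():
        # Boolean zone match: 1 if every query term occurs in the zone, else 0
        matches = all(t in doc_zones.get(zone, set()) for t in query_terms)
        score += weight * (1 if matches else 0)
    return score

doc = {"title": {"bill", "rights"}, "body": {"bill", "of", "rights", "congress"}}
print(weighted_zone_score(doc, ["bill", "rights"], {"title": 0.6, "body": 0.4}))  # 1.0
```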

Where do these weights come from? Machine-learned relevance: given a test corpus, a suite of test queries, and a set of relevance judgments, learn a set of weights such that the relevance judgments are matched. Can be formulated as ordinal regression.

Full text queries: We just scored the Boolean query bill OR rights. Most users are more likely to type bill rights or bill of rights. How do we interpret these full text queries? No Boolean connectives; of several query terms, some may be missing in a doc; only some query terms may occur in the title, etc.

Full text queries: To use zone combinations for free text queries, we need a way of assigning a score to a query/document pair. Zero query terms in the document should mean a zero score; more query terms in the document should mean a higher score. Scores don't have to be Boolean. We will look at some alternatives now.

Incidence matrices: Recall: a document (or a zone in it) is a binary vector X in {0,1}^V; the query is also such a vector Y. Score: the overlap measure |X ∩ Y|, the number of query terms that occur in the document.
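
A minimal Python sketch of overlap scoring over binary term-incidence vectors, here represented as sets; the document term sets below are toy data, not the actual plays.

```python
# Minimal sketch (toy data): overlap measure over binary incidence vectors.
def overlap_score(doc_terms, query_terms):
    """Number of distinct query terms that appear in the document."""
    return len(set(query_terms) & set(doc_terms))

play_1 = {"ides", "of", "march", "brutus", "caesar"}   # toy term set
play_2 = {"of", "march", "storm"}                      # toy term set
print(overlap_score(play_1, ["ides", "of", "march"]))  # 3
print(overlap_score(play_2, ["ides", "of", "march"]))  # 2
```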

Example: On the query ides of march, Shakespeare's Julius Caesar has a score of 3. All other Shakespeare plays have a score of 2 (because they contain march) or 1. Thus in a rank order, Julius Caesar would come out on top.

Overlap matching: What's wrong with the overlap measure? It doesn't consider: term frequency in the document; term scarcity in the collection (document mention frequency) – of is more common than ides or march; length of documents (and queries: the score is not normalized).

Scoring: density-based. Thus far: position and overlap of terms in a doc – title, author, etc. Obvious next idea: if a document talks about a topic more, then it is a better match. This applies even when we only have a single query term. A document is relevant if it has a lot of the query terms. This leads to the idea of term weighting.

Term weighting

Term-document count matrices: Consider the number of occurrences of a term in a document (the bag of words model). Each document is a vector in ℕ^V: a column of the term-document count matrix.

Bag of words view of a doc: Thus the doc "John is quicker than Mary" is indistinguishable from the doc "Mary is quicker than John".

Counts vs. frequencies: Consider again the ides of march query. Julius Caesar has 5 occurrences of ides; no other play has ides. march occurs in over a dozen plays. All the plays contain of. By this scoring measure, the top-scoring play is likely to be the one with the most ofs.

Digression: terminology. WARNING: In a lot of IR literature, "frequency" is used to mean "count". Thus term frequency in IR literature is used to mean the number of occurrences in a doc, not divided by document length (which would actually make it a frequency). We will conform to this misnomer: in saying term frequency we mean the number of occurrences of a term in a document.

Term frequency tf: Long docs are favored because they're more likely to contain query terms. We can fix this to some extent by normalizing for document length. But is raw tf the right measure?

Weighting term frequency: tf. What is the relative importance of 0 vs. 1 occurrence of a term in a doc? 1 vs. 2 occurrences? 2 vs. 3 occurrences? … Unclear: while it seems that more is better, a lot isn't proportionally better than a few. We can just use raw tf. Another option commonly used in practice is log-scaled term frequency: wf_{t,d} = 1 + log tf_{t,d} if tf_{t,d} > 0, and 0 otherwise.
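
A minimal sketch of the log-scaled tf weight just described; the base of the logarithm is a free choice and not specified here.

```python
import math

# Minimal sketch: sub-linear (log-scaled) term-frequency weighting.
def wf(tf):
    """1 + log(tf) for tf > 0, else 0 (natural log here; base 10 is also common)."""
    return 1.0 + math.log(tf) if tf > 0 else 0.0

print(wf(0), wf(1), wf(2), wf(10))  # 0.0 1.0 ~1.69 ~3.30
```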

Score computation: Score for a query q: Score(q, d) = Σ_{t ∈ q} tf_{t,d} [Note: 0 if no query terms occur in the document]. We can use wf instead of tf in the above. This still doesn't consider term scarcity in the collection (ides is rarer than of).

Weighting should depend on the term overall. Which of these tells you more about a doc: 10 occurrences of hernia, or 10 occurrences of the? We would like to attenuate the weight of a common term. But what is "common"? Suggestion: look at collection frequency (cf), the total number of occurrences of the term in the entire collection of documents.

Document frequency: But document frequency (df) may be better: df = number of docs in the corpus containing the term. For example, words like ferrari and insurance can have similar collection frequencies yet very different document frequencies. Document/collection frequency weighting is only possible in a known (static) collection. So how do we make use of df?

tf x idf term weights: The tf x idf measure combines: term frequency (tf) or wf, some measure of term density in a doc; and inverse document frequency (idf), a measure of the informativeness of a term: its rarity across the whole corpus. It could just be the inverse of the raw count of documents the term occurs in (idf_i = 1/df_i), but by far the most commonly used version is idf_i = log(N/df_i), where N is the total number of documents. See Kishore Papineni, NAACL 2, 2002 for theoretical justification.

Summary: tf x idf (or tf.idf). Assign a tf.idf weight to each term i in each document d: w_{i,d} = tf_{i,d} × log(N/df_i). The weight increases with the number of occurrences within a doc and increases with the rarity of the term across the whole corpus. What is the weight of a term that occurs in all of the docs?
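
A minimal tf.idf sketch over a toy corpus; the three documents and the choice of natural log are illustrative assumptions, not the slides' data.

```python
import math
from collections import Counter

# Minimal tf.idf sketch (toy corpus; log base is a free choice).
docs = ["new york times", "new york post", "los angeles times"]
tokenized = [d.split() for d in docs]
N = len(tokenized)

# df: number of documents containing each term
df = Counter(term for doc in tokenized for term in set(doc))

def tf_idf(doc_tokens):
    tf = Counter(doc_tokens)
    return {t: tf[t] * math.log(N / df[t]) for t in tf}

for doc, toks in zip(docs, tokenized):
    print(doc, "->", {t: round(w, 3) for t, w in tf_idf(toks).items()})
# A term occurring in every doc would get idf = log(N/N) = 0, hence weight 0.
```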

Inverse Document Frequency: IDF provides high values for rare words and low values for common words. For a collection of N documents, a term appearing in every document gets idf = log(N/N) = 0, while a term appearing in only one document gets the maximum value, log N.

Real-valued term-document matrices: A function (scaling) of the count of a word in a document – still a bag of words model. Each document is a vector in ℝ^V; here, log-scaled tf.idf. Note the entries can be > 1!

Documents as vectors: Each doc j can now be viewed as a vector of wf × idf values, one component for each term. So we have a vector space: terms are axes, docs live in this space; even with stemming, it may have 20,000+ dimensions. (The corpus of documents gives us a matrix, which we could also view as a vector space in which words live – transposable data.)

Why turn docs into vectors? First application: Query-by-example Given a doc d, find others “like” it. Now that d is a vector, find vectors (docs) “near” it.

Intuition: Postulate: documents that are "close together" in the vector space talk about the same things. [Figure: documents d1–d5 plotted as vectors over term axes t1, t2, t3, with angles θ and φ between them.]

Desiderata for proximity: If d1 is near d2, then d2 is near d1. If d1 is near d2, and d2 is near d3, then d1 is not far from d3. No doc is closer to d than d itself.

First cut: Idea: the distance between d1 and d2 is the length of the vector |d1 – d2|, i.e., Euclidean distance. Why is this not a great idea? We still haven't dealt with the issue of length normalization: short documents would be more similar to each other by virtue of length, not topic. However, we can implicitly normalize by looking at angles instead.

Cosine similarity: The proximity between vectors d1 and d2 is captured by the cosine of the angle θ between them. Note: this is a similarity, not a distance. [Figure: d1 and d2 as vectors over term axes t1, t2, t3, separated by angle θ.]

Cosine similarity: A vector can be normalized (given a length of 1) by dividing each of its components by its length – here we use the L2 norm, ||d||_2 = √(Σ_i d_i²). This maps vectors onto the unit sphere: Σ_i w_{i,j}² = 1 for each doc j. Then longer documents don't get more weight.

Cosine similarity: Cosine of the angle between two vectors: sim(d_j, d_k) = (d_j · d_k) / (|d_j| |d_k|) = Σ_i w_{i,j} w_{i,k} / (√(Σ_i w_{i,j}²) √(Σ_i w_{i,k}²)). The denominator involves the lengths of the vectors (normalization).

Normalized vectors: For normalized vectors, the cosine is simply the dot product: cos(d_j, d_k) = d_j · d_k.
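
A minimal cosine-similarity sketch; the two vectors below are toy weight vectors, not data from the slides.

```python
import math

# Minimal sketch: cosine similarity, and the normalized-vector shortcut.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def normalize(u):
    n = math.sqrt(sum(a * a for a in u))
    return [a / n for a in u]

d1, d2 = [3.0, 0.0, 2.0], [1.0, 1.0, 1.0]
print(cosine(d1, d2))
# For pre-normalized vectors, the cosine reduces to a plain dot product:
n1, n2 = normalize(d1), normalize(d2)
print(sum(a * b for a, b in zip(n1, n2)))  # same value
```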

Example: Docs: Austen's Sense and Sensibility (SaS) and Pride and Prejudice (PaP); Brontë's Wuthering Heights (WH). Using length-normalized tf weights, cos(SaS, PaP) comes out very close to 1, while cos(SaS, WH) = 0.889: the two Austen novels are far more similar to each other than either is to Wuthering Heights.

Queries in the vector space model: Central idea: the query as a vector. We regard the query as a short document and return the documents ranked by the closeness of their vectors to the query vector. Note that the query vector d_q is very sparse!

Summary: What's the point of using vector spaces? A well-formed algebraic space for retrieval. Key: a user's query can be viewed as a (very) short document. The query becomes a vector in the same space as the docs, and we can measure each doc's proximity to it. This gives a natural measure of scores/ranking – no longer Boolean. Queries are expressed as bags of words.

Interaction: vectors and phrases. Scoring phrases doesn't fit naturally into the vector space world: e.g., "tangerine trees", "marmalade skies". Positional indexes don't calculate or store tf.idf information for "tangerine trees". Biword indexes treat certain phrases as terms; for these, we can pre-compute tf.idf, though there is a theoretical problem of correlated dimensions. Problem: we cannot expect the end-user formulating queries to know what phrases are indexed. We can use a positional index to boost or ensure phrase occurrence.

Vectors and Boolean queries: Vectors and Boolean queries really don't work together very well. In the space of terms, vector proximity selects by spheres: e.g., all docs having cosine similarity ≥ 0.5 to the query. Boolean queries, on the other hand, select by (hyper-)rectangles and their unions/intersections. Round peg, square hole.

Processing Ranked Search Queries

Encoding document frequencies: Add tf_{t,d} to postings lists – now almost always as frequency, scaled at runtime. Overall, this requires little additional space. Example dictionary-plus-postings layout (each posting is docID,tf): abacus (df 8): 1,2 → 7,3 → 83,1 → 87,2 → …; aargh (df 10): 1,1 → 5,1 → 13,1 → 17,1 → …; acacia (df 35): 7,1 → 8,2 → 40,1 → 97,3 → …
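
A minimal sketch of an in-memory inverted index that stores tf with each posting; the pairing of terms to postings lists follows the order in the flattened figure above and is an assumption.

```python
# Minimal sketch (toy pairing assumption): dictionary entries with df, postings as (docID, tf).
postings = {
    "abacus": {"df": 8,  "postings": [(1, 2), (7, 3), (83, 1), (87, 2)]},
    "aargh":  {"df": 10, "postings": [(1, 1), (5, 1), (13, 1), (17, 1)]},
    "acacia": {"df": 35, "postings": [(7, 1), (8, 2), (40, 1), (97, 3)]},
}

def term_postings(term):
    entry = postings.get(term)
    return entry["postings"] if entry else []

print(term_postings("abacus"))  # [(1, 2), (7, 3), (83, 1), (87, 2)]
```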

Computing the k largest cosines: selection vs. sorting. Typically we want to retrieve the top k docs (in the cosine ranking for the query), not to totally order all docs in the corpus. Can we pick off the docs with the k highest cosines?
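
A minimal sketch of selecting the k highest-scoring docs with a heap rather than fully sorting all scores; the scores below are toy values.

```python
import heapq

# Minimal sketch: top-k selection without a full sort (toy cosine scores).
scores = {1: 0.42, 2: 0.91, 3: 0.13, 4: 0.77, 5: 0.55}   # docID -> cosine
k = 3
top_k = heapq.nlargest(k, scores.items(), key=lambda item: item[1])
print(top_k)  # [(2, 0.91), (4, 0.77), (5, 0.55)]
```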

Limiting the accumulators: best m candidates. Preprocess: pre-compute, for each term, its m nearest docs (treat each term as a 1-term query) – lots of preprocessing. Result: a "preferred list" for each term. Search: for a t-term query, take the union of the t preferred lists – call this set S, where |S| ≤ mt. Compute cosines from the query to only the docs in S, and choose the top k. We need to pick m > k for this to work well empirically.
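
A minimal sketch of the search step of the best-m-candidates scheme; the preferred lists below are toy data standing in for the precomputed per-term nearest docs.

```python
# Minimal sketch (toy data): candidate set from per-term "preferred lists".
preferred = {
    "ides":  [7, 42, 3],     # m nearest docs for each term, precomputed offline
    "march": [42, 11, 7],
}

def candidate_set(query_terms):
    """Union of the preferred lists of the query terms: |S| <= m * t."""
    s = set()
    for t in query_terms:
        s.update(preferred.get(t, []))
    return s

S = candidate_set(["ides", "march"])
print(S)  # cosines are then computed only for docs in S, and the top k kept
```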

Evaluating IR Search

Measures for a search engine: How fast does it index? Number of documents/hour (average document size). How fast does it search? Latency as a function of index size. Expressiveness of the query language: ability to express complex information needs; speed on complex queries.

Measures for a search engine: All of the preceding criteria are measurable: we can quantify speed/size; we can make expressiveness precise. The key measure: user happiness. What is this? Speed of response and size of index are factors, but blindingly fast, useless answers won't make a user happy. We need a way of quantifying user happiness.

Measuring user happiness: Issue: who is the user we are trying to make happy? It depends on the setting. Web engine: the user finds what they want and returns to the engine; we can measure the rate of return users. eCommerce site: the user finds what they want and makes a purchase. Is it the end-user, or the eCommerce site, whose happiness we measure? Measure time to purchase, or the fraction of searchers who become buyers?

Measuring user happiness Enterprise (company/govt/academic): Care about “user productivity” How much time do my users save when looking for information? Many other criteria having to do with breadth of access, secure access, etc.

Happiness: elusive to measure. The commonest proxy: relevance of search results. But how do you measure relevance? We will detail a methodology here, then examine its issues. Relevance measurement requires 3 elements: 1. a benchmark document collection; 2. a benchmark suite of queries; 3. a binary assessment of either Relevant or Irrelevant for each query-doc pair. There is some work on more-than-binary assessments, but it is not the standard.

Evaluating an IR system: Note: the information need is translated into a query. Relevance is assessed relative to the information need, not the query. E.g., information need: "I'm looking for information on whether drinking red wine is more effective at reducing your risk of heart attacks than white wine." Query: wine red white heart attack effective. You evaluate whether the doc addresses the information need, not whether it has those words.

Standard relevance benchmarks: TREC – the National Institute of Standards and Technology (NIST) has run a large IR test bed for many years. Reuters and other benchmark doc collections are used. "Retrieval tasks" are specified, sometimes as queries. Human experts mark, for each query and for each doc, Relevant or Irrelevant – or at least for the subset of docs that some system returned for that query.

Unranked retrieval evaluation: Precision and Recall. Precision: fraction of retrieved docs that are relevant = P(relevant|retrieved). Recall: fraction of relevant docs that are retrieved = P(retrieved|relevant). Precision P = tp/(tp + fp); Recall R = tp/(tp + fn).

                 Relevant   Not Relevant
  Retrieved         tp           fp
  Not Retrieved     fn           tn
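
A minimal sketch computing precision and recall from retrieved and relevant doc sets; the two sets below are toy data.

```python
# Minimal sketch (toy data): precision and recall from the contingency counts.
retrieved = {1, 2, 3, 4}
relevant = {2, 4, 5, 6, 7}

tp = len(retrieved & relevant)
fp = len(retrieved - relevant)
fn = len(relevant - retrieved)

precision = tp / (tp + fp)   # 2/4 = 0.5
recall = tp / (tp + fn)      # 2/5 = 0.4
print(precision, recall)
```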

Accuracy: Given a query, an engine classifies each doc as "Relevant" or "Irrelevant". The accuracy of an engine is the fraction of these classifications that is correct. Why is this not a very useful evaluation measure in IR?

Why not just use accuracy? How to build a highly "accurate" search engine on a low budget: return no results for every query ("Search for: … 0 matching results found"). Since almost all docs are irrelevant to any given query, such an engine classifies nearly everything correctly. But people doing information retrieval want to find something and have a certain tolerance for junk.

Precision/Recall: You can get high recall (but low precision) by retrieving all docs for all queries! Recall is a non-decreasing function of the number of docs retrieved. In a good system, precision decreases as either the number of docs retrieved or recall increases – a fact with strong empirical confirmation.

Difficulties in using precision/recall: Should average over large corpus/query ensembles. Need human relevance assessments – people aren't reliable assessors. Assessments have to be binary – what about nuanced assessments? Heavily skewed by corpus/authorship – results may not translate from one domain to another.

A combined measure: F. A combined measure that assesses this tradeoff is the F measure (weighted harmonic mean): F = 1 / (α(1/P) + (1−α)(1/R)) = (β² + 1)PR / (β²P + R), with β² = (1−α)/α. People usually use the balanced F1 measure, i.e., with β = 1 or α = ½. The harmonic mean is a conservative average. See C.J. van Rijsbergen, Information Retrieval.
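
A minimal sketch of the F measure in its β form; the precision and recall values are toy numbers.

```python
# Minimal sketch: F-measure from precision and recall (toy values).
def f_measure(precision, recall, beta=1.0):
    b2 = beta * beta
    return (b2 + 1) * precision * recall / (b2 * precision + recall)

print(f_measure(0.5, 0.4))           # balanced F1 ~ 0.444
print(f_measure(0.5, 0.4, beta=2))   # beta > 1 weights recall more heavily
```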

Evaluating ranked results: The system can return any number of results. By taking various numbers of the top returned documents (levels of recall), the evaluator can produce a precision-recall curve.

A precision-recall curve

Averaging over queries: A precision-recall graph for one query isn't a very sensible thing to look at; you need to average performance over a whole bunch of queries. But there's a technical issue: precision-recall calculations place only some points on the graph. How do you determine a value between the points? (Interpolate.)

Evaluation: Graphs are good, but people want summary measures! Precision at a fixed retrieval level (precision at k): perhaps most appropriate for web search – all people want are good matches on the first one or two results pages – but it has an arbitrary parameter k. 11-point interpolated average precision: the standard measure in the TREC competitions. You take the precision at 11 levels of recall varying from 0 to 1 by tenths, using interpolation (the value for recall 0 is always interpolated!), and average them. This evaluates performance at all recall levels.

Typical (good) 11 point precisions SabIR/Cornell 8A1 11pt precision from TREC 8 (1999)

Yet more evaluation measures… Mean average precision (MAP): the average of the precision values obtained for the top k documents, each time a relevant doc is retrieved. Avoids interpolation and the use of fixed recall levels. MAP for a query collection is the arithmetic average over queries – macro-averaging: each query counts equally. R-precision: if we have a known (though perhaps incomplete) set of relevant documents of size Rel, then calculate the precision of the top Rel docs returned. A perfect system could score 1.0.
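
A minimal sketch of average precision for one ranked list and MAP over a query collection, following the standard definition in which relevant docs that are never retrieved contribute a precision of 0; the rankings and relevance sets are toy data.

```python
# Minimal sketch (toy data): average precision per query, then MAP as the mean.
def average_precision(ranked_doc_ids, relevant):
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / k)   # precision at each relevant doc's rank
    # Divide by the number of relevant docs, so unretrieved relevant docs count as 0.
    return sum(precisions) / len(relevant) if relevant else 0.0

queries = [
    ([1, 5, 3, 8, 2], {1, 3, 9}),   # (ranking, relevant set) for each query
    ([4, 2, 7, 6, 1], {2, 6}),
]
ap_values = [average_precision(r, rel) for r, rel in queries]
print(sum(ap_values) / len(ap_values))   # MAP: arithmetic mean of per-query AP
```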

Variance For a test collection, it is usual that a system does crummily on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7) Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query. That is, there are easy information needs and hard ones!