A Framework for Result Diversification Sreenivas Gollapudi Search Labs, Microsoft Research Joint work with Aneesh Sharma (Stanford), Samuel Ieong, Alan Halverson, and Rakesh Agrawal (Microsoft Research)
Ambiguous queries: e.g., "wine 2009"
Definition of Diversification Intuitive definition: represent a variety of relevant meanings for a given query. Mathematical definitions: minimize query abandonment; represent different user categories; trade off relevance and novelty.
Research on diversification Query and document similarities Maximal Marginal Relevance [CG98] Personalized re-ranking of results [RD06] Probability Ranking Principle not optimal [CK06] Query abandonment Topical diversification [Z+05, AGHI09] Needs topical (categorical) information Loss minimization framework [Z02, ZL06] “Diminishing returns” for docs w/ the same intent is a specific loss function [AGHI09]
The framework Express diversity requirements in terms of desired properties Define objectives that satisfy these properties Develop efficient algorithms Metrics and evaluation methodologies
Axiomatic approach Inspired by similar approaches for Recommendation systems [Andersen et al ’08] Ranking [Altman, Tennenholtz ’07] Clustering [Kleinberg ’02] Map the space of functions – a “basis vector”
Diversification Setup (1/2) Input: candidate documents U = {u1, u2, …, un} and query q; relevance function wq(ui); distance function dq(ui, uj) (symmetric, non-metric); size k of the output result set. [Diagram: documents u1..u6 as points, annotated with a relevance value wq(u5) and a pairwise distance dq(u2, u4)]
Diversification Setup (2/2) Output: diversified set S* of documents (|S*| = k); diversification function f : S × wq × dq → R+; S* = argmax f(S) over |S| = k. [Diagram: from u1..u6 with k = 3, S* = {u1, u2, u6}]
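To make the selection problem concrete, here is a minimal sketch (assumed code, not from the talk) of the argmax over size-k subsets; the brute-force search is exponential and only for illustration, with efficient algorithms discussed later:

```python
from itertools import combinations
from typing import Callable, Sequence, Tuple

def argmax_diverse_set(
    U: Sequence[str],                       # candidate documents u1..un
    k: int,                                 # output result-set size
    f: Callable[[Tuple[str, ...]], float],  # diversification objective f(S),
                                            # closing over wq(.) and dq(.,.)
) -> Tuple[str, ...]:
    """Exhaustive S* = argmax_{|S| = k} f(S); exponential, illustration only."""
    return max(combinations(U, k), key=f)
```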
Axioms Scale invariance; Consistency; Richness; Strength of relevance; Strength of diversity; Stability; plus two technical properties.
Scale Invariance Axiom S* = argmaxS f(S, w(·), d(·, ·)) = argmaxS f(S, w′(·), d′(·, ·)) whenever w′(ui) = α · w(ui) and d′(ui, uj) = α · d(ui, uj) for α > 0. No built-in scale for f!
Consistency Axiom S* = argmaxS f(S, w(·), d(·, ·)) remains optimal under w′(ui) = w(ui) + ai for ui ∈ S* and d′(ui, uj) = d(ui, uj) + bij for ui and/or uj ∈ S* (ai, bij ≥ 0). Increasing relevance/diversity doesn't hurt!
Stability Axiom S*(k) = argmaxS f(S, w(·), d(·, ·), k) with S*(k) ⊆ S*(k+1) for all k. Output set shouldn't oscillate (change arbitrarily) with size. [Diagram: S*(3) ⊆ S*(4)]
Impossibility result Scale-invariance, Consistency, Richness, Strength of Relevance/Diversity, Stability, Two technical properties Theorem: No function f can satisfy all the axioms simultaneously. Proof via constructive argument
Axiomatic characterization – Summary Baseline for what is possible. Mathematical criteria for choosing f. Modular approach: f is independent of the specific wq(·) and dq(·, ·)!
A Framework for Diversification Express diversity requirements in terms of desired properties Define objectives that satisfy these properties Develop efficient algorithms Metrics and evaluation methodologies
Recall – Diversification Framework Input: U={u1,u2,…,un}, k, wq(·) and dq(·, ·) Some set of (top) n results Output: S* = argmaxS f(S, w(·), d(·, ·),k) Find the most diverse set of results of size k Advantages: Can integrate f with existing ranking engine Modular, plug-and-play framework
Diversification objectives Max-sum (avg) objective: violates stability! [Diagram: with k = 3, S* = {u1, u2, u6}; with k = 4, S* = {u1, u3, u5, u6}, which is not a superset of the k = 3 set]
Diversification objectives Max-min objective: violates consistency and stability! [Diagram: k = 3, S* = {u1, u2, u6} changes to S* = {u1, u5, u6} after the inputs change]
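Written out, the two objectives take the following form (a reconstruction in the talk's notation; the trade-off parameter λ > 0 is an assumption, as the slides do not fix it):

```latex
% Max-sum: total relevance plus total pairwise distance of the selected set
f(S) = (k - 1) \sum_{u \in S} w(u) \;+\; 2\lambda \sum_{u, v \in S} d(u, v)

% Max-min: worst-case relevance plus worst-case pairwise distance
f(S) = \min_{u \in S} w(u) \;+\; \lambda \min_{u, v \in S} d(u, v)
```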
Other Diversification objectives A taxonomy-based diversification objective Uses the analogy of marginal utility to determine whether to include more results from an already covered category Violates stability and one of the technical axioms
The Framework Express diversity requirements in terms of desired properties Define objectives that satisfy these properties Develop efficient algorithms Metrics and evaluation methodologies
Algorithms for facility dispersion Recast diversification as facility dispersion: Max-sum (MaxSumDispersion) maximizes the sum of pairwise distances within S; Max-min (MaxMinDispersion) maximizes the minimum pairwise distance within S. Known approximation algorithms and lower bounds. Lots of other facility dispersion objectives and algorithms.
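As one concrete instance, here is a sketch of the classic pairwise greedy 2-approximation for MaxSumDispersion (the algorithm is standard for this objective; the function signature and names are mine, and d here stands for the combined relevance/distance function that the reduction produces):

```python
from itertools import combinations
from typing import Callable, List, Sequence

def max_sum_dispersion(
    U: Sequence[str],
    k: int,
    d: Callable[[str, str], float],  # combined weight/distance after the reduction
) -> List[str]:
    """Greedy 2-approximation: repeatedly add the farthest-apart remaining pair."""
    remaining, S = list(U), []
    while len(S) < k - 1:
        u, v = max(combinations(remaining, 2), key=lambda pair: d(*pair))
        S.extend([u, v])
        remaining.remove(u)
        remaining.remove(v)
    if len(S) < k:               # odd k: top up with an arbitrary remaining doc
        S.append(remaining.pop())
    return S
```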
Algorithm for categorical diversification
∀c ∈ C, U(c | q) ← P(c | q)
while |S| < k do
  for d ∈ D do
    g(d | q, c) ← Σc U(c | q) · V(d | q, c)
  end for
  d* ← argmaxd g(d | q, c)
  S ← S ∪ {d*}
  ∀c ∈ C, U(c | q) ← (1 − V(d* | q, c)) · U(c | q)   ⊳ update the utility of each category
  D ← D \ {d*}
end while
P(c | q): conditional probability of intent c given query q
g(d | q, c): current probability of d satisfying q, c
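Below is a minimal runnable sketch of this greedy procedure in Python (variable names follow the slide; the dict-based representation is an assumption):

```python
from typing import Dict, List

def diversify(
    k: int,
    P: Dict[str, float],             # P(c | q): intent distribution
    V: Dict[str, Dict[str, float]],  # V[d][c]: quality of doc d for intent c
) -> List[str]:
    """Greedily pick the doc with highest marginal utility
    g(d | q, c) = sum_c U(c | q) * V(d | q, c), then discount the
    utility of every category the chosen doc covers."""
    U = dict(P)                      # U(c | q) initialized to P(c | q)
    D: List[str] = list(V)
    S: List[str] = []
    while len(S) < k and D:
        g = {d: sum(U[c] * V[d].get(c, 0.0) for c in U) for d in D}
        best = max(g, key=g.get)
        S.append(best)
        for c in U:
            U[c] *= 1.0 - V[best].get(c, 0.0)
        D.remove(best)
    return S
```

On the worked example of the next slide (P(R | q) = 0.8, P(B | q) = 0.2), this returns one high-quality R document followed by the two B documents, illustrating that the output need not be proportional to the intent distribution.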
An Example
Intent distribution: P(R | q) = 0.8, P(B | q) = 0.2.
Documents: three for intent R with V(d | q, R) = 0.9, 0.5, 0.4; two for intent B with V(d | q, B) = 0.4, 0.4.
Round 1: g = 0.9×0.8 = 0.72, 0.5×0.8 = 0.40, 0.4×0.8 = 0.32, 0.4×0.2 = 0.08, 0.4×0.2 = 0.08 → select the V = 0.9 R-document; U(R | q) becomes (1−0.9)×0.8 = 0.08.
Round 2: g = 0.5×0.08 = 0.04, 0.4×0.08 ≈ 0.03, 0.08, 0.08 → select a B-document; U(B | q) becomes (1−0.4)×0.2 = 0.12.
Round 3: g = 0.04, 0.03, 0.4×0.12 ≈ 0.05 → select the remaining B-document; U(B | q) becomes (1−0.4)×0.12 ≈ 0.07.
Takeaways: actually produces an ordered set of results; results not proportional to the intent distribution; results not according to (raw) quality; better results ⇒ fewer need to be shown.
The Framework Express diversity requirements in terms of desired properties Define objectives that satisfy these properties Develop efficient algorithms Metrics and evaluation methodologies
Evaluation Methodologies Goals: represent real queries; scale beyond a few user studies. Problem: hard to define ground truth. Approach: use disambiguated information sources on the web as the ground truth; incorporate intent into human judgments. Can exploit the user distribution (need to be careful).
Wikipedia Disambiguation Pages Query = Wikipedia disambiguation page title Large-scale ground truth set Open source Growing in size
Metrics Based on Wikipedia Topics Novelty: coverage of Wikipedia topics. Relevance: coverage of top Wikipedia results.
The Relevance and Distance Functions Relevance function: 1/position; can use the search engine score; maybe use query category information. Distance function: compute TF-IDF distances; Jaccard similarity score for two documents A and B: J(A, B) = |A ∩ B| / |A ∪ B|, giving distance d(A, B) = 1 − J(A, B).
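A minimal sketch of the document distance (assuming plain word-set Jaccard; the talk combines it with TF-IDF weighting, which is omitted here):

```python
def jaccard_distance(doc_a: str, doc_b: str) -> float:
    """d(A, B) = 1 - |A ∩ B| / |A ∪ B| over word sets."""
    a, b = set(doc_a.lower().split()), set(doc_b.lower().split())
    if not a | b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)
```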
Evaluating Novelty
Topics/categories = list of disambiguation topics. Given a set Sk of results: for each result, compute a distribution over topics (using our d(·, ·)); sum confidence over all topics; threshold to get # topics represented.
Example (threshold = 1.0):
  jaguar.com → Jaguar cat (0.1), Jaguar car (0.9)
  wikipedia.org/jaguar → Jaguar cat (0.8), Jaguar car (0.2)
  Category confidence: Jaguar cat 0.1 + 0.8 = 0.9 → 0; Jaguar car 0.9 + 0.2 = 1.1 → 1
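A sketch of the novelty computation on the jaguar example (the confidences and threshold come from the slide; the aggregation code itself is an assumed illustration):

```python
from collections import defaultdict
from typing import Dict, List

def topics_represented(
    results: List[Dict[str, float]],  # per-result distribution over topics
    threshold: float = 1.0,
) -> int:
    """Sum each topic's confidence across results; count topics over threshold."""
    total: Dict[str, float] = defaultdict(float)
    for dist in results:
        for topic, conf in dist.items():
            total[topic] += conf
    return sum(1 for c in total.values() if c >= threshold)

# jaguar.com and wikipedia.org/jaguar from the slide:
results = [{"Jaguar cat": 0.1, "Jaguar car": 0.9},
           {"Jaguar cat": 0.8, "Jaguar car": 0.2}]
print(topics_represented(results))  # 1: only "Jaguar car" reaches 0.9 + 0.2 = 1.1
```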
Evaluating Relevance Query – get ranking of search restricted to Wikipedia pages. a(i) = position of Wikipedia topic i in this list; b(i) = position of Wikipedia topic i in the list being tested. Relevance is measured by comparing the reciprocal ranks 1/a(i) and 1/b(i).
Adding Intent to Human Judgments (Generalizing Relevance Metrics) Take expectation over the distribution of intents. Interpretation: how will the average user feel? Consider NDCG@k: classic NDCG is intent-blind, while NDCG-IA depends on the intent distribution and intent-specific NDCG.
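The intent-aware form, written out (a reconstruction consistent with the slide: an expectation of intent-specific NDCG over the intent distribution):

```latex
% Intent-aware NDCG: expectation of intent-specific NDCG over intents c
\mathrm{NDCG\text{-}IA}(S, k) \;=\; \sum_{c} P(c \mid q)\,\mathrm{NDCG}(S, k \mid c)
```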
Evaluation using Mechanical Turk Created two types of HITs on Mechanical Turk: query classification (workers are asked to choose among three interpretations) and document rating (under the given interpretation). Two additional evaluations: MT classification + current ratings; MT classification + MT document ratings.
Some Important Questions When is it right to diversify? Users have certain expectations about the workings of a search engine What is the best way to diversify? Evaluate approaches beyond diversifying the retrieved results Metrics that capture both relevance and diversity Some preliminary work suggests that there will be certain trade-offs to make
Questions?
Why frame diversification as set selection? Otherwise, need to encode an explicit user model in the metric. Selection only needs k (typically 10). Later, can rank the set according to relevance and personalize based on clicks. Alternative to stability: select sets repeatedly (this loses information); could refine selection online, based on user clicks.
Novelty Evaluation – Effect of Algorithms
Relevance Evaluation – Effect of Algorithms
Product Evaluation – Anecdotal Result Results for query cd player Relevance: popularity Distance: from product hierarchy
Preliminary Results (100 queries)
Evaluation using Mechanical Turk
Other Measures of Success Many metrics for relevance Normalized discounted cumulative gains at k (NDCG@k) Mean average precision at k (MAP@k) Mean reciprocal rank (MRR) Some metrics for diversity Maximal marginal relevance (MMR) [CG98] Nugget-based instantiation of NDCG [C+08] Want a metric that can take into account both relevance and diversity [JK00]
Problem Statement Diversify(k): given a query q, a set of documents D, distribution P(c | q), quality estimates V(d | q, c), and integer k, find a set of docs S ⊆ D with |S| = k that maximizes P(S | q), interpreted as the probability that the set S is relevant to the query over all possible intents. Multiple intents: find at least one relevant doc per intent.
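The maximized quantity, written out (reconstructed from the description above: each intent contributes the probability that at least one selected document satisfies it):

```latex
P(S \mid q) \;=\; \sum_{c} P(c \mid q)\,
  \Bigl(1 - \prod_{d \in S}\bigl(1 - V(d \mid q, c)\bigr)\Bigr)
```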
Discussion of Objective Makes explicit use of taxonomy; in contrast, similarity-based: [CG98], [CK06], [RKJ08]. Captures both diversification and doc relevance; in contrast, coverage-based: [Z+05], [C+08], [V+08]. Specific form of "loss minimization" [Z02], [ZL06]: "diminishing returns" for docs w/ the same intent. Objective is order-independent; assumes that all users read k results. May want to optimize Σk P(k) · P(Sk | q) instead, where P(k) is the probability that a user reads k results.