Personalizing Search
Jaime Teevan (MIT), Susan T. Dumais (MSR), and Eric Horvitz (MSR)
[Screenshot: search results for the query "pia workshop", with the relevant result highlighted.]
Outline
- Approaches to personalization
- The PS algorithm
- Evaluation
- Results
- Future work
Approaches to Personalization
- Content of the user profile
  - Long-term interests: Liu et al. [14], Compass Filter [13]
  - Short-term interests: query refinement [2,12,15], Watson [4]
- How the user profile is developed
  - Explicit: relevance feedback [19], query refinement [2,12,15]
  - Implicit: query history [20,22], browsing history [16,23]
- Very rich user profile
PS
[Architecture diagram: the user's query goes through PS to the search engine; PS keeps an index of the user's personal terms (e.g., "csail mit artificial research robot", "web search retrieval ir hunt") and scores each result on the returned search results page against that index (e.g., a score of 1.3 for one result).]
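To make the architecture concrete, here is a minimal sketch of the client-side loop it suggests: send the query to the Web search engine, score each returned result against the local user index, and re-order the results. The function names (web_search, personal_score) and result shapes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of client-side re-ranking (assumed interfaces, not the authors' code).

def web_search(query):
    """Placeholder for the Web search engine: returns a list of (url, snippet) pairs."""
    raise NotImplementedError  # assumed external service

def personal_score(snippet, user_index):
    """Placeholder: score one result against the user's personal index (see the scoring slide)."""
    raise NotImplementedError

def personalize(query, user_index, k=50):
    """Re-rank the top-k Web results by their personalized score."""
    results = web_search(query)[:k]
    scored = [(personal_score(snippet, user_index), url, snippet)
              for url, snippet in results]
    scored.sort(key=lambda item: item[0], reverse=True)   # highest personal score first
    return [(url, snippet) for _, url, snippet in scored]
```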
Calculating a Document's Score
Based on standard tf.idf:

    Score = Σ_i  tf_i · w_i

Using corpus ("World") statistics alone, with N documents in the corpus and n_i of them containing term i:

    w_i = log ( N / n_i )

Treating the user's personal documents ("Client") as relevance feedback, with R personal documents and r_i of them containing term i†:

    w_i = log [ (r_i + 0.5)(N' - n_i' - R + r_i + 0.5) / ((n_i' - r_i + 0.5)(R - r_i + 0.5)) ]

    where N' = N + R and n_i' = n_i + r_i (corpus statistics augmented with the client's documents)

† From Sparck Jones, Walker and Robertson, 1998 [21].
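A short Python sketch of the arithmetic above. The corpus counts (N, n_i) and client counts (R, r_i) are assumed to be available from the indices; this illustrates the formula rather than reproducing the authors' implementation.

```python
import math

def term_weight(N, n_i, R, r_i):
    """Relevance-feedback term weight (Sparck Jones, Walker and Robertson, 1998 [21]),
    with corpus statistics augmented by the client's documents: N' = N + R, n_i' = n_i + r_i."""
    N_p = N + R
    n_p = n_i + r_i
    return math.log(
        ((r_i + 0.5) * (N_p - n_p - R + r_i + 0.5))
        / ((n_p - r_i + 0.5) * (R - r_i + 0.5))
    )

def document_score(term_freqs, corpus_df, N, client_df, R):
    """Score = sum over the document's terms of tf_i * w_i.
    term_freqs: {term: tf_i in this document or snippet}
    corpus_df:  {term: n_i} document frequencies in the corpus ("World")
    client_df:  {term: r_i} document frequencies in the personal index ("Client")"""
    return sum(tf * term_weight(N, corpus_df.get(t, 0), R, client_df.get(t, 0))
               for t, tf in term_freqs.items())
```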
Finding the Parameter Values
- Corpus representation (N, n_i): how common is the term in general?
  - Web vs. result set
- User representation (R, r_i): how well does the term represent the user's interests?
  - All vs. recent vs. Web vs. queries vs. none
- Document representation: what terms to sum over?
  - Full document vs. snippet
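As a hedged illustration of the user-representation choices above, the sketch below computes (R, r_i) from different slices of a personal index. The item structure (a dict with a term set and a few flags) is an assumption for illustration.

```python
from collections import Counter

def user_statistics(personal_items, representation="all"):
    """Compute (R, r_i) from the personal index under one of the representations above.
    personal_items: list of dicts like
        {"terms": set(...), "recent": bool, "is_web": bool, "is_query": bool}"""
    if representation == "all":
        items = personal_items
    elif representation == "recent":
        items = [it for it in personal_items if it["recent"]]
    elif representation == "web":
        items = [it for it in personal_items if it["is_web"]]
    elif representation == "queries":
        items = [it for it in personal_items if it["is_query"]]
    elif representation == "none":
        items = []
    else:
        raise ValueError(f"unknown representation: {representation}")

    R = len(items)                 # number of personal items treated as relevant
    r = Counter()                  # r_i: personal items containing each term
    for it in items:
        r.update(it["terms"])
    return R, r
```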
Building a Test Bed
- 15 evaluators × ~10 queries (131 queries total)
- Personally meaningful queries
  - Selected from a list
  - Queries issued earlier (kept in a diary)
- Evaluated 50 results for each query
  - Highly relevant / relevant / irrelevant
- Index of each evaluator's personal information
Evaluating Personalized Search
- Measure algorithm quality with DCG:

    DCG(i) = Gain(i)                         if i = 1
    DCG(i) = DCG(i-1) + Gain(i) / log(i)     otherwise

- Look at one parameter at a time
  - 67 different parameter combinations!
  - Hold other parameters constant and vary one
- Look at the best parameter combination
  - Compare with various baselines
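A brief sketch of the DCG measure as written above. The gain values (e.g., 2 for highly relevant, 1 for relevant, 0 for irrelevant) are an assumption for illustration; the slide does not fix them.

```python
import math

def dcg(gains):
    """DCG(i) = Gain(i) if i == 1, else DCG(i-1) + Gain(i) / log(i)."""
    total = 0.0
    for i, gain in enumerate(gains, start=1):
        total += gain if i == 1 else gain / math.log(i)   # no discount at rank 1 (log(1) = 0)
    return total

# Example: judged results in ranked order (2 = highly relevant, 1 = relevant, 0 = irrelevant)
print(dcg([2, 0, 1, 1, 0]))
```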
Analysis of Parameters
[Charts: retrieval quality for the different corpus, user, and document representations.]
PS Improves Text Retrieval
[Chart: retrieval quality with no user model, with relevance feedback, and with Personalized Search.]
Text Features Not Enough
Take Advantage of Web Ranking
[Chart: PS+Web, combining the personalized score with the Web ranking.]
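The slide does not spell out how PS+Web merges the two signals; one plausible scheme, sketched below under that assumption, is a rank-based interpolation between the engine's ordering and the personalized ordering, controlled by a single weight alpha.

```python
def ps_plus_web(results, personal_scores, alpha=0.5):
    """Blend Web rank with personalized rank (an assumed scheme, not the authors' exact method).
    results: result ids in the engine's original order
    personal_scores: {result_id: personalized text score}
    alpha: 0.0 = pure Web ranking, 1.0 = pure personalized ranking"""
    n = len(results)
    by_ps = sorted(results, key=lambda rid: personal_scores.get(rid, 0.0), reverse=True)

    def combined(rid):
        web_component = (n - results.index(rid)) / n   # 1.0 for the engine's top result
        ps_component = (n - by_ps.index(rid)) / n      # 1.0 for the top personalized result
        return (1 - alpha) * web_component + alpha * ps_component

    return sorted(results, key=combined, reverse=True)
```

The weight alpha is also a natural handle for the Web-vs.-personalized slider mentioned on the user-interface slide.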
Summary
- Personalization of Web search
  - Result re-ranking
  - User's documents as relevance feedback
- Rich representations are important
  - A rich user profile is particularly important
- Efficiency hacks are possible
- Need to incorporate features beyond text
Further Exploration
- Improved non-text components
  - Usage data
  - Personalized PageRank
- Learning the parameters
  - Based on the individual
  - Based on the query
  - Based on the results
- UIs for user control
User Interface Issues
- Make personalization transparent
- Give the user control over personalization
  - A slider between Web and personalized results
  - Allows for background computation
- Re-ranking exacerbates the problem of re-finding
  - Results change as the user model changes
  - Thesis research: the Re:Search Engine
Thank you!
Much Room for Improvement
- Group ranking
  - Best group ranking improves on the Web ranking by 23%
  - The more people in the group, the less the improvement
- Personal ranking
  - Best personal ranking improves on the Web ranking by 38%
  - Improvement remains constant regardless of group size
[Chart: potential for personalization.]
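To make the group-vs.-personal comparison concrete, here is a hedged sketch: the best personal ranking for a user simply sorts that user's gains, while the best single ranking shared by a group orders results by their total gain across users (optimal for DCG because the positional discount is fixed). The example judgments are invented for illustration.

```python
import math

def dcg(gains):
    # Same DCG as on the evaluation slide.
    return sum(g if i == 1 else g / math.log(i) for i, g in enumerate(gains, start=1))

def best_personal_dcg(user_gains):
    """DCG of the best possible ranking for one user: that user's gains sorted descending."""
    return dcg(sorted(user_gains, reverse=True))

def best_group_dcg(gains_by_user):
    """Average DCG of the single best shared ranking for a group of users."""
    n_results = len(next(iter(gains_by_user.values())))
    order = sorted(range(n_results),
                   key=lambda j: sum(g[j] for g in gains_by_user.values()),
                   reverse=True)
    per_user = [dcg([g[j] for j in order]) for g in gains_by_user.values()]
    return sum(per_user) / len(per_user)

# Invented judgments for one query, indexed by result position in the Web ranking
gains = {"user_a": [2, 0, 1, 0, 0], "user_b": [0, 2, 1, 0, 1]}
print(best_group_dcg(gains))                           # shared-ranking ceiling
print([best_personal_dcg(g) for g in gains.values()])  # per-user ceilings
```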
Evaluating Personalized Search: Query Selection
- Evaluators chose from 10 pre-selected queries or used a previously issued query
- Example pre-selected queries: cancer, Microsoft, traffic, …; bison frise, Red Sox, airlines, …; Las Vegas, rice, McDonalds, …
- 53 pre-selected queries overall (2-9 evaluations per query); 137 queries in total
[Table: example query lists for evaluators "Joe" and "Mary".]
Making PS Practical
- We learn the most about personalization by deploying a system
- The best algorithm is reasonably efficient
- Merging server and client
  - Query expansion: get more relevant results into the set to be re-ranked (see the sketch below)
  - Design snippets for personalization
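A small sketch of the query-expansion idea above: append to the query a few terms that are common in the user's personal index but rare on the Web, so the result set to be re-ranked already leans toward the user's interests. The weighting and smoothing here are assumptions for illustration, not the authors' method.

```python
def expand_query(query, client_df, corpus_df, N, R, extra_terms=2):
    """Append high-value profile terms to the query (assumed expansion scheme).
    client_df: {term: r_i} personal document frequencies; corpus_df: {term: n_i}."""
    def weight(term):
        # Ratio of the term's rate in the personal index to its rate in the corpus.
        return ((client_df[term] + 0.5) / (R + 1)) / ((corpus_df.get(term, 0) + 0.5) / (N + 1))

    candidates = [t for t in client_df if t not in query.split()]
    candidates.sort(key=weight, reverse=True)
    return query + " " + " ".join(candidates[:extra_terms])

# Example with invented statistics
print(expand_query("search", {"retrieval": 40, "ir": 30, "dog": 1},
                   {"retrieval": 1e5, "ir": 5e4, "dog": 1e7}, N=1e9, R=100))
```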