Jaime Teevan MIT, CSAIL The Re:Search Engine. “Pick a card, any card.”

Presentation transcript:

Jaime Teevan MIT, CSAIL The Re:Search Engine

“Pick a card, any card.”

Case 1  Case 2  Case 3  Case 4  Case 5  Case 6

Your Card is GONE!

People Forget a Lot

Change Blindness

Change Blindness

Re:Search Engine?

Merge Old and New Results
[figure: old result list + new result list → merged list]
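For concreteness, here is a minimal sketch of what the merging step could look like, assuming memorable old results keep their original ranks and the remaining slots are filled with new results that are not already shown. The names merge_results and memorable_ranks are illustrative assumptions, not the Re:Search Engine's actual implementation.

```python
def merge_results(old_results, new_results, memorable_ranks, k=10):
    """Merge old and new result lists into one list of length k.

    old_results, new_results: ranked lists of result URLs (best first).
    memorable_ranks: 1-based ranks in the old list the user is likely
    to remember (e.g., clicked results); these are pinned in place.
    """
    merged = [None] * k

    # Pin memorable old results at their original positions.
    for rank in memorable_ranks:
        if 1 <= rank <= k and rank <= len(old_results):
            merged[rank - 1] = old_results[rank - 1]

    # Fill the remaining slots with fresh new results, skipping duplicates.
    already_shown = {r for r in merged if r is not None}
    fresh = (r for r in new_results if r not in already_shown)
    for i in range(k):
        if merged[i] is None:
            merged[i] = next(fresh, None)

    # Drop any trailing empty slots (if both lists were short).
    return [r for r in merged if r is not None]


# Example: the user clicked results 1 and 3 last time, so those stay put.
old = ["a.com", "b.com", "c.com", "d.com", "e.com"]
new = ["c.com", "x.com", "a.com", "y.com", "z.com"]
print(merge_results(old, new, memorable_ranks={1, 3}, k=5))
# ['a.com', 'x.com', 'c.com', 'y.com', 'z.com']
```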

We still need magic!

Overview
♠ Memorability study
♠ Recognition study
♠ Assumptions
♠ Implementation issues
♠ Evaluation issues
♠ Choose your own adventure

Memorability Study
♠ Participants issued a self-selected query
♠ After an hour, asked to fill out a survey
♠ 129 people remembered something

Data Analysis
♠ Probability of being remembered
♥ Anything? # of words? # of fields?
♥ Features
♣ Result features: clicked, not clicked, last clicked, rank, dwell time, frequency of visit, etc.
♣ Query features: query type, query length, # of searches in session, elapsed time, etc.
♠ Remembered rank v. real rank
♥ Map remembered rank to real rank
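One way to operationalize "probability of being remembered" is as a prediction task over features like those above. The sketch below fits a toy logistic model with scikit-learn; the feature encoding and data are made up for illustration and are not the study's actual analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative feature vectors per (query, result) pair; this exact
# encoding is an assumption, not the study's.
# Columns: [clicked, last_clicked, rank, dwell_time_sec, query_length]
X = np.array([
    [1, 1, 1,  45.0, 2],
    [1, 0, 3,  10.0, 2],
    [0, 0, 7,   0.0, 3],
    [0, 0, 9,   0.0, 1],
    [1, 0, 2, 120.0, 4],
    [0, 0, 5,   0.0, 2],
])
# 1 = participant remembered this result an hour later, 0 = did not.
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Inspect which features push the probability of being remembered up or down.
feature_names = ["clicked", "last_clicked", "rank", "dwell_time", "query_length"]
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>12s}: {coef:+.3f}")
```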

“Memorability”

Remembered Results Ranked High

Recognition Study
♠ Same set-up as Memorability Study
♠ Follow-up survey: Results the same?
♥ Case 1: Old results (16%)
♥ Case 2: New results (74%)
♥ Case 3: Clicked to top (65%)
♥ Case 4: Intelligent merging (17%)
♠ 92 people have completed both steps

Assumptions
♠ Re-search v. search
♠ Memorable v. relevant
♠ Results change v. stay the same
♠ Hide change v. show change
♠ Forget v. remember as forgettable
♠ Merge v. identify old or new
For each: Why? How to test? What if I’m wrong?

Implementation Issues
♠ Page of cached result may disappear
♠ Multiple result pages
♠ Identifying repeat queries
♥ User identified
♥ Search sessions are not repeat queries
♥ Exact query may be forgotten

Evaluation Issues
♠ Various goals to test
♥ Does a merged list look like the original?
♥ Does merging make re-finding easier?
♥ Is search improved overall?
♠ Lab study
♥ How to set up re-finding task?
♥ Timing differences significant enough?
♠ Longitudinal study – What to measure?
♠ What are good baselines?

Choose Your Own Adventure
♠ Re-search v. search
♠ Memorable v. relevant
♠ Results change v. stay the same
♠ Hide change v. show change
♠ Forget v. remember as forgettable
♠ Merge v. identify old or new
♠ Implementation issues
♠ Evaluation issues

Choose Your Own Adventure
♠ Re-search v. search
♠ Memorable v. relevant
♠ Results change v. stay the same
♠ Hide change v. show change
♠ Forget v. remember as forgettable
♠ Merge v. identify old or new
♠ Implementation issues
♠ Evaluation issues
(Done)

Hide Change v. Show Change
♠ Why I think change should be hidden
♥ Example: dynamic menus
♠ How to prove
♥ New results better, called the same or worse
♥ Baseline for testing – 2 lists, change explicit
♠ What if we should show change?
♥ Memorability suggests changes to highlight
♥ Other applications where we want to hide change
(Done)

Memorable v. Relevant
♠ Why I think memorability is important
♥ Relevance at a future date is what matters
♥ Necessary to hide change
♠ How to prove
♥ Baseline for lab study with target first
♠ What if relevance is what’s important?
♥ Mapping between memorable and relevant
♥ Useful related work on implicit feedback
(Done)

Re-search v. Search
♠ Why I think people repeat searches
♥ Information seeking literature
♥ Re-finding consistently reported as a problem
♠ How to prove
♥ Study shows people prefer to follow known paths
♥ Search log analysis
♠ What if people just want to search?
♥ Memorable results ranked first
♥ Other domains where list consistency matters
(Done)

Merge v. Identify Old and New
♠ Why I think results should be merged
♥ Information need not necessarily one or the other
♥ People don’t like to do extra work
♠ How to prove
♥ Search log analysis
♥ Look at what people do in longitudinal study
♥ Lab study – timing becomes an issue
♠ What if people want to identify query type?
♥ Other applications where merging is useful
(Done)

Results Change v. Stay the Same
♠ Why I think results change
♥ How search engines work
♥ Personalization and dynamic content
♠ How to prove
♥ Track query results
♠ What if results don’t change?
♥ Probably will in future applications
♥ Existing applications where lists change
(Done)

Forget v. Remember as Forgettable
♠ Why I think people forget
♥ Visual analogy
♠ How to prove
♥ Lab study – Do people find new information?
♥ Longitudinal study – Ever click on a new result?
♠ What if they remember as forgettable?
♥ Build a better model of memorability
♥ Highlight important changes
(Done)

Implementation Issues
♠ Page of cached result may disappear
♠ Multiple result pages
♠ Identifying repeat queries
♥ User identified
♥ Search sessions are not repeat queries
♥ Exact query may be forgotten
(Done)

Evaluation Issues
♠ Various goals to test
♥ Does a merged list look like the original?
♥ Does merging make re-finding easier?
♥ Is search improved overall?
♠ Lab study
♥ How to set up re-finding task?
♥ Timing differences significant enough?
♠ Longitudinal study – What to measure?
♠ What are good baselines?
(Done)

Jaime Teevan

Strategies for Finding: Teleporting v. Orienteering

Why Do People Orienteer?
♠ Easier than saying what you want
♠ You know where you are
♠ You know what you find
♠ The tools don’t work

Structural Consistency Important (new name)
All must be the same to re-find the information!

Absolute Consistency Unnecessary (new name)
Focus on search result lists

Query Changes
♠ Most changes are simple
♥ Capitalization
♥ Phrasing
♥ Word ordering
♥ Word form
♥ New queries shorter
♠ What about longer time horizons?
♠ Recognition v. recall
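Because most query changes are this simple, a basic normalization step already catches many repeat queries. The helper below (normalize_query, an assumed name, not part of the talk) handles capitalization, quoting, and word order; word-form differences such as singular vs. plural would additionally need stemming.

```python
import re

def normalize_query(query):
    """Normalize a query so trivially different repeats compare equal.

    Lowercasing handles capitalization; tokenizing on alphanumerics drops
    quotes and operators (phrasing); sorting ignores word order.
    """
    terms = re.findall(r"[a-z0-9]+", query.lower())
    return " ".join(sorted(terms))

def is_repeat(q1, q2):
    return normalize_query(q1) == normalize_query(q2)

print(is_repeat('Neon Signs', '"neon signs"'))   # True  (case + phrasing)
print(is_repeat('neon signs', 'signs neon'))     # True  (word order)
print(is_repeat('neon signs', 'neon sign'))      # False (word form; needs stemming)
```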

Result List Changes
♠ Tracked 10 queries on Google for a year+
♠ On average, 1.18 of the top 10 results disappear each week
♠ Rate of change likely to increase, e.g.:
♥ Raw personalization
♥ Relevance feedback
♠ People forget their queries
♥ 28% of queries forgotten within an hour
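The 1.18-results-per-week figure is just the average set difference between consecutive weekly snapshots of a query's top 10. A small sketch of that computation, on made-up snapshot data, follows; the function name weekly_churn is an assumption for illustration.

```python
def weekly_churn(snapshots, k=10):
    """Average number of top-k results that disappear between
    consecutive weekly snapshots of the same query.

    snapshots: list of ranked result lists, one per week (oldest first).
    """
    losses = []
    for prev, curr in zip(snapshots, snapshots[1:]):
        losses.append(len(set(prev[:k]) - set(curr[:k])))
    return sum(losses) / len(losses) if losses else 0.0

# Toy example: three weekly snapshots of one query's top 3 results.
weeks = [
    ["a.com", "b.com", "c.com"],
    ["a.com", "c.com", "d.com"],   # b.com disappeared
    ["a.com", "d.com", "e.com"],   # c.com disappeared
]
print(weekly_churn(weeks, k=3))    # 1.0 results lost per week on average
```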

Example: “neon signs”