Simultaneous Support for Finding and Re-Finding Jaime Teevan teevan@microsoft.com Microsoft Research
People re-find
Cockburn et al. 2002 found 80% of Web page visits were re-visits
Teevan et al. 2007 found 40% of searches lead to repeat clicks
New results become available Selberg & Etzioni 2000 found results change a lot
Log analysis shows people click on both old and new results when returning to result pages (27% of repeat searches click on new results!)
How to help people re-find
1. Relevance feedback – promote results likely to be re-found (sketched below)
   A la Teevan et al. 2006 work on personalization
   BUT Teevan et al. 2007 found change hurts: 26% of the time repeat searches find the result at a different rank, and re-clicking takes 94 seconds at the same rank v. 192 seconds at a different rank
   Similar to work with dynamic menus by Mitchell and Shneiderman 1989
2. Cache results
   A la Rekimoto 1999 and Hayashi et al. 1998
   Problem: no access to new info, and Teevan et al. 2007 show people click both new and old results for repeat searches
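A minimal sketch of the naive relevance-feedback idea in point 1, not the cited systems' actual method; the result schema ('url', 'score') and the boost value are illustrative assumptions. It shows why this approach hurts: re-sorting by boosted score is exactly what moves familiar results to new ranks.

```python
def promote_previously_clicked(results, clicked_urls, boost=10.0):
    """Naive relevance feedback (illustrative): boost results the user
    clicked before so they are easier to re-find.

    results: list of dicts with 'url' and 'score' keys (assumed schema)
    clicked_urls: set of URLs the user clicked on a previous visit
    """
    # The re-sort is the problem: promoting old results shifts the ranks
    # of everything else, so the repeat search looks different.
    return sorted(
        results,
        key=lambda r: r["score"] + (boost if r["url"] in clicked_urls else 0.0),
        reverse=True,
    )
```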
“Pick a card, any card.”
[Six cards are shown.] Abracadabra! [Six cards are shown again.]
Your Card is GONE!
People Forget a Lot
Change Blindness
Change Blindness
Other HCI research using change blindness:
E.g., Durlach 2004 – help people notice change
My research wants to take advantage of people's failure to notice change
Some graphics systems do this for rendering
We still need magic!
Re:Search Engine Architecture
[Architecture diagram: the Web browser on the user client sends a query; the index of past queries matches it against past queries (query 1 … query n) with scores (score 1 … score n); the result cache and user interaction cache supply the corresponding past result lists (result list 1 … result list n); the merge step combines these with the new result list and returns a single result list to the browser.]
Components of Re:Search Engine
Index of Past Queries – matches the current query against past queries (query 1 … query n), returning similarity scores (score 1 … score n)
Result Cache – stores the result lists previously shown for those queries (result list 1 … result list n)
User Interaction Cache – stores how the user interacted with those result lists
Merge Algorithm – combines the cached result lists with the new result list into one result list
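A minimal sketch of how these components fit together, not the actual implementation: it assumes exact-match query lookup, in-memory dictionaries for the caches, and a deliberately simplified placeholder merge (the real merge algorithm is described next).

```python
from dataclasses import dataclass, field


@dataclass
class ReSearchEngine:
    # query -> result list shown last time (stands in for the index of
    # past queries plus the result cache, using exact-match lookup)
    result_cache: dict = field(default_factory=dict)
    # query -> set of URLs the user clicked (the user interaction cache)
    interaction_cache: dict = field(default_factory=dict)

    def search(self, query, live_results, k=10):
        cached = self.result_cache.get(query, [])
        clicked = self.interaction_cache.get(query, set())
        merged = self._merge(cached, live_results, clicked, k)
        self.result_cache[query] = merged  # remember what was shown
        return merged

    def _merge(self, cached, live, clicked, k):
        # Placeholder merge: keep previously clicked (memorable) results,
        # then fill the rest of the list with unseen live results.
        kept = [url for url in cached if url in clicked]
        fresh = [url for url in live if url not in kept]
        return (kept + fresh)[:k]

    def record_click(self, query, url):
        self.interaction_cache.setdefault(query, set()).add(url)
```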
Merge Algorithm
[Diagram: the merge step takes the cached result lists (result list 1 … result list n) and the new result list and produces one merged result list, interleaving old results (m1, m2, …) with new results (b1, b2, b3, …).]
Choosing the Best Possible List
Consider every combination
Include at least three old and three new results
Min-cost network flow problem
O(mn) – O(n^3), but can be implemented faster; in practice basically linear with m
[Diagram: a flow network from source s through the old results (m1 … m10) and new results (b1 … b10), with weighted edges into the result slots and on to sink t.]
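A sketch of the "consider every combination, include at least three old and three new" framing above, not the min-cost flow formulation the slide names. It assumes a memorability score per old result, a relevance score per new result, and that kept old results stay at their original ranks; those scoring choices are assumptions for illustration.

```python
from itertools import combinations


def merge(old_results, new_results, k=10, min_old=3, min_new=3):
    """Brute-force merge: keep a subset of old results at their original
    ranks, fill the remaining slots with the best new results, and return
    the highest-scoring merged list.

    old_results: list of (url, original_rank, memorability) tuples
    new_results: list of (url, relevance) tuples, best first
    """
    old_urls = {url for url, _, _ in old_results}
    # New results the user has not already seen for this query.
    fresh = [(url, rel) for url, rel in new_results if url not in old_urls]

    best_list, best_score = None, float("-inf")
    max_old = min(len(old_results), k - min_new)
    for n_old in range(min_old, max_old + 1):
        if len(fresh) < k - n_old:
            continue  # not enough new results to fill the remaining slots
        for kept in combinations(old_results, n_old):
            slots, score = [None] * k, 0.0
            for url, rank, memorability in kept:
                if rank >= k or slots[rank] is not None:
                    score = float("-inf")  # cannot keep this result in place
                    break
                slots[rank] = url
                score += memorability
            if score == float("-inf"):
                continue
            # Fill the empty slots with the most relevant unseen results.
            fill = iter(fresh)
            for i in range(k):
                if slots[i] is None:
                    url, rel = next(fill)
                    slots[i] = url
                    score += rel
            if score > best_score:
                best_list, best_score = slots, score
    return best_list
```

For ten result slots this enumeration is tiny (at most a few hundred subsets), which is why the flow formulation mainly matters for longer lists or when old results may move to new positions.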
Re:Search Engine Architecture
[The same architecture diagram, shown again before the evaluation.]
Merged List Appears the Same
132 participants – between subjects
Merge types: Intelligent Merge, Original, New, Dumb Merge
Intelligent Merge looked like the Original list
Dumb Merge and New looked different
Study of How Lists Are Used
42 participants – within subjects
Two sessions a day apart, 12 tasks each session
Tasks based on queries; queries selected based on log analysis
Example: Session 1 task "Symptoms of stomach flu?" (query: "stomach flu"); Session 2 re-finding task "Symptoms of stomach flu?"; Session 2 new-finding task "What to expect at the ER?"
Intelligent Merging Supports Re-Finding and New-Finding
Re-finding: intelligent merging better than dumb merging, and almost as good as using the original list
New-finding: no difference between the new list and intelligent merging
Intelligent merging is the best of both worlds
Summary
People re-find often
Solution: Re:Search Engine
  Supports re-finding by supporting consistency
  Allows new-finding
  Uses memory lapses to the user's advantage
Architecture (focus: merge algorithm)
Studies show the Re:Search Engine works
Future Work
Improve and generalize the model
  More sophisticated measures of memorability
  Other types of lists (inboxes, directory listings)
Effectively use the model
  Highlight change as well as hide it
  Present change at the right time
This talk's focus: how to display new information. What about when?
Jaime Teevan teevan@microsoft.com Microsoft Research