1
Personalizing Information Search: Understanding Users and their Interests
Diane Kelly
School of Information & Library Science, University of North Carolina
dianek@email.unc.edu
IPAM | 04 October 2007
2
Background: IR and TREC
What is IR?
Who works on problems in IR?
Where can I find the most recent work in IR?
A TREC primer
3
Background: Personalization
Personalization is a process where retrieval is customized to the individual (not one-size-fits-all searching)
Hans Peter Luhn was one of the first people to personalize IR, through selective dissemination of information (SDI), now called 'filtering'
Profiles and user models are often employed to 'house' data about users and represent their interests
Figuring out how to populate and maintain the profile or user model is a hard problem
4
Major Approaches
Explicit Feedback
Implicit Feedback
User's desktop
5
Explicit Feedback
6
Explicit Feedback
Term relevance feedback is one of the most widely used and studied explicit feedback techniques
Typical relevance feedback scenarios (examples)
Systems-centered research has found that relevance feedback works, including pseudo-relevance feedback (see the sketch after this list)
User-centered research has found mixed results about its effectiveness
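Since pseudo-relevance feedback recurs throughout the talk, here is a minimal sketch of the idea: treat the top-ranked documents as if they were relevant and expand the query with their salient terms. The tokenizer, weighting scheme, and parameter values below are illustrative assumptions, not the systems discussed in the talk.

```python
# Minimal sketch of pseudo-relevance feedback (Rocchio-style term expansion).
from collections import Counter
import math

def tokenize(text):
    return [t.lower() for t in text.split() if t.isalpha()]

def pseudo_relevance_feedback(query, ranked_docs, top_k=10, n_terms=5):
    """Expand `query` with frequent, discriminative terms from the top-k results."""
    top_docs = ranked_docs[:top_k]
    tf = Counter()          # term frequency over the top-k documents
    df = Counter()          # document frequency over the top-k documents
    for doc in top_docs:
        terms = tokenize(doc)
        tf.update(terms)
        df.update(set(terms))
    query_terms = set(tokenize(query))
    scores = {t: tf[t] * math.log(1 + top_k / df[t])
              for t in tf if t not in query_terms}
    expansion = [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:n_terms]]
    return query + " " + " ".join(expansion)

# Usage: expanded = pseudo_relevance_feedback("rocky mountain spotted fever", docs)
```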
7
Explicit Feedback
Terms are not presented in context, so it may be hard for users to understand how they can help
Quality of terms suggested is not always good
Users don't have the additional cognitive resources to engage in explicit feedback
Users are too lazy to provide feedback
Questions about the sustainability of explicit feedback for long-term modeling
8
Examples
9
BACK
10
Query Elicitation Study
Users typically pose very short queries
This may be because:
- users have a difficult time articulating their information needs
- traditional search interfaces encourage short queries
Polyrepresentative extraction of information needs suggests obtaining multiple representations of a single information need (reference interview)
11
Motivation
Research has demonstrated that a positive relationship exists between query length and performance in batch-mode experimental IR
Query expansion is an effective technique for increasing query length, but research has demonstrated that users have some difficulty with traditional term relevance feedback features
12
Elicitation Form [Why Know] [Already Know] [Keywords]
13
Results: Number of Terms (N = 45)
Already Know: 9.33, Why: 16.18, Keywords: 10.67, 2.33
14
Experimental Runs (Source of Terms → Run ID)
Baseline → baseline
Baseline + Pseudo Relevance Feedback → pseudo05, pseudo10, pseudo20, pseudo50
Baseline + Elicitation Form Q2 → Q2
Baseline + Elicitation Form Q3 → Q3
Baseline + Elicitation Form Q4 → Q4
Baseline + Combination of Elicitation Form Questions → Q3Q4, Q2Q3, Q2Q4, Q234
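For concreteness, a minimal sketch of how such runs might be assembled from a baseline query plus the free-text responses to the three elicitation questions. The question labels follow the run table above; the tokenization and stopword handling are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch: building expanded query runs from elicitation-form responses.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that", "i"}

def terms(text):
    """Very simple tokenizer: lowercase, strip punctuation, drop stopwords."""
    cleaned = (t.strip(".,;:?!").lower() for t in text.split())
    return [t for t in cleaned if t and t not in STOPWORDS]

def build_runs(baseline_query, q2_already_know, q3_why, q4_keywords):
    """Return a dict of run-id -> query term list, mirroring the run table."""
    base = terms(baseline_query)
    q2, q3, q4 = terms(q2_already_know), terms(q3_why), terms(q4_keywords)
    return {
        "baseline": base,
        "Q2": base + q2,
        "Q3": base + q3,
        "Q4": base + q4,
        "Q2Q3": base + q2 + q3,
        "Q2Q4": base + q2 + q4,
        "Q3Q4": base + q3 + q4,
        "Q234": base + q2 + q3 + q4,
    }
```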
15
Overall Performance (chart values: 0.3685, 0.2843)
16
Query Length and Performance: y = 0.263 + 0.000265x (p < .001)
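As a quick worked reading of the fitted line (the query lengths below are illustrative, not values from the study):

$$\hat{y}(10) = 0.263 + 0.000265 \times 10 \approx 0.266, \qquad \hat{y}(100) = 0.263 + 0.000265 \times 100 \approx 0.290$$

so each additional query term predicts a small but consistent gain in retrieval performance.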
17
Major Findings
Users provided lengthy responses to some of the questions
There were large differences in the length of users' responses to each question
In most cases responses significantly improved retrieval
Query length and performance were significantly related
18
Implicit Feedback
19
Implicit Feedback
What is it? Information about users, their needs and document preferences that can be obtained unobtrusively, by watching users' interactions and behaviors with systems
What are some examples? (a logging sketch follows this list)
Examine: Select, View, Listen, Scroll, Find, Query, Cumulative measures
Retain: Print, Save, Bookmark, Purchase, Email
Reference: Link, Cite
Annotate/Create: Mark up, Type, Edit, Organize, Label
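One concrete (and purely illustrative) way to capture these behavior categories in logging code; the class, field names, and behavior strings below are assumptions, not an interface from the talk.

```python
# Minimal sketch of logging implicit-feedback events under the
# Examine / Retain / Reference / Annotate-Create categories listed above.
from dataclasses import dataclass, field
from time import time

CATEGORIES = {
    "examine": {"select", "view", "listen", "scroll", "find", "query"},
    "retain": {"print", "save", "bookmark", "purchase", "email"},
    "reference": {"link", "cite"},
    "annotate_create": {"mark_up", "type", "edit", "organize", "label"},
}

@dataclass
class ImplicitEvent:
    user_id: str
    url: str
    behavior: str                       # e.g. "view", "print", "bookmark"
    timestamp: float = field(default_factory=time)

    @property
    def category(self) -> str:
        for cat, behaviors in CATEGORIES.items():
            if self.behavior in behaviors:
                return cat
        return "other"

# Usage: ImplicitEvent("u1", "http://example.org", "bookmark").category -> "retain"
```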
20
Implicit Feedback
Why is it important?
It is generally believed that users are unwilling to engage in explicit relevance feedback
It is unlikely that users can maintain their profiles over time
Users generate large amounts of data each time they engage in online information-seeking activities, and the things in which they are 'interested' are somewhere in this data
21
Implicit Feedback
What do we "know" about it?
There seems to be a positive correlation between selection (click-through) and relevance
There seems to be a positive correlation between display time and relevance (see the sketch after this list for one way to measure this)
What is problematic about it?
Much of the research has been based on incomplete data and general behavior, and has not considered the impact of contextual variables – such as task and a user's familiarity with a topic – on behaviors
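One hedged way to quantify such a relationship, assuming per-document logs of display time and graded usefulness ratings; the data values below are made up for illustration.

```python
# Sketch: rank correlation between display time and usefulness ratings.
# Requires scipy.
from scipy.stats import spearmanr

display_seconds = [12, 45, 8, 90, 30, 5, 60]   # how long each document was displayed
usefulness      = [3, 6, 2, 7, 5, 1, 6]        # 1-7 usefulness ratings for the same documents

rho, p_value = spearmanr(display_seconds, usefulness)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```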
22
Implicit Feedback Study
To investigate:
- the relationship between behaviors and relevance
- the relationship between behaviors and context
To develop a method for studying and measuring behaviors, context and relevance in a natural setting, over time
23
Method
Approach: naturalistic and longitudinal, but with some control
Subjects/Cases: 7 Ph.D. students
Study period: 14 weeks
Compensation: new laptops and printers
24
Data Collection
Document
Context: Tasks, Topics, Persistence, Familiarity, Endurance, Frequency, Stage
Behaviors: Display Time, Printing, Saving
Relevance: Usefulness
25
Protocol (14 weeks, Start to End)
Start: Context Evaluation; Document Evaluations
Weeks 1–13: Client- & Server-side Logging
End: Context Evaluation; Document Evaluations
27
Results: Description of Data
Subject:          1         2          3          4          5         6           7
Client:           2.6 MB    6.8 MB     3.9 MB     2.0 MB     1.5 MB    21.7 MB     4.9 MB
Proxy:            1.7 GB    83 MB      39 MB      42 MB      48 MB     2.9 GB      2.1 GB
URLs Requested:   15,499    5,319      3,157      3,205      3,404     14,586      11,657
Docs Evaluated:   870 (5%)  802 (14%)  384 (12%)  353 (11%)  200 (6%)  1,328 (8%)  1,160 (10%)
Tasks:            6         11         19         25         12        21          33
Topics:           9         80         17         35         25        40          26
28
Relevance: Usefulness, mean (SD) by subject: 4.8 (1.65), 6.1 (2.00), 5.3 (2.20), 6.0 (0.80), 5.3 (2.40), 4.6 (0.80), 5.0 (2.40)
29
Relevance: Usefulness
30
Display Time
31
Display Time & Usefulness
32
Display Time & Task
Tasks:
1. Researching Dissertation
2. Shopping
3. Read News
4. Movie Reviews & Schedules
5. Preparing Course
6. Entertainment
33
Major Findings
Behaviors differed for each subject, but in general:
- most display times were low
- most usefulness ratings were high
- not much printing or saving
No direct relationship between display time and usefulness
34
Major Findings
Main effects for display time and all contextual variables:
- Task (5 subjects)
- Topic (6 subjects)
- Familiarity (5 subjects): lower levels of familiarity were associated with higher display times
No clear interaction effects among behaviors, context and relevance
35
Personalizing Search
Using the display time, task and relevance information from the study, we evaluated the effectiveness of a set of personalized retrieval algorithms
Four algorithms for using display time as implicit feedback were tested (a minimal sketch follows the list):
1. User
2. Task
3. User + Task
4. General
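A hedged sketch of the thresholding idea behind these four variants, assuming logged (user, task, display time) records and a rule that treats documents viewed longer than a threshold as implicitly relevant. The threshold statistic (mean display time) and the field names are assumptions, not the study's exact algorithm.

```python
# Sketch: pick a display-time threshold per user, per task, per (user, task),
# or globally ("general"), then flag long-dwell documents as implicit feedback.
from collections import defaultdict
from statistics import mean

def thresholds(log, scheme):
    """log: iterable of dicts with keys 'user', 'task', 'seconds', 'url'.
    scheme: 'user' | 'task' | 'user+task' | 'general'."""
    def key(rec):
        if scheme == "user":
            return rec["user"]
        if scheme == "task":
            return rec["task"]
        if scheme == "user+task":
            return (rec["user"], rec["task"])
        return "all"                       # general: one global threshold
    groups = defaultdict(list)
    for rec in log:
        groups[key(rec)].append(rec["seconds"])
    return {k: mean(v) for k, v in groups.items()}, key

def implicit_relevant(log, scheme):
    """Return URLs whose display time exceeds the scheme's threshold."""
    thresh, key = thresholds(log, scheme)
    return [rec["url"] for rec in log if rec["seconds"] > thresh[key(rec)]]
```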
36
Results (chart: MAP by iteration)
37
Major Findings
Tailoring display time thresholds based on task information improved performance, but doing so based on user information did not
There was a lot of variability between subjects, with the user-centered algorithms performing well for some and poorly for others
The effectiveness of most of the algorithms increased with time (and more data)
38
Some Problems
39
Relevance
What are we modeling? Does click = relevance?
Relevance is multi-dimensional and dynamic
A single measure does not adequately reflect 'relevance'
Most pages are likely to be rated as useful, even if the value or importance of the information differs
40
Definition Recipe
41
Weather Forecast
Information about Rocky Mountain Spotted Fever
42
Paper about Personalization
43
Page Structure
Some behaviors are more likely to occur on some types of pages
A more 'intelligent' modeling function would know when and what to observe and expect
The structure of pages encourages or inhibits certain behaviors
Not all pages are equally useful for modeling a user's interests
44
What types of behaviors do you expect here? And here?
46
The Future
47
Future
New interaction styles and systems create new opportunities for explicit and implicit feedback:
- Collaborative search features and query recommendation
- Features/systems that support the entire search process (e.g., saving, organizing, etc.)
- QA systems
New types of feedback:
- Negative
- Physiological
48
Thank You
Diane Kelly (dianek@email.unc.edu)
Web: http://ils.unc.edu/~dianek/research.html
Collaborators: Nick Belkin, Xin Fu, Vijay Dollu, Ryen White
49
TREC [Text REtrieval Conference] It’s not this …
50
What is TREC?
TREC is a workshop series sponsored by the National Institute of Standards and Technology (NIST) and the US Department of Defense. Its purpose is to build infrastructure for large-scale evaluation of text retrieval technology.
TREC collections and evaluation measures are the de facto standard for evaluation in IR.
TREC comprises different tracks, each of which focuses on different issues (e.g., question answering, filtering).
52
TREC Collections
Central to each TREC Track is a collection, which consists of three major components:
1. A corpus of documents (typically newswire)
2. A set of information needs (called topics)
3. A set of relevance judgments
Each Track also adopts particular evaluation measures (defined below):
- Precision and Recall; F-measure
- Average Precision (AP) and Mean AP (MAP)
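For reference, the standard definitions behind these measures (the notation here is mine, not the slide's): let Rel be the set of relevant documents for a topic, P@k the precision at rank k, and rel(k) equal 1 if the document at rank k is relevant and 0 otherwise. Then

$$P = \frac{|\text{relevant retrieved}|}{|\text{retrieved}|}, \qquad R = \frac{|\text{relevant retrieved}|}{|\text{relevant}|}, \qquad F_1 = \frac{2PR}{P + R}$$

$$AP = \frac{1}{|Rel|} \sum_{k=1}^{n} P@k \cdot rel(k), \qquad MAP = \frac{1}{|Q|} \sum_{q \in Q} AP(q)$$

where Q is the set of topics in the test collection.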
53
Comparison of Measures
List 1: documents at ranks 1–5 relevant, ranks 6–10 not relevant
  Precision at each relevant document: 1/1 = 1.0, 2/2 = 1.0, 3/3 = 1.0, 4/4 = 1.0, 5/5 = 1.0
  AP = 1.0
List 2: documents at ranks 1–5 not relevant, ranks 6–10 relevant
  Precision at each relevant document: 1/6 = .167, 2/7 = .286, 3/8 = .375, 4/9 = .444, 5/10 = .50
  AP = .354
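A small sketch that reproduces these AP values; the relevance patterns are taken directly from the table above, and the function is a standard AP computation rather than code from the talk.

```python
# Average precision over a ranked list of binary relevance judgments.
def average_precision(rels):
    """rels: list of 0/1 judgments in rank order."""
    hits, precisions = 0, []
    for k, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)        # precision at rank k
    return sum(precisions) / hits if hits else 0.0

list1 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]         # relevant docs at ranks 1-5
list2 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]         # relevant docs at ranks 6-10

print(average_precision(list1))                # 1.0
print(round(average_precision(list2), 3))      # 0.354

# MAP is simply the mean of AP over a set of topics/queries.
```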
54
Learn more about TREC
http://trec.nist.gov
Voorhees, E. M., & Harman, D. K. (2005). TREC: Experiment and Evaluation in Information Retrieval. Cambridge, MA: MIT Press.
BACK
55
Example Topic BACK
56
Learn more about IR
ACM SIGIR Conference
Sparck Jones, K., & Willett, P. (1997). Readings in Information Retrieval. Morgan Kaufmann Publishers.
Baeza-Yates, R., & Ribeiro-Neto, B. (1999). Modern Information Retrieval. New York, NY: ACM Press.
Grossman, D. A., & Frieder, O. (2004). Information Retrieval: Algorithms and Heuristics. The Netherlands: Springer.
BACK