

1 Effectiveness of Implicit Rating Data on Characterizing Users in Complex Information Systems. 9th ECDL 2005, Vienna, Austria, Sep. 20, 2005. Seonho Kim, Uma Murthy, Kapil Ahuja, Sandi Vasile, Edward A. Fox. Digital Library Research Laboratory (DLRL), Virginia Tech, Blacksburg, VA 24061, USA

2 Acknowledgements (Selected)
Sponsors: AOL; NSF grants DUE-0121679, DUE-0435059; Virginia Tech; …
Faculty/Staff: Lillian Cassel, Manuel Perez, …
VT (Former) Students: Aaron Krowne, Ming Luo, Hussein Suleman, …

3 Overview
Introduction
–Prior Work
–Web Trends and DL
–Data for User Studies
–Problem of Explicit Rating Data
–Implicit Rating Data in DLs
–Attributes of User Activity
–User Tracking Interface and User Model DB
Questions and Experiments
–Questions to Solve
–Experiments, Hypothesis Tests, Data, Settings
–Results of Hypothesis Testing
–Data Types and Characterizing Users
Future Work
Conclusions
References

4 Prior Work
User study, user feedback
–Pazzani et al. [1]: learned user profiles from user feedback on the interestingness of Web sites.
Log analysis & standardization efforts
–Jones et al. [2]: a transaction log analysis of a DL.
–Gonçalves et al. [3]: defined an XML log standard for DLs.
Implicit rating data
–Nichols [4]: suggested the use of implicit data as a check on explicit ratings.
–GroupLens [5]: employed a "time consuming" factor for personalization.

5 Web Trends & DL
WWW trends
–One-way → two-way services, e.g., blogs, wikis, online journals, forums, etc.
–Passive anonymous observers → visible individuals with personalities
The same situation holds in Digital Libraries
–Research emphasis on "user study": collaborative filtering, personalization, user modeling, recommender systems, etc.

6 Data for User Studies
Explicit ratings
–User interviews
–User preference surveys: demographic info, research area, majors, learning topics, publications
–User ratings for items
Implicit ratings
–"User activities", e.g., browsing, clicking, reading, opening, skipping, etc.
–Time

7 Problem of Explicit Rating Data in Digital Libraries
Expensive to obtain
Patrons feel bothered
Limited questions
Terminology problems in describing research interests and learning topics
–Too-broad areas vs. too-narrow personal interests
–Term ambiguity
–New terminology in new areas
–Multiple terms for the same area, multiple meanings of one term
→ Hard to figure out users' interests and topics

8 Implicit Rating Data in Complex Information Systems
Easy to obtain
Patrons don't feel bothered and can concentrate on their tasks
No terminology issues
Potential knowledge is included in the data
More effective when hybridized with explicit rating data (Nichols [4], GroupLens [5])

9 User Tracking Interface and User Model DB
[Diagram: a user tracking interface sits between the Digital Library retrieval system and the User Model DB. User actions (type a query, read, browse, open, click, expand, ignore) are captured as tracking info; the User Model DB supports load, update, save, and create operations.]

10 Attributes of User Activity
DGG (Domain Generalization Graph) for user activity attributes in a DL. Each activity carries Direction, Intention, Frequency, and Type attributes, each generalizing to ANY.
Direction (implicit/explicit):
–Entering a query, sending a query, reading, skipping, selecting, expanding a node, scrolling, dragging → implicit
–Entering user info → explicit
Intention (perceiving/rating):
–Entering a query, reading, scrolling, dragging → perceiving
–Sending a query, skipping, selecting, expanding a node, entering user info → rating
Type: User Interest, Document Topic → ANY. Frequency: High, Low → ANY. Rating, Perceiving → ANY. Implicit, explicit → ANY.
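The activity-to-attribute mapping above can be sketched as a small lookup table. This is an illustrative sketch only; the key names and the helper function are hypothetical, not the paper's actual schema.

```python
# Hypothetical encoding of the DGG attribute assignments from the slide:
# each tracked activity is labeled with a direction (implicit/explicit)
# and an intention (perceiving/rating).
ACTIVITY_ATTRIBUTES = {
    "enter_query":     {"direction": "implicit", "intention": "perceiving"},
    "send_query":      {"direction": "implicit", "intention": "rating"},
    "read":            {"direction": "implicit", "intention": "perceiving"},
    "skip":            {"direction": "implicit", "intention": "rating"},
    "select":          {"direction": "implicit", "intention": "rating"},
    "expand_node":     {"direction": "implicit", "intention": "rating"},
    "scroll":          {"direction": "implicit", "intention": "perceiving"},
    "drag":            {"direction": "implicit", "intention": "perceiving"},
    "enter_user_info": {"direction": "explicit", "intention": "rating"},
}

def rating_events(log):
    """Keep only the activities whose intention attribute is 'rating'."""
    return [a for a in log if ACTIVITY_ATTRIBUTES[a]["intention"] == "rating"]
```

Filtering an activity log this way would separate the rating-like signals (skips, selections, node expansions) from purely perceptual ones (reading, scrolling).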

11 Overview (agenda repeated; next section: Questions and Experiments)

12 Proposed User Grouping Model
User grouping is the most critical procedure for a recommender system.
Suitable for dynamic and complex information systems like DLs
Overcomes data sparseness
Uses implicit rather than explicit rating data
User-oriented recommender algorithm
User interest-based community finding
User modeling
–The User Model (UM) contains complete statistics for the recommender system.
–Enhanced interoperability

13 Collecting User Interests for User Grouping
Users with similar interests are grouped.
Employs a document clustering algorithm, LINGO [10], to collect document topics.
Users' interests are collected implicitly during searching and browsing.
A User Model (UM) contains a user's interests and document topics.
A user's interests are a subset of the document topics proposed to her by document clustering.
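Interest-based grouping of this kind can be sketched, for example, as greedy clustering on topic overlap. This is a hypothetical simplification for illustration; the slide does not specify the actual grouping algorithm, and the Jaccard threshold is an assumed parameter.

```python
def jaccard(a, b):
    """Jaccard similarity of two topic sets (assumed similarity measure)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def group_users(interests, threshold=0.3):
    """Greedy grouping sketch: put a user into the first group whose
    representative (first member) shares enough topics; otherwise
    start a new group. interests: user id -> set of topics."""
    groups = []  # each group is a list of user ids
    for user, topics in interests.items():
        for g in groups:
            if jaccard(topics, interests[g[0]]) >= threshold:
                g.append(user)
                break
        else:
            groups.append([user])
    return groups
```

With topic sets collected implicitly per user, two IR-oriented users would land in one group while a database-oriented user starts her own.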

14 Interest-based Recommender System

15 System Analysis with 5S Model
[Diagram: the Interest-based Recommender System for a DL analyzed with the 5S model.
–Society: users and interest groups (researcher, learner, teacher, class group); users participate in groups.
–Scenario: push service, filtering, ranking, highlighting, personalized pages, recommendation, group selection, individual selection.
–Space: probability space, vector space, collaboration space.
–Stream: text, audio, video; community displays.
–Structure: the UM schema representing user description, user interests, document topics, user groups, and statistics.]

16 User Model (UM)
User ID
User Description (explicit data, obtained from questionnaire): Name, E-mail Address, Publications, User Interests
Statistics (implicit data, generated by user interface and recommender): Document Topic + Score; User Interest + Score
Groups (implicit data, generated by recommender): Group ID + Score

17 Experiment - Tasks
Subjects are asked to:
–answer a questionnaire to collect demographic information
–list research interests, to help us collect the explicit rating data used for evaluation in the experiment
–search for documents in their research interests and browse the result documents, to help us collect implicit rating data

18 Experiment - Participants
22 Ph.D. and M.S. students majoring in Computer Science
CITIDEL [6] is used as a DL in the "Computing" field
Data from 4 students were excluded because their research domains are not covered by CITIDEL

19 Experiment - Interfaces
Specially designed user interfaces are required to capture users' interactions:
–JavaScript
–Java application

20 Results - Collected Data Example
Two raw topic-log records (topics are delimited by '<'; parenthesized topics were rated positively):
<Semi Structured Data<Cross Language Information Retrieval CLIR<Translation Model<Structured English Query<TREC Experiments at Maryland<Structured Document<Evaluation<Attribute Grammars<Learning<Web<Query Processing<Query Optimisers<QA<Disambiguation<Sources<SEQUEL<Fuzzy<Indexing<Inference Problem<Schematically Heterogeneity<Sub Optimization Query Execution Plan<Generation<(Other)(<Cross Language Information Retrieval CLIR)(<Structured English Query)(<TREC Experiments at Maryland)(<Evaluation)(<Query Processing)(<Query Optimisers)(<Disambiguation)
<Cross Language Information Retrieval CLIR<Machine Translation<English Japanese<Based Machine<TREC Experiments at Maryland<Approach to Machine<Natural Language<Future of Machine Translation<Machine Adaptable Dynamic Binary<CLIR Track<Systems<New<Tables Provide<Design<Statistical Machine<Query Translation<Evaluates<Chinese<USA October Proceedings<Interlingual<Technology<Syntax Directed Transduction<Interpretation<Knowledge<Linguistic<Divergences<(Other)(<Cross Language Information Retrieval CLIR)(<Machine Translation)(<English Japanese)(<TREC Experiments at Maryland)(<CLIR Track)(<Query Translation)

21 Questions to Solve
Is implicit rating data really effective for user studies? For characterizing users? Especially in complex information systems like DLs?
If we are to prove it statistically, what are the right hypotheses, and what are the right settings for hypothesis testing?

22 Two Experiments in this Study
Two hypothesis tests to prove the effectiveness of implicit rating data on characterizing users in a DL
An ANOVA test comparing implicit rating data types on distinguishing users in a DL

23 Hypothesis Tests
Hypotheses:
–H₁: For any serious user with her own research interests and topics, the document collections she refers to show repeated (consistent) output.
–H₂: For serious users who share common research interests and topics, the document collections they refer to show overlapping output.
–H₃: For serious users who share no research interests and topics, the document collections they refer to show different output.

24 Data Used for Hypothesis Tests
Users' learning topics and research interests are obtained "implicitly" by tracking users' activities with the user tracking interface, without users needing to be aware of it.
Data were collected by the user tracking system from 18 Ph.D. and M.S. students in CS while they used CITIDEL [6].

25 Setting for Hypothesis Test 1
Let H₀ be the null hypothesis of H₁; thus H₀ is: the means (μ) of the frequencies of document topics proposed by the document clustering algorithm are NOT consistent for a user.
Simplified → testing whether the population mean μ is statistically significantly greater than the hypothesized mean μ₀.

26 Setting for Hypothesis Test 2
Let H₀ be the null hypothesis of H₂; thus H₀ is: a user's average ratio of topics overlapping with other members of her groups, over all topics she has referred to (the in-group overlapping ratio, μ₁), is the same as her average ratio of topics overlapping with users outside her groups, over all topics she has referred to (the out-group overlapping ratio, μ₂).
[Diagram: a DL system with users a, b, c, d, e, f partitioned into user groups.]

27 Setting for Hypothesis Test 2
[Formulas for the in-group and out-group overlapping ratios, with:]
O_{i,j}: user i's ratio of topics overlapping with user j's topics, over i's total topics
G: total number of user groups
n_K: total number of users in group K
N: total number of users in the system
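The overlapping-ratio definitions above can be sketched as runnable code. The slide's formula images are not reproduced here, so the exact averaging order (per-user averages of O_{i,j} over in-group and out-group peers, then averaged over all users) is an assumption; the function and variable names are illustrative.

```python
def overlap(ti, tj):
    """O_{i,j}: fraction of user i's topics also referred to by user j."""
    ti, tj = set(ti), set(tj)
    return len(ti & tj) / len(ti) if ti else 0.0

def group_ratios(topics, groups):
    """Return (in-group, out-group) mean overlapping ratios (μ1, μ2 sketch).
    topics: user id -> set of topics; groups: user id -> group id."""
    users = list(topics)
    in_vals, out_vals = [], []
    for i in users:
        ins = [overlap(topics[i], topics[j]) for j in users
               if j != i and groups[j] == groups[i]]
        outs = [overlap(topics[i], topics[j]) for j in users
                if groups[j] != groups[i]]
        if ins:
            in_vals.append(sum(ins) / len(ins))
        if outs:
            out_vals.append(sum(outs) / len(outs))
    mu1 = sum(in_vals) / len(in_vals) if in_vals else 0.0
    mu2 = sum(out_vals) / len(out_vals) if out_vals else 0.0
    return mu1, mu2
```

Under H₂ one would expect μ₁ to come out clearly larger than μ₂, as in the reported results (0.103 vs. 0.0215).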

28 Setting for Hypothesis Test 2
Simplified → testing whether μ₁ is statistically significantly greater than μ₂.
Hypothesis 3 can be proven and estimated together with Hypothesis 2, by hypothesis test 2.

29 Results of Test 1
Conditions: 95% confidence (test size α = 0.05); sample size n < 25; standard deviation σ unknown; i.i.d. random samples from a normal distribution → estimated z-score t-test
Test statistics observed from the experiment: sample mean ȳ = 1.1429, sample standard deviation s = 0.2277
Rejection rule: reject H₀ if ȳ > μ₀ + z_{α/2}·σ/√n
From the experiment: ȳ = 1.1429 > μ₀ + z_{α/2}·σ/√n = 1.0934
Decision: reject H₀ and accept H₁
95% confidence interval for μ: 1.0297 ≤ μ ≤ 1.2561
P-value (confidence of H₀) = 0.0039
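The one-sample statistic behind these numbers can be sketched as a generic helper. This is an illustrative sketch, not the authors' code; the resulting t value would be compared against a t_{α, n-1} critical value from a table.

```python
import math

def one_sample_test(sample, mu0):
    """One-sample test of H0: μ = mu0 vs. H1: μ > mu0, σ unknown.
    Returns (sample mean, sample std dev, t statistic)."""
    n = len(sample)
    ybar = sum(sample) / n
    # Unbiased sample standard deviation (divide by n - 1).
    s = math.sqrt(sum((x - ybar) ** 2 for x in sample) / (n - 1))
    t = (ybar - mu0) / (s / math.sqrt(n))
    return ybar, s, t
```

With the slide's figures (ȳ = 1.1429, s = 0.2277, μ₀ = 1) the statistic comfortably exceeds the rejection threshold, matching the reported decision.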

30 Results of Test 2
Conditions: 95% confidence (test size α = 0.05); two i.i.d. random samples from normal distributions; sample sizes n₁ = n₂ < 25; standard deviations σ₁ and σ₂ unknown → two-sample Welch t-test
From the experiment: sample mean for μ₁, ȳ₁ = 0.103; sample mean for μ₂, ȳ₂ = 0.0215; Satterthwaite's degrees-of-freedom approximation df_s = 16.2; Welch score w₀ = 4.64 > t_{16.2, 0.05} = 1.745
Decision: reject H₀ and accept H₂
95% confidence intervals: 0.0659 ≤ μ₁ ≤ 0.1402; 0.0183 ≤ μ₂ ≤ 0.0247; 0.0468 ≤ μ₁ − μ₂ ≤ 0.1163
P-value (confidence of H₀) = 0.0003
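The Welch statistic and Satterthwaite degrees-of-freedom approximation used in Test 2 can be sketched from summary statistics (a generic implementation; the function name and interface are illustrative):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch two-sample t statistic and Satterthwaite's df approximation
    for samples with means m1, m2, std devs s1, s2, sizes n1, n2."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2          # per-sample variance of the mean
    t = (m1 - m2) / math.sqrt(v1 + v2)           # Welch score
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df
```

For a one-sided test of μ₁ > μ₂, the score is compared against t_{df, α}, as in the slide's w₀ = 4.64 > t_{16.2, 0.05} = 1.745.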

31 Results of Hypothesis Testing
Statistically showed that implicit rating data is effective for characterizing users in complex information systems.

32 Data Types and Characterizing Users
Previous similar studies were based on users' explicit answers to surveys about their preferences and research/learning topics → a basic flaw caused by the variety of academic terms.
Purpose: compare the effectiveness of different data types in characterizing users, using only automatically obtained objective data, without users' subjective answers.

33 Data Types and Characterizing Users
–Topics: noun phrases logged in User Models, generated by the document clustering system LINGO from documents the users referred to
–Terms: single words found in user queries and topics
–ANOVA statistics: F(3, 64) = 4.86, p-value = 0.0042, LSD = 1.7531

34 Data Types and Characterizing Users
A higher in-group/out-group overlapping-ratio quotient indicates a data type that is more effective in characterizing users.
"Proposed topics" that appeared during use of the digital library were most effective; however, the differences between data types were not significant, except for "proposed terms".

35 Future Work
Large-scale experiment on NDLTD [7]
User Model DB visualization
Utilizing implicit rating data for personalization and recommendation

36 Conclusions
Built a user tracking system to collect implicit rating data in a DL.
Statistically showed that implicit ratings are effective for characterizing users in complex information systems like DLs.
Compared the effectiveness of data types in characterizing users without depending on users' subjective answers.

37 References
[1] Michael Pazzani, Daniel Billsus: Learning and Revising User Profiles: The Identification of Interesting Web Sites. Machine Learning 27, 1997, 313-331.
[2] Steve Jones, Sally Jo Cunningham, Rodger McNab: An Analysis of Usage of a Digital Library. In Proceedings of the 2nd ECDL, 1998, 261-277.
[3] Marcos André Gonçalves, Ming Luo, Rao Shen, Mir Farooq, Edward A. Fox: An XML Log Standard and Tools for Digital Library Logging Analysis. In Proceedings of the 6th European Conference on Research and Advanced Technology for Digital Libraries, Rome, Italy, September 16-18, 2002.
[4] David M. Nichols: Implicit Rating and Filtering. In Proceedings of the 5th DELOS Workshop on Filtering and Collaborative Filtering, Budapest, Hungary, November 1997, 31-36.
[5] Joseph A. Konstan, Bradley N. Miller, David Maltz, Jonathan L. Herlocker, Lee R. Gordon, John Riedl: GroupLens: Applying Collaborative Filtering to Usenet News. Communications of the ACM, Vol. 40, No. 3, 1997, 77-87.
[6] CITIDEL: http://www.citidel.org/, 2005.
[7] NDLTD: http://www.ndltd.org/, 2005.

38 Review (agenda repeated from the Overview slide, as a summary)

