
Effectiveness of Implicit Rating Data on Characterizing Users in Complex Information Systems. 9th ECDL 2005, Vienna, Austria, Sep. 20, 2005. Seonho Kim, Uma Murthy, Kapil Ahuja, Sandi Vasile, Edward A. Fox. Digital Library Research Laboratory (DLRL), Virginia Tech, Blacksburg, VA, USA

Acknowledgements (Selected)
Sponsors: AOL; NSF grants DUE , DUE ; Virginia Tech; …
Faculty/Staff: Lillian Cassel, Manuel Perez, …
VT (Former) Students: Aaron Krowne, Ming Luo, Hussein Suleman, …

Overview
Introduction
–Prior Work
–Web Trends and DL
–Data for User Studies
–Problem of Explicit Rating Data
–Implicit Rating Data in DLs
–Attributes of User Activity
–User Tracking Interface and User Model DB
Questions and Experiments
–Questions to Solve
–Experiments, Hypothesis Tests, Data, Settings
–Results of Hypothesis Testing
–Data Types and Characterizing Users
Future Work
Conclusions
References

Prior Work
User study, user feedback
–Pazzani et al. [1]: learned user profiles from user feedback on the interestingness of Web sites.
Log analysis & standardization efforts
–Jones et al. [2]: a transaction log analysis of a DL.
–Gonçalves et al. [3]: defined an XML log standard for DLs.
Implicit rating data
–Nichols [4]: suggested the use of implicit data as a check on explicit ratings.
–GroupLens [5]: employed a “time consuming” factor for personalization.

Web Trends & DL
WWW trends
–One-way → two-way services; e.g., blogs, wikis, online journals, forums, etc.
–Passive anonymous observers → visible individuals with personalities
–The same situation holds in Digital Libraries
–Research emphasis on “user study”: collaborative filtering, personalization, user modeling, recommender systems, etc.

Data for User Studies
Explicit ratings
–User interviews
–User preference surveys: demographic info, research area, majors, learning topics, publications
–User ratings for items
Implicit ratings
–“User activities”, e.g., browsing, clicking, reading, opening, skipping, etc.
–Time

Problem of Explicit Rating Data in Digital Libraries
Expensive to obtain
Patrons feel bothered
Limited questions
Terminology problems in describing research interests and learning topics
–Research areas described too broadly, personal interests too narrowly
–Term ambiguity
–New terminology in new areas
–Multiple terms for the same area, multiple meanings of a term
⇒ Hard to figure out users’ interests and topics

Implicit Rating Data in Complex Information Systems
Easy to obtain
Patrons don’t feel bothered and can concentrate on their tasks
No terminology issues
Latent knowledge is contained in the data
More effective when combined with explicit rating data in a hybrid approach (Nichols [4], GroupLens [5])

User Tracking Interface and User Model DB
The tracking interface sits between the user and the Digital Library retrieval system. It captures user actions (type a query, read, browse, open, click, expand, ignore) and passes the tracking info to the User Model DB, which is created, loaded, updated, and saved as the session proceeds.
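As a sketch of the flow above, the tracking interface can be seen as appending weighted implicit-rating events to a per-user model. The action names, weights, and schema below are hypothetical illustrations, not the actual DLRL implementation:

```python
from collections import defaultdict

# Hypothetical implicit-rating weights per tracked action.
ACTION_WEIGHTS = {"type_query": 1.0, "click": 1.0, "open": 2.0,
                  "read": 3.0, "expand": 2.0, "ignore": -1.0}

class UserModelDB:
    """Toy stand-in for the User Model DB (create/load/update/save)."""
    def __init__(self):
        self.models = {}                       # user_id -> {topic: score}

    def create(self, user_id):
        self.models[user_id] = defaultdict(float)

    def update(self, user_id, topic, action):
        # Tracking info arrives as (topic, action) pairs from the interface.
        if user_id not in self.models:
            self.create(user_id)
        self.models[user_id][topic] += ACTION_WEIGHTS.get(action, 0.0)

    def load(self, user_id):
        return dict(self.models.get(user_id, {}))

db = UserModelDB()
db.update("u1", "information retrieval", "open")
db.update("u1", "information retrieval", "read")
db.update("u1", "databases", "ignore")
print(db.load("u1"))
```

Negative weights let clearly dismissive actions (here, "ignore") push a topic's score down rather than merely failing to raise it.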

Attributes of User Activity
DGG (Domain Generalization Graph) for user activity attributes in a DL. Each activity generalizes along four attributes, each of which generalizes to ANY:
–Type (implicit / explicit → ANY): entering a query, sending a query, reading, skipping, selecting, expanding a node, scrolling, dragging → implicit; entering user info. → explicit
–Intention (rating / perceiving → ANY): entering a query, reading, scrolling, dragging → perceiving; sending a query, skipping, selecting, expanding a node, entering user info. → rating
–Direction: User Interest, Document Topic → ANY
–Frequency: High, Low → ANY
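The Type and Intention mappings enumerated in the DGG can be captured as a small lookup table. The `generalize` helper and its level numbering are illustrative assumptions, not part of the original system:

```python
# Activity -> (Type, Intention), as enumerated in the DGG slide.
DGG = {
    "entering a query":    ("implicit", "perceiving"),
    "sending a query":     ("implicit", "rating"),
    "reading":             ("implicit", "perceiving"),
    "skipping":            ("implicit", "rating"),
    "selecting":           ("implicit", "rating"),
    "expanding a node":    ("implicit", "rating"),
    "scrolling":           ("implicit", "perceiving"),
    "dragging":            ("implicit", "perceiving"),
    "entering user info.": ("explicit", "rating"),
}

def generalize(activity, level):
    """Walk up the DGG: level 0 = the activity itself,
    level 1 = its (Type, Intention) pair, level 2 = ANY."""
    if level == 0:
        return activity
    if level == 1:
        return DGG[activity]
    return "ANY"

print(generalize("reading", 1))   # ('implicit', 'perceiving')
```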


Proposed User Grouping Model
User grouping is the most critical procedure for a recommender system. The proposed model:
–Is suitable for dynamic and complex information systems like DLs
–Overcomes data sparseness
–Uses implicit rating data rather than explicit rating data
–Is a user-oriented recommender algorithm
–Finds communities based on user interests
–Relies on user modeling: the User Model (UM) contains complete statistics for the recommender system, and enhances interoperability

Collecting User Interests for User Grouping
Users with similar interests are grouped.
–Employs a document clustering algorithm, LINGO [10], to collect document topics
–Users’ interests are collected implicitly during searching and browsing
–A User Model (UM) contains a user’s interests and document topics
–A user’s interests are a subset of the document topics proposed to her by document clustering
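A minimal sketch of this interest-collection step, assuming the clustering output and the user's referred topics are already available as plain lists (LINGO itself is not reproduced here; the function name is illustrative):

```python
def collect_interests(proposed_topics, referred_topics):
    """A user's interests are the subset of topics proposed by the
    document-clustering step that she actually referred to (selected,
    opened, read) while searching and browsing."""
    proposed = set(proposed_topics)
    return sorted(t for t in referred_topics if t in proposed)

proposed = ["machine translation", "query processing", "evaluation", "indexing"]
referred = ["query processing", "evaluation", "something off-list"]
print(collect_interests(proposed, referred))
```

Topics referred to but never proposed by the clustering step are dropped, matching the subset relation stated on the slide.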

Interest-based Recommender System

System Analysis with 5S Model
The interest-based recommender system for DLs, analyzed with the 5S framework:
–Society: users (researchers, learners, teachers; interest groups, class groups) who participate via the user interface and generate streams
–Scenario: recommendation (group selection, individual selection), push service, filtering, ranking, highlighting, personalized pages
–Space: probability space, vector space, collaboration space; community displays
–Structure: the User Model, represented by the UM schema (user description, user interests, document topics, user groups, statistics)
–Stream: text, audio, video

User Model (UM)
–User ID
–User Description: name, address, publications, user interests (explicit data, obtained from a questionnaire)
–Groups: group IDs with scores (implicit data, generated by the recommender)
–Statistics: document topics and user interests with scores (implicit data, generated by the user interface and the recommender)

Experiment - Tasks
Subjects are asked to:
–answer a questionnaire, to collect demographic information
–list their research interests, to help us collect the explicit rating data used for evaluation in the experiment
–search for documents in their research interests and browse the result documents, to help us collect implicit rating data

Experiment - Participants
22 Ph.D. and M.S. students majoring in Computer Science
CITIDEL [6] is used as a DL in the “Computing” field
Data from 4 students were excluded because their research domains are not covered by CITIDEL

Experiment - Interfaces
Specially designed user interfaces are required to capture users’ interactions:
–JavaScript
–Java application

Results - Collected Data Example
Each record is a “<”-delimited list of the topics a user referred to; topics repeated inside parentheses at the end are those rated positively.
Record 1: <Semi Structured Data<Cross Language Information Retrieval CLIR<Translation Model<Structured English Query<TREC Experiments at Maryland<Structured Document<Evaluation<Attribute Grammars<Learning<Web<Query Processing<Query Optimisers<QA<Disambiguation<Sources<SEQUEL<Fuzzy<Indexing<Inference Problem<Schematically Heterogeneity<Sub Optimization Query Execution Plan<Generation<(Other)(<Cross Language Information Retrieval CLIR)(<Structured English Query)(<TREC Experiments at Maryland)(<Evaluation)(<Query Processing)(<Query Optimisers)(<Disambiguation)
Record 2: <Cross Language Information Retrieval CLIR<Machine Translation<English Japanese<Based Machine<TREC Experiments at Maryland<Approach to Machine<Natural Language<Future of Machine Translation<Machine Adaptable Dynamic Binary<CLIR Track<Systems<New<Tables Provide<Design<Statistical Machine<Query Translation<Evaluates<Chinese<USA October Proceedings<Interlingual<Technology<Syntax Directed Transduction<Interpretation<Knowledge<Linguistic<Divergences<(Other)(<Cross Language Information Retrieval CLIR)(<Machine Translation)(<English Japanese)(<TREC Experiments at Maryland)(<CLIR Track)(<Query Translation)
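Assuming the delimiter conventions shown in the example (topics separated by “<”, positively rated topics repeated inside parentheses), a record could be parsed as follows; the function name and return shape are illustrative, not the study's actual tooling:

```python
import re

def parse_record(record):
    """Parse one logged record: topics are '<'-delimited; topics the user
    rated positively are repeated at the end as '(<topic)'.
    Returns (all_topics, positive_topics)."""
    # Positively rated topics appear as parenthesized '(<...)' groups.
    positives = re.findall(r"\(<([^)]*)\)", record)
    # Drop every parenthesized group (including '(Other)') to get the body.
    body = re.sub(r"\([^)]*\)", "", record)
    topics = [t.strip() for t in body.split("<") if t.strip()]
    return topics, positives

rec = "<Evaluation<Web<Query Processing<(Other)(<Evaluation)(<Query Processing)"
topics, pos = parse_record(rec)
print(topics, pos)
```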

Questions to Solve
Is implicit rating data really effective for user studies and for characterizing users, especially in complex information systems like DLs?
If we are to prove it statistically, what are the right hypotheses and the right settings for hypothesis testing?

Two Experiments in this Study
–Two hypothesis tests, to demonstrate the effectiveness of implicit rating data for characterizing users in a DL
–An ANOVA test, to compare implicit rating data types for distinguishing users in a DL

Hypothesis Tests
Hypotheses:
–H1: For any serious user with her own research interests and topics, the document collections referred to by the user show repeated (consistent) output.
–H2: For serious users who share common research interests and topics, the document collections referred to by them show overlapped output.
–H3: For serious users who don’t share any research interests and topics, the document collections referred to by them show different output.

Data Used for Hypothesis Tests
Users’ learning topics and research interests are obtained “implicitly” by tracking their activities with the user tracking interface; users need not be aware of the tracking.
The data were collected by the user tracking system from 18 students, at both Ph.D. and M.S. levels and majoring in CS, while they used CITIDEL [6].

Setting for Hypothesis Test 1
Let H0 be the null hypothesis of H1; thus H0 is: the means (μ) of the frequencies of document topics “proposed” by the document clustering algorithm are NOT consistent for a user.
Simplified ⇒ a test of whether the population mean, μ, is statistically significantly greater than the hypothesized mean, μ0.

Setting for Hypothesis Test 2
Let H0 be the null hypothesis of H2; thus H0 is: a user’s average ratio of topics overlapping with other members of her groups, over her total referred topics (the in-group overlapping ratio, μ1), is the same as her average ratio of topics overlapping with users outside her groups, over her total referred topics (the out-group overlapping ratio, μ2).
(Figure: users a, b, c, d, e, f of a DL system, partitioned into user groups.)

Setting for Hypothesis Test 2
Notation:
–O_i,j : user i’s ratio of topics overlapping with user j’s topics, over i’s total topics
–G : total number of user groups
–n_K : total number of users in group K
–N : total number of users in the system
In-group overlapping ratio: μ1 = (1/N) Σ_{K=1..G} Σ_{i∈K} [ (1/(n_K − 1)) Σ_{j∈K, j≠i} O_i,j ]
Out-group overlapping ratio: μ2 = (1/N) Σ_i [ (1/(N − n_K(i))) Σ_{j∉K(i)} O_i,j ], where K(i) denotes user i’s group
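The two ratios can be computed directly from per-user topic sets. The helper names and the toy data below are hypothetical, meant only to show the averaging structure:

```python
def overlap(topics_i, topics_j):
    """O_{i,j}: fraction of user i's topics also referred to by user j."""
    ti, tj = set(topics_i), set(topics_j)
    return len(ti & tj) / len(ti) if ti else 0.0

def group_ratios(user_topics, groups):
    """Mean in-group and out-group overlapping ratios over all users.
    `user_topics` maps user -> topic list; `groups` maps user -> group id."""
    users = list(user_topics)
    in_r, out_r = [], []
    for i in users:
        ins = [overlap(user_topics[i], user_topics[j])
               for j in users if j != i and groups[j] == groups[i]]
        outs = [overlap(user_topics[i], user_topics[j])
                for j in users if groups[j] != groups[i]]
        if ins:
            in_r.append(sum(ins) / len(ins))    # per-user in-group average
        if outs:
            out_r.append(sum(outs) / len(outs)) # per-user out-group average
    return sum(in_r) / len(in_r), sum(out_r) / len(out_r)

ut = {"a": ["ir", "clir"], "b": ["ir", "web"], "c": ["db", "sql"]}
g = {"a": 1, "b": 1, "c": 2}
mu1, mu2 = group_ratios(ut, g)
print(mu1, mu2)
```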

Setting for Hypothesis Test 2
Simplified ⇒ a test of whether μ1 is statistically significantly greater than μ2.
Hypothesis 3 can be proven and estimated together with Hypothesis 2, by hypothesis test 2.

Results of Test 1
Conditions: 95% confidence (test size α = 0.05), sample size n < 25, standard deviation σ unknown, i.i.d. random samples from a normal distribution ⇒ one-sample T-test with σ estimated by the sample standard deviation s
Test statistics: the sample mean ỹ and sample standard deviation s are observed from the experiment
Rejection rule: reject H0 if ỹ > μ0 + t_{α, n−1} · s/√n
From the experiment, ỹ > μ0 + t_{α, n−1} · s/√n; therefore the decision is to reject H0 and accept H1
A 95% confidence interval for μ and the p-value (the confidence of H0) were also obtained
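The decision rule can be reproduced with stdlib Python. The per-user topic-frequency sample below is a hypothetical stand-in, since the slide's actual numbers are not preserved in this transcript; the one-sided critical value t_{0.05,17} = 1.740 comes from a standard t-table:

```python
import math
import statistics

# Hypothetical per-user topic-frequency samples (n = 18, as in the study).
sample = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.4, 3.2, 2.9,
          3.6, 3.1, 2.8, 3.0, 3.3, 2.9, 3.2, 3.1]
mu0 = 2.5                        # hypothesized mean under H0 (illustrative)

n = len(sample)
ybar = statistics.mean(sample)
s = statistics.stdev(sample)     # sample standard deviation
t_stat = (ybar - mu0) / (s / math.sqrt(n))

# One-sided test at alpha = 0.05 with n-1 = 17 df; critical value from t-table.
t_crit = 1.740
reject_H0 = t_stat > t_crit
print(round(t_stat, 2), reject_H0)
```

Rejecting H0 here corresponds to accepting H1: the proposed topic frequencies are consistent for a user.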

Results of Test 2
Conditions: 95% confidence (test size α = 0.05), two i.i.d. random samples from normal distributions, sample sizes n1 = n2 < 25, standard deviations σ1 and σ2 unknown ⇒ two-sample Welch T-test
From the experiment: sample means ỹ1 = 0.103 and ỹ2 were observed for μ1 and μ2; Satterthwaite’s degrees-of-freedom approximation gave dfs = 16.2; Welch score w0 = 4.64 > t_{16.2, 0.05}
Therefore the decision is to reject H0 and accept H2
95% confidence intervals for μ1, μ2, and μ1 − μ2, and the p-value (the confidence of H0), were also obtained
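A stdlib sketch of the Welch test with Satterthwaite's approximation. The overlap-ratio samples are hypothetical stand-ins, and the critical value 1.746 (t-table, one-sided α = 0.05 at 16 df, near the slide's dfs = 16.2) is an assumed lookup:

```python
import math
import statistics

def welch_one_sided(x, y, t_crit):
    """Welch two-sample t statistic for H0: mu1 == mu2 vs H1: mu1 > mu2,
    with Satterthwaite's degrees-of-freedom approximation."""
    n1, n2 = len(x), len(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)
    se2 = v1 / n1 + v2 / n2
    w0 = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(se2)
    dfs = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return w0, dfs, w0 > t_crit

# Hypothetical in-group vs out-group overlapping ratios for 18 users.
in_group  = [0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.10, 0.11, 0.09,
             0.12, 0.10, 0.11, 0.09, 0.10, 0.12, 0.08, 0.11, 0.10]
out_group = [0.03, 0.02, 0.04, 0.03, 0.02, 0.03, 0.04, 0.02, 0.03,
             0.03, 0.02, 0.04, 0.03, 0.02, 0.03, 0.04, 0.02, 0.03]
w0, dfs, reject = welch_one_sided(in_group, out_group, t_crit=1.746)
print(round(w0, 2), round(dfs, 1), reject)
```

The unequal-variance form is what makes this a Welch rather than a pooled t-test: each sample contributes its own variance term to the standard error.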

Results of Hypothesis Testing
Statistically demonstrated that implicit rating data is effective in characterizing users in complex information systems.

Data Types and Characterizing Users
Previous similar studies were based on explicit user answers to surveys about their preferences and research/learning topics ⇒ a basic flaw caused by the variety of academic terms.
Purpose: compare the effectiveness of different data types in characterizing users, using only automatically obtained objective data, without relying on users’ subjective answers.

Data Types and Characterizing Users
–Topics: noun phrases logged in User Models, generated by the document clustering system LINGO from documents to which users referred
–Terms: single words found in user queries and topics
–ANOVA statistics: F(3,64) = 4.86, with the corresponding p-value and LSD
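A one-way ANOVA F statistic can be computed by hand from between-group and within-group sums of squares. The four "data type" groups below are small hypothetical stand-ins for the study's measurements (the slide reports F(3,64) = 4.86 on the real data):

```python
import statistics

def one_way_anova(groups):
    """One-way ANOVA F statistic for k groups:
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical in/out overlap-ratio quotients for four data types
# (e.g., proposed topics, referred topics, referred terms, proposed terms).
data = [[3.4, 3.1, 3.6], [2.9, 3.0, 2.8], [2.5, 2.6, 2.4], [1.1, 1.0, 1.2]]
f, df_b, df_w = one_way_anova(data)
print(round(f, 1), df_b, df_w)
```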

Data Types and Characterizing Users
The higher the in-group overlapping ratio relative to the out-group overlapping ratio, the more effective the data type is at characterizing users.
“Proposed topics” that appeared during use of the digital library were the most effective; however, the differences between data types were not significant, except for “proposed terms”.

Future Work
–Large-scale experiment on NDLTD [7]
–User Model DB visualization
–Utilize implicit rating data for personalization and recommendation

Conclusions
–Built a user tracking system to collect implicit rating data in a DL
–Statistically demonstrated that implicit rating data is effective for characterizing users in complex information systems like DLs
–Compared the effectiveness of data types in characterizing users without depending on users’ subjective answers

References
[1] Michael Pazzani, Daniel Billsus: Learning and Revising User Profiles: The Identification of Interesting Web Sites. Machine Learning 27, 1997.
[2] Steve Jones, Sally Jo Cunningham, Rodger McNab: An Analysis of Usage of a Digital Library. In Proceedings of the 2nd ECDL, 1998.
[3] Marcos André Gonçalves, Ming Luo, Rao Shen, Mir Farooq, and Edward A. Fox: An XML Log Standard and Tools for Digital Library Logging Analysis. In Proceedings of the 6th European Conference on Research and Advanced Technology for Digital Libraries, Rome, Italy, September 2002.
[4] David M. Nichols: Implicit Rating and Filtering. In Proceedings of the 5th DELOS Workshop on Filtering and Collaborative Filtering, Budapest, Hungary, November 1997.
[5] Joseph A. Konstan, Bradley N. Miller, David Maltz, Jonathan L. Herlocker, Lee R. Gordon, and John Riedl: GroupLens: Applying Collaborative Filtering to Usenet News. Communications of the ACM, Vol. 40, No. 3, 1997.
[6] CITIDEL: Available at
[7] NDLTD: Available at
