Recommenders for Information Seeking Tasks: Lessons Learned


Recommenders for Information Seeking Tasks: Lessons Learned Michael Yudelson

References
- Konstan, J. A., McNee, S. M., Ziegler, C.-N., Torres, R., Kapoor, N., Riedl, J.: Lessons on Applying Automated Recommender Systems to Information-Seeking Tasks. AAAI 2006.
- McNee, S. M., Kapoor, N., Konstan, J. A.: Don't Look Stupid: Avoiding Pitfalls When Recommending Research Papers. CSCW '06 (Banff, Alberta, Canada, November 4-8, 2006). ACM Press, New York, NY, 171-180.
- Michael Yudelson, AAAI 2006 Nectar Session notes.

Overview
- Statement of the Problem
- Theories
- General Advice
- Experiment
- Lessons Learned

“There is an emerging understanding that good recommendation accuracy alone does not give users of recommender systems an effective and satisfying experience. Recommender systems must provide not just accuracy, but also usefulness.”
J. L. Herlocker, J. A. Konstan, L. G. Terveen, and J. T. Riedl, "Evaluating Collaborative Filtering Recommender Systems", ACM Trans. Inf. Syst., vol. 22(1), pp. 5-53, 2004.

Statement of the Problem
- User is engaged in an information seeking task (or several): movies, papers, news
- Goal of the recommender is to meet user-specific needs with respect to:
  - Correctness
  - Saliency
  - Trust
  - Expectations
  - Usefulness

Theories
- Information Retrieval (IR)
- Machine Learning (ML)
- Human-Recommender Interaction (HRI)
- Information Seeking Theories:
  - Four Stages of Information Need (Taylor)
  - Mechanisms and Motivations Model (Wilson)
  - Theory of Sense-Making (Dervin)
  - Information Search Process (Kuhlthau)

General Advice
- Support multiple information seeking tasks
- User-centered design: shift focus from the system and algorithm to the potentially repeated interactions of a user with a system
- Recommend not what is “relevant”, but what is “relevant for info seeking task X”

General Advice (cont’d)
Choice of the recommender algorithm:
- Saliency (the emotional reaction a user has to a recommendation)
- Spread (the diversity of recommended items)
- Adaptability (how a recommender changes as a user changes)
- Risk (recommending items based on confidence)
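The "spread" dimension above can be made concrete as an intra-list diversity score, one common way to quantify how dissimilar the items in a recommendation list are. This is a sketch, not a metric from the talk itself; the function names and the dict-based item vectors are illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse feature vectors (dicts)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def intra_list_diversity(items):
    """Mean pairwise dissimilarity (1 - cosine) over a recommendation list.

    Higher values mean more "spread".
    """
    pairs = [(i, j) for i in range(len(items)) for j in range(i + 1, len(items))]
    if not pairs:
        return 0.0
    return sum(1 - cosine(items[i], items[j]) for i, j in pairs) / len(pairs)

# Toy item vectors: a narrow list vs. a diverse one
narrow = [{"cf": 1.0, "ir": 0.2}, {"cf": 0.9, "ir": 0.3}]
diverse = [{"cf": 1.0}, {"hci": 1.0}]
assert intra_list_diversity(diverse) > intra_list_diversity(narrow)
```

A recommender tuned only for accuracy tends to drive this score down; the talk's point is that such low-spread lists can feel redundant even when every item is "relevant".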

What Can Go Wrong
Possible pitfalls (semantic):
- not building user confidence (trust failure)
- not generating any recommendations (knowledge failure)
- generating incorrect recommendations (personalization failure)
- generating recommendations to meet the wrong need (context failure)

Experiment
- Domain: Digital Libraries (ACM)
- Information Seeking Tasks:
  - Find references to fit a document
  - Maintain awareness in a research field
- Subjects: 138
  - 18 students, 117 professors/researchers, 7 non-computer scientists

Experiment (cont’d)
Tested recommender algorithms:
- User-Based Collaborative Filtering (CF)
- Naïve Bayes Classifier (Bayes)
- Probabilistic Latent Semantic Indexing (PLSI)
- Textual TF-IDF-based algorithm (TFIDF)
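As a rough illustration of the first algorithm, here is a minimal user-based CF predictor. It is a sketch, not the implementation used in the experiment: the neighbourhood size, the cosine weighting, and the 0/1 "has cited" ratings are all illustrative assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two users' rating dicts (item -> rating)."""
    dot = sum(u[i] * v[i] for i in u if i in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(target, others, k=2, n=5):
    """Score items unseen by `target` with similarity-weighted votes from
    the k most similar users; return the top-n item ids."""
    neighbours = sorted(others, key=lambda u: cosine(target, u), reverse=True)[:k]
    scores = {}
    for u in neighbours:
        w = cosine(target, u)
        for item, r in u.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + w * r
    return [i for i, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:n]]

# Toy citation data: each user is a paper collection, items are cited papers
alice = {"p1": 1, "p2": 1}
bob   = {"p1": 1, "p2": 1, "p3": 1}
carol = {"p9": 1}
print(recommend(alice, [bob, carol]))  # "p3" ranks first
```

In the digital-library setting of the experiment, the "ratings" come from which papers appear together in users' citation lists, which is why the talk later stresses how sensitive these algorithms are to co-citation connectivity.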

Experiment (cont’d)
Walkthrough:
- Seed the document selection (self or others)
- Tasks (given the seeded documents):
  - What other papers in the DL would be interesting to read?
  - What papers would extend the coverage of the field?
- Compare recommendations of 2 algorithms (each recommends 5 items)
- Rate satisfaction with algorithm A or B on a Likert scale
- State preference for algorithm A or B

Experiment (cont’d)
Anticipated results:
- CF: the gold standard
- PLSI: comparable with CF
- Bayes: more mainstream recommendations, worse personalization
- TFIDF: more conservative, yet coherent results
- i.e., CF + PLSI vs. Bayes + TFIDF

Experiment (cont’d)
Results:
- Dimensions rated: Authoritative Work, Familiarity, Personalization, Good Recommendation, Expected, Good Spread, Suitability for Current Task
- CF + TFIDF received significantly better feedback than Bayes + PLSI
- No significant difference between CF & TFIDF, or between Bayes & PLSI
- Contradicts the IR & ML literature

Experiment (cont’d)
What went wrong:
- Bayes generated similar recommendations for all users
- PLSI produced random, “illogical” recommendations
- Both Bayes and PLSI:
  - were highly dependent on the “connectivity” (co-citation) of papers
  - suffered from inconsistency
  - didn’t “fail”, but were “inadequate”

Lessons Learned
- Understanding the task is more important than achieving high relevancy of recommendations for that task
- Understanding whether the searcher knows what s/he is looking for is crucial
- There is no silver bullet
- People think of recommenders as machine learning systems: modeling what you already know, predicting the past, and penalizing for predicting the future

Lessons Learned (cont’d)
- Dependence on offline experiments created a disconnect between algorithms that score well on accuracy metrics and algorithms that prove useful to users
- Problem of ecological validity

Lessons Learned (cont’d)
- 1 good recommendation in a list of 5 can win the trust of the user, but can just as easily lose it
- If user needs are unclear, do a user study to elicit them

Thank you! Questions…