Recommender Systems Session C Robin Burke DePaul University Chicago, IL.

Similar presentations
Chapter 11 user support. Issues –different types of support at different times –implementation and presentation both important –all need careful design.
Prediction Modeling for Personalization & Recommender Systems Bamshad Mobasher DePaul University Bamshad Mobasher DePaul University.
Data Mining Methodology 1. Why have a Methodology  Don’t want to learn things that aren’t true May not represent any underlying reality ○ Spurious correlation.
© 2010 Bennett, McRobb and Farmer1 Use Case Description Supplementary material to support Bennett, McRobb and Farmer: Object Oriented Systems Analysis.
Collaborative Filtering Sue Yeon Syn September 21, 2005.
CS305: HCI in SW Development Evaluation (Return to…)
Chapter 10 Schedule Your Schedule. Copyright 2004 by Pearson Education, Inc. Identifying And Scheduling Tasks The schedule from the Software Development.
COLLABORATIVE FILTERING Mustafa Cavdar Neslihan Bulut.
Chapter 14 Comparing two groups Dr Richard Bußmann.
Recommender Systems – An Introduction Dietmar Jannach, Markus Zanker, Alexander Felfernig, Gerhard Friedrich Cambridge University Press Which digital.
Recommender Systems Aalap Kohojkar Yang Liu Zhan Shi March 31, 2008.
Knowledge Acquisitioning. Definition The transfer and transformation of potential problem solving expertise from some knowledge source to a program.
Retrieval Evaluation. Brief Review Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
Agent Technology for e-Commerce
Inspection Methods. Inspection methods Heuristic evaluation Guidelines review Consistency inspections Standards inspections Features inspection Cognitive.
Recommender systems Ram Akella February 23, 2011 Lecture 6b, i290 & 280I University of California at Berkeley Silicon Valley Center/SC.
Recommender Systems; Social Information Filtering.
Administrivia Turn in ranking sheets, we’ll have group assignments to you as soon as possible Homeworks Programming Assignment 1 due next Tuesday Group.
Recommender systems Ram Akella November 26 th 2008.
Website Content, Forms and Dynamic Web Pages. Electronic Portfolios Portfolio: – A collection of work that clearly illustrates effort, progress, knowledge,
The 2nd International Conference of e-Learning and Distance Education, 21 to 23 February 2011, Riyadh, Saudi Arabia Prof. Dr. Torky Sultan Faculty of Computers.
© 2007 Pearson Education, Inc. Publishing as Pearson Addison-Wesley 1 Context of Software Product Design.
Evaluation of digital collections' user interfaces Radovan Vrana Faculty of Humanities and Social Sciences Zagreb, Croatia
IMSS005 Computer Science Seminar
Distributed Networks & Systems Lab. Introduction Collaborative filtering Characteristics and challenges Memory-based CF Model-based CF Hybrid CF Recent.
1 Information Filtering & Recommender Systems (Lecture for CS410 Text Info Systems) ChengXiang Zhai Department of Computer Science University of Illinois,
Process of Science The Scientific Method.
Adaptive News Access Daniel Billsus Presented by Chirayu Wongchokprasitti.
An Integration Framework for Sensor Networks and Data Stream Management Systems.
Feasibility Study.
A Simple Unsupervised Query Categorizer for Web Search Engines Prashant Ullegaddi and Vasudeva Varma Search and Information Extraction Lab Language Technologies.
1 Applying Collaborative Filtering Techniques to Movie Search for Better Ranking and Browsing Seung-Taek Park and David M. Pennock (ACM SIGKDD 2007)
User Study Evaluation Human-Computer Interaction.
CIKM’09 Date:2010/8/24 Advisor: Dr. Koh, Jia-Ling Speaker: Lin, Yi-Jhen 1.
 Text Representation & Text Classification for Intelligent Information Retrieval Ning Yu School of Library and Information Science Indiana University.
Module 4: Systems Development Chapter 12: (IS) Project Management.
IT Requirements Management Balancing Needs and Expectations.
The Structure of Information Retrieval Systems LBSC 708A/CMSC 838L Douglas W. Oard and Philip Resnik Session 1: September 4, 2001.
Objectives Objectives Recommendz: A Multi-feature Recommendation System Matthew Garden, Gregory Dudek, Center for Intelligent Machines, McGill University.
Introduction to Earth Science Section 2 Section 2: Science as a Process Preview Key Ideas Behavior of Natural Systems Scientific Methods Scientific Measurements.
Recommender Systems Robin Burke DePaul University Chicago, IL.
A Content-Based Approach to Collaborative Filtering Brandon Douthit-Wood CS 470 – Final Presentation.
MEMBERSHIP AND IDENTITY Active server pages (ASP.NET) 1 Chapter-4.
Ch 7. What Makes a Great Analysis? Taming The Big Data Tidal Wave 31 May 2012 SNU IDB Lab. Sengyu Rim.
Software Maintenance Speaker: Jerry Gao Ph.D. San Jose State University URL: Sept., 2001.
Copyright ©2004 Virtusa Corporation | CONFIDENTIAL Requirement Engineering Virtusa Training Group 2004 Trainer: Ojitha Kumanayaka Duration : 1 hour.
CSE SW Project Management / Module 15 - Introduction to Effort Estimation Copyright © , Dennis J. Frailey, All Rights Reserved CSE7315M15.
Requirements Engineering Requirements Engineering in Agile Methods Lecture-28.
User Modeling and Recommender Systems: Introduction to recommender systems Adolfo Ruiz Calleja 06/09/2014.
Bloom Cookies: Web Search Personalization without User Tracking Authors: Nitesh Mor, Oriana Riva, Suman Nath, and John Kubiatowicz Presented by Ben Summers.
What’s Ahead for Embedded Software? (Wed) Gilsoo Kim
User Modeling and Recommender Systems: recommendation algorithms
Speech Processing 1 Introduction Waldemar Skoberla phone: fax: WWW:
Copyright , Dennis J. Frailey CSE7315 – Software Project Management CSE7315 M15 - Version 9.01 SMU CSE 7315 Planning and Managing a Software Project.
User Stories- 2 Advanced Software Engineering Dr Nuha El-Khalili.
Presented By: Madiha Saleem Sunniya Rizvi.  Collaborative filtering is a technique used by recommender systems to combine different users' opinions and.
Recommender Systems Session F Robin Burke DePaul University Chicago, IL.
Dillon: CSE470: ANALYSIS1 Requirements l Specify functionality »model objects and resources »model behavior l Specify data interfaces »type, quantity,
CS223: Software Engineering Lecture 25: Software Testing.
Collaborative Filtering - Pooja Hegde. The Problem : OVERLOAD Too much stuff!!!! Too many books! Too many journals! Too many movies! Too much content!
Analysis of massive data sets Prof. dr. sc. Siniša Srbljić Doc. dr. sc. Dejan Škvorc Doc. dr. sc. Ante Đerek Faculty of Electrical Engineering and Computing.
 System Requirement Specification and System Planning.
Human Computer Interaction Lecture 21 User Support
Human Computer Interaction Lecture 21,22 User Support
Recommender Systems Session I
Software Requirements analysis & specifications
Q4 : How does Netflix recommend movies?
Chapter 5: Software effort estimation
Chapter 11 user support.
Presentation transcript:

Recommender Systems Session C Robin Burke DePaul University Chicago, IL

Roadmap
Session A: Basic Techniques I
– Introduction
– Knowledge Sources
– Recommendation Types
– Collaborative Recommendation
Session B: Basic Techniques II
– Content-based Recommendation
– Knowledge-based Recommendation
Session C: Domains and Implementation I
– Recommendation domains
– Example Implementation
– Lab I
Session D: Evaluation I
– Evaluation
Session E: Applications
– User Interaction
– Web Personalization
Session F: Implementation II
– Lab II
Session G: Hybrid Recommendation
Session H: Robustness
Session I: Advanced Topics
– Dynamics
– Beyond accuracy

New schedule
Tuesday
– 15:00-18:00 Session C and part of Session E
– 18:00-20:00 Independent lab (programming)
Wednesday
– 8:00-11:00 Session D (Evaluation)
– 11:15-13:00 Rest of Session E
– 14:30-16:00 Session H (Seminar room IST)
– 17:00-19:00 Session G
– Programming assignment
Thursday
– 8:00-9:45 Session I
– 10:00-11:00 Exam

Activity
With your partner, come up with a domain for recommendation
– cannot be music, movies, books, or restaurants
– can't already be the topic of your research
10 minutes

Domains?

Characteristics
Heterogeneity
– the diversity of the item space
Risk
– the cost associated with system error
Churn
– the frequency of changes in the item space
Interaction style
– how users interact with the recommender
Preference stability
– the lifetime of user preferences
Scrutability
– the requirement for transparency
Portfolio
– whether recommendation needs to take history into account
Novelty
– the need for novel / atypical items

Heterogeneity
How broad is the range of recommended items?
Examples
– Netflix: movies / TV shows; diverse subject matter, but still essentially serving the same goal (entertainment); relatively homogeneous
– Amazon.com: everything from books to electronics to gardening tools; many different goals to be satisfied; relatively heterogeneous

Considerations
Homogeneous items can have standardized descriptions
– movies have actors, directors, plot summaries, etc.
– possible to develop a solid body of content data
Heterogeneous items will be harder to represent

Impact
Content knowledge is a problem in heterogeneous domains
– hard to develop a good schema that represents everything
– hard to cover all items with useful domain knowledge
Social knowledge is one’s best bet

Risk
Some products are inherently low risk
– a 99-cent music track
Some are not
– a house
By “risk” we mean the cost of a false positive accepted by the user
Sometimes false negatives are also costly
– scientific research
– legal precedents

Considerations
In a low-risk domain
– it doesn’t matter so much how we choose
– the user will be less likely to have strong constraints
In a high-risk domain
– important to gather more information about exactly what the requirements are

Impact
Pure social recommendation will not work so well in high-risk domains
– inability to take constraints into account
– possibility of bias
Knowledge-based recommendation has great potential in high-risk domains
– knowledge engineering costs are worthwhile
– the user’s constraints can be employed

Churn
High churn means that items come and go quickly
– news
Low-churn items will be around for a while
– books
In the middle
– restaurants
– package vacations

Considerations
The new-item problem
– constant in high-churn domains
– consequence: difficult to build up a history of opinions
Freshness may matter
– a good match from yesterday might be worse than a weaker match from today

Impact
Difficult to employ social knowledge alone
– items won’t have big enough profiles
Need a flexible representation for content data
– since catalog characteristics aren’t known in advance

Interaction style
Some recommendation scenarios are passive
– the recommender produces content as part of a web site
Others are active
– the user makes a direct request
Sometimes a quick hit is important
– mobile applications
Sometimes more extensive exploration is called for
– rental apartments

Considerations
A passive style means user requirements are harder to tap into
– not necessarily impossible
A brief interaction means that only small amounts of information can be communicated
A long interaction
– like web site browsing
– may make up for deficiencies in passive data gathering

Impact
Passive interactions
– favor learning-based methods
– don’t need user requirements
Active interactions
– favor knowledge-based techniques
– other techniques don’t adapt quickly
Extended, passive interaction
– allows large amounts of opinion data to be gathered

Preference stability
Are users’ preferences stable over time?
Some taste domains may be consistent
– movies
– music (purchasing)
But others are not
– restaurants
– music (playlists)
Not the same as churn
– churn has to do with items coming and going

Considerations
Preference instability makes opinion data less useful
Approaches
– temporal decay (one common form is sketched below)
– contextual selection
With preference stability
– large profiles can be built
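The slides don’t define temporal decay; one common form (an assumption here, not from the tutorial) weights each stored opinion by its age:

```latex
% weight of user u's rating of item i, recorded at time t_{u,i}
w_{u,i} = e^{-\lambda \,(t_{\mathrm{now}} - t_{u,i})}
```

A larger $\lambda$ forgets unstable preferences faster; $\lambda = 0$ recovers the standard unweighted profile.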

Impact
Preference instability
– opinion data will be sparse
– knowledge-based recommendation may be better
Preference stability
– the best case for learning-based techniques

Scrutability
“The property of being testable; open to inspection.” (Wiktionary)
Usually refers to the explanatory capabilities of a recommender system
Some domains need explanations of recommendations
– usually high-risk domains
– also domains where users are non-experts: complex products like digital cameras

Considerations
Learning-based recommendations are hard to explain
– the underlying models are statistical
– some research in this area, but no conclusive “best way” to explain

Impact
Knowledge-based techniques are usually more scrutable

Portfolio
The “portfolio effect” occurs when an item is purchased or viewed and then is no longer interesting
Not always the case
– I can recommend your favorite song again in a couple of days
Sometimes recommendations have to take the entire history into account
– investments, for example

Considerations
A domain with the portfolio effect requires knowledge of the user’s history
– the standard formulation of collaborative recommendation: only recommend items that are unrated
A music recommender might need to know
– when a track was played
– what a reasonable time-span between repeats is
– how to avoid over-rotation
News recommendation
– tricky, because new stories on the same topic might be interesting as long as there is new material

Impact
A problem for content-based recommendation
– another copy of an item will match best
– must have another way to identify overlap, or threshold “not too similar”
Domain-specific requirements for rotation and portfolio composition
– a domain knowledge requirement

Novelty
“Milk and bananas”
– the two most-purchased items in US grocery stores
Could recommend them to everybody
– correct very frequently
But...
– not interesting
– people already know they want these things
– profit margins are low
– the recommender becomes very predictable

Consideration
Think about items
– where the target user’s predicted rating is significantly higher than the average
– where there is high variance (difference of opinion)
These recommendations might be more valuable
– more “personal”
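One way to make “significantly higher than the average” concrete (a sketch, not something the slides formalize) is a simple lift score comparing the personalized prediction to the item’s population mean:

```latex
% lift of item i for user u
\mathrm{lift}(u, i) = \hat{r}_{u,i} - \bar{r}_i
```

Ranking by lift rather than by $\hat{r}_{u,i}$ alone pushes down universally liked items such as “milk and bananas.”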

Impact
Collaborative methods are vulnerable to the “tyranny of the crowd”
– the “Coldplay” effect
It may be necessary to
– smooth popularity spikes
– use thresholds

Categorize Domains
15 min
10 min discussion

Break 10 minutes

Interaction
Input
– implicit
– explicit
Duration
– single response
– multi-step
Modeling
– short-term
– long-term

Recommendation Knowledge Sources Taxonomy
Recommendation Knowledge
– Collaborative: Opinion Profiles, Demographic Profiles
– User: Opinions, Demographics, Requirements (Query, Constraints, Preferences, Context)
– Content: Item Features, Domain Knowledge (Means-ends, Domain Constraints, Contextual Knowledge, Feature Ontology)

Also, Output
How to present results to users?

Input
Explicit
– ask the user what you want to know
– queries, ratings, preferences
Implicit
– gather information from behavior
– ratings, preferences, queries

Explicit Queries
The query elicitation problem
– how to get the user’s preferred features / constraints?
Issues
– user expertise / terminology

Example
Ambiguity
– “madonna and child”
Imprecision
– “a fast processor”
Terminological mismatch
– “an iTunes player”
Lack of awareness
– (I hate lugging a heavy laptop)

Feature Lists
Assume user familiarity

Recommendation Dialog
Fewer questions
– future questions can depend on current answers
Mixed-initiative
– the recommender can propose solutions
Critiquing
– examining solutions can help users define requirements
– (more about critiquing later)

Implicit Evidence
Watch the user’s behavior
– infer preferences
Benefits
– no extra user effort
– no terminological gap
Typical sources
– web server logs (more about this later)
– purchase / shopping cart history
– CRM interactions

Problems
Noise
– gift shopping
– distractions on the web
Interpretation
– visit = interest?
– long stay = interest?
– purchase? but what about purchase and then return?

Tradeoffs

       Explicit                          Implicit
Plus   Direct from user                  Easy to gather
       Expressive                        No user effort
       Less data needed
Minus  Requires user effort              Possibly noisy
       Requires user interface design    Challenges in interpretation
       May require user expertise

Modeling
Short-term
– usually we mean “single-session”
Long-term
– multi-session

Long-term Modeling
Preferences with a long duration
– tend to be general: 50s jazz vs. Sonny Rollins’s albums on Prestige
– tend to be personally meaningful: a preference for non-smoking hotel rooms
– may have a non-conscious component: preferring the “look” of certain kinds of houses

Short-term Modeling
What does the user want right now?
– usually need some kind of query
Preferences with short duration
– may be very task-specific: a preference for a train that connects with my arriving flight

Application design
Have to consider the role of recommendation in the overall application
– how would the user want to interact?
– how can the recommendation be delivered?

Simple Coding Exercise
A recommender systems evaluation framework
– a bit different from what you would use for a production system
– the goal is to evaluate different alternatives

Three exercises
Implement a simple baseline
– average prediction
Implement a new similarity metric
– Jaccard coefficient (see the sketch below)
Evaluate results on a data set
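The Jaccard coefficient itself isn’t shown in the slides; as a reference sketch, it compares the sets of items two users have rated (class and method names here are hypothetical, not the workspace’s):

```java
import java.util.HashSet;
import java.util.Set;

public class JaccardSimilarity {
    // Jaccard coefficient: |A ∩ B| / |A ∪ B| over the item IDs two users have rated
    public static double jaccard(Set<Integer> itemsA, Set<Integer> itemsB) {
        if (itemsA.isEmpty() && itemsB.isEmpty()) {
            return 0.0; // no evidence either way
        }
        Set<Integer> intersection = new HashSet<>(itemsA);
        intersection.retainAll(itemsB);
        Set<Integer> union = new HashSet<>(itemsA);
        union.addAll(itemsB);
        return (double) intersection.size() / union.size();
    }
}
```

Unlike Pearson correlation, Jaccard ignores the rating values and measures only how much the two users’ rated-item sets overlap.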

Download
Eclipse workspace file
– student-ws.zip

Structure
Main classes (from the workspace’s class diagram): DatasetReader, Profile, Rating, Movie, Predictor, Evaluator; profiles and ratings are held in standard Set and Map collections

Predictor
Abstract base class with two methods: initialize() and predict(user, item)
Subclasses: ThreePredictor, PearsonPredictor, AvePredictor, JaccardPredictor
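The workspace source isn’t reproduced in the slides; as a rough sketch of the shape this hierarchy implies (field names and the Profile stand-in are assumptions, not the actual workspace code):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the framework's Profile: one user's item -> rating map
class Profile {
    final Map<Integer, Double> ratings = new HashMap<>();
}

// Base class implied by the diagram; each subclass fills in the two methods
abstract class Predictor {
    protected final Map<Integer, Profile> profiles; // userId -> profile

    protected Predictor(Map<Integer, Profile> profiles) {
        this.profiles = profiles;
    }

    // Precompute anything expensive (similarities, averages, ...)
    public abstract void initialize();

    // Predicted rating of `item` for `user`
    public abstract double predict(int user, int item);
}
```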

Evaluator
Abstract base class with one method: evaluate()
Subclasses: MaeEvaluator, RmseEvaluator
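A sketch of the evaluator side, assuming held-out test ratings arrive as (user, item, rating) triples; an RmseEvaluator would differ only in squaring each error and taking the square root at the end. Illustrative, not the workspace code:

```java
import java.util.List;

class MaeEvaluator {
    private final Predictor predictor;
    private final List<double[]> testRatings; // each entry: {user, item, actualRating}

    MaeEvaluator(Predictor predictor, List<double[]> testRatings) {
        this.predictor = predictor;
        this.testRatings = testRatings;
    }

    // Mean absolute error of the predictor over the held-out ratings
    double evaluate() {
        double totalError = 0.0;
        for (double[] r : testRatings) {
            double predicted = predictor.predict((int) r[0], (int) r[1]);
            totalError += Math.abs(predicted - r[2]);
        }
        return totalError / testRatings.size();
    }
}
```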

Basic flow
– Create a dataset reader for the dataset
– Read the profiles
– Create a predictor using the profiles
– Create an evaluator for the predictor
– Call the evaluate method
– Output the evaluation statistic
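Wired together, the driver might look roughly like this (class names come from the slides; the constructor and readTestRatings() signatures are guesses for illustration):

```java
import java.util.Map;

public class Main {
    public static void main(String[] args) throws Exception {
        DatasetReader reader = new DatasetReader("data/u.data");  // create reader
        Map<Integer, Profile> profiles = reader.readProfiles();   // read profiles
        Predictor predictor = new AvePredictor(profiles);         // create predictor
        predictor.initialize();
        MaeEvaluator evaluator =                                  // evaluator for the predictor
                new MaeEvaluator(predictor, reader.readTestRatings());
        System.out.println("MAE = " + evaluator.evaluate());      // evaluate and report
    }
}
```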

PearsonPredictor
Similarity caching
– we need to calculate each user’s similarity to the others for each prediction anyway
– might as well compute it only once
– the standard time vs. space tradeoff (sketched below)
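A sketch of the caching idea (illustrative only; the real PearsonPredictor is in the workspace). Similarities are computed lazily and memoized under a symmetric key, trading memory for repeated computation:

```java
import java.util.HashMap;
import java.util.Map;

class PearsonWithCache {
    private final Map<Integer, Map<Integer, Double>> profiles; // user -> (item -> rating)
    private final Map<Long, Double> simCache = new HashMap<>(); // packed user pair -> similarity

    PearsonWithCache(Map<Integer, Map<Integer, Double>> profiles) {
        this.profiles = profiles;
    }

    // Look up the cached similarity, computing it on first use
    double similarity(int a, int b) {
        long key = ((long) Math.min(a, b) << 32) | Math.max(a, b); // symmetric key
        return simCache.computeIfAbsent(key, k -> pearson(a, b));
    }

    // Pearson correlation over the items both users have rated
    private double pearson(int a, int b) {
        Map<Integer, Double> ra = profiles.get(a), rb = profiles.get(b);
        double sumA = 0, sumB = 0, sumA2 = 0, sumB2 = 0, sumAB = 0;
        int n = 0;
        for (Map.Entry<Integer, Double> e : ra.entrySet()) {
            Double rbv = rb.get(e.getKey());
            if (rbv == null) continue; // not co-rated
            double av = e.getValue(), bv = rbv;
            sumA += av; sumB += bv;
            sumA2 += av * av; sumB2 += bv * bv; sumAB += av * bv;
            n++;
        }
        if (n == 0) return 0.0;
        double num = sumAB - sumA * sumB / n;
        double den = Math.sqrt((sumA2 - sumA * sumA / n) * (sumB2 - sumB * sumB / n));
        return den == 0 ? 0.0 : num / den;
    }
}
```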

Data sets
Four data sets
– Tiny (3 users): synthetic, for unit testing
– Test (5 users): also synthetic, for quick tests
– U-filtered: subset of the MovieLens dataset, standard for recommendation research
– u: full MovieLens 100K dataset
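For reference, the MovieLens 100K rating files are plain tab-separated lines of user id, item id, rating, and timestamp; a minimal reader for that format (the workspace’s DatasetReader presumably does something similar) could be:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

class SimpleDatasetReader {
    // Reads MovieLens-style lines: userId \t itemId \t rating \t timestamp
    static Map<Integer, Map<Integer, Double>> readProfiles(String path) throws IOException {
        Map<Integer, Map<Integer, Double>> profiles = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split("\t");
                int user = Integer.parseInt(f[0]);
                int item = Integer.parseInt(f[1]);
                double rating = Double.parseDouble(f[2]); // f[3] (timestamp) ignored here
                profiles.computeIfAbsent(user, u -> new HashMap<>()).put(item, rating);
            }
        }
        return profiles;
    }
}
```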

Demo

Task
Implement a better baseline
– ThreePredictor is weak
– better to use the item average

AvePredictor
Non-personalized prediction
– what Amazon.com shows
Idea
– cache the average score for each item
– when predict(user, item) is called, ignore the target user
Better idea
– norm for the user average
– calculate the item’s average deviation above each rater’s average
– average these deviations
– add to the target user’s average (see the sketch below)
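A sketch of the “better idea” (illustrative; class and field names are not the workspace’s): the prediction is the target user’s mean rating plus the item’s average deviation from each rater’s own mean, which corrects for users who rate systematically high or low.

```java
import java.util.HashMap;
import java.util.Map;

class NormedAvePredictor {
    private final Map<Integer, Map<Integer, Double>> profiles; // user -> (item -> rating)
    private final Map<Integer, Double> userMeans = new HashMap<>();
    private final Map<Integer, Double> itemDeviation = new HashMap<>(); // avg deviation from rater means

    NormedAvePredictor(Map<Integer, Map<Integer, Double>> profiles) {
        this.profiles = profiles;
    }

    void initialize() {
        // Per-user mean ratings
        for (Map.Entry<Integer, Map<Integer, Double>> e : profiles.entrySet()) {
            double sum = 0;
            for (double r : e.getValue().values()) sum += r;
            userMeans.put(e.getKey(), sum / e.getValue().size());
        }
        // For each item, average (rating - rater's mean) over everyone who rated it
        Map<Integer, Double> devSum = new HashMap<>();
        Map<Integer, Integer> devCount = new HashMap<>();
        for (Map.Entry<Integer, Map<Integer, Double>> e : profiles.entrySet()) {
            double mean = userMeans.get(e.getKey());
            for (Map.Entry<Integer, Double> ir : e.getValue().entrySet()) {
                devSum.merge(ir.getKey(), ir.getValue() - mean, Double::sum);
                devCount.merge(ir.getKey(), 1, Integer::sum);
            }
        }
        for (Integer item : devSum.keySet()) {
            itemDeviation.put(item, devSum.get(item) / devCount.get(item));
        }
    }

    // Target user's mean, shifted by how the item rates relative to its raters' means
    double predict(int user, int item) {
        double userMean = userMeans.getOrDefault(user, 3.0); // fallback: scale midpoint (assumption)
        return userMean + itemDeviation.getOrDefault(item, 0.0);
    }
}
```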

Existing unit test

Compare
– with the Pearson predictor

Process
Class time is scheduled for 18:00-20:00; use it to complete the assignment
Due before class tomorrow
Work in pairs if you prefer
Submit by email
– subject line: GRAZ H1
– body: names of students
– attach: AvePredictor.java