CS425: Algorithms for Web Scale Data

Most of the slides are from the Mining of Massive Datasets book. These slides have been modified for CS425. The original slides can be accessed at:

- Customer X: buys a Metallica CD, then buys a Megadeth CD
- Customer Y: does a search on Metallica; the recommender system suggests Megadeth from the data collected about customer X

Items reach users through Search and through Recommendations. Examples: products, web sites, blogs, news items, …

- Shelf space is a scarce commodity for traditional retailers (also: TV networks, movie theaters, …)
- The Web enables near-zero-cost dissemination of information about products: from scarcity to abundance
- More choice necessitates better filters: recommendation engines
- How Into Thin Air made Touching the Void a bestseller:

The long-tail distribution of item popularity. Source: Chris Anderson (2004)

Read to learn more!

Types of recommendations:
- Editorial and hand-curated: lists of favorites, lists of "essential" items
- Simple aggregates: Top 10, Most Popular, Recent Uploads
- Tailored to individual users: Amazon, Netflix, …

Formal model:
- X = set of Customers
- S = set of Items
- Utility function u: X × S → R
- R = set of ratings; R is a totally ordered set (e.g., 0-5 stars, or a real number in [0,1])
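As a concrete illustration (a sketch, not from the slides; the names and ratings below are hypothetical), the utility matrix is naturally stored as a sparse map, since most ratings are unknown:

```python
# A minimal sketch of the utility function u: X x S -> R,
# stored as a sparse dict because most ratings are unknown.
ratings = {
    ("Alice", "Avatar"): 5,   # hypothetical entries, not from the slides
    ("Alice", "Matrix"): 2,
    ("Bob", "Pirates"): 4,
}

def u(user, item):
    """Return the rating of `user` for `item`, or None if unknown."""
    return ratings.get((user, item))

print(u("Alice", "Avatar"))  # 5
print(u("Bob", "Avatar"))    # None -> unknown rating
```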

Utility matrix example: rows = users (Alice, Bob, Carol, David), columns = movies (Avatar, LOTR, Matrix, Pirates); most entries are unknown.

Key problems:
- (1) Gathering "known" ratings for the matrix: how to collect the data in the utility matrix
- (2) Extrapolating unknown ratings from the known ones: mainly interested in high unknown ratings; we are not interested in knowing what you don't like, but what you like
- (3) Evaluating extrapolation methods: how to measure the success/performance of recommendation methods

Gathering ratings:
- Explicit: ask people to rate items; doesn't work well in practice, since people can't be bothered
- Implicit: learn ratings from user actions (e.g., a purchase implies a high rating); but what about low ratings?

- Key problem: the utility matrix U is sparse; most people have not rated most items
- Cold start: new items have no ratings, new users have no history
- Three approaches to recommender systems:
  1) Content-based
  2) Collaborative
  3) Latent factor based
- This lecture covers the first two approaches

Content-based Recommendations
- Main idea: recommend to customer x items similar to previous items rated highly by x
- Examples:
  - Movie recommendations: recommend movies with the same actor(s), director, genre, …
  - Websites, blogs, news: recommend other sites with "similar" content

Plan of action (diagram): from the items the user likes, build item profiles; from those, build a user profile; then match the user profile against item profiles to recommend new items.

Item profiles:
- For each item, create an item profile
- A profile is a set (vector) of features
  - Movies: author, title, actor, director, …
  - Text: set of "important" words in the document
- How to pick important features? The usual heuristic from text mining is TF-IDF (Term Frequency × Inverse Doc Frequency); term … feature, document … item

Sidenote: TF-IDF
- f_ij = frequency of term (feature) i in doc (item) j
- TF_ij = f_ij / max_k f_kj (note: we normalize TF to discount for "longer" documents)
- n_i = number of docs that mention term i; N = total number of docs
- IDF_i = log(N / n_i)
- TF-IDF score: w_ij = TF_ij × IDF_i
- Doc profile = set of words with highest TF-IDF scores, together with their scores
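A minimal Python sketch of these formulas, assuming two toy documents (all data below is made up for illustration):

```python
import math
from collections import Counter

# Two toy documents (made up for illustration).
docs = {
    "d1": "the matrix is a science fiction movie".split(),
    "d2": "the lord of the rings is a fantasy movie".split(),
}

N = len(docs)                                   # total number of docs
counts = {d: Counter(ws) for d, ws in docs.items()}
n = Counter()                                   # n_i: docs mentioning term i
for c in counts.values():
    n.update(c.keys())

def tf_idf(term, doc):
    tf = counts[doc][term] / max(counts[doc].values())  # TF_ij = f_ij / max_k f_kj
    idf = math.log(N / n[term])                         # IDF_i = log(N / n_i)
    return tf * idf                                     # w_ij = TF_ij * IDF_i

print(tf_idf("matrix", "d1"))  # high: "matrix" appears in only one doc
print(tf_idf("movie", "d1"))   # 0.0: "movie" appears in every doc
```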

Two Types of Document Similarity
- In the LSH lecture: lexical similarity (large identical sequences of characters)
- For recommendation systems: content similarity (occurrences of common important words)
- TF-IDF score: if an uncommon word appears frequently in both documents, it contributes to their similarity
- Similar techniques (e.g., MinHashing and LSH) are still applicable

Representing Item Profiles
- A vector entry for each feature
- Boolean features: e.g., one boolean feature for every actor, director, genre, etc.
- Numeric features: e.g., budget of a movie, TF-IDF for a document, etc.
- We may need weighting terms for normalization of features

Example (director features are boolean; the budget amounts were not preserved in the transcript, only their units):

                Spielberg  Scorsese  Tarantino  Lynch  Budget
Jurassic Park       1          0         0        0     … M
Departed            0          1         0        0     … M
Eraserhead          0          0         0        1     … K
Twin Peaks          0          0         0        1     … M

User Profiles – Option 1
- Option 1: user profile = weighted average of the profiles of the items the user rated
- Example (values not preserved): a utility matrix (ratings 1-5) of Users 1-3 over Jurassic Park, Minority Report, Schindler's List, Departed, Aviator, Eraserhead, Twin Peaks, and the resulting Spielberg/Scorsese/Lynch user-profile scores
- Problem: missing scores look similar to bad scores

User Profiles – Option 2 (Better)
- Option 2: subtract each user's average rating from their ratings first, then take the weighted average of item profiles
- Example (values not preserved): the same utility matrix (ratings 1-5) with per-user averages subtracted, and the resulting Spielberg/Scorsese/Lynch user profiles; directors rated above a user's average get positive weights, those below get negative weights
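A hedged sketch of Option 2; the movies, features, and ratings below are illustrative stand-ins, not the slide's values:

```python
import numpy as np

features = ["Spielberg", "Scorsese", "Lynch"]
item_profiles = {                         # boolean director features
    "Jurassic Park": np.array([1, 0, 0]),
    "Departed":      np.array([0, 1, 0]),
    "Eraserhead":    np.array([0, 0, 1]),
}
ratings = {"Jurassic Park": 5, "Departed": 1, "Eraserhead": 3}  # one user

avg = sum(ratings.values()) / len(ratings)   # the user's average rating (3.0)
profile = sum((r - avg) * item_profiles[m] for m, r in ratings.items()) / len(ratings)

print(dict(zip(features, profile)))  # positive weight -> liked director
# {'Spielberg': 0.67, 'Scorsese': -0.67, 'Lynch': 0.0} (rounded)
```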

Prediction Heuristic
- Given: a feature vector for user U and a feature vector for movie M, predict user U's rating for movie M
- Which distance metric to use? Cosine distance is a good candidate:
  - works on weighted vectors
  - only directions are important, not the magnitudes (the magnitudes of movie and user vectors may be very different)

Reminder: Cosine Distance
For vectors x and y at angle θ: cos θ = (x · y) / (‖x‖ ‖y‖); cosine distance = 1 − cosine similarity

Reminder: Cosine Distance – Example
x = [0.1, 0.2, -0.1], y = [2.0, 1.0, 1.0]
x · y = 0.1×2.0 + 0.2×1.0 + (−0.1)×1.0 = 0.3
‖x‖ = √0.06, ‖y‖ = √6, so cos θ = 0.3 / (√0.06 × √6) = 0.3 / 0.6 = 0.5, i.e., θ = 60°
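The same computation, checked in Python:

```python
import numpy as np

x = np.array([0.1, 0.2, -0.1])
y = np.array([2.0, 1.0, 1.0])

cos_sim = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos_sim)                          # 0.5
print(np.degrees(np.arccos(cos_sim)))   # 60.0 degrees
```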

Prediction Example
- Given user and movie feature vectors over Actor 1 - Actor 4, predict the rating of user U for movies 1, 2, and 3 (the numeric feature values were not preserved in the transcript)
- Steps: compute each vector's magnitude; compute the cosine similarity between U and each movie; convert to cosine distance; interpret the result
- Interpretation: Movie 1: U neither likes nor dislikes it; Movie 2: U dislikes it; Movie 3: U likes it
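A hedged reconstruction of this workflow; since the slide's numbers were lost, the feature values below are made up, and the like/dislike thresholds are arbitrary choices:

```python
import numpy as np

# Hypothetical feature vectors over Actor 1 .. Actor 4.
user_u = np.array([1.0, -1.0, 0.5, 0.0])
movies = {
    "Movie 1": np.array([0.0, 0.0, 1.0, 1.0]),
    "Movie 2": np.array([-0.5, 1.0, 0.0, 0.0]),
    "Movie 3": np.array([1.0, -0.5, 0.0, 0.0]),
}

for name, m in movies.items():
    sim = user_u @ m / (np.linalg.norm(user_u) * np.linalg.norm(m))
    verdict = "likes" if sim > 0.5 else "dislikes" if sim < -0.5 else "neutral"
    print(f"{name}: cosine sim = {sim:+.2f} ({verdict})")
# Movie 1: +0.24 (neutral), Movie 2: -0.89 (dislikes), Movie 3: +0.89 (likes)
```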

Content-Based Approach: True or False?
- Needs data on other users: False
- Can handle users with unique tastes (e.g., someone who likes Metallica, Sinatra and Bieber): True; no need to have similarity with other users
- Can handle new items easily: True; items have well-defined features
- Can handle new users easily: False; how to construct user profiles?
- Can provide explanations for the predicted recommendations: True; we know which features contributed to the ratings

Pros: Content-based Approach
- +: No need for data on other users; no cold-start or sparsity problems
- +: Able to recommend to users with unique tastes
- +: Able to recommend new & unpopular items; no first-rater problem
- +: Able to provide explanations of recommended items, by listing the content features that caused an item to be recommended

Cons: Content-based Approach
- –: Finding the appropriate features is hard (e.g., images, movies, music)
- –: Recommendations for new users: how to build a user profile?
- –: Overspecialization
  - Never recommends items outside the user's content profile
  - People might have multiple interests
  - Unable to exploit quality judgments of other users (e.g., users who like director X also like director Y; user U rated X but doesn't know about Y)

Collaborative Filtering: harnessing the quality judgments of other users

User-user collaborative filtering:
- Consider user x
- Find a set N of other users whose ratings are "similar" to x's ratings
- Estimate x's ratings based on the ratings of the users in N

Finding "similar" users:
- r_x = [*, _, _, *, ***], r_y = [*, _, **, **, _] (rating vectors of users x and y)
- r_x, r_y as sets: r_x = {1, 4, 5}, r_y = {1, 3, 4}
- r_x, r_y as points: r_x = (1, 0, 0, 1, 3), r_y = (1, 0, 2, 2, 0)
- r̄_x, r̄_y … average ratings of x and y

- Intuitively we want: sim(A, B) > sim(A, C)
- Jaccard similarity: sim(A,B) = 1/5 < sim(A,C) = 2/4; wrong, because it ignores the rating values
- Cosine similarity: sim(A,B) = 0.380 > sim(A,C) = 0.322; better, but it considers missing ratings as "negative"
- Solution: subtract the (row) mean first; then sim(A,B) = 0.092 > sim(A,C) = −0.559, a much clearer separation
- Notice: cosine similarity is the Pearson correlation when the data is centered at 0
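The same comparison in code. The utility-matrix values are taken from the MMDS book's version of this example (an assumption, since the matrix itself did not survive in this transcript):

```python
import numpy as np

# Columns: HP1, HP2, HP3, TW, SW1, SW2, SW3; 0 = unknown rating.
A = np.array([4, 0, 0, 5, 1, 0, 0], dtype=float)
B = np.array([5, 5, 4, 0, 0, 0, 0], dtype=float)
C = np.array([0, 0, 0, 2, 4, 5, 0], dtype=float)

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def center(u):
    out = np.zeros_like(u)
    rated = u != 0
    out[rated] = u[rated] - u[rated].mean()  # subtract the row mean
    return out

print(cos(A, B), cos(A, C))   # 0.380 > 0.322: barely distinguishes B from C
print(cos(center(A), center(B)), cos(center(A), center(C)))  # 0.092 > -0.559
```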


Rating Predictions
- Prediction based on the top-2 neighbors of A who have also rated HP2
- Simple average: r_A,HP2 = (5 + 3) / 2 = 4

Rating Predictions (weighted)
- Prediction based on the top-2 neighbors who have also rated HP2, weighted by their similarity to A
- r_A,HP2 = (5 × 0.09 + 3 × 0) / (0.09 + 0) = 5

Item-Item Collaborative Filtering
- So far: user-user collaborative filtering
- Another view: item-item
  - For item i, find other similar items
  - Estimate the rating for item i based on the ratings for similar items
  - Can use the same similarity metrics and prediction functions as in the user-user model
- Prediction: r_xi = Σ_{j ∈ N(i;x)} s_ij · r_xj / Σ_{j ∈ N(i;x)} s_ij
  - s_ij … similarity of items i and j
  - r_xj … rating of user x on item j
  - N(i;x) … set of items rated by x that are similar to i
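A minimal sketch of this weighted average (the function and argument names are mine); the usage line borrows the numbers from the worked example that follows:

```python
def predict(x_ratings, sims, k=2):
    """Estimate r_xi.
    x_ratings: {item j: r_xj} ratings of user x.
    sims: {item j: s_ij} similarity of candidate neighbors j to item i."""
    # N(i;x): the k items most similar to i that x has rated
    neigh = sorted((j for j in sims if j in x_ratings),
                   key=lambda j: sims[j], reverse=True)[:k]
    num = sum(sims[j] * x_ratings[j] for j in neigh)
    den = sum(sims[j] for j in neigh)
    return num / den

# Numbers from the worked example below:
print(predict({"movie 3": 2, "movie 6": 3},
              {"movie 3": 0.41, "movie 6": 0.59}))  # 2.59, i.e. ~2.6
```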

Item-item CF example: a movies × users utility matrix; entries are ratings between 1 and 5, blank entries are unknown.

Goal: estimate the rating of movie 1 by user 5 (the "?" entry).

Neighbor selection: identify movies similar to movie 1 that were rated by user 5. Here we use Pearson correlation as the similarity:
1) Subtract the mean rating m_i from each movie i: m_1 = (1 + 3 + 5 + 5 + 4)/5 = 3.6, so row 1 becomes [-2.6, 0, -0.6, 0, 0, 1.4, 0, 0, 1.4, 0, 0.4, 0] (0 = unrated)
2) Compute cosine similarities between the rows


Compute similarity weights: s_1,3 = 0.41, s_1,6 = 0.59

Predict by taking the weighted average: r_1,5 = (0.41 × 2 + 0.59 × 3) / (0.41 + 0.59) ≈ 2.6

- Define similarity s_ij of items i and j
- Select the k nearest neighbors N(i; x): items most similar to i that were rated by x
- Estimate the rating r_xi as the weighted average:
  - Before: r_xi = Σ_{j ∈ N(i;x)} s_ij · r_xj / Σ_{j ∈ N(i;x)} s_ij
  - Now: r_xi = b_xi + Σ_{j ∈ N(i;x)} s_ij · (r_xj − b_xj) / Σ_{j ∈ N(i;x)} s_ij
- b_xi = μ + b_x + b_i is the baseline estimate for r_xi, where:
  - μ = overall mean movie rating
  - b_x = rating deviation of user x = (avg. rating of user x) − μ
  - b_i = rating deviation of movie i
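A sketch of the baseline-corrected estimate. The μ / b_x / b_i split in the usage lines is an assumption, chosen so that the baselines match the deviations used in the example below:

```python
def baseline(mu, b_x, b_i):
    return mu + b_x + b_i                      # b_xi = mu + b_x + b_i

def predict_with_baseline(mu, b_x, b, x_ratings, sims, i):
    """b: {item: b_i} item deviations; sims: {j: s_ij} neighbors of i rated by x."""
    num = sum(s * (x_ratings[j] - baseline(mu, b_x, b[j])) for j, s in sims.items())
    den = sum(sims.values())
    return baseline(mu, b_x, b[i]) + num / den

# Assumed split of mu/b_x/b so that b_xk = 3.2 and b_xm = 3.8 as in the example:
mu, b_x = 3.0, 0.0
b = {"i": 0.5, "k": 0.2, "m": 0.8}
print(predict_with_baseline(mu, b_x, b, {"k": 2, "m": 3},
                            {"k": 0.4, "m": 0.4}, "i"))  # b_xi - 1.0 = 2.5
```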


Example
- Items k and m: the two most similar items to i that are also rated by x; assume both have similarity values of 0.4
- Assume: r_xk = 2 and b_xk = 3.2 → deviation of −1.2
- Assume: r_xm = 3 and b_xm = 3.8 → deviation of −0.8

Example (cont'd)
- Weighted average of the deviations: (0.4 × (−1.2) + 0.4 × (−0.8)) / (0.4 + 0.4) = −1.0
- So r_xi = b_xi − 1.0: one full star below the baseline estimate for this user and movie

Item-Item vs. User-User
- In practice, it has been observed that item-item often works better than user-user
- Why? Items are simpler; users have multiple tastes

Collaborative Filtering: True or False?
- Needs data on other users: True
- Effective for users with unique tastes and esoteric items: False; relies on similarity between users or items
- Can handle new items easily: False; cold-start problems
- Can handle new users easily: False; cold-start problems
- Can provide explanations for the predicted recommendations:
  - User-user: False ("because users X, Y, Z also liked it")
  - Item-item: True ("because you also liked items i, j, k")

Pros/Cons of Collaborative Filtering
- + Works for any kind of item; no feature selection needed
- – Cold start: need enough users in the system to find a match
- – Sparsity: the user/ratings matrix is sparse; hard to find users that have rated the same items
- – First rater: cannot recommend an item that has not previously been rated (new items, esoteric items)
- – Popularity bias: cannot recommend items to someone with unique taste; tends to recommend popular items

Hybrid Methods
- Implement two or more different recommenders and combine their predictions, perhaps using a linear model
- Add content-based methods to collaborative filtering:
  - item profiles for the new-item problem
  - demographics to deal with the new-user problem
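For example, the linear-model combination might look like this minimal sketch (alpha and the two scores are illustrative):

```python
def hybrid_score(cb_score, cf_score, alpha=0.5):
    """Linear blend of a content-based and a collaborative score."""
    return alpha * cb_score + (1 - alpha) * cf_score

print(hybrid_score(4.2, 3.6, alpha=0.3))  # 0.3*4.2 + 0.7*3.6 = 3.78
```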

Item/User Clustering to Reduce Sparsity

Remarks and practical tips:
- Evaluation
- Error metrics
- Complexity / speed

Evaluation: start from a users × movies utility matrix of known ratings.

Withhold some known ratings as a test data set (the "?" entries) and try to predict them.

Compare the predictions with the withheld test ratings, e.g., via root-mean-square error (RMSE): RMSE = √( Σ_xi (r_xi − r*_xi)² / N ), where r_xi is the predicted rating and r*_xi the actual rating.

Problems with error measures:
- A narrow focus on accuracy sometimes misses the point:
  - prediction context
  - prediction diversity

Prediction Diversity Problem

- In practice, we care only about predicting high ratings:
  - RMSE might penalize a method that does well for high ratings and badly for others
  - Alternative: precision at top k (the fraction of the top-k recommended items that are actually relevant)
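Minimal sketches of both metrics (function names and test values are mine):

```python
import math

def rmse(predicted, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(predicted))

def precision_at_k(ranked_items, relevant, k):
    """Fraction of the top-k recommendations that are relevant."""
    return sum(1 for item in ranked_items[:k] if item in relevant) / k

print(rmse([4.0, 3.0], [5.0, 3.0]))                       # ~0.707
print(precision_at_k(["a", "b", "c"], {"a", "c"}, k=2))   # 0.5
```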

Collaborative filtering: complexity
- The expensive step is finding the k most similar customers: O(|X|), where X is the set of customers
- Too expensive to do at runtime; could pre-compute
- Naïve pre-computation takes time O(k · |X|)
- We already know how to speed this up:
  - near-neighbor search in high dimensions (LSH)
  - clustering
  - dimensionality reduction
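A hedged sketch of the naïve offline pre-computation with all-pairs cosine similarity; LSH or clustering would avoid comparing all pairs (the small matrix is illustrative):

```python
import numpy as np

def topk_neighbors(R, k):
    """R: users x items rating matrix (0 = unknown).
    Returns {user: k most cosine-similar users}, computed naively."""
    X = R / np.linalg.norm(R, axis=1, keepdims=True)
    S = X @ X.T                        # all-pairs cosine similarity: O(|X|^2)
    np.fill_diagonal(S, -np.inf)       # exclude self-similarity
    return {u: [int(j) for j in np.argsort(-S[u])[:k]] for u in range(len(R))}

R = np.array([[5, 0, 3],
              [4, 0, 3],
              [0, 5, 1]], dtype=float)
print(topk_neighbors(R, k=1))  # {0: [1], 1: [0], 2: [1]}
```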

Tip: add data
- Leverage all the data; don't try to reduce data size in an effort to make fancy algorithms work
- Simple methods on large data do best
- Add more data (e.g., add IMDB data on genres)
- More data beats better algorithms