Mining of Massive Datasets. Jure Leskovec, Anand Rajaraman, Jeff Ullman, Stanford University. Note to other teachers and users of these slides.

Similar presentations
Memory vs. Model-based Approaches SVD & MF Based on the Rajaraman and Ullman book and the RS Handbook. See the Adomavicius and Tuzhilin, TKDE 2005 paper.

Linear Regression.
Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Note to other teachers and users of these.
Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Note to other teachers and users of these.
Dimensionality Reduction PCA -- SVD
MMDS Secs Slides adapted from: J. Leskovec, A. Rajaraman, J. Ullman: Mining of Massive Datasets, October.
Lessons from the Netflix Prize Robert Bell AT&T Labs-Research In collaboration with Chris Volinsky, AT&T Labs-Research & Yehuda Koren, Yahoo! Research.
G54DMT – Data Mining Techniques and Applications Dr. Jaume Bacardit
The Wisdom of the Few A Collaborative Filtering Approach Based on Expert Opinions from the Web Xavier Amatriain Telefonica Research Nuria Oliver Telefonica.
Instructor: Mircea Nicolescu Lecture 13 CS 485 / 685 Computer Vision.
Rubi’s Motivation for CF  Find a PhD problem  Find “real life” PhD problem  Find an interesting PhD problem  Make Money!
x – independent variable (input)
Lessons from the Netflix Prize
Customizable Bayesian Collaborative Filtering Denver Dash Big Data Reading Group 11/19/2007.
Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Note to other teachers and users of these.
Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Note to other teachers and users of these.
DATA MINING LECTURE 7 Dimensionality Reduction PCA – SVD
Quest for $1,000,000: The Netflix Prize Bob Bell AT&T Labs-Research July 15, 2009 Joint work with Chris Volinsky, AT&T Labs-Research and Yehuda Koren,
Time Series Data Analysis - II
CS 277: Data Mining Recommender Systems
Chapter 12 (Section 12.4) : Recommender Systems Second edition of the book, coming soon.
Performance of Recommender Algorithms on Top-N Recommendation Tasks
Cao et al. ICML 2010 Presented by Danushka Bollegala.
Matrix Factorization Bamshad Mobasher DePaul University Bamshad Mobasher DePaul University.
Principled Regularization for Probabilistic Matrix Factorization Robert Bell, Suhrid Balakrishnan AT&T Labs-Research Duke Workshop on Sensing and Analysis.
Machine Learning Week 4 Lecture 1. Hand In Data Is coming online later today. I keep test set with approx test images That will be your real test.
Performance of Recommender Algorithms on Top-N Recommendation Tasks RecSys 2010 Intelligent Database Systems Lab. School of Computer Science & Engineering.
Data Mining - Volinsky Columbia University 1 Topic 12 – Recommender Systems and the Netflix Prize.
Training and Testing of Recommender Systems on Data Missing Not at Random Harald Steck at KDD, July 2010 Bell Labs, Murray Hill.
EMIS 8381 – Spring Netflix and Your Next Movie Night Nonlinear Programming Ron Andrews EMIS 8381.
CS 277: The Netflix Prize Professor Padhraic Smyth Department of Computer Science University of California, Irvine.
Time Series Data Analysis - I Yaji Sripada. Dept. of Computing Science, University of Aberdeen2 In this lecture you learn What are Time Series? How to.
Report #1 By Team: Green Ensemble AusDM 2009 ENSEMBLE Analytical Challenge: Rules, Objectives, and Our Approach.
Chengjie Sun,Lei Lin, Yuan Chen, Bingquan Liu Harbin Institute of Technology School of Computer Science and Technology 1 19/11/ :09 PM.
CSC321: 2011 Introduction to Neural Networks and Machine Learning Lecture 9: Ways of speeding up the learning and preventing overfitting Geoffrey Hinton.
Ensemble Learning Spring 2009 Ben-Gurion University of the Negev.
Matching Users and Items Across Domains to Improve the Recommendation Quality Created by: Chung-Yi Li, Shou-De Lin Presented by: I Gde Dharma Nugraha 1.
CROSS-VALIDATION AND MODEL SELECTION Many Slides are from: Dr. Thomas Jensen -Expedia.com and Prof. Olga Veksler - CS Learning and Computer Vision.
Recommender Systems Debapriyo Majumdar Information Retrieval – Spring 2015 Indian Statistical Institute Kolkata Credits to Bing Liu (UIC) and Angshul Majumdar.
Collaborative Filtering with Temporal Dynamics Yehuda Koren Yahoo Research Israel KDD’09.
Lecture 5 Instructor: Max Welling Squared Error Matrix Factorization.
CS425: Algorithms for Web Scale Data Most of the slides are from the Mining of Massive Datasets book. These slides have been modified for CS425. The original.
Pairwise Preference Regression for Cold-start Recommendation Speaker: Yuanshuai Sun
CSC321: Lecture 7:Ways to prevent overfitting
Netflix Challenge: Combined Collaborative Filtering Greg Nelson Alan Sheinberg.
Yue Xu Shu Zhang.  A person has already rated some movies, which movies he/she may be interested, too?  If we have huge data of user and movies, this.
Ensemble Methods Construct a set of classifiers from the training data Predict class label of previously unseen records by aggregating predictions made.
Online Social Networks and Media Recommender Systems Collaborative Filtering Social recommendations Thanks to: Jure Leskovec, Anand Rajaraman, Jeff Ullman.
Collaborative Filtering with Temporal Dynamics Yehuda Koren Yahoo! Israel KDD 2009.
Collaborative Filtering via Euclidean Embedding M. Khoshneshin and W. Street Proc. of ACM RecSys, pp , 2010.
Matrix Factorization & Singular Value Decomposition Bamshad Mobasher DePaul University Bamshad Mobasher DePaul University.
Company LOGO MovieMiner A collaborative filtering system for predicting Netflix user’s movie ratings [ECS289G Data Mining] Team Spelunker: Justin Becker,
Data Mining Lectures Lecture 7: Regression Padhraic Smyth, UC Irvine ICS 278: Data Mining Lecture 7: Regression Algorithms Padhraic Smyth Department of.
Neural networks (2) Reminder Avoiding overfitting Deep neural network Brief summary of supervised learning methods.
1 Dongheng Sun 04/26/2011 Learning with Matrix Factorizations By Nathan Srebro.
Matrix Factorization and Collaborative Filtering
High dim. data Graph data Infinite data Machine learning Apps
Statistics 202: Statistical Aspects of Data Mining
MATRIX FACTORIZATION TECHNIQUES FOR RECOMMENDER SYSTEMS
Adopted from Bin UIC Recommender Systems Adopted from Bin UIC.
DATA MINING LECTURE 6 Dimensionality Reduction PCA – SVD
Step-By-Step Instructions for Miniproject 2
Advanced Artificial Intelligence
Q4 : How does Netflix recommend movies?
Where did we stop? The Bayes decision rule guarantees an optimal classification… … But it requires the knowledge of P(ci|x) (or p(x|ci) and P(ci)) We.
Large Scale Support Vector Machines
Recommender Systems: Latent Factor Models
Matrix Factorization & Singular Value Decomposition
Unfolding with system identification
Analysis of Large Graphs: Overlapping Communities
Presentation transcript:

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Note to other teachers and users of these slides: We would be delighted if you found our material useful for giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. If you make use of a significant portion of these slides in your own lecture, please include this message, or a link to our web site:


480,000 users, 17,700 movies: Matrix R

Matrix R (480,000 users × 17,700 movies) is split into a training data set of known ratings and a test data set of held-out entries; for each test entry we compare the predicted rating with the true rating of user x on item i.

 The winner of the Netflix Challenge!
 Multi-scale modeling of the data: combine top-level, “regional” modeling of the data with a refined, local view:
 Global: overall deviations of users/movies
 Factorization: addressing “regional” effects
 Collaborative filtering: extract local patterns

 Global:
 Mean movie rating: 3.7 stars
 The Sixth Sense is 0.5 stars above avg.
 Joe rates 0.2 stars below avg.
 Baseline estimation: Joe will rate The Sixth Sense 4 stars
 Local neighborhood (CF/NN):
 Joe didn’t like the related movie Signs, so adjust the estimate down by 0.2 stars
 Final estimate: Joe will rate The Sixth Sense 3.8 stars
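As a quick sanity check, the baseline arithmetic above can be written out directly; the variable names below (mu, b_movie, b_user, neighbor_adjust) are illustrative, not from any library.

```python
mu = 3.7        # mean movie rating
b_movie = 0.5   # The Sixth Sense is 0.5 stars above average
b_user = -0.2   # Joe rates 0.2 stars below average

baseline = mu + b_movie + b_user   # global baseline: 4.0 stars
neighbor_adjust = -0.2             # Joe disliked the related movie Signs
final = baseline + neighbor_adjust # final estimate: 3.8 stars
```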

 Earliest and most popular collaborative filtering method
 Derive unknown ratings from those of “similar” movies (item-item variant)
 Define a similarity measure s_ij of items i and j
 Select the k nearest neighbors and compute the rating as a similarity-weighted average:
r̂_xi = Σ_{j∈N(i;x)} s_ij · r_xj / Σ_{j∈N(i;x)} s_ij
where s_ij is the similarity of items i and j, r_xj is the rating of user x on item j, and N(i;x) is the set of items similar to item i that were rated by x.
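A minimal sketch of the item-item weighted average, assuming a toy ratings dict and a precomputed similarity table (both made up here, not Netflix data):

```python
def predict(x, i, ratings, sim, k=2):
    """Weighted average of user x's ratings on the k items most similar to i."""
    rated = [j for j in ratings[x] if j != i and (i, j) in sim]
    neighbors = sorted(rated, key=lambda j: sim[(i, j)], reverse=True)[:k]
    num = sum(sim[(i, j)] * ratings[x][j] for j in neighbors)
    den = sum(sim[(i, j)] for j in neighbors)
    return num / den

ratings = {"joe": {"m1": 4.0, "m2": 2.0}}          # toy ratings of user "joe"
sim = {("m3", "m1"): 0.8, ("m3", "m2"): 0.2}       # toy item similarities
predict("joe", "m3", ratings, sim)                 # (0.8*4 + 0.2*2) / 1.0 = 3.6
```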

 In practice we get better estimates if we model deviations from a baseline:
r̂_xi = b_xi + Σ_{j∈N(i;x)} s_ij · (r_xj − b_xj) / Σ_{j∈N(i;x)} s_ij
where b_xi = μ + b_x + b_i is the baseline estimate for r_xi, μ is the overall mean rating, b_x = (avg. rating of user x) − μ is the rating deviation of user x, and b_i = (avg. rating of movie i) − μ.
Problems/issues:
1) Similarity measures are “arbitrary”
2) Pairwise similarities neglect interdependencies among users
3) Taking a weighted average can be restricting
Solution: instead of s_ij, use weights w_ij that we estimate directly from the data.
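The deviation-based estimate can be sketched the same way; mu, b_x, and the per-item biases below are toy values chosen so the prediction is easy to check by hand:

```python
def baseline(mu, b_x, b_i):
    # b_xi = mu + b_x + b_i
    return mu + b_x + b_i

def predict(x_ratings, b_x, b_item, mu, sim_to_i, i):
    # baseline for item i plus the similarity-weighted average of deviations
    num = sum(s * (x_ratings[j] - baseline(mu, b_x, b_item[j]))
              for j, s in sim_to_i.items())
    den = sum(sim_to_i.values())
    return baseline(mu, b_x, b_item[i]) + num / den

mu, b_x = 3.7, -0.2                                # toy global mean and user bias
b_item = {"signs": 0.0, "sixth_sense": 0.5}        # toy item biases
x_ratings = {"signs": 4.0}                         # user rated Signs above baseline
sim_to_i = {"signs": 1.0}                          # similarity to the target item
predict(x_ratings, b_x, b_item, mu, sim_to_i, "sixth_sense")  # 4.0 + 0.5 = 4.5
```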


Why is this a good idea?

 Goal: make good recommendations
 Quantify goodness using RMSE: lower RMSE means better recommendations
 We want to make good recommendations on items the user has not yet seen. We can’t really measure that!
 So let’s build a system that works well on known (user, item) ratings, and hope the system will also predict the unknown ratings well.

RMSE = √( (1/|T|) · Σ_{(x,i)∈T} (r̂_xi − r_xi)² ), where r̂_xi is the predicted rating, r_xi the true rating, and T the set of test ratings.
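RMSE over a held-out set can be computed in a few lines; the two small lists below are illustrative:

```python
from math import sqrt

def rmse(pred, true):
    # root of the mean squared difference between predictions and truth
    return sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

rmse([4.0, 3.0, 5.0], [3.0, 3.0, 5.0])  # one error of 1 star over 3 ratings
```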


Gradient descent (η … learning rate):
while |w_new − w_old| > ε:
    w_old = w_new
    w_new = w_old − η · ∇f(w_old)
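A runnable version of this loop, minimizing a simple 1-D quadratic f(w) = (w − 3)²; the learning rate and tolerance are illustrative values:

```python
eta, eps = 0.1, 1e-8

def grad(w):
    # gradient of f(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

w_old = 0.0
w_new = w_old - eta * grad(w_old)
while abs(w_new - w_old) > eps:   # stop when the update is tiny
    w_old = w_new
    w_new = w_old - eta * grad(w_old)
# w_new is now very close to the minimizer w = 3
```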

Recap: Global effects → Factorization → CF/NN

Grand Prize:
Netflix:
Movie average:
User average:
Global average:
Basic collaborative filtering: 0.94
CF + biases + learned weights: 0.91

[Figure: movies placed in a 2-D space, the horizontal axis running from “geared towards females” to “geared towards males” and the vertical axis from “serious” to “funny”: The Princess Diaries, The Lion King, Braveheart, Lethal Weapon, Independence Day, Amadeus, The Color Purple, Dumb and Dumber, Ocean’s 11, Sense and Sensibility.]

 “SVD” on Netflix data: R ≈ Q · P^T (compare the full SVD: A = U Σ V^T)
 For now let’s assume we can approximate the rating matrix R as a product of “thin” factor matrices Q · P^T
 R has missing entries, but let’s ignore that for now!
 Basically, we will want the reconstruction error to be small on known ratings, and we don’t care about the values on the missing ones.

 How to estimate the missing rating of user x for item i? Take the dot product of the factor vectors:
r̂_xi = q_i · p_x, where q_i is row i of Q and p_x is column x of P^T.


 How to estimate the missing rating of user x for item i? On the slide’s toy numbers, the dot product gives r̂_xi = q_i · p_x = 2.4 (q_i = row i of Q, p_x = column x of P^T, summed over the f latent factors).
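A sketch of the dot-product prediction; the two-factor vectors below are toy values (chosen so the result matches the 2.4 shown on the slide), not learned factors:

```python
def predict(q_i, p_x):
    # r_hat(x, i) = q_i . p_x, summed over the latent factors
    return sum(qf * pf for qf, pf in zip(q_i, p_x))

q_i = [1.2, -0.5]   # toy item factor vector (row i of Q)
p_x = [2.0, 0.0]    # toy user factor vector (column x of P^T)
predict(q_i, p_x)   # 1.2*2.0 + (-0.5)*0.0 = 2.4
```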



 Remember SVD: A = U Σ V^T
 A: input data matrix (m × n)
 U: left singular vectors
 Σ: diagonal matrix of singular values
 V: right singular vectors
 So in our case, “SVD” on Netflix data R ≈ Q · P^T means A = R, Q = U, P^T = Σ V^T
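This correspondence is easy to check numerically; the sketch below (assuming NumPy is available) takes the SVD of a small toy matrix and reassembles it as Q · P^T:

```python
import numpy as np

# Toy "ratings" matrix with no missing entries
A = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [1.0, 1.0, 5.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Q = U                    # Q = U
Pt = np.diag(s) @ Vt     # P^T = Sigma V^T
recon = Q @ Pt           # Q . P^T reproduces A exactly (full-rank SVD)
```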




 Want to minimize SSE for unseen test data
 Idea: minimize SSE on training data
 We would like a large k (# of factors) to capture all the signals
 But SSE on test data begins to rise for k > 2
 This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise
 That is, it fits the training data too well and thus does not generalize to unseen test data

 To solve overfitting we introduce regularization:
min_{P,Q} Σ_{(x,i)∈training} (r_xi − q_i·p_x)² + λ₁ Σ_x ‖p_x‖² + λ₂ Σ_i ‖q_i‖²
 The first term is the “error”; the λ terms penalize the “length” of the factors; λ₁, λ₂ are user-set regularization parameters
 Allow a rich model where there are sufficient data; shrink aggressively where data are scarce
 Note: we do not care about the “raw” value of the objective function, but we do care about the P, Q that achieve the minimum of the objective

[Figure: the same 2-D factor space (serious vs. funny, geared towards females vs. males) with the movies above, now annotated with the regularized objective min over factors of “error” + “length”; successive slides show regularization shrinking the factors of sparsely rated movies such as The Princess Diaries toward the origin.]




How to compute the gradient of a matrix? Compute the gradient of every element independently!
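Putting the pieces together, here is a hedged sketch of stochastic gradient descent for the regularized factor model with biases; the ratings, the number of factors k, the learning rate, and λ are all toy values, not the settings used in the competition:

```python
import random

random.seed(0)
# Toy (user, item, rating) triples standing in for the training set
ratings = [("u1", "m1", 5.0), ("u1", "m2", 3.0),
           ("u2", "m1", 4.0), ("u2", "m3", 1.0),
           ("u3", "m2", 4.0), ("u3", "m3", 2.0)]
k, eta, lam = 2, 0.02, 0.1
mu = sum(r for _, _, r in ratings) / len(ratings)

# Initialize biases at 0 and factors with small random values
b_u, b_i, p, q = {}, {}, {}, {}
for u, i, _ in ratings:
    b_u.setdefault(u, 0.0)
    b_i.setdefault(i, 0.0)
    p.setdefault(u, [random.uniform(-0.1, 0.1) for _ in range(k)])
    q.setdefault(i, [random.uniform(-0.1, 0.1) for _ in range(k)])

def sse():
    # sum of squared errors of r_hat = mu + b_u + b_i + q_i . p_x
    return sum((r - (mu + b_u[u] + b_i[i]
                     + sum(a * b for a, b in zip(q[i], p[u])))) ** 2
               for u, i, r in ratings)

err_before = sse()
for _ in range(200):                       # epochs
    for u, i, r in ratings:
        e = r - (mu + b_u[u] + b_i[i]
                 + sum(a * b for a, b in zip(q[i], p[u])))
        # Update each parameter along its own gradient, shrunk by lam
        b_u[u] += eta * (e - lam * b_u[u])
        b_i[i] += eta * (e - lam * b_i[i])
        for f in range(k):
            qif, pxf = q[i][f], p[u][f]
            q[i][f] += eta * (e * pxf - lam * qif)
            p[u][f] += eta * (e * qif - lam * pxf)
err_after = sse()   # training error drops after SGD
```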


 Convergence of GD vs. SGD (objective value vs. iteration/step):
 GD improves the value of the objective function at every step.
 SGD improves the value, but in a “noisy” way.
 GD takes fewer steps to converge, but each step takes much longer to compute.
 In practice, SGD is much faster!


Koren, Bell, Volinsky, IEEE Computer.

r̂_xi = μ + b_x + b_i + q_i·p_x
 μ = overall mean rating
 b_x = bias of user x
 b_i = bias of movie i
 q_i·p_x = user-movie interaction
Baseline predictor (μ + b_x + b_i):
 Separates users and movies
 Benefits from insights into users’ behavior
 Among the main practical contributions of the competition
User-movie interaction (q_i·p_x):
 Characterizes the matching between users and movies
 Attracts most research in the field
 Benefits from algorithmic and mathematical innovations

 We have expectations on the rating by user x of movie i, even without estimating x’s attitude towards movies like i:
– Rating scale of user x
– Values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts)
– (Recent) popularity of movie i
– Selection bias; related to the number of ratings the user gave on the same day (“frequency”)

 Example:
 Mean rating: μ = 3.7
 You are a critical reviewer: your ratings are 1 star lower than the mean: b_x = −1
 Star Wars gets a mean rating 0.5 higher than the average movie: b_i = +0.5
 Predicted rating for you on Star Wars: r̂ = μ + b_x + b_i = 3.7 − 1 + 0.5 = 3.2

 Solve:
min_{Q,P} Σ_{(x,i)∈R} (r_xi − (μ + b_x + b_i + q_i·p_x))² + λ₁ Σ_i ‖q_i‖² + λ₂ Σ_x ‖p_x‖² + λ₃ Σ_x b_x² + λ₄ Σ_i b_i²
 The first term measures goodness of fit; the λ terms are regularization, with the λ’s selected via grid search on a validation set
 Use stochastic gradient descent to find the parameters
 Note: both the biases b_x, b_i and the interactions q_i, p_x are treated as parameters (we estimate them)


Grand Prize:
Netflix:
Movie average:
User average:
Global average:
Basic collaborative filtering: 0.94
Latent factors: 0.90
Latent factors + biases: 0.89
Collaborative filtering++: 0.91

 Sudden rise in the average movie rating (early 2004):
 Improvements in Netflix
 GUI improvements
 Meaning of rating changed
 Movie age:
 Users prefer new movies without any particular reason
 Older movies are just inherently better than newer ones
Y. Koren, Collaborative filtering with temporal dynamics, KDD ’09

 Original model: r_xi = μ + b_x + b_i + q_i·p_x
 Add time dependence to the biases: r_xi = μ + b_x(t) + b_i(t) + q_i·p_x
 Make parameters b_x and b_i depend on time:
 (1) Parameterize the time-dependence by linear trends
 (2) Each bin corresponds to 10 consecutive weeks
 Add temporal dependence to the factors:
 p_x(t) … user preference vector on day t
Y. Koren, Collaborative filtering with temporal dynamics, KDD ’09
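A minimal sketch of a binned time-dependent item bias of the form b_i(t) = b_i + b_{i,Bin(t)}: with bins of 10 consecutive weeks (70 days), the bias on day t is the static bias plus a per-bin offset. All names and numbers below are illustrative, not the paper's parameterization:

```python
BIN_DAYS = 70  # 10 consecutive weeks per bin

def time_bin(day):
    # map a day index to its 10-week bin
    return day // BIN_DAYS

def item_bias(b_i, b_i_bin, day):
    # b_i(t) = static bias + offset for the bin containing day t
    return b_i + b_i_bin.get(time_bin(day), 0.0)

item_bias(0.3, {0: -0.1, 1: 0.2}, 75)  # day 75 falls in bin 1 -> 0.3 + 0.2
```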


Grand Prize:
Netflix:
Movie average:
User average:
Global average:
Basic collaborative filtering: 0.94
Latent factors: 0.90
Latent factors + biases: 0.89
Collaborative filtering++: 0.91
Latent factors + biases + time:
Still no prize!  Getting desperate. Try a “kitchen sink” approach!


June 26th submission triggers the 30-day “last call”.

 Ensemble team formed:
 A group of other teams on the leaderboard forms a new team
 Relies on combining their models
 Quickly also gets a qualifying score over 10%
 BellKor:
 Continues to get small improvements in their scores
 Realizes that they are in direct competition with Ensemble
 Strategy:
 Both teams carefully monitor the leaderboard
 The only sure way to check for improvement is to submit a set of predictions, but this alerts the other team to your latest score

 Submissions limited to 1 a day; only 1 final submission could be made in the last 24 h
 24 hours before the deadline, a BellKor team member in Austria notices (by chance) that Ensemble has posted a score slightly better than BellKor’s
 Frantic last 24 hours for both teams: much computer time spent on final optimization, carefully calibrated to end about an hour before the deadline
 Final submissions: BellKor submits a little early (on purpose), 40 minutes before the deadline; Ensemble submits their final entry 20 minutes later
 … and everyone waits …



 Some slides and plots borrowed from Yehuda Koren, Robert Bell and Padhraic Smyth
 Further reading:
 Y. Koren, Collaborative filtering with temporal dynamics, KDD ’09