
Recommender Systems: Latent Factor Models




1 Recommender Systems: Latent Factor Models
Note to other teachers and users of these slides: We would be delighted if you found our material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. If you make use of a significant portion of these slides in your own lecture, please include this message, or a link to our web site: Recommender Systems: Latent Factor Models

2 The Netflix Prize Training data Test data Competition
100 million ratings, 480,000 users, 17,770 movies; 6 years of data. Test data: last few ratings of each user (2.8 million). Evaluation criterion: Root Mean Square Error, $\text{RMSE} = \sqrt{\frac{1}{|R|}\sum_{(i,x)\in R}(\hat{r}_{xi} - r_{xi})^2}$. Netflix's system RMSE: Competition: 2,700+ teams; $1 million prize for 10% improvement on Netflix's system.
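For concreteness, a minimal sketch (Python/NumPy, not part of the original slides) of how RMSE is computed over a set of known ratings:

```python
import numpy as np

def rmse(predicted, actual):
    """Root Mean Square Error over the known (user, item) ratings."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.sqrt(np.mean((predicted - actual) ** 2))

# Toy example with hypothetical predictions vs. true 1-5 star ratings:
print(rmse([3.8, 2.1, 4.6], [4, 2, 5]))  # ~0.26
```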

3 The Netflix Utility Matrix R
[Figure: the utility matrix R, with 480,000 users as columns and 17,770 movies as rows; a few example ratings (1-5) are filled in and most entries are missing.]

4 Utility Matrix R: Evaluation
[Figure: the utility matrix R (480,000 users, 17,770 movies) split into a training data set of known ratings and a test data set of held-out entries, e.g. the unknown rating $r_{3,6}$ marked with '?'.] $\text{RMSE} = \sqrt{\frac{1}{|R|}\sum_{(i,x)\in R}(\hat{r}_{xi} - r_{xi})^2}$, where $r_{xi}$ is the true rating of user x on item i and $\hat{r}_{xi}$ is the predicted rating.

5 Data Details Training data Test data
100 million ratings (from 1 to 5 stars); 6 years ( ); 480,000 users; 17,770 "movies". Test data: last few ratings of each user, split as shown on the next slide. Although I use the term "movies" throughout this talk, it refers to DVDs of all types besides made-for-theater movies, including seasons of popular TV series such as Seinfeld, children's videos, concerts, etc.

6 Test Data Split into Three Pieces
Probe: ratings released, allowing participants to assess methods directly. Daily submissions allowed for the combined Quiz/Test data; identity of Quiz cases withheld; RMSE released for Quiz, Test RMSE withheld; prizes based on Test RMSE. Because Netflix needs to predict future ratings, it selected test data drawn from each user's most recent ratings. Because these later ratings might differ systematically from the training data, Netflix released actual ratings for a random subset called the Probe data. The remaining data, for which Netflix withheld ratings, was further divided at random into the Quiz and Test data. Contestants could submit predictions every 24 hours for the combined Quiz/Test data. In response, Netflix reported the Quiz RMSE for each submission. No results were reported for the Test data, which were used only for determining the winners of prizes.

7 Higher Mean Rating in Probe Data
This chart shows that the later ratings in the Probe data did differ systematically from the earlier Training data; Probe ratings were systematically higher.

8 Something Happened in Early 2004
Here is more direct evidence that ratings rose over time, specifically during the first part of 2004. We do not know the reason for this shift.

9 Data about the Movies
Most Loved Movies (count, avg rating): The Shawshank Redemption (137,812, 4.593); Lord of the Rings: The Return of the King (133,597, 4.545); The Green Mile (180,883, 4.306); Lord of the Rings: The Two Towers (150,676, 4.460); Finding Nemo (139,050, 4.415); Raiders of the Lost Ark (117,456, 4.504). Most Rated Movies: Miss Congeniality, Independence Day, The Patriot, The Day After Tomorrow, Pretty Woman, Pirates of the Caribbean. Highest Variance: The Royal Tenenbaums, Lost in Translation, Pearl Harbor, Miss Congeniality, Napoleon Dynamite, Fahrenheit 9/11.

10 Most Active Users
User ID / # ratings / mean rating: 305344 rated 17,651 movies (mean 1.90); 387418 rated 17,432 (1.81); the remaining top users rated 16,560 (1.22), 15,811 (4.26), 14,829 (4.08), 9,820 (1.37), 9,764 (1.33), and 9,739 (2.95). Like any data set, there are surprises. One user managed to rate more than 5,000 movies in a single day!

11 Major Challenges
Size of data: places a premium on efficient algorithms; stretched memory limits of standard PCs. 99% of the data are missing: eliminates many standard prediction methods; certainly not missing at random. Training and test data differ systematically: test ratings are later; test cases are spread uniformly across users.

12 Major Challenges (cont.)
Countless factors may affect ratings: genre; movie/TV series/other; style of action, dialogue, plot, music, etc.; director, actors; rater's mood. Large imbalance in training data: the number of ratings per user or movie varies by several orders of magnitude, so the information available to estimate individual parameters varies widely. The two challenges on this slide are central to the whole endeavor, as they clearly conflict with each other. Number 4 points us towards building very big models. Number 5 tells us that it will be easy to overfit, at least for some users and some movies.

13 BellKor Recommender System
The winner of the Netflix Challenge! Multi-scale modeling of the data: combine top-level, "regional" modeling of the data with a refined, local view. Global: overall deviations of users/movies. Factorization: addressing "regional" effects. Collaborative filtering: extract local patterns. (Global effects, then Factorization, then Collaborative filtering)

14 Modeling Local & Global Effects
Mean movie rating: 3.7 stars. The Sixth Sense is 0.5 stars above avg. Joe rates 0.2 stars below avg. ⇒ Baseline estimation: Joe will rate The Sixth Sense 4 stars. Local neighborhood (CF/NN): Joe didn't like the related movie Signs. ⇒ Final estimate: Joe will rate The Sixth Sense 3.8 stars.

15 Recap: Collaborative Filtering (CF)
Earliest and most popular collaborative filtering method. Derive unknown ratings from those of "similar" movies (item-item variant). Define a similarity measure $s_{ij}$ of items i and j; select the k nearest neighbors and compute the rating as $\hat{r}_{xi} = \frac{\sum_{j\in N(i;x)} s_{ij}\, r_{xj}}{\sum_{j\in N(i;x)} s_{ij}}$, where $s_{ij}$ is the similarity of items i and j, $r_{xj}$ is the rating of user x on item j, and $N(i;x)$ is the set of items similar to item i that were rated by x.
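A minimal sketch of the item-item prediction just described (Python/NumPy; not from the slides, and it assumes the item-item similarity matrix S has already been computed, e.g. with Pearson correlation):

```python
import numpy as np

def predict_item_item(x, i, R, S, k=2):
    """Predict user x's rating of item i as a similarity-weighted average of
    x's ratings on the k most similar items (the neighborhood N(i; x)).
    R: user-item matrix with np.nan for missing ratings.
    S: item-item similarity matrix."""
    rated = np.where(~np.isnan(R[x]))[0]              # items already rated by x
    rated = rated[rated != i]
    neighbors = rated[np.argsort(-S[i, rated])][:k]   # N(i; x): top-k most similar
    sims = S[i, neighbors]
    return float(np.dot(sims, R[x, neighbors]) / np.sum(sims))
```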

16 Modeling Local & Global Effects
In practice we get better estimates if we model deviations from a baseline: $\hat{r}_{xi} = b_{xi} + \frac{\sum_{j\in N(i;x)} s_{ij}\,(r_{xj} - b_{xj})}{\sum_{j\in N(i;x)} s_{ij}}$, where $b_{xi} = \mu + b_x + b_i$ is the baseline estimate for $r_{xi}$, μ is the overall mean rating, $b_x$ is the rating deviation of user x = (avg. rating of user x) − μ, and $b_i$ = (avg. rating of movie i) − μ. Problems/Issues: 1) similarity measures are "arbitrary"; 2) pairwise similarities neglect interdependencies among users; 3) taking a weighted average can be restricting. Solution: instead of $s_{ij}$ use weights $w_{ij}$ that we estimate directly from data.
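A tiny illustration of the baseline estimate $b_{xi} = \mu + b_x + b_i$, using the hypothetical numbers from the Joe / The Sixth Sense example earlier (a sketch, not the slides' code):

```python
mu = 3.7                         # overall mean rating
b_user = {"joe": -0.2}           # Joe rates 0.2 stars below average
b_movie = {"sixth_sense": 0.5}   # The Sixth Sense is 0.5 stars above average

def baseline(user, movie):
    """Baseline estimate b_xi = mu + b_x + b_i (0 for unseen users/movies)."""
    return mu + b_user.get(user, 0.0) + b_movie.get(movie, 0.0)

print(baseline("joe", "sixth_sense"))  # 4.0
```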

17 Idea: Interpolation Weights wij
Use a weighted sum rather than a weighted average: $\hat{r}_{xi} = b_{xi} + \sum_{j\in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$. A few notes: $N(i;x)$ is the set of movies rated by user x that are similar to movie i; $w_{ij}$ is the interpolation weight (some real number); we allow $\sum_{j\in N(i;x)} w_{ij} \neq 1$; $w_{ij}$ models the interaction between pairs of movies (it does not depend on user x).

18 Idea: Interpolation Weights wij
π‘Ÿ π‘₯𝑖 = 𝑏 π‘₯𝑖 + π‘—βˆˆπ‘(𝑖,π‘₯) 𝑀 𝑖𝑗 π‘Ÿ π‘₯𝑗 βˆ’ 𝑏 π‘₯𝑗 How to set wij? Remember, error metric is: 1 𝑅 (𝑖,π‘₯)βˆˆπ‘… π‘Ÿ π‘₯𝑖 βˆ’ π‘Ÿ π‘₯𝑖 or equivalently SSE: (π’Š,𝒙)βˆˆπ‘Ή 𝒓 π’™π’Š βˆ’ 𝒓 π’™π’Š 𝟐 Find wij that minimize SSE on training data! Models relationships between item i and its neighbors j wij can be learned/estimated based on x and all other users that rated i Why is this a good idea?

19 Recommendations via Optimization
Goal: make good recommendations. Quantify goodness using RMSE: lower RMSE ⇒ better recommendations. We want to make good recommendations on items that the user has not yet seen. We can't really do this directly! Let's build a system that works well on known (user, item) ratings, and hope the system will also predict the unknown ratings well.

20 Recommendations via Optimization
Idea: Let's set the values w such that they work well on known (user, item) ratings. How to find such values w? Idea: define an objective function and solve the optimization problem. Find $w_{ij}$ that minimize SSE on training data: $J(w) = \sum_{x,i} \Bigl( \bigl[ b_{xi} + \sum_{j\in N(i;x)} w_{ij}(r_{xj} - b_{xj}) \bigr] - r_{xi} \Bigr)^2$, where the bracketed term is the predicted rating and $r_{xi}$ the true rating. Think of w as a vector of numbers.

21 Detour: Minimizing a function
A simple way to minimize a function f(x): compute the derivative (gradient) ∇f; start at some point y and evaluate ∇f(y); make a step in the reverse direction of the gradient: y = y − ∇f(y); repeat until converged.
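A minimal gradient-descent sketch (not from the slides), adding a small step size so the iteration converges:

```python
# Minimize f(y) = (y - 3)^2, whose gradient is f'(y) = 2 * (y - 3).
def grad_f(y):
    return 2 * (y - 3)

y = 0.0      # start at some point
eta = 0.1    # step size (learning rate)
for _ in range(100):
    y = y - eta * grad_f(y)   # step in the reverse direction of the gradient
print(y)     # ~3.0, the minimizer of f
```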

22 Interpolation Weights
We have the optimization problem, now what? Gradient descent: iterate until convergence: $w \leftarrow w - \eta\, \nabla_w J$, where η is the learning rate and $\nabla_w J$ is the gradient (derivative evaluated on the data): $\frac{\partial J(w)}{\partial w_{ij}} = 2\sum_x \Bigl( \bigl[ b_{xi} + \sum_{k\in N(i;x)} w_{ik}(r_{xk} - b_{xk}) \bigr] - r_{xi} \Bigr)(r_{xj} - b_{xj})$ for $j \in N(i;x)$, and $\frac{\partial J(w)}{\partial w_{ij}} = 0$ otherwise, with $J(w) = \sum_{x,i} \bigl( b_{xi} + \sum_{j\in N(i;x)} w_{ij}(r_{xj} - b_{xj}) - r_{xi} \bigr)^2$. Note: we fix movie i, go over all $r_{xi}$, and for every movie $j \in N(i;x)$ compute $\frac{\partial J(w)}{\partial w_{ij}}$. In pseudocode: while $|w_{new} - w_{old}| > \varepsilon$: $w_{old} = w_{new}$; $w_{new} = w_{old} - \eta\,\nabla_w J(w_{old})$.

23 Interpolation Weights
So far: $\hat{r}_{xi} = b_{xi} + \sum_{j\in N(i;x)} w_{ij}\,(r_{xj} - b_{xj})$. Weights $w_{ij}$ are derived based on their role; no use of an arbitrary similarity measure ($w_{ij} \neq s_{ij}$). They explicitly account for interrelationships among the neighboring movies. Next: latent factor models, to extract "regional" correlations. (Global effects, Factorization, CF/NN)

24 Performance of Various Methods
Global average: User average: Movie average: Netflix: Basic Collaborative filtering: 0.94 CF+Biases+learned weights: 0.91 Grand Prize:

25 Latent Factor Models (e.g., SVD)
[Figure: movies (Braveheart, The Color Purple, Amadeus, Lethal Weapon, Sense and Sensibility, Ocean's 11, The Lion King, The Princess Diaries, Independence Day, Dumb and Dumber) placed in a two-dimensional latent space; one axis runs from "geared towards females" to "geared towards males", the other from serious to funny.]

26 Latent Factor Models: "SVD" on Netflix data: R ≈ Q · PT
SVD: A = U Σ VT. "SVD" on Netflix data: R ≈ Q · PT. For now let's assume we can approximate the rating matrix R as a product of "thin" matrices Q · PT. R has missing entries, but let's ignore that for now! Basically, we want the reconstruction error to be small on the known ratings, and we don't care about the values of the missing ones. [Figure: R (items by users) ≈ Q (items by factors) · PT (factors by users), illustrated with small example matrices.] Difference from SVD: we can multiply Σ into V and get the UV decomposition; SVD will count errors also on the missing entries (zeros); SVD insists U, V are orthonormal, which we do not need here; we just want something that works.

27 Ratings as Products of Factors
How to estimate the missing rating of user x for item i? $\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if}\, p_{xf}$, where $q_i$ is row i of Q and $p_x$ is column x of PT. [Figure: R ≈ Q · PT with a missing entry '?' to be estimated.]

28 Ratings as Products of Factors
How to estimate the missing rating of user x for item i? $\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if}\, p_{xf}$, where $q_i$ is row i of Q and $p_x$ is column x of PT. [Figure: R ≈ Q · PT with a missing entry '?' to be estimated.]

29 Ratings as Products of Factors
How to estimate the missing rating of user x for item i? $\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if}\, p_{xf}$, where $q_i$ is row i of Q (f factors) and $p_x$ is column x of PT (f factors). [Figure: in the example, the dot product gives an estimate of 2.4 for the '?' entry.]
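As a concrete instance of $\hat{r}_{xi} = q_i \cdot p_x$ with f = 3 factors (the numbers below are made up, not the ones in the slide's figure):

```python
import numpy as np

q_i = np.array([1.1, -0.2, 0.3])   # row i of Q: factors of item i
p_x = np.array([2.0,  0.5, -1.0])  # column x of PT: factors of user x

r_hat = np.dot(q_i, p_x)           # sum_f q_if * p_xf
print(r_hat)                       # 1.1*2.0 + (-0.2)*0.5 + 0.3*(-1.0) = 1.8
```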

30 Latent Factor Models
[Figure: the two-dimensional movie layout; Factor 1 runs from "geared towards females" to "geared towards males", Factor 2 from serious to funny.] This graph shows a hypothetical layout of movies in two dimensions. In the example, the horizontal dimension contrasts "chick flicks" with "macho movies", while the vertical dimension measures the seriousness of the movie. In a real application of SVD, an algorithm would determine the layout, so it might not be easy to label the axes. Feel free to disagree with my placement of the various movies.

31 Latent Factor Models
[Figure: users placed in the same two-factor space as the movies.] Users fall into the same space as movies, where a user's position in a dimension reflects the user's preference for (or against) movies that score high on that dimension. For example, BLUE tends to like male-oriented movies, but dislikes serious movies. Therefore, we would expect him to love "Dumb and Dumber" and hate "The Color Purple". Note that these two dimensions do not characterize Dave's (DOCTOR) interests very well; additional dimensions would be needed.

32 Recap: SVD
Remember SVD: A = U Σ VT, where A is the input data matrix (m x n), U the left singular vectors, V the right singular vectors, and Σ the singular values. So in our case ("SVD" on Netflix data, R ≈ Q · PT): A = R, Q = U, PT = Σ VT, and $\hat{r}_{xi} = q_i \cdot p_x$. Difference from SVD: we can multiply Σ into V and get the UV decomposition; SVD will count errors also on the missing entries (zeros); SVD insists U, V are orthonormal, which we do not need here; we just want something that works.

33 SVD - Definition: A[m x n] = U[m x r] Σ[r x r] (V[n x r])T
A: input data matrix, m x n (e.g., m documents, n terms). U: left singular vectors, m x r matrix (m documents, r concepts). Σ: singular values, r x r diagonal matrix (strength of each 'concept'; r is the rank of the matrix A). V: right singular vectors, n x r matrix (n terms, r concepts).
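A small runnable illustration of this definition (NumPy; the matrix below is made up):

```python
import numpy as np

A = np.array([[1., 1., 0., 0.],
              [3., 3., 0., 0.],
              [0., 0., 4., 4.],
              [0., 0., 5., 5.]])

# full_matrices=False gives the "thin" shapes used on this slide.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)

print(np.allclose(A, U @ Sigma @ Vt))    # True: A = U Sigma V^T
print(np.round(s, 3))                    # singular values, sorted decreasing
print(np.allclose(U.T @ U, np.eye(4)))   # True: columns of U are orthonormal
```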

34 SVD: [Figure: A (m x n) ≈ U (m x r) · Σ (r x r) · VT (r x n).]

35 SVD
$A \approx \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \dots$, where $\sigma_i$ is a scalar (singular value), $u_i$ a left singular vector, and $v_i$ a right singular vector.

36 SVD - Properties It is always possible to decompose a real matrix A into A = U Σ VT, where U, Σ, V are unique; U, V are column orthonormal: UTU = I, VTV = I (I: identity matrix), i.e. their columns are orthogonal unit vectors; Σ is diagonal, and its entries (the singular values) are positive and sorted in decreasing order ($\sigma_1 \geq \sigma_2 \geq \dots \geq 0$). Nice proof of uniqueness:

37 SVD – Example: Users-to-Movies
A = U Σ VT - example: Users to Movies. [Figure: a rating matrix A with movies (Matrix, Alien, Serenity, Casablanca, Amelie) as columns and users as rows, one group of SciFi fans and one of Romance fans, decomposed as U Σ VT with "concepts" in the middle.]

38 SVD – Example: Users-to-Movies
A = U Σ VT - example: Users to Movies. [Figure: the decomposition written out as a product U x Σ x VT.]

39 SVD – Example: Users-to-Movies
A = U Σ VT - example: Users to Movies. [Figure: the decomposition written out as U x Σ x VT, with a SciFi-concept and a Romance-concept labeled.]

40 SVD – Example: Users-to-Movies
A = U Σ VT - example: U is the "user-to-concept" similarity matrix. [Figure: the decomposition with the SciFi-concept and Romance-concept columns of U highlighted.]

41 SVD – Example: Users-to-Movies
A = U Σ VT - example: [Figure: the diagonal entries of Σ give the "strength" of each concept, e.g. the strength of the SciFi-concept.]

42 SVD – Example: Users-to-Movies
A = U Σ VT - example: V is the "movie-to-concept" similarity matrix. [Figure: the decomposition with the SciFi-concept column of V highlighted.]

43 SVD - Interpretation #2 More details
Q: How exactly is dimensionality reduction done? [Figure: A = U x Σ x VT.]

44 SVD - Interpretation #2 More details
Q: How exactly is dimensionality reduction done? A: Set the smallest singular values to zero. [Figure: A = U x Σ x VT.]

45 SVD - Interpretation #2 More details
Q: How exactly is dimensionality reduction done? A: Set the smallest singular values to zero. [Figure: A ≈ U x Σ x VT with the smallest singular values zeroed out.]

46 SVD - Interpretation #2 More details
Q: How exactly is dimensionality reduction done? A: Set the smallest singular values to zero. [Figure: A ≈ U x Σ x VT after zeroing the smallest singular values.]

47 SVD - Interpretation #2 More details
Q: How exactly is dimensionality reduction done? A: Set the smallest singular values to zero. [Figure: the reduced-rank approximation A ≈ U x Σ x VT.]

48 SVD - Interpretation #2 More details
Q: How exactly is dimensionality reduction done? A: Set the smallest singular values to zero. Frobenius norm: $\|M\|_F = \sqrt{\sum_{ij} M_{ij}^2}$, and the approximation error $\|A - B\|_F = \sqrt{\sum_{ij}(A_{ij} - B_{ij})^2}$ is "small".
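A minimal sketch (NumPy, not from the slides) of exactly this operation: keep the k largest singular values, zero out the rest, and measure the Frobenius error:

```python
import numpy as np

def rank_k_approx(A, k):
    """Best rank-k approximation of A in Frobenius norm: zero the smallest
    singular values and multiply the factors back together."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[k:] = 0.0
    return U @ np.diag(s) @ Vt

A = np.random.rand(6, 4)
B = rank_k_approx(A, 2)
print(np.linalg.matrix_rank(B))        # 2
print(np.linalg.norm(A - B, 'fro'))    # ||A - B||_F, hopefully "small"
```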

49 SVD – Best Low Rank Approx.
[Figure: A = U Σ VT; zeroing the smallest singular values gives B = U S VT, and B is the best (low-rank) approximation of A.]

50 SVD – Best Low Rank Approx.
Theorem: Let A = U Σ VT and B = U S VT, where S is a diagonal r x r matrix with $s_i = \sigma_i$ for $i = 1,\dots,k$ and $s_i = 0$ otherwise. Then B is a best rank(B) = k approximation to A. What we mean by "best": B is a solution to $\min_B \|A - B\|_F$ where rank(B) = k, with $\|A - B\|_F = \sqrt{\sum_{ij}(A_{ij} - B_{ij})^2}$.

51 SVD – Best Low Rank Approx.
Details! Theorem: Let A = U Σ VT ($\sigma_1 \geq \sigma_2 \geq \dots$, rank(A) = r) and B = U S VT, where S is a diagonal r x r matrix with $s_i = \sigma_i$ for $i = 1,\dots,k$ and $s_i = 0$ otherwise. Then B is a best rank-k approximation to A: B is a solution to $\min_B \|A - B\|_F$ where rank(B) = k.

52 SVD - Interpretation #2
Equivalent: 'spectral decomposition' of the matrix: $A = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \dots$

53 SVD: More good stuff We already know that SVD gives the minimum reconstruction error (sum of squared errors): $\min_{U,\Sigma,V} \sum_{ij\in A} \bigl(A_{ij} - [U \Sigma V^T]_{ij}\bigr)^2$. Note two things: SSE and RMSE are monotonically related, $\text{RMSE} = \sqrt{\frac{1}{c}\,\text{SSE}}$ (c = number of entries). Great news: SVD is minimizing RMSE. Complication: the sum in the SVD error term is over all entries (no rating is interpreted as zero rating), but our R has missing entries! Difference from SVD: we can multiply Σ into V and get the UV decomposition; SVD will count errors also on the missing entries (zeros); SVD insists U, V are orthonormal, which we do not need here; we just want something that works.

54 Latent Factor Models
SVD isn't defined when entries are missing! Use specialized methods to find P, Q: $\min_{P,Q} \sum_{(i,x)\in R} (r_{xi} - q_i \cdot p_x)^2$, with $\hat{r}_{xi} = q_i \cdot p_x$. Note: we don't require the columns of P, Q to be orthogonal or unit length. P, Q map users/movies to a latent factor space. This was the most popular model among Netflix contestants. [Figure: R ≈ Q · PT with example factor matrices.]

55 Finding the Latent Factors

56 Latent Factor Models Our goal is to find P and Q such that:
$\min_{P,Q} \sum_{(i,x)\in R} (r_{xi} - q_i \cdot p_x)^2$ [Figure: R ≈ Q · PT with example factor matrices.]

57 Back to Our Problem Want to minimize SSE for unseen test data
Idea: Minimize SSE on training data. We want a large k (# of factors) to capture all the signals. But SSE on test data begins to rise for k > 2. This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise. That is, it fits the training data too well and thus does not generalize well to unseen test data.

58 Dealing with Missing Entries
To solve overfitting we introduce regularization: allow a rich model where there are sufficient data; shrink aggressively where data are scarce. The objective becomes an "error" term plus λ times a "length" (magnitude) term, where λ1, λ2 are user-set regularization parameters. Note: we do not care about the "raw" value of the objective function; we care about the P, Q that achieve the minimum of the objective.
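The objective on this slide appears only as a figure; written out in the notation used above (an assumed reconstruction consistent with the "error" + λ·"length" description, with λ1, λ2 the user-set parameters):

```latex
\min_{P,Q}\;
\underbrace{\sum_{(i,x)\in R}\bigl(r_{xi} - q_i\cdot p_x\bigr)^2}_{\text{error}}
\;+\;
\underbrace{\lambda_1\sum_x \lVert p_x\rVert^2 \;+\; \lambda_2\sum_i \lVert q_i\rVert^2}_{\text{length}}
```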

59 The Effect of Regularization
[Figure: the two-factor movie space, with the objective min over factors of "error" + λ·"length".] Consider Gus. This slide shows the position for Gus that best explains his ratings for the training data, i.e., that minimizes his sum of squared errors. If Gus has rated hundreds of movies, we could probably be confident that we have estimated his true preferences accurately. But what if Gus has only rated a few movies, say the ten on this slide? We should not be so confident.

60 The Effect of Regularization
[Figure: the same space, with Gus tethered to the origin; min over factors of "error" + λ·"length".] We hedge our bet by tethering Gus to the origin with an elastic cord that tries to pull Gus back towards the origin. If Gus has rated hundreds of movies, he stays about where the data places him.

61 The Effect of Regularization
[Figure: the same space; min over factors of "error" + λ·"length".] But if he has rated only a few dozen, he is pulled back towards the origin.

62 The Effect of Regularization
[Figure: the same space; min over factors of "error" + λ·"length".] And if he has rated only a handful, he is pulled even further.

63 Stochastic Gradient Descent
Want to find matrices P and Q. Gradient descent: initialize P and Q (e.g. using SVD, pretending the missing ratings are 0); then do gradient descent: P ← P − η·∇P, Q ← Q − η·∇Q, where ∇Q is the gradient/derivative of matrix Q: $\nabla Q = [\nabla q_{if}]$ with $\nabla q_{if} = \sum_{x} -2\,(r_{xi} - q_i \cdot p_x)\,p_{xf} + 2\lambda_2\, q_{if}$. Here $q_{if}$ is entry f of row $q_i$ of matrix Q. How to compute the gradient of a matrix? Compute the gradient of every element independently! Observation: computing gradients is slow!

64 Stochastic Gradient Descent
Gradient Descent (GD) vs. Stochastic GD. Observation: $\nabla Q = [\nabla q_{if}]$ where $\nabla q_{if} = \sum_{x,i} -2\,(r_{xi} - q_i \cdot p_x)\,p_{xf} + 2\lambda\, q_{if} = \sum_{x,i} \nabla Q(r_{xi})$; here $q_{if}$ is entry f of row $q_i$ of matrix Q. GD: $Q \leftarrow Q - \eta\,\nabla Q = Q - \eta \sum_{x,i} \nabla Q(r_{xi})$. Idea: instead of evaluating the gradient over all ratings, evaluate it for each individual rating and make a step. SGD: $Q \leftarrow Q - \mu\,\nabla Q(r_{xi})$. Faster convergence! We need more steps, but each step is computed much faster.

65 SGD vs. GD: Convergence of GD vs. SGD
[Figure: value of the objective function vs. iteration/step.] GD improves the value of the objective function at every step. SGD improves the value, but in a "noisy" way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster!

66 Stochastic Gradient Descent
Stochastic gradient descent: initialize P and Q (e.g. using SVD, pretending the missing ratings are 0). Then iterate over the ratings (multiple times if necessary) and update the factors. For each $r_{xi}$: $\varepsilon_{xi} = 2\,(r_{xi} - q_i \cdot p_x)$ (derivative of the "error"); $q_i \leftarrow q_i + \mu_1 (\varepsilon_{xi}\, p_x - \lambda_2\, q_i)$ (update equation); $p_x \leftarrow p_x + \mu_2 (\varepsilon_{xi}\, q_i - \lambda_1\, p_x)$ (update equation). Two for loops: for each pass until convergence, for each $r_{xi}$, compute the gradient and do a "step". μ is the learning rate.
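A self-contained sketch of these update equations (Python/NumPy; random initialization instead of the SVD-based start, toy hyperparameters, and not the BellKor code):

```python
import numpy as np

def train_latent_factors(ratings, n_users, n_items, k=2, mu1=0.02, mu2=0.02,
                         lam1=0.1, lam2=0.1, n_epochs=500, seed=0):
    """SGD for r_xi ~ q_i . p_x using only the known ratings.
    ratings: list of (user x, item i, rating r_xi) triples."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))   # rows are p_x
    Q = rng.normal(scale=0.1, size=(n_items, k))   # rows are q_i
    for _ in range(n_epochs):
        for j in rng.permutation(len(ratings)):    # visit ratings in random order
            x, i, r = ratings[j]
            q_old = Q[i].copy()
            eps = 2 * (r - q_old @ P[x])                  # eps_xi
            Q[i] += mu1 * (eps * P[x] - lam2 * q_old)     # q_i update
            P[x] += mu2 * (eps * q_old - lam1 * P[x])     # p_x update
    return P, Q

# Toy usage with a handful of hypothetical (user, item, stars) ratings:
ratings = [(0, 0, 4), (0, 1, 5), (1, 0, 1), (1, 2, 2), (2, 1, 3)]
P, Q = train_latent_factors(ratings, n_users=3, n_items=3)
print(round(float(Q[0] @ P[0]), 2))   # estimate of r_00, close to the observed 4
```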

67 Koren, Bell, Volinsky, IEEE Computer, 2009

68 Extending Latent Factor Model to Include Biases

69 Modeling Biases and Interactions
The model combines a user bias, a movie bias, and a user-movie interaction term. Baseline predictor: separates users and movies; benefits from insights into users' behavior; among the main practical contributions of the competition. User-movie interaction: characterizes the matching between users and movies; attracts most research in the field; benefits from algorithmic and mathematical innovations. μ = overall mean rating; bx = bias of user x; bi = bias of movie i.

70 Baseline Predictor
We have expectations on the rating by user x of movie i, even without estimating x's attitude towards movies like i: the rating scale of user x; the values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts); the (recent) popularity of movie i; selection bias, related to the number of ratings the user gave on the same day ("frequency").

71 Putting It All Together
π‘Ÿ π‘₯𝑖 =πœ‡ + 𝑏 π‘₯ + 𝑏 𝑖 + π‘ž 𝑖 β‹… 𝑝 π‘₯ Overall mean rating Bias for user x Bias for movie i User-Movie interaction Example: Mean rating:  = 3.7 You are a critical reviewer: your ratings are 1 star lower than the mean: bx = -1 Star Wars gets a mean rating of 0.5 higher than average movie: bi = + 0.5 Predicted rating for you on Star Wars: = = 3.2

72 Fitting the New Model
Solve: minimize the regularized goodness-of-fit objective using stochastic gradient descent to find the parameters. Note: both the biases $b_x$, $b_i$ and the interactions $q_i$, $p_x$ are treated as parameters (we estimate them). λ is selected via grid search on a validation set.
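The "Solve:" objective is shown only as an image on the slide; in the notation above it has the form below (a reconstruction under the assumption of one shared λ for readability; separate regularization parameters per parameter group are also common, each chosen by the grid search mentioned here):

```latex
\min_{Q,P,b}\;\sum_{(x,i)\in R}\Bigl(r_{xi} - (\mu + b_x + b_i + q_i\cdot p_x)\Bigr)^2
\;+\;\lambda\Bigl(\sum_i \lVert q_i\rVert^2 + \sum_x \lVert p_x\rVert^2 + \sum_x b_x^2 + \sum_i b_i^2\Bigr)
```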

73 Performance of Various Methods

74 Performance of Various Methods
Global average: User average: Movie average: Netflix: Basic Collaborative filtering: 0.94 Collaborative filtering++: 0.91 Latent factors: 0.90 Latent factors+Biases: 0.89 Grand Prize:

75 The Netflix Challenge: 2006-09

76 Temporal Biases Of Users
Sudden rise in the average movie rating (early 2004): improvements in Netflix (GUI improvements); the meaning of a rating changed. Movie age: users prefer new movies without any particular reason, or older movies are just inherently better than newer ones. Y. Koren, Collaborative filtering with temporal dynamics, KDD '09

77 Temporal Biases & Factors
Original model: $r_{xi} = \mu + b_x + b_i + q_i \cdot p_x$. Add time dependence to the biases: $r_{xi} = \mu + b_x(t) + b_i(t) + q_i \cdot p_x$. Make the parameters $b_x$ and $b_i$ depend on time: (1) parameterize the time dependence by linear trends; (2) bin the dates, with each bin corresponding to 10 consecutive weeks. Add temporal dependence to the factors: $p_x(t)$ is the user preference vector on day t. Y. Koren, Collaborative filtering with temporal dynamics, KDD '09
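A minimal sketch of one way to realize the binned, time-dependent bias $b_i(t)$ described in (2) above (hypothetical parameter names and values; the full model in the paper also includes linear-trend and day-specific terms):

```python
BIN_DAYS = 70  # 10 consecutive weeks per bin

def movie_bias(i, t_days, b_static, b_bin):
    """b_i(t): static movie bias plus the learned offset of the time bin
    containing day t (0 if that bin was never seen in training)."""
    return b_static[i] + b_bin[i].get(t_days // BIN_DAYS, 0.0)

# Hypothetical learned parameters for one movie:
b_static = {0: 0.3}
b_bin = {0: {5: 0.1, 6: -0.05}}
print(movie_bias(0, t_days=400, b_static=b_static, b_bin=b_bin))  # 0.3 + 0.1 = 0.4
```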

78 Adding Temporal Effects

79 Performance of Various Methods
Global average: User average: Movie average: Netflix: Basic Collaborative filtering: 0.94 Collaborative filtering++: 0.91 Latent factors: 0.90 Latent factors+Biases: 0.89 Latent factors+Biases+Time: 0.876 Grand Prize: Still no prize! Getting desperate. Try a "kitchen sink" approach!

80 Netflix Prize

81 Netflix Prize

82 Netflix Prize

83 Many options for modeling
Variants of the ideas we have seen so far: different numbers of factors; different ways to model time; different ways to handle implicit information. Other models (not described here): nearest-neighbor models; Restricted Boltzmann Machines. Model averaging is useful: linear model combining.
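A minimal sketch of linear model combining (least squares on a validation set; made-up numbers, not the teams' actual blending code):

```python
import numpy as np

# Each column holds one model's predicted ratings on the validation set.
preds_val = np.array([[3.8, 3.6, 4.1],
                      [2.1, 2.4, 1.9],
                      [4.6, 4.4, 4.8],
                      [3.0, 3.3, 2.9]])
y_val = np.array([4.0, 2.0, 5.0, 3.0])   # true validation ratings

w, *_ = np.linalg.lstsq(preds_val, y_val, rcond=None)  # blend weights
blended = preds_val @ w                                # combined predictions
print(np.round(w, 3))
print(np.round(blended, 2))
```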

84 Standing on June 26th 2009
The June 26th submission triggers the 30-day "last call".

85 The Last 30 Days
Ensemble team formed: a group of other teams on the leaderboard forms a new team; it relies on combining their models, and quickly also gets a qualifying score over 10%. BellKor: continues to get small improvements in their scores; realizes that they are in direct competition with Ensemble. Strategy: both teams carefully monitor the leaderboard. The only sure way to check for improvement is to submit a set of predictions, but this alerts the other team of your latest score.

86 24 Hours from the Deadline
Submissions were limited to 1 a day, so only 1 final submission could be made in the last 24 hours. 24 hours before the deadline, a BellKor team member in Austria notices (by chance) that Ensemble has posted a score that is slightly better than BellKor's. Frantic last 24 hours for both teams: much computer time on final optimization, carefully calibrated to end about an hour before the deadline. Final submissions: BellKor submits a little early (on purpose), 40 minutes before the deadline; Ensemble submits their final entry 20 minutes later. …and everyone waits…

87 Test Set Results The Ensemble:

88 Test Set Results The Ensemble: 0.856714
BellKor's Pragmatic Theory:

89 Test Set Results The Ensemble: 0.856714
BellKor's Pragmatic Theory: Both scores round to the same value.

90 Test Set Results The Ensemble: 0.856714
BellKor's Pragmatic Theory: Both scores round to the same value; the tie breaker is submission date/time.

91 Final Test Set Leaderboard

92 Who Got the Money? AT&T donated its full share to organizations supporting science education: the Young Science Achievers Program; New Jersey Institute of Technology pre-college and educational opportunity programs; the North Jersey Regional Science Fair; Neighborhoods Focused on African American Youth.

93 Lesson #1: Data >> Models
Very limited feature set: user, movie, date. Places focus on models/algorithms. Major steps forward were associated with incorporating new data features: what movies a user rated; temporal effects.

94 #2: The Power of Regularized SVD Fit by Gradient Descent
Allowed anyone to approach the early leaders. A powerful predictor: efficient; easy to program; flexible enough to incorporate additional features (implicit feedback, temporal effects, neighborhood effects). Accurate regularization is essential.

95

96 #3: The Wisdom of Crowds (of Models)
"All models are wrong; some are useful." (G. Box) Used linear blends of many prediction sets: 107 in Year 1, over 800 at the end. Difficult, or impossible, to build the grand unified model. Mega blends are not needed in practice: a handful of simple models achieves 80 percent of the improvement of the full blend.

97 #4: Find Good Teammates
Yehuda Koren: the engine of progress for the Netflix Prize; implicit feedback; temporal effects; nearest-neighbor modeling. Big Chaos (Michael Jahrer, Andreas Toscher, Year 2): optimization of tuning parameters; blending methods. Pragmatic Theory (Martin Chabbert, Martin Piotte, Year 3): some movies age better than others; link functions.

98 #5: Is This the Way to Do Science?
Big success for Netflix: lots of cheap labor, good publicity; already incorporated a 6 percent improvement; potential for much more using other data they have. Big advances to the science of recommender systems: regularized SVD; identification of new features; understanding nearest neighbors; contributions to the literature.

99 Why Did this Work so Well?
Industrial strength data Very good design Accessibility to anyone with a PC Free flow of ideas Leaderboard Forum Workshop and papers Money?

100 But There are Limitations
Need a conceptually simple task Winner-take-all has drawbacks Intellectual property and liability issues How many prizes can overlap?

101 Homework 9.4.3, 9.4.4, 9.4.5




