1
Recommender Systems: Latent Factor Models
Note to other teachers and users of these slides: We would be delighted if you found our material useful for your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. If you make use of a significant portion of these slides in your own lecture, please include this message or a link to our web site: Recommender Systems: Latent Factor Models
2
The Netflix Prize
Training data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data
Test data: last few ratings of each user (2.8 million)
Evaluation criterion: Root Mean Square Error, RMSE = sqrt( (1/|R|) Σ_{(i,x)∈R} (r̂xi - rxi)² )
Netflix's system RMSE:
Competition: 2,700+ teams; $1 million prize for a 10% improvement on Netflix's system
3
The Netflix Utility Matrix R
[Figure: the utility matrix R, 480,000 users × 17,770 movies, sparsely filled with ratings 1-5]
4
Utility Matrix R: Evaluation
[Figure: matrix R, 480,000 users × 17,770 movies; known entries form the training data set, held-out entries "?" form the test data set]
rxi = true rating of user x on item i; r̂xi = predicted rating
RMSE = sqrt( (1/|R|) Σ_{(i,x)∈R} (r̂xi - rxi)² )
5
Data Details
Training data: 100 million ratings (from 1 to 5 stars), 6 years, 480,000 users, 17,770 "movies"
Test data: last few ratings of each user, split as shown on the next slide
Although I use the term "movies" throughout this talk, it refers to DVDs of all types, not just made-for-theater movies: seasons of popular TV series such as Seinfeld, children's videos, concerts, etc.
6
Test Data Split into Three Pieces
Probe: ratings released; allows participants to assess methods directly
Quiz/Test: daily submissions allowed for the combined Quiz/Test data; identity of Quiz cases withheld; RMSE released for Quiz, Test RMSE withheld; prizes based on Test RMSE
Because Netflix needs to predict future ratings, it selected test data drawn from each user's most recent ratings. Because these later ratings might differ systematically from the training data, Netflix released actual ratings for a random subset called the Probe data. The remaining data, for which Netflix withheld ratings, was further divided at random into the Quiz and Test data. Contestants could submit predictions every 24 hours for the combined Quiz/Test data. In response, Netflix reported the Quiz RMSE for each submission. No results were reported for the Test data, which was used only for determining the winners of prizes.
7
Higher Mean Rating in Probe Data
This chart shows that the later ratings in the Probe data did differ from the earlier training data: Probe ratings were systematically higher.
8
Something Happened in Early 2004
Here is more direct evidence that ratings rose over time, specifically during the first part of 2004. We do not know the reason for this shift.
9
Data About the Movies

Most Loved Movies (count, avg rating):
137,812  4.593  The Shawshank Redemption
133,597  4.545  Lord of the Rings: The Return of the King
180,883  4.306  The Green Mile
150,676  4.460  Lord of the Rings: The Two Towers
139,050  4.415  Finding Nemo
117,456  4.504  Raiders of the Lost Ark

Most Rated Movies: Miss Congeniality, Independence Day, The Patriot, The Day After Tomorrow, Pretty Woman, Pirates of the Caribbean

Highest Variance: The Royal Tenenbaums, Lost in Translation, Pearl Harbor, Miss Congeniality, Napoleon Dynamite, Fahrenheit 9/11
10
Most Active Users

User ID   # Ratings   Mean Rating
305344    17,651      1.90
387418    17,432      1.81
          16,560      1.22
          15,811      4.26
          14,829      4.08
           9,820      1.37
           9,764      1.33
           9,739      2.95

Like any data set, there are surprises: one user managed to rate more than 5,000 movies in a single day!
11
Major Challenges
Size of data: places a premium on efficient algorithms; stretched the memory limits of standard PCs
99% of the data are missing: eliminates many standard prediction methods; certainly not missing at random
Training and test data differ systematically: test ratings are later; test cases are spread uniformly across users
12
Major Challenges (cont.)
Countless factors may affect ratings: genre; movie vs. TV series vs. other; style of action, dialogue, plot, music, and so on; director; actors; rater's mood
Large imbalance in training data: the number of ratings per user or movie varies by several orders of magnitude, so the information available to estimate individual parameters varies widely
The two challenges on this slide are central to the whole endeavor, as they clearly conflict with each other. Challenge 4 (countless factors) points us towards building very big models. Challenge 5 (imbalance) tells us that it will be easy to overfit, at least for some users and some movies.
13
BellKor Recommender System
The winner of the Netflix Challenge! Multi-scale modeling of the data: combine top-level, "regional" modeling of the data with a refined, local view:
Global: overall deviations of users/movies
Factorization: addressing "regional" effects
Collaborative filtering: extracting local patterns
14
Modeling Local & Global Effects
Global: mean movie rating is 3.7 stars; The Sixth Sense is 0.5 stars above avg.; Joe rates 0.2 stars below avg. ⇒ baseline estimate: Joe will rate The Sixth Sense 4 stars
Local neighborhood (CF/NN): Joe didn't like the related movie Signs ⇒ final estimate: Joe will rate The Sixth Sense 3.8 stars
15
Recap: Collaborative Filtering (CF)
Earliest and most popular collaborative filtering method. Derive unknown ratings from those of "similar" movies (item-item variant):
Define a similarity measure sij of items i and j
Select the k nearest neighbors and compute the rating:
r̂xi = Σ_{j∈N(i;x)} sij · rxj / Σ_{j∈N(i;x)} sij
sij … similarity of items i and j
rxj … rating of user x on item j
N(i;x) … set of items similar to item i that were rated by x
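The nearest-neighbor prediction rule above can be sketched in a few lines. This is a minimal illustration, not contest code: the toy rating matrix and the precomputed similarity values below are made up for the example.

```python
import numpy as np

def predict_rating(R, sim, x, i, k=2):
    """Item-item CF: predict user x's rating of item i from the k items
    most similar to i among those x has rated.
    R: (items x users) array with np.nan for missing ratings.
    sim: precomputed item-item similarity matrix (s_ij)."""
    rated = [j for j in range(R.shape[0]) if j != i and not np.isnan(R[j, x])]
    # N(i; x): the k items most similar to i that x has rated
    neighbors = sorted(rated, key=lambda j: sim[i, j], reverse=True)[:k]
    num = sum(sim[i, j] * R[j, x] for j in neighbors)
    den = sum(sim[i, j] for j in neighbors)
    return num / den

# Toy example (hypothetical data): 3 items, 2 users
R = np.array([[4.0, np.nan],
              [5.0, 3.0],
              [np.nan, 4.0]])
sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
# Predict item 0 for user 1: (0.9*3.0 + 0.1*4.0) / (0.9 + 0.1)
print(predict_rating(R, sim, x=1, i=0, k=2))  # → 3.1
```

Note that the weighted average is dominated by the most similar neighbor, which is the intended behavior of the rule.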
16
Modeling Local & Global Effects
In practice we get better estimates if we model deviations:
r̂xi = bxi + Σ_{j∈N(i;x)} sij (rxj - bxj) / Σ_{j∈N(i;x)} sij
where bxi = μ + bx + bi is the baseline estimate for rxi
μ = overall mean rating
bx = rating deviation of user x = (avg. rating of user x) - μ
bi = rating deviation of movie i = (avg. rating of movie i) - μ
Problems/Issues:
1) Similarity measures are "arbitrary"
2) Pairwise similarities neglect interdependencies among users
3) Taking a weighted average can be restricting
Solution: instead of sij use weights wij that we estimate directly from the data
17
Idea: Interpolation Weights wij
Use a weighted sum rather than a weighted avg.:
r̂xi = bxi + Σ_{j∈N(i;x)} wij (rxj - bxj)
A few notes:
N(i;x) … set of movies rated by user x that are similar to movie i
wij is the interpolation weight (some real number)
We allow Σ_{j∈N(i;x)} wij ≠ 1
wij models the interaction between pairs of movies (it does not depend on user x)
18
Idea: Interpolation Weights wij
r̂xi = bxi + Σ_{j∈N(i;x)} wij (rxj - bxj)
How to set wij? Remember, the error metric is
RMSE = sqrt( (1/|R|) Σ_{(i,x)∈R} (r̂xi - rxi)² ), or equivalently SSE = Σ_{(i,x)∈R} (r̂xi - rxi)²
Find the wij that minimize SSE on the training data!
wij models the relationship between item i and its neighbors j, and can be learned/estimated based on x and all other users that rated i
Why is this a good idea?
19
Recommendations via Optimization
Goal: make good recommendations. Quantify goodness using RMSE: lower RMSE ⇒ better recommendations. We want to make good recommendations on items that the user has not yet seen, but we can't measure that directly! So let's build a system that works well on known (user, item) ratings, and hope it will also predict the unknown ratings well.
20
Recommendations via Optimization
Idea: set the values w such that they work well on known (user, item) ratings. How to find such values w? Define an objective function and solve the optimization problem: find the wij that minimize SSE on the training data!
J(w) = Σ_{x,i} ( [bxi + Σ_{j∈N(i;x)} wij (rxj - bxj)] - rxi )²
The bracketed term is the predicted rating; rxi is the true rating. Think of w as a vector of numbers.
21
Detour: Minimizing a function
A simple way to minimize a function f(y):
Compute the derivative ∇f
Start at some point y and evaluate ∇f(y)
Make a step in the reverse direction of the gradient: y = y - ∇f(y)
Repeat until converged
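The detour above can be sketched for a one-dimensional function. A minimal illustration: the function f(y) = (y - 3)² and the learning rate eta are arbitrary choices for the example (the slide steps by the raw gradient; a learning rate is added here to keep the iteration stable).

```python
def gradient_descent(df, y0, eta=0.1, tol=1e-8, max_iter=10_000):
    """Minimize f by repeatedly stepping against its derivative df."""
    y = y0
    for _ in range(max_iter):
        step = eta * df(y)
        y -= step
        if abs(step) < tol:   # converged: the steps have become tiny
            break
    return y

# Minimize f(y) = (y - 3)^2, whose derivative is 2(y - 3); minimum at y = 3
y_min = gradient_descent(lambda y: 2 * (y - 3), y0=0.0)
print(round(y_min, 4))  # → 3.0
```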
22
Interpolation Weights
We have the optimization problem, now what? Gradient descent! Iterate until convergence: w ← w - η ∇wJ, where η is the learning rate and ∇wJ = [∂J(w)/∂wij] is the gradient (derivative evaluated on the data):
∂J(w)/∂wij = 2 Σ_x ( [bxi + Σ_{k∈N(i;x)} wik (rxk - bxk)] - rxi ) (rxj - bxj)  for j ∈ N(i;x)
else ∂J(w)/∂wij = 0
Note: we fix movie i, go over all rxi, and for every movie j ∈ N(i;x) we compute ∂J(w)/∂wij.
while |w_new - w_old| > ε:
  w_old = w_new
  w_new = w_old - η·∇w_old
23
Interpolation Weights
So far: r̂xi = bxi + Σ_{j∈N(i;x)} wij (rxj - bxj)
Weights wij are derived based on their role; no use of an arbitrary similarity measure (wij ≠ sij)
Explicitly account for interrelationships among the neighboring movies
Next: latent factor models to extract "regional" correlations
24
Performance of Various Methods
Global average:
User average:
Movie average:
Netflix:
Basic collaborative filtering: 0.94
CF + biases + learned weights: 0.91
Grand Prize:
25
Latent Factor Models (e.g., SVD)
[Figure: movies laid out in two latent dimensions; horizontal axis: geared towards females ↔ geared towards males; vertical axis: serious ↔ funny. Movies shown: Braveheart, The Color Purple, Amadeus, Lethal Weapon, Sense and Sensibility, Ocean's 11, The Lion King, The Princess Diaries, Independence Day, Dumb and Dumber]
26
Latent Factor Models
SVD: A = U Σ V^T. "SVD" on Netflix data: R ≈ Q · P^T
For now let's assume we can approximate the rating matrix R as a product of "thin" matrices Q · P^T. R has missing entries, but let's ignore that for now! Basically, we want the reconstruction error to be small on known ratings, and we don't care about the values on the missing ones.
[Figure: R (items × users) ≈ Q (items × factors) · P^T (factors × users)]
Differences with true SVD: we can multiply Σ into V and get a UV decomposition; SVD would count errors also on missing entries (zeros); SVD insists U, V are orthonormal. We do not want this here, we do not care, we just want something that works.
27
Ratings as Products of Factors
How to estimate the missing rating of user x for item i?
r̂xi = qi · px = Σ_f qif · pxf
qi = row i of Q; px = column x of P^T
[Figure: R (items × users, with a missing entry "?") ≈ Q · P^T]
29
Ratings as Products of Factors
How to estimate the missing rating of user x for item i?
r̂xi = qi · px = Σ_f qif · pxf
qi = row i of Q; px = column x of P^T
[Figure: the same factorization, with the missing entry estimated as 2.4 from the dot product of the f-dimensional factor vectors]
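The estimate-as-dot-product idea is easy to check numerically. The small factor matrices below are a made-up 2-factor example (loosely based on the numbers visible in the figure), not the actual Netflix factors.

```python
import numpy as np

# Hypothetical 2-factor model: rows of Q are items, columns of P^T are users
Q = np.array([[0.2, -0.4],
              [0.5, 0.6]])        # item factor vectors q_i
PT = np.array([[-0.9, 2.4],
               [1.4, 0.3]])       # user factor vectors p_x as columns
R_hat = Q @ PT                    # every predicted rating at once
# Estimated rating of user x=1 on item i=0 is the dot product q_0 . p_1:
# 0.2*2.4 + (-0.4)*0.3 = 0.36
print(R_hat[0, 1])
```

Multiplying the full matrices at once and picking a single dot product give the same number; the per-entry formula r̂xi = Σ_f qif·pxf is just one entry of the product Q·P^T.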
30
Latent Factor Models
[Figure: movies placed in two dimensions; Factor 1: geared towards females ↔ geared towards males; Factor 2: serious ↔ funny]
This graph shows a hypothetical layout of movies in two dimensions. In the example, the horizontal dimension contrasts "chick flicks" with "macho movies", while the vertical dimension measures the seriousness of the movie. In a real application of SVD, an algorithm would determine the layout, so it might not be easy to label the axes. Feel free to disagree with my placement of the various movies.
31
Latent Factor Models
[Figure: the same movie layout, now with users placed in the same two-dimensional space]
Users fall into the same space as movies, where a user's position in a dimension reflects the user's preference for (or against) movies that score high on that dimension. For example, BLUE tends to like male-oriented movies but dislikes serious movies; therefore, we would expect him to love Dumb and Dumber and hate The Color Purple. Note that these two dimensions do not characterize Dave's (DOCTOR) interests very well; additional dimensions would be needed.
32
Recap: SVD
Remember SVD: A = U Σ V^T
A: input data matrix (m × n); U: left singular vectors; V: right singular vectors; Σ: singular values
So in our case, "SVD" on Netflix data R ≈ Q · P^T corresponds to A = R, Q = U, P^T = Σ V^T, and r̂xi = qi · px
33
SVD - Definition: A[m×n] = U[m×r] Σ[r×r] (V[n×r])^T
A: input data matrix, m×n (e.g., m documents, n terms)
U: left singular vectors, m×r matrix (m documents, r concepts)
Σ: singular values, r×r diagonal matrix (strength of each "concept"); r is the rank of the matrix A
V: right singular vectors, n×r matrix (n terms, r concepts)
34
SVD
[Diagram: A (m×n) ≈ U (m×r) · Σ (r×r) · V^T (r×n)]
35
SVD
A ≈ σ1 u1 v1^T + σ2 u2 v2^T + …
σi … scalar (singular value); ui … left singular vector; vi … right singular vector
36
SVD - Properties
It is always possible to decompose a real matrix A into A = U Σ V^T, where:
U, Σ, V: unique
U, V: column orthonormal, i.e., U^T U = I and V^T V = I (I: identity matrix); columns are orthogonal unit vectors
Σ: diagonal; entries (singular values) are positive and sorted in decreasing order (σ1 ≥ σ2 ≥ … ≥ 0)
Nice proof of uniqueness:
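The properties above are easy to verify numerically. A small sketch with NumPy (note that `np.linalg.svd` returns V^T, not V, and singular values already sorted in decreasing order); the matrix A is an arbitrary example.

```python
import numpy as np

A = np.array([[1., 1., 0.],
              [3., 3., 0.],
              [0., 0., 2.]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Columns of U and V are orthonormal: U^T U = I, V^T V = I
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(3))
# Singular values are non-negative and sorted in decreasing order
assert all(s[i] >= s[i + 1] >= 0 for i in range(len(s) - 1))
# The factorization reconstructs A exactly
assert np.allclose(U @ np.diag(s) @ Vt, A)
print(s)  # approximately [4.472, 2.0, 0.0]
```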
37
SVD - Example: Users-to-Movies
A = U Σ V^T: a users-to-movies rating matrix with movies Matrix, Alien, Serenity, Casablanca, Amelie, and two groups of users (SciFi fans and Romance fans).
[Figure: the numeric example matrices A = U Σ V^T]
Reading the factorization:
The hidden dimensions are "concepts" (here a SciFi-concept and a Romance-concept)
U is the "user-to-concept" similarity matrix
Σ gives the "strength" of each concept (e.g., of the SciFi-concept)
V is the "movie-to-concept" similarity matrix
43
SVD - Interpretation #2 (more details)
Q: How exactly is dimensionality reduction done?
A: Set the smallest singular values to zero. This gives a low-rank approximation B ≈ A.
[Figure: A ≈ U Σ V^T with the smallest singular values zeroed out]
The approximation error is "small" in the Frobenius norm:
‖M‖F = sqrt( Σij Mij² ), so ‖A-B‖F = sqrt( Σij (Aij - Bij)² ) is small
49
SVD - Best Low Rank Approximation
Theorem: Let A = U Σ V^T (σ1 ≥ σ2 ≥ …, rank(A) = r) and let B = U S V^T, where S is the diagonal r×r matrix with si = σi for i = 1…k and si = 0 otherwise. Then B is a best rank-k approximation to A.
What do we mean by "best": B is a solution to min_B ‖A-B‖F subject to rank(B) = k, where ‖A-B‖F = sqrt( Σij (Aij - Bij)² ).
52
SVD - Interpretation #2
Equivalent: "spectral decomposition" of the matrix:
A = σ1 u1 v1^T + σ2 u2 v2^T + …
53
SVD: More good stuff
We already know that SVD gives the minimum reconstruction error (sum of squared errors):
min_{U,V,Σ} Σ_{ij∈A} (Aij - [U Σ V^T]ij)²
Note two things:
SSE and RMSE are monotonically related: RMSE = (1/c)·sqrt(SSE). Great news: SVD is also minimizing RMSE!
Complication: the sum in the SVD error term is over all entries (no rating is interpreted as zero rating), but our R has missing entries!
54
Latent Factor Models
[Figure: R (items × users) ≈ Q · P^T]
SVD isn't defined when entries are missing! Use specialized methods to find P, Q:
min_{P,Q} Σ_{(i,x)∈R} (rxi - qi · px)²,  with r̂xi = qi · px
Note: we don't require the columns of P, Q to be orthogonal or unit length
P, Q map users/movies to a latent space
This was the most popular model among Netflix contestants
55
Finding the Latent Factors
56
Latent Factor Models
Our goal is to find P and Q such that:
min_{P,Q} Σ_{(i,x)∈R} (rxi - qi · px)²
[Figure: R ≈ Q · P^T]
57
Back to Our Problem Want to minimize SSE for unseen test data
Idea: minimize SSE on the training data. We want a large k (# of factors) to capture all the signals, but SSE on test data begins to rise for k > 2. This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise. That is, it fits the training data too well and thus does not generalize to unseen test data.
58
Dealing with Missing Entries
To solve overfitting we introduce regularization:
min_{P,Q} Σ_{(x,i)∈R} (rxi - qi · px)²  +  λ1 Σ_x ‖px‖²  +  λ2 Σ_i ‖qi‖²
The first term is the "error"; the penalty terms are the "length"; λ1, λ2 are user-set regularization parameters.
Allow a rich model where there are sufficient data; shrink aggressively where data are scarce.
Note: we do not care about the "raw" value of the objective function; we care about the P, Q that achieve the minimum of the objective.
59
The Effect of Regularization
[Figure: the two-factor movie layout (females ↔ males, serious ↔ funny), with the user Gus placed in it]
Consider Gus. This slide shows the position for Gus that best explains his ratings in the training data, i.e., that minimizes his sum of squared errors. If Gus has rated hundreds of movies, we could probably be confident that we have estimated his true preferences accurately. But what if Gus has rated only a few movies, say the ten on this slide? We should not be so confident.
min_factors "error" + λ "length"
60
The Effect of Regularization
[Figure: the same layout; an elastic cord tethers Gus to the origin]
We hedge our bet by tethering Gus to the origin with an elastic cord that tries to pull him back towards the origin. If Gus has rated hundreds of movies, he stays about where the data places him.
min_factors "error" + λ "length"
61
The Effect of Regularization
[Figure: the same layout; Gus moves partway towards the origin]
But if he has rated only a few dozen, he is pulled back towards the origin.
min_factors "error" + λ "length"
62
The Effect of Regularization
[Figure: the same layout; Gus sits close to the origin]
And if he has rated only a handful, he is pulled even further.
min_factors "error" + λ "length"
63
Stochastic Gradient Descent
Want to find matrices P and Q. Gradient descent:
Initialize P and Q (using SVD, pretending missing ratings are 0)
Do gradient descent:
P ← P - η·∇P
Q ← Q - η·∇Q
where ∇Q is the gradient/derivative of matrix Q: ∇Q = [∇qif], with
∇qif = Σ_x -2 (rxi - qi·px) pxf + 2 λ2 qif
Here qif is entry f of row qi of matrix Q.
How to compute the gradient of a matrix? Compute the gradient of every element independently!
Observation: computing these gradients is slow!
64
Stochastic Gradient Descent
Gradient Descent (GD) vs. Stochastic GD
Observation: ∇Q = [∇qif], where ∇qif = Σ_{x,i} -2 (rxi - qi·px) pxf + 2 λ qif = Σ_{x,i} ∇Q(rxi); here qif is entry f of row qi of matrix Q
GD: Q ← Q - η ∇Q = Q - η Σ_{x,i} ∇Q(rxi)
Idea: instead of evaluating the gradient over all ratings, evaluate it for each individual rating and make a step
SGD: Q ← Q - μ ∇Q(rxi)
Faster convergence! We need more steps, but each step is computed much faster.
65
SGD vs. GD
[Figure: value of the objective function vs. iteration/step]
GD improves the value of the objective function at every step. SGD improves the value too, but in a "noisy" way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster!
66
Stochastic Gradient Descent
Stochastic gradient descent:
Initialize P and Q (using SVD, pretending missing ratings are 0)
Then iterate over the ratings (multiple times if necessary) and update the factors. For each rxi:
εxi = 2 (rxi - qi·px)   (derivative of the "error")
qi ← qi + μ1 (εxi px - λ2 qi)   (update equation)
px ← px + μ2 (εxi qi - λ1 px)   (update equation)
Two for loops:
for until convergence:
  for each rxi:
    compute the gradient, do a "step"
μ … learning rate
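The update equations on this slide can be turned into a short program. A sketch under assumed hyperparameters: the number of factors k, the learning rate, the regularization strength, and the toy ratings are all made up, and a single learning rate is used for both factor matrices.

```python
import numpy as np

def factorize(ratings, n_users, n_items, k=2, eta=0.02, lam=0.01,
              epochs=1000, seed=0):
    """SGD for R ~ Q P^T over the known ratings only.
    ratings: list of (x, i, r_xi) triples."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))   # rows are p_x
    Q = rng.normal(scale=0.1, size=(n_items, k))   # rows are q_i
    for _ in range(epochs):
        for x, i, r in ratings:
            err = 2 * (r - Q[i] @ P[x])            # eps_xi
            grad_q = err * P[x] - lam * Q[i]
            grad_p = err * Q[i] - lam * P[x]
            Q[i] += eta * grad_q                   # update equation for q_i
            P[x] += eta * grad_p                   # update equation for p_x
    return P, Q

# Toy data (hypothetical): 3 users, 3 movies, 6 known ratings
ratings = [(0, 0, 5), (0, 1, 4), (1, 0, 4), (1, 2, 1), (2, 1, 5), (2, 2, 2)]
P, Q = factorize(ratings, n_users=3, n_items=3)
print(round(float(Q[0] @ P[0]), 2))  # reconstructed r_00, should be near 5
```

Note the small random initialization instead of the SVD initialization mentioned on the slide; either works for a sketch this small.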
67
Koren, Bell, Volinsky, IEEE Computer, 2009
68
Extending Latent Factor Model to Include Biases
69
Modeling Biases and Interactions
Baseline predictor (user bias + movie bias):
Separates users and movies
Benefits from insights into a user's behavior
Among the main practical contributions of the competition
User-movie interaction:
Characterizes the matching between users and movies
Attracts most research in the field
Benefits from algorithmic and mathematical innovations
μ = overall mean rating; bx = bias of user x; bi = bias of movie i
70
Baseline Predictor
We have expectations on the rating by user x of movie i, even without estimating x's attitude towards movies like i:
Rating scale of user x
Values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts)
(Recent) popularity of movie i
Selection bias, related to the number of ratings the user gave on the same day ("frequency")
71
Putting It All Together
r̂xi = μ + bx + bi + qi · px
μ: overall mean rating; bx: bias for user x; bi: bias for movie i; qi · px: user-movie interaction
Example:
Mean rating: μ = 3.7
You are a critical reviewer: your ratings are 1 star lower than the mean: bx = -1
Star Wars gets a mean rating 0.5 higher than the average movie: bi = +0.5
Predicted rating for you on Star Wars: 3.7 - 1 + 0.5 = 3.2
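The slide's example as arithmetic, with the interaction term set to zero since the example only uses the baseline:

```python
# Baseline + interaction predictor: r_hat = mu + b_x + b_i + q_i . p_x
mu = 3.7           # overall mean rating
b_x = -1.0         # critical reviewer: 1 star below the mean
b_i = 0.5          # Star Wars: 0.5 above the average movie
interaction = 0.0  # ignore q_i . p_x for this example
r_hat = mu + b_x + b_i + interaction
print(round(r_hat, 1))  # → 3.2
```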
72
Fitting the New Model
Solve:
min_{Q,P,b} Σ_{(x,i)∈R} ( rxi - (μ + bx + bi + qi·px) )²  +  λ ( Σ_i ‖qi‖² + Σ_x ‖px‖² + Σ_x bx² + Σ_i bi² )
The first term measures goodness of fit; the second is regularization. λ is selected via grid search on a validation set.
Use stochastic gradient descent to find the parameters. Note: both the biases bx, bi and the interactions qi, px are treated as parameters (we estimate them).
73
Performance of Various Methods
74
Performance of Various Methods
Global average:
User average:
Movie average:
Netflix:
Basic collaborative filtering: 0.94
Collaborative filtering++: 0.91
Latent factors: 0.90
Latent factors + biases: 0.89
Grand Prize:
75
The Netflix Challenge: 2006-09
76
Temporal Biases Of Users
Sudden rise in the average movie rating (early 2004):
Improvements in the Netflix GUI
The meaning of a rating changed
Movie age:
Users prefer new movies without any apparent reason
Older movies are just inherently better than newer ones
Y. Koren, Collaborative filtering with temporal dynamics, KDD '09
77
Temporal Biases & Factors
Original model: rxi = μ + bx + bi + qi·px
Add time dependence to the biases: rxi = μ + bx(t) + bi(t) + qi·px
Make the parameters bx and bi depend on time: (1) parameterize the time-dependence by linear trends; (2) use bins, each corresponding to 10 consecutive weeks
Add temporal dependence to the factors: px(t) is the user preference vector on day t
Y. Koren, Collaborative filtering with temporal dynamics, KDD '09
78
Adding Temporal Effects
79
Performance of Various Methods
Global average:
User average:
Movie average:
Netflix:
Basic collaborative filtering: 0.94
Collaborative filtering++: 0.91
Latent factors: 0.90
Latent factors + biases: 0.89
Latent factors + biases + time: 0.876
Still no prize! Getting desperate. Try a "kitchen sink" approach!
Grand Prize:
80
Netflix Prize
83
Many options for modeling
Variants of the ideas we have seen so far: different numbers of factors, different ways to model time, different ways to handle implicit information
Other models (not described here): nearest-neighbor models, Restricted Boltzmann machines
Model averaging is useful: linear model combining
84
Standing on June 26th 2009
The June 26th submission triggers the 30-day "last call"
85
The Last 30 Days
Ensemble team formed: a group of other teams on the leaderboard forms a new team; relies on combining their models; quickly also gets a qualifying score over 10%
BellKor: continues to get small improvements in their scores; realizes that they are in direct competition with Ensemble
Strategy: both teams carefully monitor the leaderboard. The only sure way to check for improvement is to submit a set of predictions, but this alerts the other team of your latest score.
86
24 Hours from the Deadline
Submissions were limited to one a day, so only one final submission could be made in the last 24 hours. 24 hours before the deadline, a BellKor team member in Austria notices (by chance) that Ensemble has posted a score slightly better than BellKor's. A frantic last 24 hours for both teams: much computer time spent on final optimization, carefully calibrated to end about an hour before the deadline. Final submissions: BellKor submits a little early (on purpose), 40 minutes before the deadline; Ensemble submits their final entry 20 minutes later. …and everyone waits…
87
Test Set Results
The Ensemble: 0.856714
BellKor's Pragmatic Theory:
Both scores round to the same value; the tie-breaker is submission date/time.
91
Final Test Set Leaderboard
92
Who Got the Money? AT&T donated its full share to organizations supporting science education:
Young Science Achievers Program
New Jersey Institute of Technology pre-college and educational opportunity programs
North Jersey Regional Science Fair
Neighborhoods Focused on African American Youth
93
Lesson #1: Data >> Models
Very limited feature set: user, movie, date. This places the focus on models/algorithms. Major steps forward were associated with incorporating new data features: which movies a user rated, and temporal effects.
94
#2: The Power of Regularized SVD Fit by Gradient Descent
Allowed anyone to approach early leaders Powerful predictor Efficient Easy to program Flexibility to incorporate additional features Implicit feedback Temporal effects Neighborhood effects Accurate regularization is essential
96
#3: The Wisdom of Crowds (of Models)
"All models are wrong; some are useful." (G. Box)
Used linear blends of many prediction sets: 107 in year 1, over 800 at the end
Difficult, or impossible, to build the grand unified model
Mega-blends are not needed in practice: a handful of simple models achieves 80 percent of the improvement of the full blend
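Linear blending can be sketched as an ordinary least-squares fit of blend weights on held-out predictions. The truth and prediction vectors below are made up for illustration; the actual contest blends combined hundreds of prediction sets, not two.

```python
import numpy as np

# Held-out true ratings and two models' predictions (hypothetical numbers)
truth = np.array([4.0, 3.0, 5.0, 2.0, 4.0])
pred1 = np.array([3.8, 3.2, 4.6, 2.5, 3.9])   # model A
pred2 = np.array([4.3, 2.7, 5.1, 1.8, 4.4])   # model B

# Fit w1, w2 and an intercept so that w1*pred1 + w2*pred2 + b ~ truth
X = np.column_stack([pred1, pred2, np.ones_like(pred1)])
w, *_ = np.linalg.lstsq(X, truth, rcond=None)
blend = X @ w

rmse = lambda p: np.sqrt(np.mean((p - truth) ** 2))
# On the fit set, the blend is never worse than either input model,
# since each input model lies in the span of the blend's columns
print(rmse(pred1), rmse(pred2), rmse(blend))
```

In practice the weights would be fit on a held-out set (like the Probe data) and applied to new predictions, to avoid overfitting the blend itself.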
97
#4: Find Good Teammates
Yehuda Koren: the engine of progress for the Netflix Prize (implicit feedback, temporal effects, nearest-neighbor modeling)
Big Chaos: Michael Jahrer, Andreas Toscher (year 2): optimization of tuning parameters, blending methods
Pragmatic Theory: Martin Chabbert, Martin Piotte (year 3): some movies age better than others, link functions
98
#5: Is This the Way to Do Science?
Big success for Netflix: lots of cheap labor, good publicity; already incorporated a 6 percent improvement; potential for much more using other data they have
Big advances to the science of recommender systems: regularized SVD, identification of new features, understanding nearest neighbors, contributions to the literature
99
Why Did this Work so Well?
Industrial-strength data
Very good design
Accessibility to anyone with a PC
Free flow of ideas: leaderboard, forum, workshop and papers
Money?
100
But There are Limitations
Need a conceptually simple task
Winner-take-all has drawbacks
Intellectual property and liability issues
How many prizes can overlap?
101
Homework 9.4.3, 9.4.4, 9.4.5