Principled Regularization for Probabilistic Matrix Factorization
Robert Bell, Suhrid Balakrishnan (AT&T Labs-Research)
Duke Workshop on Sensing and Analysis of High-Dimensional Data, July 26-28, 2011
Probabilistic Matrix Factorization (PMF)
Approximate a large n-by-m matrix R by
–M = P'Q
–P and Q each have k rows, k << n, m
–m_ui = p_u'q_i
–R may be sparsely populated
Prime tool in the Netflix Prize
–99% of ratings were missing
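A minimal sketch of the factorization (illustrative NumPy, not the talk's code; the sizes match the simulation described later):

import numpy as np

n, m, k = 2500, 400, 2            # users, items, rank
rng = np.random.default_rng(0)
P = rng.normal(size=(k, n))       # user factors: column p_u per user
Q = rng.normal(size=(k, m))       # item factors: column q_i per item

M = P.T @ Q                       # full approximation M = P'Q
m_ui = P[:, 0] @ Q[:, 1]          # single entry m_ui = p_u'q_i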
Regularization for PMF
Needed to avoid overfitting
–Even after limiting the rank of M
–Critical for sparse, imbalanced data
Penalized least squares
–Minimize Σ_{(u,i)∈Obs} (r_ui − p_u'q_i)² + λ (Σ_u ‖p_u‖² + Σ_i ‖q_i‖²)
–or Σ_{(u,i)∈Obs} (r_ui − p_u'q_i)² + λ_P Σ_u ‖p_u‖² + λ_Q Σ_i ‖q_i‖²
–λ's selected by cross validation
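A minimal sketch of the separate-λ objective, assuming the observed cells arrive as (u, i, r) triples; the function name is mine, not the talk's:

import numpy as np

def pmf_objective(ratings, P, Q, lam_P, lam_Q):
    """Penalized least squares for PMF with separate lambda_P and lambda_Q.
    ratings: iterable of (u, i, r) triples for the observed cells of R."""
    sse = sum((r - P[:, u] @ Q[:, i]) ** 2 for u, i, r in ratings)
    penalty = lam_P * np.sum(P ** 2) + lam_Q * np.sum(Q ** 2)
    return sse + penalty

Setting lam_P == lam_Q recovers the single-λ form.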
Research Questions
Should we use separate λ_P and λ_Q?
Should we use k separate λ's, one for each dimension of P and Q?
Matrix Completion with Noise (Candès and Plan, Proc. IEEE, 2010)
Rank reduction without explicit factors
–No pre-specification of k = rank(M)
Regularization applied directly to M
–Trace norm, a.k.a. nuclear norm
–Sum of the singular values of M
Minimize ‖M‖_* subject to Σ_{(u,i)∈Obs} (r_ui − m_ui)² ≤ δ²
–"Equivalent" to L2 regularization for P, Q
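A quick numeric check (my sketch) that the trace norm is just the sum of singular values:

import numpy as np

M = np.random.default_rng(1).normal(size=(20, 10))
trace_norm = np.linalg.norm(M, ord='nuc')              # nuclear/trace norm
sv_sum = np.linalg.svd(M, compute_uv=False).sum()      # sum of singular values
assert np.isclose(trace_norm, sv_sum)

The "equivalence" to L2 regularization comes from the identity ‖M‖_* = min {½(‖P‖² + ‖Q‖²) : M = P'Q}.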
Research Questions
Should we use separate λ_P and λ_Q?
Should we use k separate λ's, one for each dimension of P and Q?
Should we use the trace norm for regularization?
Bayesian Matrix Factorization (BPMF) (Salakhutdinov and Mnih, ICML 2008)
Let r_ui ~ N(p_u'q_i, σ²)
No PMF-type regularization
–p_u ~ N(μ_P, Λ_P⁻¹) and q_i ~ N(μ_Q, Λ_Q⁻¹)
–Priors for σ², μ_P, Λ_P, μ_Q, Λ_Q
Fit by Gibbs sampling
Substantial reduction in prediction error relative to PMF with L2 regularization
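In BPMF each conditional posterior is available in closed form; as a hedged sketch (my variable names), the Gibbs draw for one user's factor vector p_u is a multivariate normal:

import numpy as np

def sample_user_factor(rng, Q_u, r_u, sigma2, mu_P, Lambda_P):
    """Draw p_u from its conditional posterior (a sketch of one Gibbs step).
    Q_u: (k, n_u) factors of the items user u rated; r_u: those ratings."""
    prec = Lambda_P + (Q_u @ Q_u.T) / sigma2    # posterior precision matrix
    mean = np.linalg.solve(prec, Lambda_P @ mu_P + (Q_u @ r_u) / sigma2)
    return rng.multivariate_normal(mean, np.linalg.inv(prec))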
Research Questions
Should we use separate λ_P and λ_Q?
Should we use k separate regularization parameters, one for each dimension of P and Q?
Should we use the trace norm for regularization?
Does BPMF "regularize" appropriately?
Matrix Factorization with Biases
Let m_ui = μ + a_u + b_i + p_u'q_i
Regularization similar to before
–Minimize Σ_{(u,i)∈Obs} (r_ui − m_ui)² + λ (Σ_u a_u² + Σ_i b_i² + Σ_u ‖p_u‖² + Σ_i ‖q_i‖²)
–or Σ_{(u,i)∈Obs} (r_ui − m_ui)² + λ_a Σ_u a_u² + λ_b Σ_i b_i² + λ_PQ (Σ_u ‖p_u‖² + Σ_i ‖q_i‖²)
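Extending the earlier sketch to the biased model (again my names, with separate λ's for biases and factors):

import numpy as np

def biased_objective(ratings, mu, a, b, P, Q, lam_a, lam_b, lam_PQ):
    """Penalized least squares with m_ui = mu + a_u + b_i + p_u'q_i."""
    sse = sum((r - (mu + a[u] + b[i] + P[:, u] @ Q[:, i])) ** 2
              for u, i, r in ratings)
    penalty = (lam_a * np.sum(a ** 2) + lam_b * np.sum(b ** 2)
               + lam_PQ * (np.sum(P ** 2) + np.sum(Q ** 2)))
    return sse + penalty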
Research Questions
Should we use separate λ_P and λ_Q?
Should we use k separate regularization parameters, one for each dimension of P and Q?
Should we use the trace norm for regularization?
Does BPMF "regularize" appropriately?
Should we use separate λ's for the biases?
Some Things this Talk Will Not Cover
Various extensions of PMF
–Combining explicit and implicit feedback
–Time-varying factors
–Non-negative matrix factorization
–L1 regularization
–λ's depending on user or item sample sizes
Efficiency of optimization algorithms
–Use Newton's method on each coordinate separately
–Iterate to convergence
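Because the objective is quadratic in any single coordinate, one Newton step gives the exact coordinate minimizer; a sketch of one such update on NumPy arrays (my formulation, not the talk's code):

def coordinate_update(x0, q_f, resid, lam):
    """Exact minimizer of the L2-penalized squared loss in one coordinate.
    x0: current value of the coordinate (say P[f, u]);
    q_f: array of q_fi over the items user u rated;
    resid: current residuals r_ui - m_ui over those items."""
    r_tilde = resid + x0 * q_f    # residuals with this coordinate backed out
    return (r_tilde @ q_f) / (q_f @ q_f + lam)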
No Need for Separate λ_P and λ_Q
M = (cP)'(c⁻¹Q) is invariant for c ≠ 0
For initial P and Q
–Solve for c to minimize λ_P c²‖P‖² + λ_Q c⁻²‖Q‖² (‖·‖ the Frobenius norm)
–c = (λ_Q‖Q‖² / (λ_P‖P‖²))^{1/4}
–Gives penalty 2(λ_P λ_Q)^{1/2}‖P‖‖Q‖ at the optimum
Sufficient to let λ_P = λ_Q = λ_PQ
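A numeric sanity check of the rescaling argument (my sketch): at the optimal c the separate-λ penalty collapses to 2√(λ_P λ_Q)·‖P‖·‖Q‖, so nothing is lost by tying the two λ's.

import numpy as np

rng = np.random.default_rng(2)
P, Q = rng.normal(size=(2, 30)), rng.normal(size=(2, 40))
lam_P, lam_Q = 3.0, 7.0

def penalty(c):
    return lam_P * c**2 * np.sum(P**2) + lam_Q * np.sum(Q**2) / c**2

c_star = (lam_Q * np.sum(Q**2) / (lam_P * np.sum(P**2))) ** 0.25
best = 2 * np.sqrt(lam_P * lam_Q * np.sum(P**2) * np.sum(Q**2))
assert np.isclose(penalty(c_star), best)
assert all(penalty(c_star) <= penalty(c) for c in (0.5, 1.0, 2.0))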
Bayesian Motivation for L2 Regularization
Simplest case: only one item
–R is n-by-1
–r_u1 = a_1 + ε_u1, with a_1 ~ N(0, σ_a²), ε_u1 ~ N(0, σ²)
Posterior mean (or MAP) of a_1 satisfies
–â_1 = argmin_a Σ_u (r_u1 − a)² + λ_a a²
–λ_a = σ² / σ_a²
Best λ_a is inversely proportional to σ_a²
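A tiny simulation of the one-item case (my sketch): the ridge estimate with λ_a = σ²/σ_a² is exactly the posterior mean Σ_u r_u1 / (n + λ_a).

import numpy as np

rng = np.random.default_rng(3)
n, sigma2, sigma_a2 = 50, 1.0, 0.09
a1 = rng.normal(0.0, np.sqrt(sigma_a2))               # true item effect
r = a1 + rng.normal(0.0, np.sqrt(sigma2), size=n)     # n ratings of the single item

lam_a = sigma2 / sigma_a2                             # Bayes-optimal ridge weight
a_hat = r.sum() / (n + lam_a)                         # argmin of sum (r_u - a)^2 + lam_a * a^2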
Implications for Regularization of PMF
Allow λ_a ≠ λ_b
–If σ_a² ≠ σ_b²
Allow λ_a ≠ λ_b ≠ λ_PQ
Allow λ_PQ1 ≠ λ_PQ2 ≠ … ≠ λ_PQk?
–Trace norm does not
–BPMF appears to
Simulation Experiment Structure
n = 2,500 users, m = 400 items
250,000 observed ratings
–150,000 in Training (to estimate a, b, P, Q)
–50,000 in Validation (to tune λ's)
–50,000 in Test (to estimate MSE)
Substantial imbalance in ratings
–8 to 134 ratings per user in Training data
–33 to 988 ratings per item in Training data
Simulation Model
r_ui = a_u + b_i + p_u1 q_i1 + p_u2 q_i2 + ε_ui
Elements of a, b, P, Q, and ε
–Independent normals with mean 0
–Var(a_u) = 0.09
–Var(b_i) = 0.16
–Var(p_u1 q_i1) = 0.04
–Var(p_u2 q_i2) = 0.01
–Var(ε_ui) = …
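A sketch of the generating model; only the product variances are stated above, so splitting each equally between p and q is my assumption (for independent zero-mean factors, Var(pq) = Var(p)·Var(q)), and Var(ε) is left unspecified:

import numpy as np

n, m = 2500, 400
rng = np.random.default_rng(4)
a = rng.normal(0, np.sqrt(0.09), size=n)      # Var(a_u) = 0.09
b = rng.normal(0, np.sqrt(0.16), size=m)      # Var(b_i) = 0.16
p1 = rng.normal(0, np.sqrt(0.2), size=n)      # Var(p_u1) * Var(q_i1) = 0.2 * 0.2 = 0.04
q1 = rng.normal(0, np.sqrt(0.2), size=m)
p2 = rng.normal(0, np.sqrt(0.1), size=n)      # Var(p_u2) * Var(q_i2) = 0.1 * 0.1 = 0.01
q2 = rng.normal(0, np.sqrt(0.1), size=m)
M_true = a[:, None] + b[None, :] + np.outer(p1, q1) + np.outer(p2, q2)
# r_ui = M_true[u, i] + eps_ui, with eps drawn at the slide's (unspecified) variance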
Evaluation
Test MSE for estimation of m_ui = E(r_ui)
–MSE = (1/|Test|) Σ_{(u,i)∈Test} (m̂_ui − m_ui)²
Limitations
–Not real data
–Only one replication
–No standard errors
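In code the criterion reduces to a few lines (a sketch; m_hat holds the fitted means, M_true the simulated ones):

import numpy as np

def test_mse(m_hat, M_true, test_u, test_i):
    """Test MSE for estimating m_ui = E(r_ui) over held-out cells."""
    return np.mean((m_hat[test_u, test_i] - M_true[test_u, test_i]) ** 2)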
PMF Results for k = 0

Restrictions on λ's     | Values of λ_a, λ_b | MSE for m
Grand mean; no (a, b)   | NA                 | .2979
λ_a = λ_b = …           | …                  | …
λ_a = λ_b               | …                  | …
Separate λ_a, λ_b       | 9.26, …            | …
PMF Results for k = 1

Restrictions on λ's        | Values of λ_a, λ_b, λ_PQ1 | MSE for m
Separate λ_a, λ_b          | 9.26, …, NA               | …
λ_a = λ_b = λ_PQ1          | …                         | …
Separate λ_a, λ_b, λ_PQ1   | 8.50, 10.13, 13.44        | .0439
PMF Results for k = 2

Restrictions on λ's               | Values of λ_a, λ_b, λ_PQ1, λ_PQ2 | MSE for m
Separate λ_a, λ_b, λ_PQ1          | 8.50, 10.13, 13.44, NA           | .0439
λ_a, λ_b, λ_PQ1 = λ_PQ2           | 8.44, 9.94, 19.84, 19.84         | …
Separate λ_a, λ_b, λ_PQ1, λ_PQ2   | 8.43, 10.24, 13.38, …            | …
Results for Matrix Completion
Performs poorly on raw ratings
–MSE = .0693
–Not designed to estimate biases
Fit to residuals from PMF with k = 0
–MSE = .0477
–"Recovered" rank was 1
–Worse than MSE's from PMF: .0428 to …
Results for BPMF
Raw ratings
–MSE = .0498, using k = 3
–Early stopping
–Not designed to estimate biases
Fit to residuals from PMF with k = 0
–MSE = .0433, using k = 2
–Near .0428, the best from PMF with biases
Summary
No need for separate λ_P and λ_Q
Theory suggests separate λ's for distinct sets of exchangeable parameters
–Biases vs. factors
–Individual factors as well
Tentative simulation results support the need for separate λ's across factors
–BPMF does this automatically
–PMF requires a way to tune many λ's efficiently