
1 Investigation of Various Factorization Methods for Large Recommender Systems G. Takács, I. Pilászy, B. Németh and D. Tikk www.gravityrd.com 10th International Workshop on High Performance Data Mining (in conjunction with ICDM) Pisa, December 15th 2008

2 Content Problem definition Approaches Matrix factorization Basics, BRISMF, Semipositive, Retraining Further enhancements Transductive MF, Neighbor based correction Experimental results

3 Collaborative filtering

4 Problem definition I. [Figure omitted: a small, partially known user-item rating matrix.]

5 Problem definition II. The phenomenon can be modeled by the random triplet (U, I, R). A realization of the phenomenon (u, i, r) means that the u-th user rated the i-th item with value r. user id (range: {1, …, M}) item id (range: {1, …, N}) rating value (range: {r1, …, rL})

6 Problem definition III. The goal: predict R from (U, I). Error criterion: root mean squared error (RMSE). The task is nothing other than classical regression estimation. Classical methods fail because of the unusual characteristics of the predictor variables.
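The RMSE criterion over a set of (prediction, rating) pairs can be sketched in a few lines (a minimal illustration, not taken from the paper):

```python
import math

def rmse(predictions, targets):
    """Root mean squared error between predicted and actual ratings."""
    se = sum((p - t) ** 2 for p, t in zip(predictions, targets))
    return math.sqrt(se / len(predictions))
```

For example, rmse([3.5, 4.0, 2.0], [4, 4, 1]) is about 0.645.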

7 Content Problem definition Approaches Matrix factorization Basics, BRISMF, Semipositive, Retraining Further enhancements Transductive MF, Neighbor based correction Experimental results

8 Approaches Matrix factorization: approximates the rating matrix by the product of two lower-rank matrices. Neighbor based approach: defines similarity between the rows or the columns of the rating matrix. Support based approach: characterizes the users based on the binarized rating matrix. Restricted Boltzmann machine: models each user by a stochastic, recurrent neural network. Global effects: cascades 1-variable predictors.

9 Content Problem definition Approaches Matrix factorization Basics, BRISMF, Semipositive, Retraining Further enhancements Transductive MF, Neighbor based correction Experimental results

10 Matrix Factorization (MF) Idea: approximate the rating matrix as the product of two lower-rank matrices: R ≈ P ∙ Q. Problems: a huge number of parameters (e.g. 10 million), and R is only partially known. Solution: incremental gradient descent. P: user feature matrix (M x K). Q: item feature matrix (K x N). R: rating matrix (M x N).

11-27 MF sample - learning [Animation omitted: starting from small random P and Q, each slide performs one incremental gradient step, nudging one row of P and one column of Q toward a known rating of R.]

28 After a while...

29 MF sample - learning [Figure omitted: the feature matrices P and Q after training has converged.]

30 MF sample - prediction [Figure omitted: the unknown entries of R are predicted as the scalar product of the corresponding row of P and column of Q.]

31 BRISMF Enhancements on the previous model: User and item Biases (offsets). Regularization. We can call this Biased Regularized Incremental Simultaneous MF (BRISMF). This is a very effective MF variant indeed. Leaving out any of these characteristics (B, R, I, S) leads to inferior accuracy.
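A single BRISMF-style training step might look like the sketch below (the learning rate and regularization constants are placeholders, not the values tuned in the paper):

```python
def brismf_step(p, q, bu, bi, r, mu, lr=0.01, lam=0.01):
    """One biased (bu, bi, global mean mu), regularized gradient step
    for one observed rating r; returns the updated parameters."""
    pred = mu + bu + bi + sum(pk * qk for pk, qk in zip(p, q))
    err = r - pred
    new_p = [pk + lr * (err * qk - lam * pk) for pk, qk in zip(p, q)]
    new_q = [qk + lr * (err * pk - lam * qk) for pk, qk in zip(p, q)]
    return new_p, new_q, bu + lr * (err - lam * bu), bi + lr * (err - lam * bi)
```

The biases absorb per-user and per-item rating offsets, while the lam terms shrink all parameters toward zero to prevent overfitting.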

32 Semipositive MF It is useful to put a nonnegativity constraint on the user feature matrix P. There are many possible ways to implement this (e.g. PLSA, alternating least squares). Our solution: if a user feature becomes negative after the update, then it is set to zero.
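The projection described above is a one-liner; as a sketch (a hypothetical helper, not the paper's code):

```python
def nonneg_user_step(p, q, err, lr=0.01, lam=0.01):
    """Regularized gradient step on the user feature vector p,
    with negative results clipped back to zero (semipositive MF)."""
    return [max(0.0, pk + lr * (err * qk - lam * pk)) for pk, qk in zip(p, q)]
```

Unlike PLSA or alternating least squares with nonnegativity constraints, this keeps the cheap incremental update and only adds a clipping step.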

33 Reset User Features Disadvantage of BRISMF: user features updated at the beginning of an epoch may be inappropriate by the end of the epoch. Solution: 1) Reset user features at the end of training. 2A) Retrain user features only. 2B) Retrain both user and item features. [Diagram omitted: R ≈ P ∙ Q becomes R ≈ P' ∙ Q after retraining.]
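Variant 2A might be sketched like this: after normal training, the item matrix Q is frozen and the user features are relearned from scratch (the epoch count and learning rate are assumptions):

```python
def retrain_users(ratings, Q, M, K, lr=0.05, epochs=200):
    """Relearn P against a fixed, fully trained Q, so every user row
    is fitted to the final item features rather than mid-epoch ones."""
    P = [[0.1] * K for _ in range(M)]          # reset user features
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(P[u][k] * Q[k][i] for k in range(K))
            for k in range(K):
                P[u][k] += lr * err * Q[k][i]  # only P is updated
    return P
```

With Q fixed, each user's subproblem is an independent least-squares fit, which is why this retraining pass is cheap.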

34 Content Problem definition Approaches Matrix factorization Basics, BRISMF, Semipositive, Retraining Further enhancements Transductive MF, Neighbor based correction Experimental results

35 Transductive MF How can the Netflix Qualifying set be used in the correction phase? We use the following simple solution: [Formula not preserved in the transcript.]

36 Fast and Accurate NB Correction I. Neighbor based (NB) methods can improve the accuracy of factor models, but conventional NB methods are not scalable. Is it possible to integrate the NB approach into the factor model without losing scalability?

37 Fast and Accurate NB Correction II. [Correction formula not preserved in the transcript; the predicted rating is adjusted by a similarity-weighted combination of the errors on the user's other rated items.] Here s_jk is the similarity of the item feature vectors q_j and q_k, computed either as s_jk = (q_j ∙ q_k) / (||q_j|| ∙ ||q_k||) (normalized scalar product based similarity) or from the normalized Euclidean distance between q_j and q_k.

38 NB Correction sample [Figure omitted: two previously rated neighbor items with similarities 0.2 and 0.8 and prediction errors -0.5 and +0.2 combine into a correction of -0.1 on the predicted rating.]
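In code, the correction idea from the last two slides might look like the following (cosine similarity on item feature vectors is one of the two variants mentioned; the gamma weight and the normalization by the similarity sum are assumptions, so the toy numbers here need not reproduce the slide's):

```python
import math

def cosine_sim(qa, qb):
    """Normalized scalar product of two item feature vectors."""
    dot = sum(a * b for a, b in zip(qa, qb))
    na = math.sqrt(sum(a * a for a in qa))
    nb = math.sqrt(sum(b * b for b in qb))
    return dot / (na * nb)

def nb_correct(pred, neighbors, gamma=1.0):
    """neighbors: (similarity, error) pairs over the user's rated items,
    where error = actual rating - MF prediction for that item."""
    den = sum(s for s, _ in neighbors)
    if den == 0:
        return pred
    return pred + gamma * sum(s * e for s, e in neighbors) / den
```

Because the similarities come from the already-learned item feature vectors, no separate item-item similarity matrix has to be computed from the rating data, which is what keeps the correction scalable.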

39 Content Problem definition Approaches Matrix factorization Basics, BRISMF, Semipositive, Retraining Further enhancements Transductive MF, Neighbor based correction Experimental results

40 Results I.

Method                        Name of our method   Our Probe10   Our Quiz   Bell et al.'s Quiz
Simple MF                     BRISMF #1000         0.8938        0.8939     0.8998
Retrained MF                  BRISMF #1000UM       0.8921        0.8918     N/A
MF with neighbor correction   BRISMF #1000UM/S2    0.8905        0.8904     0.8953

41 Results II.

Epoch   Training Time (sec)   RMSE
1       120                   0.9188
2       200                   0.9071
3       280                   0.9057
4       360                   0.9028
5       440                   0.9008
6       520                   0.9002

42 Thanks!

