Local Linear Matrix Factorization for Document Modeling
Lu Bai, Jiafeng Guo, Yanyan Lan, Xueqi Cheng
Institute of Computing Technology, Chinese Academy of Sciences
Outline
- Introduction
- Our approach
- Experimental results
- Conclusion
Introduction
Background
Previous work
- No local geometric regularization
  - None or global regularization only, e.g. SVD, PLSA, LDA, NMF
  - Over-fitting & poor generalization
- Pairwise neighborhood smoothing (sketched below)
  - Increases the low-dimensional affinity over nearby document pairs, e.g. LapPLSA, LTM, DTM
  - Loses the geometric information among pairs, especially under unbalanced document distributions
- Heuristic similarity measures & neighbors
  - Empirical similarity thresholds and neighbor numbers, e.g. LapPLSA, LTM
  - An improper similarity measure or number of neighbors hurts the representation
Goal: a new low-dimensional representation mining method that better exploits the geometric relationships among documents
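To make the pairwise-smoothing contrast concrete, here is a minimal NumPy sketch of a Laplacian-style penalty over neighboring pairs; the function name and the heuristic binary affinity matrix A are illustrative assumptions, not code from LapPLSA or LTM.

```python
# Minimal sketch of pairwise (Laplacian-style) smoothing, as used by
# methods like LapPLSA/LTM -- illustrative only, not their actual code.
# V: (n_docs, k) low-dimensional document representations.
# A: (n_docs, n_docs) binary affinity built from a heuristic similarity
#    threshold or k-nearest-neighbor rule (the part LLMF avoids).
import numpy as np

def pairwise_smoothing_penalty(V, A):
    """Sum of ||v_i - v_j||^2 over neighboring pairs (i, j).

    The penalty decomposes over pairs, so geometry involving more
    than two documents at once is invisible to the regularizer.
    """
    penalty = 0.0
    rows, cols = np.nonzero(A)
    for i, j in zip(rows, cols):
        diff = V[i] - V[j]
        penalty += diff @ diff
    return penalty
```

Because the penalty decomposes over pairs, structure involving several documents at once is invisible to it; that is the gap LLMF's local linear combinations are designed to close.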
Our approach
Basic ideas:
- Factorize the document-word matrix in the NMF way, mining a low-dimensional semantic representation
- Model document relationships with local linear combinations, preserving rich local geometric information
- Select neighbors without a similarity measure or threshold
Local Linear Matrix Factorization (LLMF)
Objective (schematic form, combining the two ingredients above):
$$\min_{U \ge 0,\; V \ge 0,\; W}\;\; \|X - UV^\top\|_F^2 \;+\; \alpha\,\|U - WU\|_F^2, \qquad \mathrm{diag}(W) = 0$$
Here $X$ is the document-word matrix, each row of $U$ is a document's low-dimensional representation, $V$ holds the word factors, and $W_{ij}$ weights document $j$ in the local linear reconstruction of document $i$.

Cont'
A sparsity penalty on the combination weights, e.g. $\beta\,\|W\|_1$, lets each document select its own small set of neighbors automatically, with no similarity measure or neighbor-count threshold.
Graphical Model of LLMF
LLMF vs. others
- Compared to models without geometric information (e.g. NMF, PLSA, LDA):
  - LLMF smooths each document's representation with its neighbors
- Compared to models with geometric constraints (e.g. LapPLSA, LTM):
  - LLMF is free of similarity measures and neighborhood thresholds
  - LLMF is more robust at preserving local geometric structure under unbalanced data distributions
Model fitting
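Since the fitting procedure is only named here, the following is a minimal alternating projected-gradient loop for the schematic objective above; the learning rate, iteration count, and plain gradient updates are illustrative assumptions, not the paper's derived update rules.

```python
# Illustrative fitting loop for the sketch objective above: alternating
# projected gradient steps on U, V, W. lr and iters are assumed values.
import numpy as np

def fit_llmf(X, k, alpha=1.0, beta=0.1, lr=1e-3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, k))
    V = rng.random((m, k))
    W = np.zeros((n, n))
    for _ in range(iters):
        R = U @ V.T - X                        # reconstruction residual
        L = U - W @ U                          # local-linear residual
        gU = 2 * R @ V + 2 * alpha * ((np.eye(n) - W).T @ L)
        gV = 2 * R.T @ U
        gW = -2 * alpha * (L @ U.T) + beta * np.sign(W)  # l1 subgradient
        U = np.maximum(U - lr * gU, 0)         # keep factors non-negative
        V = np.maximum(V - lr * gV, 0)         # (NMF-style projection)
        W = W - lr * gW
        np.fill_diagonal(W, 0)                 # no trivial self-reconstruction
    return U, V, W
```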
Experimental Settings
- Data sets: 20news & la1 (from Weka)
- Word stemming
- Stop-word removal

Data set | Num. of documents | Num. of words | Num. of categories
20news   | 18,744            | 26,           | 20
la1      | 2,850             | 13,195        | 5
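For reference, a small scikit-learn/NLTK sketch of the preprocessing named above (stemming plus stop-word removal) to build the document-word count matrix; the pipeline details are assumptions, not the authors' exact setup.

```python
# Hedged sketch: stop-word removal + Porter stemming, then a
# document-word count matrix X via scikit-learn's CountVectorizer.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

stemmer = PorterStemmer()

def stem_analyzer(doc, base=CountVectorizer(stop_words="english").build_analyzer()):
    # Tokenize and drop stop words first, then stem each surviving token.
    return [stemmer.stem(tok) for tok in base(doc)]

vectorizer = CountVectorizer(analyzer=stem_analyzer)
X = vectorizer.fit_transform(["an example document", "another document here"])
```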
Cont’
Experimental Results
Cont’
Conclusion
Conclusions:
- We propose a novel method, LLMF, for learning low-dimensional document representations under local linear constraints
- LLMF captures the rich geometric information among documents better than methods based on independent pairwise relationships
- Experiments on the 20news and la1 benchmarks show that LLMF learns better semantic representations than the baseline methods
Future work:
- Extend LLMF to parallel and distributed settings
- Apply LLMF to recommendation systems
References
- D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
- D. Cai, X. He, and J. Han. Locally consistent concept factorization for document clustering. TKDE, 23(6):902–913, 2011.
- D. Cai, Q. Mei, J. Han, and C. Zhai. Modeling hidden topics on document manifold. In CIKM '08, pages 911–920, New York, NY, USA, 2008. ACM.
- T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1–2):177–196, 2001.
- S. Huh and S. E. Fienberg. Discriminative topic modeling based on manifold learning. In KDD '10, pages 653–662, New York, NY, USA, 2010. ACM.
Thanks!! Q&A
Appendix