Lecture 13: Matrix Factorization and Latent Semantic Indexing (Web Search and Mining)

Matrix Factorization and Latent Semantic Indexing 1 Lecture 13: Matrix Factorization and Latent Semantic Indexing Web Search and Mining

Matrix Factorization and Latent Semantic Indexing 2 Topic  Matrix factorization = matrix decomposition  Latent Semantic Indexing (LSI)  Term-document matrices are very large  But the number of topics that people talk about is small (in some sense)  Clothes, movies, politics, …  Can we represent the term-document space by a lower dimensional latent space?

Matrix Factorization and Latent Semantic Indexing 3 Linear Algebra Background Background

Matrix Factorization and Latent Semantic Indexing 4 Eigenvalues & Eigenvectors  Eigenvectors (for a square m × m matrix S): a non-zero vector v such that Sv = λv, where λ is the (right) eigenvalue associated with the eigenvector v.  How many eigenvalues are there at most? Sv = λv only has a non-zero solution if det(S − λI) = 0. This is an m-th order equation in λ which can have at most m distinct solutions (roots of the characteristic polynomial) – they can be complex even though S is real. Background

Matrix Factorization and Latent Semantic Indexing 5 Matrix-vector multiplication  S has eigenvalues 30, 20, 1 with corresponding eigenvectors v1, v2, v3.  On each eigenvector, S acts as a multiple of the identity matrix, but as a different multiple on each.  Any vector x (say one with coordinates 2, 4, 6 in the eigenvector basis) can be viewed as a combination of the eigenvectors: x = 2v1 + 4v2 + 6v3. Background

Matrix Factorization and Latent Semantic Indexing 6 Matrix vector multiplication  Thus a matrix-vector multiplication such as Sx (S, x as in the previous slide) can be rewritten in terms of the eigenvalues/vectors: Sx = S(2v1 + 4v2 + 6v3) = 2Sv1 + 4Sv2 + 6Sv3 = 2λ1v1 + 4λ2v2 + 6λ3v3 = 60v1 + 80v2 + 6v3.  Even though x is an arbitrary vector, the action of S on x is determined by the eigenvalues/vectors. Background

Matrix Factorization and Latent Semantic Indexing 7 Matrix vector multiplication  Suggestion: the effect of “small” eigenvalues is small.  If we ignored the smallest eigenvalue (1), then instead of 60v1 + 80v2 + 6v3 we would get 60v1 + 80v2 + 0v3.  These vectors are similar (in cosine similarity, etc.). Background
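A quick numerical sketch of this point (for illustration only: a diagonal S with the slide's eigenvalues is assumed, so the eigenvectors v1, v2, v3 are the standard basis; the slide's actual matrix is not reproduced in the transcript):

    import numpy as np

    # Assumption for this sketch: S is diagonal with the eigenvalues from the slide,
    # so its eigenvectors v1, v2, v3 are simply the standard basis vectors.
    S = np.diag([30.0, 20.0, 1.0])
    v1, v2, v3 = np.eye(3)
    lam = np.array([30.0, 20.0, 1.0])

    x = 2 * v1 + 4 * v2 + 6 * v3                  # x = 2 v1 + 4 v2 + 6 v3
    Sx = S @ x                                    # exact product: 60 v1 + 80 v2 + 6 v3
    approx = 2 * lam[0] * v1 + 4 * lam[1] * v2    # ignore the smallest eigenvalue (1)

    cos = Sx @ approx / (np.linalg.norm(Sx) * np.linalg.norm(approx))
    print(Sx, approx, round(cos, 4))              # [60. 80. 6.] [60. 80. 0.] 0.9982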

Matrix Factorization and Latent Semantic Indexing 8 Eigenvalues & Eigenvectors  For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal.  All eigenvalues of a real symmetric matrix are real.  All eigenvalues of a positive semidefinite matrix are non-negative. Background

Matrix Factorization and Latent Semantic Indexing 9 Example  Let S = [2 1; 1 2] (real, symmetric).  Then det(S − λI) = (2 − λ)² − 1 = 0.  The eigenvalues are 1 and 3 (nonnegative, real).  Plug in these values and solve for the eigenvectors: (1, −1)ᵀ for λ = 1 and (1, 1)ᵀ for λ = 3.  The eigenvectors are orthogonal (and real). Background

Matrix Factorization and Latent Semantic Indexing 10 Eigen/diagonal Decomposition  Let S be a square m × m matrix with m linearly independent eigenvectors (a “non-defective” matrix).  Theorem: there exists an eigen decomposition S = UΛU⁻¹, where Λ is diagonal (cf. the matrix diagonalization theorem); it is unique for distinct eigenvalues.  Columns of U are the eigenvectors of S.  Diagonal elements of Λ are the eigenvalues of S. Background

Matrix Factorization and Latent Semantic Indexing 11 Diagonal decomposition: why/how Let U have the eigenvectors as columns: U = [v1 … vm]. Then SU can be written SU = [Sv1 … Svm] = [λ1v1 … λmvm] = UΛ. Thus SU = UΛ, or U⁻¹SU = Λ, and S = UΛU⁻¹. Background

Matrix Factorization and Latent Semantic Indexing 12 Diagonal decomposition - example Recall S = [2 1; 1 2] with eigenvalues 1 and 3. The eigenvectors (1, −1)ᵀ and (1, 1)ᵀ form U = [1 1; −1 1]. Inverting, we have U⁻¹ = [1/2 −1/2; 1/2 1/2] (recall UU⁻¹ = I). Then S = UΛU⁻¹ = [1 1; −1 1] · [1 0; 0 3] · [1/2 −1/2; 1/2 1/2]. Background
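The arithmetic above can be checked with a few lines of numpy (a sketch using the 2 × 2 matrix reconstructed in this example):

    import numpy as np

    S = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    U = np.array([[1.0, 1.0],
                  [-1.0, 1.0]])     # eigenvectors (1, -1) and (1, 1) as columns
    Lam = np.diag([1.0, 3.0])       # corresponding eigenvalues

    print(np.linalg.inv(U))                             # [[ 0.5 -0.5] [ 0.5  0.5]]
    print(np.allclose(S, U @ Lam @ np.linalg.inv(U)))   # True: S = U Lambda U^-1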

Matrix Factorization and Latent Semantic Indexing 13 Example continued Let’s divide U (and multiply U⁻¹) by √2. Then S = QΛQᵀ, where Q = [1/√2 1/√2; −1/√2 1/√2] is orthogonal (Q⁻¹ = Qᵀ).  Why? Stay tuned … Background

Matrix Factorization and Latent Semantic Indexing 14 Symmetric Eigen Decomposition  If S is a symmetric matrix:  Theorem: there exists a (unique) eigen decomposition S = QΛQᵀ, where Q is orthogonal: Q⁻¹ = Qᵀ.  Columns of Q are normalized eigenvectors.  Columns are orthogonal.  (Everything is real.) Background
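A short numerical check of this theorem (the symmetric matrix below is arbitrary; numpy's eigh routine returns orthonormal eigenvectors for symmetric input):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4))
    S = (B + B.T) / 2                                 # an arbitrary real symmetric matrix

    lam, Q = np.linalg.eigh(S)                        # eigenvalues and orthonormal eigenvectors
    print(np.allclose(Q.T @ Q, np.eye(4)))            # True: Q^-1 = Q^T
    print(np.allclose(S, Q @ np.diag(lam) @ Q.T))     # True: S = Q Lambda Q^T
    print(np.isrealobj(lam), np.isrealobj(Q))         # True True: everything is real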

Matrix Factorization and Latent Semantic Indexing 15 Time out!  I came to this class to learn about web search and mining, not have my linear algebra past dredged up again …  But if you want to dredge, Strang’s Applied Mathematics is a good place to start.  What do these matrices have to do with text?  Recall M × N term-document matrices …  But everything so far needs square matrices – so …

Matrix Factorization and Latent Semantic Indexing 16 Singular Value Decomposition For an M × N matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) as follows: A = UΣVᵀ, where U is M × M, Σ is M × N, and V is N × N. The columns of U are orthogonal eigenvectors of AAᵀ. The columns of V are orthogonal eigenvectors of AᵀA. The eigenvalues λ1 … λr of AAᵀ are also the eigenvalues of AᵀA, and the singular values are σi = √λi. SVD
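These relationships are easy to verify numerically; the following sketch uses a random matrix (any M × N matrix would do):

    import numpy as np

    A = np.random.default_rng(0).standard_normal((5, 3))   # an arbitrary M x N matrix
    U, s, Vt = np.linalg.svd(A)

    # The nonzero eigenvalues of A A^T and of A^T A coincide and equal the squared singular values.
    eig_AAT = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]
    eig_ATA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
    print(np.allclose(eig_AAT[:3], s**2))                # True
    print(np.allclose(eig_ATA, s**2))                    # True
    print(np.allclose(A, U[:, :3] @ np.diag(s) @ Vt))    # True: A is recovered from its SVD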

Matrix Factorization and Latent Semantic Indexing 17 Singular Value Decomposition  Illustration of SVD dimensions and sparseness SVD

Matrix Factorization and Latent Semantic Indexing 18 SVD example Let A be a 3 × 2 example matrix; thus M = 3, N = 2. Its SVD is A = UΣVᵀ with at most two non-zero singular values. Typically, the singular values are arranged in decreasing order. SVD
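A concrete worked instance (the slide's original example matrix is not preserved in this transcript, so the 3 × 2 matrix below is purely illustrative):

    import numpy as np

    A = np.array([[1.0, -1.0],
                  [0.0,  1.0],
                  [1.0,  0.0]])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    print(s)                                    # [1.732... 1.]: singular values sqrt(3) and 1, in decreasing order
    print(np.allclose(A, U @ np.diag(s) @ Vt))  # True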

Matrix Factorization and Latent Semantic Indexing 19 Low-rank Approximation  SVD can be used to compute optimal low-rank approximations.  Approximation problem: find the matrix A_k of rank k that minimizes the Frobenius norm of the error, ||A − X||_F, over all rank-k matrices X (A_k and X are both M × N matrices). Typically, we want k << r. Low-Rank Approximation

Matrix Factorization and Latent Semantic Indexing 20 Low-rank Approximation  Solution via SVD: set the smallest r-k singular values to zero, A_k = U diag(σ1, …, σk, 0, …, 0) Vᵀ.  In column notation this is a sum of k rank-1 matrices: A_k = σ1 u1 v1ᵀ + … + σk uk vkᵀ. Low-Rank Approximation

Matrix Factorization and Latent Semantic Indexing 21 Reduced SVD  If we retain only k singular values, and set the rest to 0, then we don’t need the matrix parts shown in brown in the illustration.  Then Σ is k×k, U is M×k, Vᵀ is k×N, and A_k is M×N.  This is referred to as the reduced SVD.  It is the convenient (space-saving) and usual form for computational applications.  It’s what Matlab gives you. Low-Rank Approximation
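In numpy the same reduced form can be obtained by slicing a truncated SVD (a sketch with a random matrix):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 4))                    # M x N
    k = 2

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]
    A_k = U_k @ S_k @ Vt_k                             # rank-k approximation, still M x N

    print(U_k.shape, S_k.shape, Vt_k.shape, A_k.shape) # (6, 2) (2, 2) (2, 4) (6, 4)
    print(np.linalg.matrix_rank(A_k))                  # 2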

Matrix Factorization and Latent Semantic Indexing 22 Approximation error  How good (bad) is this approximation?  It’s the best possible, measured by the Frobenius norm of the error: the minimum over rank-k matrices X of ||A − X||_F equals ||A − A_k||_F = sqrt(σ_{k+1}² + … + σ_r²), where the σ_i are ordered such that σ_i ≥ σ_{i+1}.  This suggests why the Frobenius error drops as k is increased. Low-Rank Approximation
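The error formula can be checked directly (again with a random matrix, since no data accompanies the slide):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((8, 5))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    for k in range(1, 5):
        A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
        err = np.linalg.norm(A - A_k, 'fro')
        # The Frobenius error equals sqrt(sigma_{k+1}^2 + ... + sigma_r^2) and shrinks as k grows.
        print(k, round(err, 4), np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))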

Matrix Factorization and Latent Semantic Indexing 23 SVD Low-rank approximation  Whereas the term-doc matrix A may have M = 50,000, N = 10 million (and rank close to 50,000),  we can construct an approximation A_100 with rank 100.  Of all rank-100 matrices, it would have the lowest Frobenius error.  Great … but why would we??  Answer: Latent Semantic Indexing. C. Eckart, G. Young, The approximation of a matrix by another of lower rank. Psychometrika, 1:211–218, 1936. Low-Rank Approximation

Matrix Factorization and Latent Semantic Indexing 24 Latent Semantic Indexing via the SVD LSI

Matrix Factorization and Latent Semantic Indexing 25 What it is  From the term-doc matrix A, we compute the approximation A_k.  There is a row for each term and a column for each doc in A_k.  Thus docs live in a space of k << r dimensions.  These dimensions are not the original axes.  But why? LSI

Matrix Factorization and Latent Semantic Indexing 26 Vector Space Model: Pros  Automatic selection of index terms  Partial matching of queries and documents (dealing with the case where no document contains all search terms)  Ranking according to similarity score (dealing with large result sets)  Term weighting schemes (improves retrieval performance)  Various extensions  Document clustering  Relevance feedback (modifying query vector)  Geometric foundation LSI

Matrix Factorization and Latent Semantic Indexing 27 Problems with Lexical Semantics  Ambiguity and association in natural language  Polysemy: Words often have a multitude of meanings and different types of usage (more severe in very heterogeneous collections).  The vector space model is unable to discriminate between different meanings of the same word. LSI

Matrix Factorization and Latent Semantic Indexing 28 Problems with Lexical Semantics  Synonymy: Different terms may have an identical or a similar meaning (weaker: words indicating the same topic).  No associations between words are made in the vector space representation. LSI

Matrix Factorization and Latent Semantic Indexing 29 Polysemy and Context  Document similarity on the single-word level is confounded by polysemy and context.  (Diagram: two document snippets, “… saturn …” and “… planet …”; context words ring, jupiter, space, voyager indicate meaning 1, while car, company, dodge, ford indicate meaning 2. The shared word contributes to similarity if it is used in the 1st meaning, but not if it is used in the 2nd.) LSI

Matrix Factorization and Latent Semantic Indexing 30 Latent Semantic Indexing (LSI)  Perform a low-rank approximation of the document-term matrix (typical rank: a few hundred).  General idea  Map documents (and terms) to a low-dimensional representation.  Design a mapping such that the low-dimensional space reflects semantic associations (latent semantic space).  Compute document similarity based on the inner product in this latent semantic space. LSI

Matrix Factorization and Latent Semantic Indexing 31 Goals of LSI  Similar terms map to similar locations in the low-dimensional space.  Noise reduction by dimension reduction. LSI

Matrix Factorization and Latent Semantic Indexing 32 Latent Semantic Analysis  Latent semantic space: illustrating example courtesy of Susan Dumais LSI

Matrix Factorization and Latent Semantic Indexing 33 Performing the maps  Each row and column of A gets mapped into the k-dimensional LSI space by the SVD.  Claim – this is not only the mapping with the best (Frobenius error) approximation to A, but it in fact improves retrieval.  A query q is also mapped into this space, by q_k = Σ_k⁻¹ U_kᵀ q.  Note that the mapped query is NOT a sparse vector. LSI
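A minimal end-to-end sketch of these maps (the toy term-document matrix and vocabulary below are invented for illustration; a real system would use tf-idf weights and far larger dimensions):

    import numpy as np

    # Toy term-document matrix: rows = terms, columns = documents.
    A = np.array([[1.0, 1.0, 0.0, 0.0],   # car
                  [1.0, 0.0, 1.0, 0.0],   # automobile
                  [0.0, 0.0, 1.0, 1.0],   # jupiter
                  [0.0, 0.0, 0.0, 1.0]])  # ring

    k = 2
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

    docs_k = Vt_k                            # doc j in LSI space: Sigma_k^-1 U_k^T a_j = column j of V_k^T
    q = np.array([1.0, 0.0, 0.0, 0.0])       # sparse query containing only "car"
    q_k = np.linalg.inv(S_k) @ U_k.T @ q     # mapped query: a dense k-dimensional vector

    # Rank documents by cosine similarity in the latent space.
    sims = (docs_k.T @ q_k) / (np.linalg.norm(docs_k, axis=0) * np.linalg.norm(q_k))
    print(np.round(sims, 3))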

Matrix Factorization and Latent Semantic Indexing 34 Empirical evidence  Experiments on TREC 1/2/3 – Dumais  Lanczos SVD code (available on netlib) due to Berry was used in these experiments.  Running times of ~ one day on tens of thousands of docs [still an obstacle to use].  Dimensions – various values reported; reducing k improves recall.  (Values under 200 were reported as unsatisfactory.)  Generally we expect recall to improve – what about precision? LSI

Matrix Factorization and Latent Semantic Indexing 35 Empirical evidence  Precision at or above median TREC precision  Top scorer on almost 20% of TREC topics  Slightly better on average than straight vector spaces  Effect of dimensionality: (table of Dimensions vs. Precision) LSI

Matrix Factorization and Latent Semantic Indexing 36 But why is this clustering?  We’ve talked about docs, queries, retrieval and precision here.  What does this have to do with clustering?  Intuition: Dimension reduction through LSI brings together “related” axes in the vector space. LSI

Matrix Factorization and Latent Semantic Indexing 37 Intuition from block matrices (Diagram: an M-terms × N-documents matrix whose diagonal consists of Block 1, Block 2, …, Block k of homogeneous non-zero entries, with 0’s everywhere else.) What’s the rank of this matrix? LSI

Matrix Factorization and Latent Semantic Indexing 38 Intuition from block matrices (Diagram: the same block-diagonal M terms × N documents matrix.) The vocabulary is partitioned into k topics (clusters); each doc discusses only one topic. LSI

Matrix Factorization and Latent Semantic Indexing 39 Intuition from block matrices (Diagram: the same M terms × N documents layout, but now the diagonal blocks Block 1 … Block k contain arbitrary non-zero entries, with 0’s elsewhere.) What’s the best rank-k approximation to this matrix? LSI

Matrix Factorization and Latent Semantic Indexing 40 Intuition from block matrices (Diagram: the block structure is now only approximate; a few nonzero entries appear outside the blocks, e.g. terms such as wiper, tire, V6 co-occurring with car and automobile across blocks.) Likely there’s a good rank-k approximation to this matrix. LSI

Matrix Factorization and Latent Semantic Indexing 41 Simplistic picture (Diagram: three latent axes labelled Topic 1, Topic 2, Topic 3.) LSI

Matrix Factorization and Latent Semantic Indexing 42 Some wild extrapolation  The “dimensionality” of a corpus is the number of distinct topics represented in it.  More mathematical wild extrapolation:  if A has a rank k approximation of low Frobenius error, then there are no more than k distinct topics in the corpus. LSI

Matrix Factorization and Latent Semantic Indexing 43 LSI has many other applications  In many settings in pattern recognition and retrieval, we have a feature-object matrix.  For text, the terms are features and the docs are objects.  Could be opinions and users …  This matrix may be redundant in dimensionality.  Can work with low-rank approximation.  If entries are missing (e.g., users’ opinions), can recover if dimensionality is low.  Powerful general analytical technique  Close, principled analog to clustering methods. LSI
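For the missing-entries case, one simple illustrative approach (not from the lecture) is to alternate between filling in the missing cells and recomputing a low-rank approximation:

    import numpy as np

    # Hypothetical user x item rating matrix; np.nan marks missing opinions.
    R = np.array([[5.0, 4.0, np.nan, 1.0],
                  [4.0, np.nan, 1.0, 1.0],
                  [1.0, 1.0, 5.0, np.nan],
                  [np.nan, 1.0, 4.0, 5.0]])

    mask = ~np.isnan(R)
    X = np.where(mask, R, np.nanmean(R))       # start by filling missing cells with the global mean

    k = 2
    for _ in range(50):                        # iterative low-rank imputation
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
        X = np.where(mask, R, X_k)             # keep observed ratings, refresh the missing ones

    print(np.round(X_k, 2))                    # low-rank reconstruction, including predictions for the missing cells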

Matrix Factorization and Latent Semantic Indexing Recommender Systems 44

Matrix Factorization and Latent Semantic Indexing Recommender Systems 45

Matrix Factorization and Latent Semantic Indexing 46 Other matrix factorization methods  Non-negative matrix factorization  Daniel D. Lee and H. Sebastian Seung (1999). Learning the parts of objects by non-negative matrix factorization. Nature 401(6755): 788–791.  Joint (collective) matrix factorization  A. P. Singh and G. J. Gordon. Relational Learning via Collective Matrix Factorization. Proc. 14th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining, 2008.  Relation regularized matrix factorization  Wu-Jun Li, Dit-Yan Yeung. Relation Regularized Matrix Factorization. IJCAI 2009.
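As a pointer, non-negative matrix factorization is available in scikit-learn; a small illustrative sketch (the matrix and parameter values are made up):

    import numpy as np
    from sklearn.decomposition import NMF

    # A small non-negative term-document matrix (hypothetical counts).
    A = np.array([[3, 0, 1, 0],
                  [2, 0, 0, 1],
                  [0, 4, 0, 2],
                  [0, 3, 1, 3]], dtype=float)

    model = NMF(n_components=2, init='nndsvda', random_state=0, max_iter=500)
    W = model.fit_transform(A)     # terms x topics, non-negative
    H = model.components_          # topics x documents, non-negative
    print(np.round(W @ H, 2))      # non-negative rank-2 reconstruction of A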

Matrix Factorization and Latent Semantic Indexing 47 Resources  IIR (Introduction to Information Retrieval), Chapter 18