Text Categorization, Moshe Koppel. Lecture 12: Latent Semantic Indexing. Adapted from slides by Prabhakar Raghavan, Chris Manning, and TK Prasad

Clustering documents (and terms): Latent Semantic Indexing. Term-document matrices are very large, but the number of topics that people talk about is small (in some sense): clothes, movies, politics, … Can we represent the term-document space by a lower-dimensional latent space?

Term-Document Matrix Represent each document as a numerical vector in the usual way. Align the vectors to form a matrix. Note that this is not a square matrix.
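As a concrete (if tiny) sketch of this step, the snippet below builds a term-document count matrix with scikit-learn; the three documents, and the choice of raw counts rather than TF-IDF weights, are illustrative assumptions rather than anything from the lecture.

```python
# Minimal sketch: build a term-document matrix (terms as rows, documents as
# columns). The documents and the use of scikit-learn are assumed for
# illustration only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "ship boat ocean",    # d1 (hypothetical documents)
    "ship ocean",         # d2
    "boat wood tree",     # d3
]

X = CountVectorizer().fit_transform(docs)   # documents x terms (sparse)
A = X.T.toarray()                           # transpose to terms x docs (M x N)

print(A.shape)                              # M x N -- generally not square
print(A)
```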

Term-Document Matrix. In a perfect world, the term-doc matrix might look like this:

Intuition from block matrices. [Figure: an M × N term-document matrix (M terms, N documents) arranged as k homogeneous non-zero blocks, Block 1 … Block k, with 0's elsewhere.] What's the rank of this matrix?

Intuition from block matrices. [Same block-diagonal figure.] Vocabulary partitioned into k topics (clusters); each doc discusses only one topic.

Intuition from block matrices. [Figure: the same matrix, but now with a few nonzero entries outside the blocks; the labeled rows include wiper, tire, V6, car, automobile.] Likely there's a good rank-k approximation to this matrix.
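A quick numerical check of the rank question posed above, under the assumption that each block is homogeneous (all entries equal): k such blocks give a matrix of rank exactly k.

```python
# Sketch: a block-diagonal matrix built from k homogeneous (all-ones) blocks
# has rank k, since each homogeneous block contributes rank 1.
import numpy as np
from scipy.linalg import block_diag

k = 3
blocks = [np.ones((4, 5)) for _ in range(k)]   # each block is rank 1
M = block_diag(*blocks)                        # zeros outside the blocks

print(M.shape)                      # (12, 15)
print(np.linalg.matrix_rank(M))     # 3, i.e. k
```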

Dimension Reduction and Synonymy. Dimensionality reduction forces us to omit "details". We have to map different words (= different dimensions of the full space) to the same dimension in the reduced space. The "cost" of mapping synonyms to the same dimension is much less than the cost of collapsing unrelated words. We'll select the "least costly" mapping; thus, we will map synonyms to the same dimension, but will avoid doing that for unrelated words.

Formal Objectives Given a term-doc matrix, M, we want to find a matrix M’ that is “similar” to M but of rank k (where k is much smaller than the rank of M). So we need some formal measure of “similarity” between two matrices. And we need an algorithm for finding the matrix M’. Conveniently, there are some neat linear algebra tricks for this. So, let’s review a bit of linear algebra.

Eigenvalues & Eigenvectors. For a square m × m matrix S, a (right) eigenvector v and its eigenvalue λ satisfy Sv = λv. How many eigenvalues are there at most? The equation (S − λI)v = 0 only has a non-zero solution if det(S − λI) = 0; this is an m-th order equation in λ, which can have at most m distinct solutions (roots of the characteristic polynomial), and they can be complex even though S is real.

Useful Facts about Eigenvalues & Eigenvectors. For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal. All eigenvalues of a real symmetric matrix are real.

Example. Let S be a real, symmetric matrix [not shown in this transcript] whose eigenvalues are 1 and 3 (nonnegative, real). The eigenvectors are orthogonal (and real); plug the eigenvalues into (S − λI)v = 0 and solve for the eigenvectors.
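Since the concrete matrix is not reproduced in the transcript, the sketch below uses an assumed symmetric matrix S = [[2, 1], [1, 2]], which does have eigenvalues 1 and 3, to check the claims numerically.

```python
# Sketch: eigenvalues and eigenvectors of an assumed real symmetric matrix.
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # assumed example; eigenvalues are 1 and 3

vals, vecs = np.linalg.eigh(S)      # eigh is the routine for symmetric matrices
print(vals)                         # [1. 3.] -- real (and here nonnegative)
print(vecs.T @ vecs)                # ~identity: the eigenvectors are orthonormal
```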

Eigen/Diagonal Decomposition. Let S be a square matrix with m linearly independent eigenvectors (a "non-defective" matrix). Theorem (cf. the matrix diagonalization theorem): there exists an eigen decomposition S = UΛU⁻¹, where Λ is diagonal. The columns of U are the eigenvectors of S, and the diagonal elements of Λ are the eigenvalues of S. The decomposition is unique for distinct eigenvalues.

Diagonal decomposition: why/how. Let U have the eigenvectors as columns: U = [v_1 … v_m]. Then SU can be written SU = S[v_1 … v_m] = [λ_1 v_1 … λ_m v_m] = UΛ. Thus SU = UΛ, i.e. U⁻¹SU = Λ, and S = UΛU⁻¹.

Key Point So Far. We can decompose a square matrix into a product of matrices, one of which is a diagonal matrix of eigenvalues. But we'd like to say more: when the square matrix is also symmetric, we have a better theorem. Note that even that isn't our ultimate destination, since the term-doc matrices we deal with aren't even square matrices. One step at a time…

Symmetric Eigen Decomposition. If S is a symmetric matrix, Theorem: there exists a (unique) eigen decomposition S = QΛQᵀ, where Q⁻¹ = Qᵀ. The columns of Q are normalized eigenvectors, the columns are orthogonal, and everything is real.
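Continuing with the same assumed matrix, here is a short check of the symmetric eigen decomposition S = QΛQᵀ with Q⁻¹ = Qᵀ.

```python
# Sketch: verify S = Q Lambda Q^T and Q^-1 = Q^T for a symmetric S.
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])                     # same assumed symmetric example

lam, Q = np.linalg.eigh(S)
print(np.allclose(Q @ np.diag(lam) @ Q.T, S))  # True: S = Q Lambda Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))         # True: Q is orthogonal
```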

Now… Let’s find some analogous theorem for non-square matrices.

Singular Value Decomposition MMMMMNMN V is N  N For an M  N matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) as follows: The columns of U are orthogonal eigenvectors of AA T. The columns of V are orthogonal eigenvectors of A T A. Singular values. Eigenvalues 1 … r of AA T are the eigenvalues of A T A. Prasad

Eigen Decomposition and SVD. Note that AAᵀ and AᵀA are symmetric square matrices: AAᵀ = UΣVᵀVΣUᵀ = UΣ²Uᵀ. That's just the usual eigen decomposition for a symmetric square matrix. AAᵀ and AᵀA have special relevance for us: entry (i, j) represents the dot-product similarity of row (column) i with row (column) j. (For docs, it's the number of common terms; for terms, the number of common docs.)
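A small numeric check of that connection, reusing the arbitrary matrix from the previous sketch: the eigenvalues of AAᵀ are exactly the squared singular values of A.

```python
# Sketch: eigenvalues of A A^T equal the squared singular values of A.
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 0.0]])

s = np.linalg.svd(A, compute_uv=False)          # singular values, descending
eig_AAt = np.linalg.eigvalsh(A @ A.T)[::-1]     # eigenvalues of A A^T, descending

print(np.allclose(s ** 2, eig_AAt))             # True
```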

Singular Value Decomposition. [Figure: illustration of SVD dimensions and sparseness.]

Example of A = UΣVᵀ: the matrix A. This is a standard term-document matrix. Actually, we use a non-weighted matrix here to simplify the example.

Example of A = UΣVᵀ: the matrix U. One row per term, one column per min(M, N), where M is the number of terms and N is the number of documents. This is an orthonormal matrix: (i) row vectors have unit length; (ii) any two distinct row vectors are orthogonal to each other. Think of the dimensions (columns) as "semantic" dimensions that capture distinct topics like politics, sports, economics. Each number u_ij in the matrix indicates how strongly related term i is to the topic represented by semantic dimension j.

Example of A = UΣVᵀ: the matrix Σ. This is a square, diagonal matrix of dimensionality min(M, N) × min(M, N). The diagonal consists of the singular values of A. The magnitude of the singular value measures the importance of the corresponding semantic dimension. We'll make use of this by omitting unimportant dimensions.

Example of A = UΣVᵀ: the matrix Vᵀ. One column per document, one row per min(M, N), where M is the number of terms and N is the number of documents. Again, this is an orthonormal matrix: (i) column vectors have unit length; (ii) any two distinct column vectors are orthogonal to each other. These are again the semantic dimensions from the term matrix U that capture distinct topics like politics, sports, economics. Each number v_ij in the matrix indicates how strongly related document i is to the topic represented by semantic dimension j.

Example of A = UΣVᵀ: all four matrices. [Figure: A, U, Σ, and Vᵀ shown side by side.]

LSI: Summary. We've decomposed the term-document matrix A into a product of three matrices: the term matrix U, consisting of one (row) vector for each term; the document matrix Vᵀ, consisting of one (column) vector for each document; and the singular value matrix Σ, a diagonal matrix of singular values, reflecting the importance of each dimension.

Low-rank Approximation. SVD can be used to compute optimal low-rank approximations. Approximation problem: find A_k of rank k that minimizes the Frobenius norm ‖A − X‖_F over all rank-k matrices X (A_k and X are both M × N matrices). Typically, we want k << r.

Low-rank Approximation: solution via SVD. Set the smallest r − k singular values to zero: A_k = U diag(σ_1, …, σ_k, 0, …, 0) Vᵀ.

Reduced SVD. If we retain only k singular values and set the rest to 0, then we don't need the corresponding parts of U and Vᵀ (shown in red on the original slide's figure). Then Σ is k × k, U is M × k, Vᵀ is k × N, and A_k is M × N. This is referred to as the reduced SVD. It is the convenient (space-saving) and usual form for computational applications.
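A sketch of the reduced SVD and the rank-k truncation just described; the matrix and the choice k = 2 are illustrative.

```python
# Sketch: reduced SVD and the rank-k approximation A_k = U_k Sigma_k V_k^T.
import numpy as np

A = np.random.default_rng(0).random((6, 5))       # arbitrary M x N example
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # the reduced SVD
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]       # keep the k largest singular values
A_k = U_k @ np.diag(s_k) @ Vt_k                   # rank-k approximation, still M x N

print(U_k.shape, s_k.shape, Vt_k.shape)           # (6, 2) (2,) (2, 5)
print(np.linalg.matrix_rank(A_k))                 # 2
```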

Approximation error. How good (bad) is this approximation? It's the best possible, measured by the Frobenius norm of the error: min over rank-k X of ‖A − X‖_F = ‖A − A_k‖_F = sqrt(σ_{k+1}² + … + σ_r²), where the σ_i are ordered such that σ_i ≥ σ_{i+1}. This suggests why the Frobenius error drops as k increases.
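A quick check of that error formula on the same random example: the Frobenius error of the rank-k truncation equals the square root of the sum of the squared discarded singular values.

```python
# Sketch: ||A - A_k||_F == sqrt(sigma_{k+1}^2 + ... + sigma_r^2).
import numpy as np

A = np.random.default_rng(0).random((6, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

err = np.linalg.norm(A - A_k, "fro")
print(np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))   # True
```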

SVD low-rank approximation of term-doc matrices. Whereas the term-doc matrix A may have M = 50,000 and N = 10 million (and rank close to 50,000), we can construct, for example, an approximation A_100 with rank 100. Of all rank-100 matrices, it would have the lowest Frobenius error. We can think of it as clustering our docs (or our terms) into 100 clusters. The low-dimensional space reflects semantic associations (latent semantic space): similar terms map to similar locations in the low-dimensional space.

Latent Semantic Indexing (LSI). Perform a low-rank approximation of the document-term matrix (typical rank 100–300). General idea: map documents (and terms) to a low-dimensional representation. The low-dimensional space reflects semantic associations (latent semantic space); similar terms map to similar locations in the low-dimensional space.
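In practice the low-rank mapping is usually computed with a truncated SVD routine rather than a full SVD. Below is a minimal sketch using scikit-learn's TruncatedSVD; the documents, the TF-IDF weighting, and n_components = 2 are illustrative choices, not part of the lecture.

```python
# Sketch: LSI-style mapping of documents into a low-dimensional latent space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [                                     # hypothetical mini-corpus
    "the ship sailed the ocean",
    "a boat on the ocean",
    "trees and wood in the forest",
    "wood from old trees",
]

X = TfidfVectorizer().fit_transform(docs)    # documents x terms
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = lsa.fit_transform(X)           # each document as a 2-d latent vector

print(doc_vectors.shape)                     # (4, 2)
```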

Some wild extrapolation. The "dimensionality" of a corpus is the number of distinct topics represented in it. More mathematical wild extrapolation: if A has a rank-k approximation of low Frobenius error, then there are no more than k distinct topics in the corpus.

Recall the unreduced decomposition A = UΣVᵀ.

Reducing the dimensionality to 2. [Figure: the same decomposition with all but the two largest singular values in Σ set to zero.]

Reducing the dimensionality to 2. Actually, we only zero out singular values in Σ. This has the effect of setting the corresponding dimensions in U and Vᵀ to zero when computing the product A = UΣVᵀ.

Original matrix A vs. reduced A_2 = UΣ_2Vᵀ. We can view A_2 as a two-dimensional representation of the matrix. We have performed a dimensionality reduction to two dimensions.

Why is the reduced matrix "better"? Similarity of d2 and d3 in the original space: 0. Similarity of d2 and d3 in the reduced space (the dot product of their columns in the reduced matrix): ≈ 0.52.

Why the reduced matrix is "better": "boat" and "ship" are semantically similar. The "reduced" similarity measure reflects this.
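The matrix behind the slide's numbers is not reproduced in this transcript, so the sketch below makes the same point with an assumed toy term-document matrix: two documents that share no terms (cosine similarity 0 in the original space) end up close together after a rank-2 reduction, because both co-occur with a document that uses related vocabulary.

```python
# Sketch: document similarity before and after rank-2 reduction, on an assumed
# toy term-document matrix (terms: ship, boat, ocean, wood, tree; docs d1..d5).
import numpy as np

A = np.array([          # rows = terms, columns = documents d1..d5
    [1, 1, 0, 0, 0],    # ship
    [1, 0, 1, 0, 0],    # boat
    [1, 0, 0, 0, 0],    # ocean
    [0, 0, 0, 1, 1],    # wood
    [0, 0, 0, 1, 1],    # tree
], dtype=float)

def cos(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

print(cos(A[:, 1], A[:, 2]))                 # 0.0 -- d2 and d3 share no terms

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_reduced = np.diag(s[:k]) @ Vt[:k, :]    # each column: a document in latent space
print(cos(docs_reduced[:, 1], docs_reduced[:, 2]))   # close to 1 for this toy matrix
```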

Toy Illustration. [Figure: latent semantic space; illustrating example courtesy of Susan Dumais.]

LSI has many applications. The general idea is quite standard linear algebra. Its original application in computational linguistics was information retrieval (Deerwester, Dumais, et al.). In IR it overcomes two problems: polysemy and synonymy. In fact, it is rarely used in IR because most IR problems involve huge corpora, and SVD algorithms aren't efficient enough for use on such large corpora.

Extensions. Subsequent work (Hofmann) extended LSI to probabilistic LSI. That was further extended (Blei, Ng & Jordan) to Latent Dirichlet Allocation (LDA).