Hinrich Schütze and Christina Lioma Lecture 18: Latent Semantic Indexing

Overview: Latent semantic indexing, Dimensionality reduction, LSI in information retrieval

Outline: Latent semantic indexing, Dimensionality reduction, LSI in information retrieval

Recall: Term-document matrix

            Anthony and   Julius    The        Hamlet   Othello   Macbeth
            Cleopatra     Caesar    Tempest
anthony       5.25          3.18      0.00       0.00     0.00      0.35
brutus        1.21          6.10      0.00       1.00     0.00      0.00
caesar        8.59          2.54      0.00       1.51     0.25      0.00
calpurnia     0.00          1.54      0.00       0.00     0.00      0.00
cleopatra     2.85          0.00      0.00       0.00     0.00      0.00
mercy         1.51          0.00      1.90       0.12     5.25      0.88
worser        1.37          0.00      0.11       4.15     0.25      1.95
...

This matrix is the basis for computing the similarity between documents and queries. Today: can we transform this matrix so that we get a better measure of similarity between documents and queries?
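As an illustration of how this matrix is used, here is a minimal sketch (assuming numpy, and taking the weights from the table above) of computing the cosine similarity between two document columns:

```python
import numpy as np

# Weighted term-document matrix from the table above:
# rows = terms (anthony ... worser), columns = the six plays.
C = np.array([
    [5.25, 3.18, 0.00, 0.00, 0.00, 0.35],  # anthony
    [1.21, 6.10, 0.00, 1.00, 0.00, 0.00],  # brutus
    [8.59, 2.54, 0.00, 1.51, 0.25, 0.00],  # caesar
    [0.00, 1.54, 0.00, 0.00, 0.00, 0.00],  # calpurnia
    [2.85, 0.00, 0.00, 0.00, 0.00, 0.00],  # cleopatra
    [1.51, 0.00, 1.90, 0.12, 5.25, 0.88],  # mercy
    [1.37, 0.00, 0.11, 4.15, 0.25, 1.95],  # worser
])

def cosine(a, b):
    """Cosine similarity between two document (column) vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity of "Anthony and Cleopatra" (col 0) and "Julius Caesar" (col 1).
print(cosine(C[:, 0], C[:, 1]))
```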

Latent semantic indexing: Overview. We will decompose the term-document matrix into a product of matrices. The particular decomposition we use is the singular value decomposition (SVD): C = UΣV^T, where C is the term-document matrix. We then use the SVD to compute a new, improved term-document matrix C′, which gives better similarity values than C. Using the SVD for this purpose is called latent semantic indexing, or LSI.

Example of C = UΣV^T: The matrix C. This is a standard term-document matrix; in fact, we use an unweighted matrix here to simplify the example.
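A minimal numpy sketch of the decomposition, assuming the unweighted ship/boat/ocean/wood/tree matrix that this lecture uses as its running example (any small term-document matrix works the same way):

```python
import numpy as np

# Unweighted term-document matrix C (rows = terms, columns = documents d1..d6),
# assumed to be the running example of this lecture.
C = np.array([
    [1, 0, 1, 0, 0, 0],  # ship
    [0, 1, 0, 0, 0, 0],  # boat
    [1, 1, 0, 0, 0, 0],  # ocean
    [1, 0, 0, 1, 1, 0],  # wood
    [0, 0, 0, 1, 0, 1],  # tree
], dtype=float)

# Thin SVD: U is M x min(M,N), s holds the singular values, Vt is min(M,N) x N.
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Multiplying the three factors back together recovers C (up to rounding error).
assert np.allclose(U @ np.diag(s) @ Vt, C)
print("singular values:", np.round(s, 2))
```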

Example of C = UΣV^T: The matrix U. One row per term, one column per semantic dimension; there are min(M, N) such dimensions, where M is the number of terms and N is the number of documents. This is an orthonormal matrix: (i) its column vectors have unit length, and (ii) any two distinct column vectors are orthogonal to each other. (In this example U happens to be square, so the same holds for its rows.) Think of the dimensions as “semantic” dimensions that capture distinct topics such as politics, sports, and economics. Each number u_ij in the matrix indicates how strongly related term i is to the topic represented by semantic dimension j.

Example of C = UΣV^T: The matrix Σ. This is a square, diagonal matrix of dimensionality min(M, N) × min(M, N). The diagonal consists of the singular values of C. The magnitude of a singular value measures the importance of the corresponding semantic dimension. We will make use of this by omitting unimportant dimensions.

Example of C = UΣV^T: The matrix V^T. One column per document, one row per semantic dimension (min(M, N) rows, where M is the number of terms and N is the number of documents). Again, this is an orthonormal matrix: (i) its row vectors have unit length, and (ii) any two distinct row vectors are orthogonal to each other (equivalently, the columns of V are orthonormal). These rows are again the semantic dimensions from the term matrix U that capture distinct topics such as politics, sports, and economics. The entry in row j and column i of V^T indicates how strongly related document i is to the topic represented by semantic dimension j.
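A quick check of these orthonormality properties, continuing the sketch above (assuming numpy and the same example matrix):

```python
import numpy as np

C = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1]], dtype=float)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# The columns of U (semantic dimensions over terms) are orthonormal,
# and so are the rows of Vt (the same dimensions over documents).
assert np.allclose(U.T @ U, np.eye(U.shape[1]))
assert np.allclose(Vt @ Vt.T, np.eye(Vt.shape[0]))
```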

Example of C = UΣV^T: All four matrices.

LSI: Summary. We have decomposed the term-document matrix C into a product of three matrices: the term matrix U, which consists of one (row) vector for each term; the document matrix V^T, which consists of one (column) vector for each document; and the singular value matrix Σ, a diagonal matrix of singular values reflecting the importance of each dimension. Next: why are we doing this?

Outline: Latent semantic indexing, Dimensionality reduction, LSI in information retrieval

How we use the SVD in LSI. Key property: each singular value tells us how important its dimension is. By setting less important dimensions to zero, we keep the important information but get rid of the “details”. These details may be noise, in which case the reduced LSI representation is better because it is less noisy; or they may make things dissimilar that should be similar, in which case the reduced LSI representation is again better because it represents similarity more faithfully. Analogy for “fewer details is better”: an image of a bright red flower and an image of a black-and-white flower. Omitting color makes it easier to see the similarity.

Reducing the dimensionality to 2. In practice, we only zero out singular values in Σ. This has the effect of setting the corresponding dimensions in U and V^T to zero when computing the product C = UΣV^T.
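A sketch of this truncation in numpy (same example matrix as above, keeping k = 2 dimensions):

```python
import numpy as np

C = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1]], dtype=float)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Zero out all but the k = 2 largest singular values
# (np.linalg.svd returns them in descending order).
k = 2
s2 = s.copy()
s2[k:] = 0.0

# The reduced (rank-2) term-document matrix C2 = U Sigma_2 V^T.
C2 = U @ np.diag(s2) @ Vt
print(np.round(C2, 2))
```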

Reducing the dimensionality to 2 (the reduced matrices).

Recall the unreduced decomposition C = UΣV^T.

Original matrix C vs. reduced C2 = UΣ2V^T. We can view C2 as a two-dimensional representation of the matrix: we have performed a dimensionality reduction to two dimensions.

Why the reduced matrix is “better”. Similarity of d2 and d3 in the original space: 0. Similarity of d2 and d3 in the reduced space: 0.52 * 0.28 + 0.36 * 0.16 + 0.72 * 0.36 + 0.12 * 0.20 + (-0.39) * (-0.08) ≈ 0.52.
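The same comparison in code (a sketch assuming numpy and the example matrix above; similarity here is the plain dot product of document columns, as on the slide):

```python
import numpy as np

C = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1]], dtype=float)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
C2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]   # rank-2 approximation

d2, d3 = 1, 2   # column indices of documents d2 and d3
print("original space:", C[:, d2] @ C[:, d3])              # 0.0: no shared terms
print("reduced space: ", round(C2[:, d2] @ C2[:, d3], 2))  # positive, roughly 0.5
```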

Why the reduced matrix is “better”. “boat” and “ship” are semantically similar, and the “reduced” similarity measure reflects this. What property of the SVD reduction is responsible for the improved similarity?

Outline: Latent semantic indexing, Dimensionality reduction, LSI in information retrieval

Why we use LSI in information retrieval. LSI takes documents that are semantically similar (i.e., talk about the same topics) but are not similar in the vector space (because they use different words), and re-represents them in a reduced vector space in which they have higher similarity. Thus, LSI addresses the problems of synonymy and semantic relatedness. In the standard vector space, synonyms contribute nothing to document similarity; the desired effect of LSI is that synonyms contribute strongly to document similarity.

How LSI addresses synonymy and semantic relatedness. The dimensionality reduction forces us to omit a lot of “detail”: we have to map different words (= different dimensions of the full space) to the same dimension in the reduced space. The “cost” of mapping synonyms to the same dimension is much less than the cost of collapsing unrelated words. The SVD selects the “least costly” mapping (see below). Thus, it will map synonyms to the same dimension, but it will avoid doing that for unrelated words.

LSI: Comparison to other approaches. Recap: relevance feedback and query expansion are used to increase recall in information retrieval, for example when query and documents have (in the extreme case) no terms in common. LSI increases recall and hurts precision. Thus, it addresses the same problems as (pseudo) relevance feedback and query expansion, and it has the same problems.

Implementation. Compute the SVD of the term-document matrix. Reduce the space and compute reduced document representations. Map the query into the reduced space: q_2 = Σ_2^(-1) U_2^T q; this follows from C = UΣV^T, which implies Σ^(-1) U^T C = V^T, so a query is mapped in the same way as a document column. Compute the similarity of q_2 with all reduced documents in V_2. Output a ranked list of documents as usual. Exercise: what is the fundamental problem with this approach?
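A sketch of these steps (assuming numpy, the example matrix above, and a hypothetical query containing the terms “ship” and “ocean”):

```python
import numpy as np

C = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1]], dtype=float)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

k = 2
Uk, Sk, Vtk = U[:, :k], np.diag(s[:k]), Vt[:k, :]

# Hypothetical query "ship ocean" as a term vector.
q = np.array([1, 0, 1, 0, 0], dtype=float)

# Map the query into the reduced space: q_k = Sigma_k^-1 U_k^T q.
qk = np.linalg.inv(Sk) @ Uk.T @ q

# Cosine similarity of q_k with the reduced document vectors (columns of V_k^T).
sims = (qk @ Vtk) / (np.linalg.norm(qk) * np.linalg.norm(Vtk, axis=0))
ranking = np.argsort(-sims)
print("ranked documents:", [f"d{i + 1}" for i in ranking])
```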

Optimality. The SVD is optimal in the following sense: keeping the k largest singular values and setting all others to zero gives you the optimal approximation of the original matrix C (the Eckart-Young theorem). Optimal means that no other matrix of the same rank (= with the same underlying dimensionality) approximates C better. The measure of approximation is the Frobenius norm: ‖X‖_F = sqrt(Σ_i Σ_j x_ij^2). So LSI uses the “best possible” matrix. Caveat: there is only a tenuous relationship between the Frobenius norm and cosine similarity between documents.
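A small numerical illustration of this optimality claim (a sketch assuming numpy; the randomly generated rank-k competitors are only a spot check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(7, 6))    # any matrix; stands in for a term-document matrix
U, s, Vt = np.linalg.svd(C, full_matrices=False)

k = 2
Ck = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k truncated SVD
err_svd = np.linalg.norm(C - Ck, "fro")

# The Frobenius error of the truncation equals the norm of the omitted singular values.
assert np.isclose(err_svd, np.sqrt(np.sum(s[k:] ** 2)))

# No randomly chosen rank-k matrix should approximate C better (Eckart-Young).
for _ in range(1000):
    A = rng.normal(size=(7, k)) @ rng.normal(size=(k, 6))   # a random rank-k matrix
    assert np.linalg.norm(C - A, "fro") >= err_svd
print("rank-%d truncated SVD error (Frobenius): %.3f" % (k, err_svd))
```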

Resources: Chapter 18 of IIR; resources at http://ifnlp.org/ir; the original paper on latent semantic indexing by Deerwester et al.; the paper on probabilistic LSI by Thomas Hofmann; Word space: LSI for words.