1
Web Search and Data Mining Lecture 4 Adapted from Manning, Raghavan and Schuetze
2
Recap of the last lecture: MapReduce and distributed indexing; scoring documents: linear combination/zone weighting; tf-idf term weighting and vector spaces; derivation of idf
3
This lecture: vector space models; dimension reduction: random projection; review of linear algebra; latent semantic indexing (LSI)
4
Documents as vectors At the end of Lecture 3 we said: each doc d can now be viewed as a vector of wf-idf values, one component for each term. So we have a vector space: terms are axes, docs live in this space. Dimension is usually very large.
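To make this concrete, here is a minimal sketch (not part of the original slides) of turning a toy corpus into term-weighted document vectors with NumPy; the three-document corpus and the plain tf × idf weighting with log base 10 are illustrative assumptions.

```python
import numpy as np

docs = ["car insurance auto insurance",
        "best car auto repair",
        "insurance claims process"]

# Vocabulary: one axis per term.
terms = sorted({t for d in docs for t in d.split()})
tf = np.array([[d.split().count(t) for t in terms] for d in docs], dtype=float)

# idf_t = log10(N / df_t), as derived in the previous lecture.
N = len(docs)
df = (tf > 0).sum(axis=0)
idf = np.log10(N / df)

tfidf = tf * idf          # each row is one document vector; dimension = |vocabulary|
print(terms)
print(tfidf)
```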
5
Why turn docs into vectors? First application: query-by-example. Given a doc d, find others “like” it. Now that d is a vector, find vectors (docs) “near” it. Natural setting for the bag of words model. Dimension reduction.
6
Intuition Postulate: Documents that are “close together” in the vector space talk about the same things. (Figure: documents d1–d5 plotted in the space spanned by terms t1, t2, t3, with angles θ and φ between document vectors.)
7
Measuring Document Similarity Idea: distance between d1 and d2 is the length of the vector |d1 – d2| (Euclidean distance). Why is this not a great idea? We still haven’t dealt with the issue of length normalization: short documents would be more similar to each other by virtue of length, not topic. However, we can implicitly normalize by looking at angles instead.
8
Cosine similarity Distance between vectors d1 and d2 is captured by the cosine of the angle θ between them. Note: this is similarity, not distance; there is no triangle inequality for similarity. (Figure: d1 and d2 in the space spanned by t1, t2, t3, with angle θ between them.)
9
Cosine similarity A vector can be normalized (given a length of 1) by dividing each of its components by its length; here we use the L2 norm: d → d / |d|_2, where |d|_2 = sqrt(Σ_i d_i²). This maps vectors onto the unit sphere: |d|_2 = 1 for every normalized d. Then longer documents don’t get more weight.
10
Cosine similarity Cosine of the angle between two vectors: cos(θ) = (d1 · d2) / (|d1| |d2|) = Σ_i w_{i,1} w_{i,2} / (sqrt(Σ_i w_{i,1}²) · sqrt(Σ_i w_{i,2}²)). The denominator involves the lengths of the vectors (normalization).
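As a quick illustration of the formula above, a small NumPy sketch; the two vectors are made up for the example.

```python
import numpy as np

def cosine_sim(d1, d2):
    """cos(theta) = (d1 . d2) / (|d1| |d2|), using the L2 norm."""
    return float(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2)))

d1 = np.array([3.0, 0.0, 2.0])
d2 = np.array([1.0, 1.0, 0.0])
print(cosine_sim(d1, d2))             # angle-based similarity in [-1, 1]

# Equivalently: normalize first, then take a plain dot product.
d1_hat = d1 / np.linalg.norm(d1)
d2_hat = d2 / np.linalg.norm(d2)
print(float(np.dot(d1_hat, d2_hat)))  # same value; length no longer matters
```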
11
Queries in the vector space model Central idea: the query as a vector. We regard the query as a short document. We return the documents ranked by the closeness of their vectors to the query, also represented as a vector. Note that the query vector d_q is very sparse!
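A hedged sketch of this retrieval step: normalize the document vectors and a (sparse) query vector, score by cosine, and sort. The 3 × 4 matrix and the query weights are invented for illustration.

```python
import numpy as np

doc_vectors = np.array([[0.0, 2.3, 0.0, 1.2],
                        [1.5, 0.0, 0.4, 0.0],
                        [0.0, 1.1, 2.0, 0.0]])
query = np.array([0.0, 1.0, 0.0, 1.0])   # very sparse: only two query terms

# Normalize rows and the query, then rank by dot product (= cosine).
doc_norm = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)
scores = doc_norm @ q_norm
ranking = np.argsort(-scores)            # best-matching documents first
print(scores, ranking)
```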
12
Dimensionality reduction What if we could take our vectors and “pack” them into fewer dimensions (say 50,000 → 100) while preserving distances? (Well, almost.) Speeds up cosine computations. Many possibilities, including random projection and “latent semantic indexing”.
13
Random projection onto k << m axes Choose a random direction x_1 in the vector space. For i = 2 to k, choose a random direction x_i that is orthogonal to x_1, x_2, … x_{i–1}. Project each document vector into the subspace spanned by {x_1, x_2, …, x_k}.
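A possible NumPy rendering of this procedure (toy sizes; QR factorization of a random Gaussian matrix stands in for the pick-an-orthogonal-random-direction loop).

```python
import numpy as np

m, n, k = 1000, 50, 20                 # terms, docs, target dimensions (toy sizes)
A = np.random.rand(m, n)               # m x n term-document matrix (stand-in data)

G = np.random.randn(m, k)              # k random directions in term space
X, _ = np.linalg.qr(G)                 # orthonormalize: columns x_1..x_k, mutually orthogonal
W = X.T @ A                            # k x n: each doc now lives in k dimensions
print(W.shape)                         # (20, 50)
```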
14
E.g., from 3 to 2 dimensions (Figure: documents d1, d2 in (t1, t2, t3) space and their projections onto the plane spanned by x1, x2.) x_1 is a random direction in (t1, t2, t3) space. x_2 is chosen randomly but orthogonal to x_1: the dot product of x_1 and x_2 is zero.
15
Guarantee With high probability, relative distances are (approximately) preserved by projection.
16
Computing the random projection Projecting n vectors from m dimensions down to k dimensions: Start with the m × n matrix of terms × docs, A. Find a random k × m orthogonal projection matrix R. Compute the matrix product W = R · A. The j-th column of W is the vector corresponding to doc j, but now in k << m dimensions.
17
Cost of computation This takes a total of kmn multiplications. Expensive – see Resources for ways to do essentially the same thing, quicker. Other variations use a sparse random matrix, with the entries of R drawn from {–1, 0, 1} with probabilities {1/6, 2/3, 1/6}. Why?
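A sketch of that sparse variant, assuming NumPy; the sqrt(3/k) scale factor is a common convention for this construction and is an assumption here, not something stated on the slide.

```python
import numpy as np

m, n, k = 1000, 50, 20
A = np.random.rand(m, n)               # m x n term-document matrix (stand-in data)

# Entries of R from {-1, 0, +1} with probabilities {1/6, 2/3, 1/6}:
# two thirds of the multiplications vanish.
R = np.random.choice([-1.0, 0.0, 1.0], size=(k, m), p=[1/6, 2/3, 1/6])
W = np.sqrt(3.0 / k) * (R @ A)         # k x n projected docs
print(W.shape)
```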
18
Latent semantic indexing (LSI) Another technique for dimension reduction. Random projection was data-independent; LSI, on the other hand, is data-dependent: eliminate redundant axes, pull together “related” axes – hopefully car and automobile.
19
Linear Algebra Background
20
Eigenvalues & Eigenvectors Eigenvectors (for a square m × m matrix S): a (right) eigenvector v with eigenvalue λ satisfies S v = λ v. How many eigenvalues are there at most? The equation (S – λI) v = 0 only has a non-zero solution if det(S – λI) = 0; this is an m-th order equation in λ, which can have at most m distinct solutions (roots of the characteristic polynomial) – these can be complex even though S is real.
21
Eigenvalues & Eigenvectors For symmetric matrices, eigenvectors for distinct eigenvalues are orthogonal. All eigenvalues of a real symmetric matrix are real. All eigenvalues of a positive semidefinite matrix are non-negative.
22
Example Let S = [[2, 1], [1, 2]]: real, symmetric. Then det(S – λI) = (2 – λ)² – 1 = 0, so the eigenvalues are 1 and 3 (nonnegative, real). Plug in these values and solve for eigenvectors: the eigenvectors, (1, –1) for λ = 1 and (1, 1) for λ = 3, are orthogonal (and real).
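A quick numerical check of this example with NumPy; S = [[2, 1], [1, 2]] is the matrix reconstructed above, so treat it as illustrative.

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(S)
print(vals)                             # 1 and 3 (order may vary): real, nonnegative
print(vecs)                             # columns are eigenvectors, ~(1, 1) and (1, -1)
print(np.dot(vecs[:, 0], vecs[:, 1]))   # ~0: the eigenvectors are orthogonal
```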
23
Eigen/diagonal Decomposition Let S be a square matrix with m linearly independent eigenvectors (a “non-defective” matrix). Theorem: there exists an eigen decomposition S = U Λ U^–1, where Λ is diagonal (cf. the matrix diagonalization theorem). The columns of U are eigenvectors of S, and the diagonal elements of Λ are the eigenvalues of S. The decomposition is unique for distinct eigenvalues.
24
Diagonal decomposition: why/how Let U have the eigenvectors as columns: U = [v_1 … v_m]. Then SU can be written SU = [S v_1 … S v_m] = [λ_1 v_1 … λ_m v_m] = U Λ. Thus SU = U Λ, or U^–1 S U = Λ, and S = U Λ U^–1.
25
Diagonal decomposition - example Recall S = [[2, 1], [1, 2]], with eigenvalues 1 and 3. The eigenvectors (1, –1) and (1, 1) form U = [[1, 1], [–1, 1]]. Inverting, we have U^–1 = (1/2) [[1, –1], [1, 1]]. Then S = U Λ U^–1 = [[1, 1], [–1, 1]] · diag(1, 3) · (1/2) [[1, –1], [1, 1]]. Recall U U^–1 = I.
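The same decomposition verified numerically, again using the reconstructed example matrix.

```python
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 2.0]])
U = np.array([[1.0, 1.0],
              [-1.0, 1.0]])                 # eigenvector columns for eigenvalues 1 and 3
Lam = np.diag([1.0, 3.0])
U_inv = np.linalg.inv(U)                    # here (1/2) * [[1, -1], [1, 1]]

print(U_inv)
print(U @ Lam @ U_inv)                      # recovers S
print(np.allclose(S, U @ Lam @ U_inv))      # True
```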
26
Example continued Let’s divide U (and multiply U^–1) by sqrt(2). Then S = Q Λ Q^T, where Q^–1 = Q^T. Why? Stay tuned …
27
Symmetric Eigen Decomposition If S is a symmetric matrix: Theorem: there exists a (unique) eigen decomposition S = Q Λ Q^T, where Q is orthogonal: Q^–1 = Q^T. The columns of Q are normalized eigenvectors, and the columns are orthogonal. (Everything is real.)
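For symmetric matrices, NumPy's eigh returns the orthogonal Q directly, which gives a convenient check of the theorem on the running (assumed) example matrix.

```python
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 2.0]])
vals, Q = np.linalg.eigh(S)                       # eigh is for symmetric/Hermitian matrices
print(np.allclose(Q @ np.diag(vals) @ Q.T, S))    # True: S = Q Lambda Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))            # True: Q is orthogonal (Q^-1 = Q^T)
```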
28
Time out! I came to this class to learn about text retrieval and mining, not have my linear algebra past dredged up again … But if you want to dredge, Strang’s Applied Mathematics is a good place to start. What do these matrices have to do with text? Recall m × n term-document matrices … But everything so far needs square matrices – so …
29
Singular Value Decomposition For an m × n matrix A of rank r there exists a factorization (Singular Value Decomposition = SVD) as follows: A = U Σ V^T, where U is m × m, Σ is m × n, and V is n × n. The columns of U are orthogonal eigenvectors of A A^T. The columns of V are orthogonal eigenvectors of A^T A. The eigenvalues λ_1 … λ_r of A A^T are also the eigenvalues of A^T A, and the singular values σ_i = sqrt(λ_i) appear on the diagonal of Σ.
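A small numerical illustration of these definitions, using a random toy matrix rather than anything from the slides.

```python
import numpy as np

m, n = 5, 3
A = np.random.rand(m, n)
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) V^T

print(np.allclose(A, U @ np.diag(s) @ Vt))         # True: the factorization reconstructs A
# Squared singular values equal the eigenvalues of A^T A (and the nonzero ones of A A^T).
print(np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A.T @ A))))
```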
30
Singular Value Decomposition Illustration of SVD dimensions and sparseness
31
SVD example Let A be a 3 × 2 matrix; thus m = 3, n = 2, and its SVD is A = U Σ V^T. Typically, the singular values are arranged in decreasing order.
32
Low-rank Approximation SVD can be used to compute optimal low-rank approximations. Approximation problem: find A_k of rank k such that A_k = argmin_{X: rank(X)=k} |A – X|_F (Frobenius norm), where A_k and X are both m × n matrices. Typically, we want k << r.
33
Low-rank Approximation Solution via SVD: set the smallest r – k singular values to zero, A_k = U diag(σ_1, …, σ_k, 0, …, 0) V^T. In column notation this is a sum of k rank-1 matrices: A_k = Σ_{i=1..k} σ_i u_i v_i^T.
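A compact sketch of the rank-k truncation described above (random toy matrix, arbitrary choice of k).

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # keep k largest singular values

A = np.random.rand(6, 4)
A2 = low_rank_approx(A, 2)
print(np.linalg.matrix_rank(A2))                   # 2
print(np.linalg.norm(A - A2, 'fro'))               # sqrt(sigma_3^2 + sigma_4^2)
```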
34
Approximation error How good (bad) is this approximation? It’s the best possible, measured by the Frobenius norm of the error: min_{X: rank(X)=k} |A – X|_F = |A – A_k|_F = sqrt(σ_{k+1}² + … + σ_r²), where the σ_i are ordered such that σ_i ≥ σ_{i+1}. This suggests why the Frobenius error drops as k is increased.
35
SVD Low-rank approximation Whereas the term-doc matrix A may have m = 50,000, n = 10 million (and rank close to 50,000), we can construct an approximation A_100 with rank 100. Of all rank-100 matrices, it would have the lowest Frobenius error. Great … but why would we?? Answer: Latent Semantic Indexing. C. Eckart, G. Young, The approximation of a matrix by another of lower rank. Psychometrika, 1, 211-218, 1936.
36
Latent Semantic Analysis via SVD
37
What it is From the term-doc matrix A, we compute the approximation A_k. There is a row for each term and a column for each doc in A_k. Thus docs live in a space of k << r dimensions. These dimensions are not the original axes. But why?
38
Vector Space Model: Pros Automatic selection of index terms. Partial matching of queries and documents (dealing with the case where no document contains all search terms). Ranking according to similarity score (dealing with large result sets). Term weighting schemes (improves retrieval performance). Various extensions: document clustering, relevance feedback (modifying the query vector). Geometric foundation.
39
Problems with Lexical Semantics Ambiguity and association in natural language Polysemy: Words often have a multitude of meanings and different types of usage (more severe in very heterogeneous collections). The vector space model is unable to discriminate between different meanings of the same word.
40
Problems with Lexical Semantics Synonymy: Different terms may have identical or a similar meaning (weaker: words indicating the same topic). No associations between words are made in the vector space representation.
41
Latent Semantic Indexing (LSI) Perform a low-rank approximation of the document-term matrix (typical rank 100-300). General idea: map documents (and terms) to a low-dimensional representation; design the mapping such that the low-dimensional space reflects semantic associations (latent semantic space); compute document similarity based on the inner product in this latent semantic space.
42
Goals of LSI Similar terms map to similar location in low dimensional space Noise reduction by dimension reduction
43
Latent Semantic Analysis Latent semantic space: illustrating example courtesy of Susan Dumais
44
Performing the maps Each row and column of A gets mapped into the k-dimensional LSI space, by the SVD. Claim: this is not only the mapping with the best (Frobenius error) approximation to A, but it in fact improves retrieval. A query q is also mapped into this space, by q_k = Σ_k^–1 U_k^T q. Note that the mapped query is NOT a sparse vector.
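Putting the pieces together, a hedged end-to-end LSI sketch: truncate the SVD of a tiny term-document matrix and fold a query in via q_k = Σ_k^–1 U_k^T q. The 4 × 4 incidence matrix and k = 2 are illustrative choices, not the lecture's data.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0, 0.0],    # rows = terms, columns = docs (toy incidence matrix)
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T       # rows of Vk = docs in the latent space

q = np.array([1.0, 0.0, 1.0, 0.0])              # sparse query in term space
qk = np.diag(1.0 / sk) @ Uk.T @ q               # dense k-dimensional query

# Rank docs by cosine similarity in the latent semantic space.
scores = (Vk @ qk) / (np.linalg.norm(Vk, axis=1) * np.linalg.norm(qk))
print(np.argsort(-scores))
```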
45
Empirical evidence Experiments on TREC 1/2/3 – Dumais. Lanczos SVD code (available on netlib), due to Berry, was used in these experiments. Running times of ~ one day on tens of thousands of docs (old data). Dimensions – various values 250-350 reported (under 200 reported unsatisfactory). Generally expect recall to improve – what about precision?
46
Empirical evidence Precision at or above median TREC precision. Top scorer on almost 20% of TREC topics. Slightly better on average than straight vector spaces. Effect of dimensionality:
Dimensions  Precision
250         0.367
300         0.371
346         0.374
47
Some wild extrapolation The “dimensionality” of a corpus is the number of distinct topics represented in it. More mathematical wild extrapolation: if A has a rank k approximation of low Frobenius error, then there are no more than k distinct topics in the corpus. (“Latent semantic indexing: A probabilistic analysis”)
48
LSI has many other applications In many settings in pattern recognition and retrieval, we have a feature-object matrix. For text, the terms are features and the docs are objects. Could be opinions and users … This matrix may be redundant in dimensionality. Can work with low-rank approximation. If entries are missing (e.g., users’ opinions), can recover if dimensionality is low. Powerful general analytical technique Close, principled analog to clustering methods.