1  I256: Applied Natural Language Processing. Marti Hearst, Nov 6, 2006

2  Today: Text Clustering; Latent Semantic Analysis (LSA)

3  Text Clustering
- Finds overall similarities among groups of documents
- Finds overall similarities among groups of tokens
- Picks out some themes, ignores others

4  Text Clustering: Clustering is "the art of finding groups in data." -- Kaufman and Rousseeuw. (Scatter plot of documents along two axes, Term 1 and Term 2.)

5  Text Clustering: the same Kaufman and Rousseeuw quote and Term 1 / Term 2 scatter plot, repeated from the previous slide.

6  Clustering Applications (slide by Vasileios Hatzivassiloglou)
- Find semantically related words by combining similarity evidence from multiple indicators
- Try to find overall trends or patterns in text collections

7  "Training" in Clustering (slide by Vasileios Hatzivassiloglou)
- Clustering is an unsupervised learning method: for each data set, a totally fresh solution is constructed, so there is no training
- However, we often use some data for which we have additional information on how it should be partitioned, in order to evaluate the performance of the clustering method

8  Pair-wise Document Similarity. (Table of term counts for documents A-D over the terms nova, galaxy, heat, h'wood, film, role, diet, fur.) How do we compute document similarity?

9  Pair-wise Document Similarity (no normalization, for simplicity): sum the products of the counts of the terms two documents share, i.e., the dot product of their term-count vectors. (Same term-count table as the previous slide.)

10  Pair-wise Document Similarity (cosine normalization): divide the dot product by the product of the two vectors' lengths, so that long documents are not unfairly favored. A sketch of both measures follows below.
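Below is a minimal sketch of the two similarity measures on term-count vectors. The eight terms follow the slide's table, but the counts for the two example documents are invented, since the original numbers are not recoverable from the transcript.

```python
# Unnormalized (dot-product) vs. cosine-normalized document similarity.
import numpy as np

terms = ["nova", "galaxy", "heat", "h'wood", "film", "role", "diet", "fur"]
doc_a = np.array([1, 3, 1, 0, 0, 0, 0, 0], dtype=float)   # hypothetical counts
doc_b = np.array([0, 5, 0, 2, 2, 1, 0, 0], dtype=float)   # hypothetical counts

def dot_similarity(u, v):
    """Sum of products of shared term counts (no normalization)."""
    return float(np.dot(u, v))

def cosine_similarity(u, v):
    """Dot product divided by the product of the vector lengths."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(dot_similarity(doc_a, doc_b), cosine_similarity(doc_a, doc_b))
```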

11  Document/Document Matrix: the matrix of pair-wise similarities between every pair of documents.

12  Hierarchical clustering methods (slide by Vasileios Hatzivassiloglou)
- Agglomerative or bottom-up: start with each sample in its own cluster; merge the two closest clusters; repeat until one cluster is left
- Divisive or top-down: start with all elements in one cluster; partition one of the current clusters in two; repeat until all samples are in singleton clusters

13-15  Agglomerative Clustering. (Three animation steps over items A-I, showing successive merges of the closest clusters into a dendrogram.)

16  Merging Nodes (slide by Vasileios Hatzivassiloglou)
- Each node is a combination of the documents combined below it
- We represent the merged node as a vector of term weights
- This vector is referred to as the cluster centroid

17  Merging criteria (slide by Vasileios Hatzivassiloglou)
- We need to extend the distance measure from samples to sets of samples
- Three common choices: the complete linkage method, the single linkage method, and the average linkage method (a sketch using all three follows below)
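A small sketch of agglomerative clustering with the three linkage criteria, using SciPy. The ten random "documents" and the choice of cutting the tree into three clusters are placeholders for illustration.

```python
# Bottom-up clustering with single, complete, and average linkage (SciPy).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

docs = np.random.rand(10, 8)   # 10 hypothetical documents as term-weight vectors

for method in ("single", "complete", "average"):
    Z = linkage(docs, method=method, metric="cosine")    # full merge history
    labels = fcluster(Z, t=3, criterion="maxclust")      # cut the tree into 3 clusters
    print(method, labels)
```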

18  Single-link merging criteria
- Merge the closest pair of clusters
- Single-link: two clusters are close if any of their points are close: dist(A,B) = min dist(a,b) for a ∈ A, b ∈ B
- (Figure: each word type starts as a single-point cluster; the closest pair is merged.)

19  Bottom-Up Clustering - Single-Link: fast, but tends to produce long, stringy, meandering clusters.

20  Bottom-Up Clustering - Complete-Link
- Again, merge the closest pair of clusters
- Complete-link: two clusters are close only if all of their points are close: dist(A,B) = max dist(a,b) for a ∈ A, b ∈ B
- (Figure: the distance between clusters is the distance between their farthest points.)

21  Bottom-Up Clustering - Complete-Link: slow to find the closest pair, since we need quadratically many distances. (Figure: distance between clusters.)

22  K-Means Clustering (a sketch follows below)
1. Decide on a pair-wise similarity measure
2. Find K centers using agglomerative clustering: take a small sample and group bottom-up until K groups are found
3. Assign each document to the nearest center, forming new clusters
4. Repeat step 3 (recomputing the centers) as necessary
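A minimal NumPy sketch of the loop above. For brevity it seeds the K centers randomly rather than with agglomerative clustering on a sample, and it uses Euclidean distance; both are assumptions, not the slide's prescription.

```python
# K-means: assign documents to the nearest center, recompute centers, repeat.
import numpy as np

def kmeans(docs, k, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = docs[rng.choice(len(docs), size=k, replace=False)].copy()
    for _ in range(n_iters):
        # Assign each document to its nearest center.
        dists = np.linalg.norm(docs[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned documents.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = docs[labels == j].mean(axis=0)
    return labels, centers

docs = np.random.rand(12, 8)        # 12 hypothetical documents, 8 term weights
labels, centers = kmeans(docs, k=3)
print(labels)
```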

23  k-Medians (slide by Vasileios Hatzivassiloglou)
- Similar to k-means, but instead of calculating the mean across features, it selects as c_i the sample in cluster C_i that minimizes the total distance to the cluster's other members (the median, or medoid)
- Advantages:
  - Does not require feature vectors; the distance between samples is all that is needed, and it is always available
  - Statistics with medians are more robust than statistics with means

24  Choosing k (slide by Vasileios Hatzivassiloglou)
- In both hierarchical and k-means/medians clustering, we need to be told where to stop, i.e., how many clusters to form
- This is partially alleviated by visual inspection of the hierarchical tree (the dendrogram)
- It would be nice if we could find an optimal k from the data
- We can do this by trying different values of k and seeing which produces the best separation among the resulting clusters (see the sketch below)
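One way to make "best separation" concrete is a cluster-quality score; the sketch below uses the silhouette score from scikit-learn, which is an assumption of mine rather than a measure the slide names.

```python
# Try several values of k and keep the one with the best-separated clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

docs = np.random.rand(40, 8)        # hypothetical documents as term-weight rows

best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(docs)
    score = silhouette_score(docs, labels)   # higher = better separation
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)
```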

25  Scatter/Gather: Clustering a Large Text Collection (Cutting, Pedersen, Tukey & Karger 92, 93; Hearst & Pedersen 95)
- Cluster sets of documents into general "themes", like a table of contents
- Display the contents of the clusters by showing topical terms and typical titles
- The user chooses subsets of the clusters and re-clusters the documents within them
- The resulting new groups have different "themes"

26  S/G Example: query on "star" over encyclopedia text. Resulting clusters (size and theme): 14 sports; 8 symbols; 47 film, tv; 68 film, tv (p); 7 music; 97 astrophysics; 67 astronomy (p); 12 stellar phenomena; 10 flora/fauna; 49 galaxies, stars; 29 constellations; 7 miscellaneous. Clustering and re-clustering is entirely automated.

27-29  (Slides 27-29 contain no transcript text.)

30  Clustering Retrieval Results
- Tends to place similar docs together, so it can be used as a step in relevance ranking
- But not great for showing to users
- Exception: good for showing what to throw out!

31  Another use of clustering
- Use clustering to map the entire huge multidimensional document space into a huge number of small clusters
- "Project" these onto a 2D graphical representation
- Looks neat, but doesn't work well as an information retrieval interface

32  Clustering a Multi-Dimensional Document Space (image from Wise et al. 95)

33  How to evaluate clusters?
- In practice, it's hard to do: different algorithms' results look good and bad in different ways, and it's difficult to distinguish their outcomes
- In theory, define an evaluation function; typically choose something easy to measure (e.g., the sum of the average distance in each class)

34  Two Types of Document Clustering (slide by Inderjit S. Dhillon)
- Clustering is the grouping together of "similar" objects
- Hard Clustering -- each object belongs to a single cluster
- Soft Clustering -- each object is probabilistically assigned to clusters

35  Soft clustering (slide by Vasileios Hatzivassiloglou)
- A variation of many clustering methods
- Instead of assigning each data sample to one and only one cluster, it calculates probabilities of membership for all clusters
- So a sample might belong to cluster A with probability 0.4 and to cluster B with probability 0.6

36  Application: Clustering of adjectives (slide by Vasileios Hatzivassiloglou)
- Cluster adjectives based on the nouns they modify, using multiple syntactic clues for modification
- The similarity measure is Kendall's τ, a robust measure of similarity (see the sketch below)
- Clustering is done via a hill-climbing method that minimizes the combined average dissimilarity
- Predicting the semantic orientation of adjectives, V. Hatzivassiloglou and K. R. McKeown, EACL 1997
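A small sketch of scoring adjective similarity with Kendall's τ over the counts of nouns each adjective modifies, in the spirit of the slide. The adjectives, nouns, and counts are invented for illustration.

```python
# Kendall's tau between adjectives' noun-modification count vectors.
import numpy as np
from scipy.stats import kendalltau

nouns = ["idea", "film", "price", "meal", "road"]
counts = {
    "good":      np.array([12, 8, 1, 9, 2]),    # hypothetical modification counts
    "excellent": np.array([10, 7, 0, 8, 1]),
    "long":      np.array([1, 2, 0, 1, 11]),
}

tau_similar, _ = kendalltau(counts["good"], counts["excellent"])
tau_different, _ = kendalltau(counts["good"], counts["long"])
print(tau_similar, tau_different)   # adjectives with similar patterns score higher
```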

37  Clustering of nouns (slide by Vasileios Hatzivassiloglou)
- Work by Pereira, Tishby, and Lee
- Dissimilarity is KL divergence (see the sketch below)
- Asymmetric relationship: nouns are clustered; the verbs which have the nouns as objects serve as indicators
- Soft, hierarchical clustering
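A tiny sketch of the KL-divergence dissimilarity: compare each noun's distribution over the verbs that take it as an object. The two distributions are invented, and the small smoothing constant is an assumption to avoid division by zero.

```python
# KL divergence between nouns' verb-object distributions.
import numpy as np
from scipy.stats import entropy   # entropy(p, q) computes KL(p || q)

verbs = ["eat", "drink", "buy", "grow"]
p_apple = np.array([0.6, 0.0, 0.2, 0.2]) + 1e-6
p_wine  = np.array([0.0, 0.6, 0.3, 0.1]) + 1e-6
p_apple /= p_apple.sum()
p_wine  /= p_wine.sum()

print(entropy(p_apple, p_wine))   # asymmetric, >= 0; smaller = more similar
```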

38-39  Distributional Clustering of English Words (Pereira, Tishby, and Lee, ACL 93). (Two slides of figures from the paper; no further transcript text.)

40  Latent Semantic Analysis (slide by Kostas Kleisouris)
- A mathematical/statistical technique for extracting and representing the similarity of meaning of words
- Represents word and passage meaning as high-dimensional vectors in the semantic space
- Uses Singular Value Decomposition (SVD) to simulate human learning of word and passage meaning
- Its success depends on: sufficient scale and sampling of the data it is given, and choosing the right number of dimensions to extract

41  LSA Characteristics (slide by Schone, Jurafsky, and Stenchikova)
- Why is reducing dimensionality beneficial? Some words with similar occurrence patterns are projected onto the same dimension
- Closely mimics human judgments of meaning similarity

42  Sample Applications of LSA: Essay Grading (slide by Kostas Kleisouris)
- LSA is trained on a large sample of text from the same domain as the topic of the essay
- Each essay is compared to a large set of essays scored by experts, and LSA identifies a subset of the most similar ones
- The target essay is assigned a score consisting of a weighted combination of the scores of the comparison essays

43  Sample Applications of LSA (slide by Kostas Kleisouris)
- Prediction of differences in the comprehensibility of texts: by using conceptual similarity measures between successive sentences, LSA has predicted comprehension test results with students
- Evaluate and give advice to students as they write and revise summaries of texts they have read
- Assess psychiatric status, by representing the semantic content of answers to psychiatric interview questions

44  Sample Applications of LSA: Improving Information Retrieval (slide by Kostas Kleisouris)
- Use LSA to match users' queries with documents that have the desired conceptual meaning
- Not used in practice: it doesn't help much when you have large corpora to match against, but it may be helpful for a few difficult queries and for term expansion

45  LSA intuitions (slide by Kostas Kleisouris)
- Implements the idea that the meaning of a passage is the sum of the meanings of its words: meaning of word 1 + meaning of word 2 + ... + meaning of word n = meaning of passage (a toy illustration follows below)
- This "bag of words" function says that a passage is treated as an unordered set of word tokens and that the meanings are additive
- By creating an equation of this kind for every passage of language that a learner observes, we get a large system of linear equations
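A toy illustration of the additive assumption: the passage vector is just the sum of its word vectors. The 3-dimensional "meaning" vectors are invented placeholders, not anything LSA would actually learn.

```python
# Bag-of-words additivity: passage meaning = sum of word meanings.
import numpy as np

word_meaning = {
    "stars": np.array([0.9, 0.1, 0.0]),
    "shine": np.array([0.7, 0.2, 0.1]),
    "at":    np.array([0.1, 0.1, 0.1]),
    "night": np.array([0.6, 0.3, 0.1]),
}
passage = "stars shine at night".split()
passage_meaning = sum(word_meaning[w] for w in passage)
print(passage_meaning)
```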

46  LSA Intuitions (slide by Kostas Kleisouris)
- However: there are too few equations to specify the values of the variables, and different values for the same variable (natural, since meanings are vague or multiple)
- Instead of finding absolute values for the meanings, they are represented in a richer form (vectors)
- Use of SVD reduces the linear system into multidimensional vectors

47  Latent Semantic Analysis (slide by Jason Eisner)
- A trick from Information Retrieval
- Each document in the corpus is a length-k vector (or each paragraph, or whatever)
- (Figure: a single document as a count vector (0, 3, 3, 1, 0, 7, ..., 1, 0) indexed by the vocabulary: aardvark, abacus, abandoned, abbot, abduct, above, ..., zygote, zymurgy)

48  Latent Semantic Analysis (slide by Jason Eisner)
- Each document in the corpus is a length-k vector; plot all documents in the corpus
- (Figures: the true plot in k dimensions vs. a reduced-dimensionality plot)

49  Latent Semantic Analysis (slide by Jason Eisner)
- The reduced plot is a perspective drawing of the true plot: it projects the true plot onto a few axes
- A best choice of axes shows the most variation in the data
- Found by linear algebra: "Singular Value Decomposition" (SVD)
- (Figures: the true plot in k dimensions vs. the reduced-dimensionality plot)

50  Latent Semantic Analysis (slide by Jason Eisner)
- The SVD plot allows the best possible reconstruction of the true plot (i.e., it can recover the 3-D coordinates with minimal distortion)
- It ignores variation along the axes it didn't pick out; the hope is that that variation is just noise we want to ignore
- (Figures: the true plot with axes word 1, word 2, word 3; the reduced plot with axes theme A, theme B)

51  Latent Semantic Analysis (slide by Jason Eisner)
- SVD finds a small number of theme vectors and approximates each document as a linear combination of themes
- Coordinates in the reduced plot = the linear coefficients: how much of theme A is in this document? How much of theme B?
- Each theme is a collection of words that tend to appear together
- (Figures: the true plot in k dimensions and the reduced-dimensionality plot, annotated with themes A and B)

52  Latent Semantic Analysis (slide by Jason Eisner)
- Another perspective (similar to neural networks): a bipartite network connecting terms 1-9 to documents 1-7
- The matrix of strengths says how strong each term is in each document; each connection has a weight given by the matrix

53  Latent Semantic Analysis (slide by Jason Eisner): Which documents is term 5 strong in? In the network diagram, docs 2, 5, and 6 light up strongest.

54  Latent Semantic Analysis (slide by Jason Eisner): Which documents are terms 5 and 8 strong in? This answers a query consisting of terms 5 and 8. It is really just matrix multiplication: term vector (query) x strength matrix = doc vector (see the sketch below).
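A minimal sketch of that matrix multiplication. The 9 x 7 strength matrix is random, standing in for the weights in the slide's diagram.

```python
# Query = term indicator vector; scores = query x strength matrix.
import numpy as np

rng = np.random.default_rng(0)
strength = rng.random((9, 7))      # strength[i, j]: how strong term i is in doc j

query = np.zeros(9)
query[[4, 7]] = 1.0                # a query on terms 5 and 8 (0-indexed 4 and 7)

doc_scores = query @ strength      # one score per document
print(doc_scores.argsort()[::-1])  # documents ranked for this query
```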

55  Latent Semantic Analysis (slide by Jason Eisner): Conversely, what terms are strong in document 5? That gives doc 5's coordinates (its column of the strength matrix).

56  Latent Semantic Analysis (slide by Jason Eisner)
- SVD approximates the term-document network by a smaller 3-layer network: terms connect to a few themes, which connect to documents
- This forces the sparse data through a bottleneck, smoothing it
- (Diagrams: the original terms-to-documents network vs. the terms-themes-documents network)

57  Latent Semantic Analysis (slide by Jason Eisner)
- I.e., smooth the sparse data by a matrix approximation: M ≈ A B
- A encodes the "camera angle"; B gives each document's new coordinates
- (Diagram: the terms-by-documents matrix M factored into A and B through the themes layer)

58  How LSA works (slide by Kostas Kleisouris)
- Takes as input a corpus of natural language
- The corpus is parsed into meaningful passages (such as paragraphs)
- A matrix is formed with passages as rows and words as columns; cells contain the number of times that a given word is used in a given passage
- The cell values are transformed into a measure of the information about the passage identity that they carry
- SVD is applied to represent the words and passages as vectors in a high-dimensional semantic space (a pipeline sketch follows below)
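A sketch of that pipeline with scikit-learn. The four toy passages, the tf-idf weighting (one common stand-in for the "information" transform of the raw counts), and the choice of two dimensions are all assumptions for illustration.

```python
# Passages -> weighted passage/word matrix -> truncated SVD -> semantic vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

passages = [
    "the cosmonaut walked on the moon",
    "the astronaut saw the moon from orbit",
    "the truck and the car drove down the road",
    "a car passed the truck",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(passages)      # passages x words matrix

svd = TruncatedSVD(n_components=2, random_state=0)
passage_vectors = svd.fit_transform(X)      # each passage as a 2-d semantic vector
print(passage_vectors)
```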

59  Represent text as a matrix (slide by Schone, Jurafsky, and Stenchikova)

A[i,j] = number of occurrences of word i in document j (words as rows, documents as columns):

            d1  d2  d3  d4  d5  d6
cosmonaut    1   0   1   0   0   0
astronaut    0   1   0   0   0   0
moon         1   1   0   0   0   0
car          1   0   0   1   1   0
truck        0   0   0   1   0   1

60  SVD (slide by Schone, Jurafsky, and Stenchikova)
- A = T S D', where A is n x m, T is n x min(n,m), S is min(n,m) x min(n,m) and diagonal, and D' is min(n,m) x m
- Reduce the dimensionality to k and compute A1 = T1 S1 D1', keeping the first k columns of T, the top k singular values in S, and the first k rows of D'
- A1 is the best least-squares approximation of A by a matrix of rank k (see the sketch below)
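A NumPy sketch of the decomposition and the rank-k reduction, applied to the word-by-document matrix A from the previous slide (k = 2 is chosen here for illustration).

```python
# SVD of A and its best rank-k approximation A1 = T1 S1 D1'.
import numpy as np

A = np.array([               # rows: cosmonaut, astronaut, moon, car, truck
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1],
], dtype=float)

T, s, Dt = np.linalg.svd(A, full_matrices=False)    # A = T @ diag(s) @ Dt

k = 2
T1, S1, D1t = T[:, :k], np.diag(s[:k]), Dt[:k, :]
A1 = T1 @ S1 @ D1t           # best least-squares rank-k approximation of A
print(np.round(A1, 2))
```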

61  Matrix T (slide by Schone, Jurafsky, and Stenchikova)

T is the term matrix; its rows correspond to the rows of the original matrix A. Dim2 directly reflects the different co-occurrence patterns.

            dim1   dim2   dim3   dim4   dim5
cosmonaut  -0.44  -0.30   0.57   0.58   0.25
astronaut  -0.13  -0.33  -0.59   0.00   0.73
moon       -0.48  -0.51  -0.37   0.00  -0.61
car        -0.70   0.35   0.15  -0.58   0.16
truck      -0.26   0.65  -0.41   0.58  -0.09

62  Matrix D' (slide by Schone, Jurafsky, and Stenchikova)

D is the document matrix; the columns of D' (rows of D) correspond to the columns (documents) of the original matrix A. Dim2 directly reflects the different co-occurrence patterns.

              d1     d2     d3     d4     d5     d6
Dimension1  -0.75  -0.28  -0.20  -0.45  -0.33  -0.12
Dimension2  -0.29  -0.53  -0.19   0.63   0.22   0.41
Dimension3   0.28  -0.75   0.45  -0.20   0.12  -0.33
Dimension4  -0.00   0.00   0.58   0.00  -0.58   0.58
Dimension5  -0.53   0.29   0.63   0.19   0.41  -0.22

63  Reevaluating document similarities (slide by Schone, Jurafsky, and Stenchikova)
- B = S1 x D1': matrix B is a dimensionality reduction of the original matrix A
- Compute the document correlations B'*B (a sketch follows below); the lower triangle of the resulting matrix:

        d1     d2     d3     d4     d5     d6
d1    1
d2    0.78   1
d3    0.40   0.88   1
d4    0.47  -0.18  -0.62   1
d5    0.74   0.16  -0.32   0.94   1
d6    0.10  -0.54  -0.87   0.93   0.74   1
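A sketch of the reduced-space document comparison, continuing the NumPy example above. It normalizes the columns of B so the diagonal comes out as 1, which I am assuming is how the slide's correlation table was produced.

```python
# B = S1 @ D1t; compare documents by the cosine of their reduced vectors.
import numpy as np

A = np.array([
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1],
], dtype=float)
T, s, Dt = np.linalg.svd(A, full_matrices=False)

k = 2
S1, D1t = np.diag(s[:k]), Dt[:k, :]
B = S1 @ D1t                                # each column is a document in k dims

norms = np.linalg.norm(B, axis=0)
corr = (B.T @ B) / np.outer(norms, norms)   # 6 x 6 document/document similarities
print(np.round(corr, 2))
```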

64  Unfolding new documents (slide by Schone, Jurafsky, and Stenchikova)
- Given a new document, how do we determine which existing documents it is similar to?
- A = T S D', so T'A = T'T S D' = S D' (T has orthonormal columns, so T'T = I)
- Therefore, for q, a new vector in the space of A: q in the reduced space = T' * q (see the sketch below)
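A sketch of folding a new document into the reduced space and comparing it to the existing documents. The new document's term counts and the cosine comparison against the columns of B are assumptions for illustration.

```python
# Fold in a new document q: q_reduced = T1' @ q, then compare to B's columns.
import numpy as np

A = np.array([
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1],
], dtype=float)
T, s, Dt = np.linalg.svd(A, full_matrices=False)
k = 2
T1, S1, D1t = T[:, :k], np.diag(s[:k]), Dt[:k, :]

q = np.array([0, 1, 1, 0, 0], dtype=float)   # new doc with "astronaut" and "moon"
q_reduced = T1.T @ q                          # q in the reduced space = T' * q

B = S1 @ D1t                                  # existing documents, reduced
scores = (B.T @ q_reduced) / (np.linalg.norm(B, axis=0) * np.linalg.norm(q_reduced))
print(np.round(scores, 2))                    # similarity to d1..d6
```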

65  Next Time: several takes on blog analysis; sentiment classification.

