
1 Introduction to Text Mining. Thanks to Hongning Wang (UVa) for the slides from his Text Mining course; slides are slightly modified by Lei Chen.

2 What is “Text Mining”? “Text mining, also referred to as text data mining, roughly equivalent to text analytics, refers to the process of deriving high-quality information from text.” - Wikipedia. “Another way to view text data mining is as a process of exploratory data analysis that leads to heretofore unknown information, or to answers for questions for which the answer is not currently known.” - Hearst, 1999

3 Two different definitions of mining. Goal-oriented (effectiveness driven) – Any process that generates useful results that are non-obvious is called “mining” – Keywords: “useful” + “non-obvious” – Data isn’t necessarily massive. Method-oriented (efficiency driven) – Any process that involves extracting information from massive data is called “mining” – Keywords: “massive” + “pattern” – Patterns aren’t necessarily useful.

4 Text mining around us: sentiment analysis.

5 Text mining around us: document summarization.

6 Text mining around us: restaurant/hotel recommendation.

7 Text mining around us: text analytics in financial services.

8 How to perform text mining? As computer scientists, we view it as: Text Mining = Data Mining + Text Data. Techniques: applied machine learning, natural language processing, information retrieval. Text data: emails, blogs, news articles, web pages, tweets, scientific literature, software documentation.

9 Text mining vs. NLP, IR, DM… How does it relate to data mining in general? How does it relate to computational linguistics? How does it relate to information retrieval?

                   Finding patterns             Finding “nuggets” (novel)    Finding “nuggets” (non-novel)
Non-textual data   General data mining          Exploratory data analysis    Database queries
Textual data       Computational linguistics    Text mining                  Information retrieval

10 Text mining in general. [Diagram: three components – Access (filter information; serves IR applications), Mining (discover knowledge; a sub-area of DM research), and Organization (add structure/annotations; based on NLP/ML techniques).]

11 Challenges in text mining. Data collection is “free text” – Data is not well-organized: semi-structured or unstructured – Natural language text contains ambiguities on many levels: lexical, syntactic, semantic, and pragmatic – Learning techniques for processing text typically need annotated training examples, which are expensive to acquire at scale. What to mine?

12 Text mining problems we will solve: document categorization – Adding structure to the text corpus.

13 Text mining problems we will solve: text clustering – Identifying structure in the text corpus.

14 Text mining problems we will solve: topic modeling – Identifying structure in the text corpus.

15 Text Document Representation & Language Model. 1. How to represent a document? – Make it computable. 2. How to infer the relationship among documents or identify the structure within a document? – Knowledge discovery. 3. Language models and N-grams.

16 How to represent a document. Represent by a string? – No semantic meaning. Represent by a list of sentences? – A sentence is just like a short document (recursive definition).

17 Vector space model. Represent documents by concept vectors – Each concept defines one dimension – k concepts define a high-dimensional space – Each element of the vector corresponds to a concept weight, e.g., d = (x_1, …, x_k), where x_i is the “importance” of concept i in d. Distance between the vectors in this concept space reflects the relationship among documents.

18 An illustration of the VS model. All documents are projected into this concept space. [Figure: documents D1–D5 plotted in a three-dimensional concept space with axes Sports, Education, and Finance; |D2 − D4| marks the distance between D2 and D4.]

19 What the VS model doesn’t say. How to define/select the “basic concept” – Concepts are assumed to be orthogonal. How to assign weights – Weights indicate how well the concept characterizes the document. How to define the distance metric.

20 What is a good “basic concept”? Orthogonal – Linearly independent basis vectors – “Non-overlapping” in meaning – No ambiguity. Weights can be assigned automatically and accurately. Existing solutions – Terms or N-grams, a.k.a. bag-of-words – Topics. We will come back to this later.

21 Bag-of-Words representation. Term as the basis for the vector space – Doc1: Text mining is to identify useful information. – Doc2: Useful information is mined from text. – Doc3: Apple is delicious.

        text  information  identify  mining  mined  is  useful  to  from  apple  delicious
Doc1     1         1           1        1      0     1     1     1    0     0        0
Doc2     1         1           0        0      1     1     1     0    1     0        0
Doc3     0         0           0        0      0     1     0     0    0     1        1
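A minimal sketch of building this kind of term-document count matrix, assuming a recent scikit-learn is available (CountVectorizer and get_feature_names_out are standard scikit-learn APIs; note that its columns come out in alphabetical order rather than the order shown above):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "Text mining is to identify useful information.",
    "Useful information is mined from text.",
    "Apple is delicious.",
]

# CountVectorizer tokenizes, lowercases, and builds the vocabulary automatically.
vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(docs)      # sparse document-term count matrix

print(vectorizer.get_feature_names_out())    # vocabulary, in alphabetical order
print(matrix.toarray())                      # one row of term counts per document
```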

22 Tokenization. Break a stream of text into meaningful units – Tokens: words, phrases, symbols – The definition depends on the language, corpus, or even context. Input: It’s not straight-forward to perform so-called “tokenization.” Output (1): 'It’s', 'not', 'straight-forward', 'to', 'perform', 'so-called', '“tokenization.”' Output (2): 'It', '’', 's', 'not', 'straight', '-', 'forward', 'to', 'perform', 'so', '-', 'called', '“', 'tokenization', '.', '”'

23 Tokenization. Solutions – Regular expressions: [\w]+: so-called -> 'so', 'called'; [\S]+: It’s -> 'It’s' instead of 'It', '’s' – Statistical methods: explore rich features to decide where the boundary of a word is, e.g., Apache OpenNLP (http://opennlp.apache.org/) and the Stanford NLP Parser (http://nlp.stanford.edu/software/lex-parser.shtml). Online demos – Stanford (http://nlp.stanford.edu:8080/parser/index.jsp) – UIUC (http://cogcomp.cs.illinois.edu/curator/demo/index.html). We will come back to this later.
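A small sketch of the two regular-expression strategies above, using Python's re module (the two patterns are the ones on the slide; the example sentence is from the previous slide):

```python
import re

text = "It’s not straight-forward to perform so-called “tokenization.”"

# \w+ keeps only runs of word characters: hyphens, apostrophes, and quotes split tokens.
print(re.findall(r"\w+", text))
# ['It', 's', 'not', 'straight', 'forward', 'to', 'perform', 'so', 'called', 'tokenization']

# \S+ splits on whitespace only: punctuation stays attached to its token.
print(re.findall(r"\S+", text))
# ['It’s', 'not', 'straight-forward', 'to', 'perform', 'so-called', '“tokenization.”']
```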

24 Bag-of-Words representation. Assumption – Words are independent from each other. Pros – Simple. Cons – Basis vectors are clearly not linearly independent! – Grammar and word order are missing. The most frequently used document representation – similar representations are also used for images, speech, and gene sequences. (Term-document matrix from slide 21 shown again.)

25 Bag-of-Words with N-grams.

26 Automatic document representation.

27 A statistical property of language. [Figure: log-log plot of word frequency vs. word rank in Wikipedia (Nov 27, 2006).] In the Brown Corpus of American English text, the word "the" is the most frequently occurring word and by itself accounts for nearly 7% of all word occurrences; the second-place word "of" accounts for slightly over 3.5% of words. This is Zipf's law, a discrete version of a power law.

28 Zipf’s law tells us: Head words take a large portion of the occurrences, but they are semantically meaningless – E.g., the, a, an, we, do, to. Tail words take the major portion of the vocabulary, but they rarely occur in documents – E.g., dextrosinistral. The rest are the most representative – to be included in the controlled vocabulary.

29 Automatic document representation: remove non-informative words; remove rare words.

30 Normalization. Convert different forms of a word to a normalized form in the vocabulary – U.S.A. -> USA, St. Louis -> Saint Louis. Solutions – Rule-based: delete periods and hyphens, convert all to lower case – Dictionary-based: construct equivalence classes, e.g., car -> “automobile, vehicle”, mobile phone -> “cellphone”. We will come back to this later.

31 Stemming. Reduce inflected or derived words to their root form – Plurals, adverbs, inflected word forms, e.g., ladies -> lady, referring -> refer, forgotten -> forget – Bridges the vocabulary gap – Solutions (for English): Porter stemmer (patterns of vowel-consonant sequences), Krovetz stemmer (morphological rules) – Risk: losing the precise meaning of the word, e.g., lay -> lie (a false statement? or be in a horizontal position?).
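A quick sketch of stemming with NLTK's Porter stemmer (assuming the nltk package is installed; note that Porter produces truncated stems such as 'ladi' rather than dictionary forms like 'lady'):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["ladies", "referring", "forgotten", "mining", "useful"]:
    # Porter applies rule-based suffix stripping driven by vowel-consonant patterns.
    print(word, "->", stemmer.stem(word))
```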

32 Stopwords. Useless words for document analysis – Not all words are informative – Remove such words to reduce the vocabulary size – No universal definition – Risk: may break the original meaning and structure of the text, e.g., “this is not a good option” -> “option”; “to be or not to be” -> (nothing left).

33 Constructing a VSM representation. D1: ‘Text mining is to identify useful information.’
1. Tokenization: D1: ‘Text’, ‘mining’, ‘is’, ‘to’, ‘identify’, ‘useful’, ‘information’, ‘.’
2. Stemming/normalization: D1: ‘text’, ‘mine’, ‘is’, ‘to’, ‘identify’, ‘use’, ‘inform’, ‘.’
3. N-gram construction: D1: ‘text-mine’, ‘mine-is’, ‘is-to’, ‘to-identify’, ‘identify-use’, ‘use-inform’, ‘inform-.’
4. Stopword/controlled vocabulary filtering: D1: ‘text-mine’, ‘to-identify’, ‘identify-use’, ‘use-inform’
Documents in a vector space! This naturally fits the MapReduce paradigm (Mapper/Reducer).
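A minimal, self-contained sketch of the four steps for D1 in plain Python; the stopword list and stemming table are toy values chosen only to reproduce the slide's example, not the course's actual resources:

```python
import re

STOPWORDS = {"is", "."}                              # toy stopword list (illustrative)
STEMS = {"mining": "mine", "useful": "use",          # toy stemming/normalization table
         "information": "inform"}

def vsm_pipeline(doc):
    # 1. Tokenization: words and punctuation become separate tokens.
    tokens = re.findall(r"\w+|[^\w\s]", doc)
    # 2. Stemming/normalization: lowercase, then map to a stem if one is known.
    stems = [STEMS.get(t.lower(), t.lower()) for t in tokens]
    # 3. N-gram construction: bigrams over the stemmed tokens.
    bigrams = [f"{a}-{b}" for a, b in zip(stems, stems[1:])]
    # 4. Stopword filtering: drop any bigram that contains a stopword.
    return [bg for bg in bigrams if not any(w in STOPWORDS for w in bg.split("-"))]

print(vsm_pipeline("Text mining is to identify useful information."))
# ['text-mine', 'to-identify', 'identify-use', 'use-inform']
```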

34 Recap: an illustration of the VS model. All documents are projected into this concept space. [Figure: documents D1–D5 in a three-dimensional concept space with axes Sports, Education, and Finance; |D2 − D4| marks the distance between D2 and D4.]

35 Recap: Bag-of-Words representation. Assumption – Words are independent from each other. Pros – Simple. Cons – Basis vectors are clearly not linearly independent! – Grammar and word order are missing. The most frequently used document representation – similar representations are also used for images, speech, and gene sequences. (Term-document matrix from slide 21 shown again.)

36 Recap: automatic document representation: remove non-informative words; remove rare words.

37 Recap: constructing a VSM representation. D1: ‘Text mining is to identify useful information.’
1. Tokenization: D1: ‘Text’, ‘mining’, ‘is’, ‘to’, ‘identify’, ‘useful’, ‘information’, ‘.’
2. Stemming/normalization: D1: ‘text’, ‘mine’, ‘is’, ‘to’, ‘identify’, ‘use’, ‘inform’, ‘.’
3. N-gram construction: D1: ‘text-mine’, ‘mine-is’, ‘is-to’, ‘to-identify’, ‘identify-use’, ‘use-inform’, ‘inform-.’
4. Stopword/controlled vocabulary filtering: D1: ‘text-mine’, ‘to-identify’, ‘identify-use’, ‘use-inform’
Documents in a vector space! This naturally fits the MapReduce paradigm (Mapper/Reducer).

38 How to assign weights? Important! Why? – Corpus-wise: some terms carry more information about the document content – Document-wise: not all terms are equally important. How? – Two basic heuristics: TF (Term Frequency) = within-document frequency; IDF (Inverse Document Frequency).

39 Term frequency. Doc A: ‘good’ occurs 10 times; Doc B: ‘good’ occurs 2 times; Doc C: ‘good’ occurs 3 times. Which two documents are more similar to each other?

40 TF normalization. Two views of document length – A doc is long because it is verbose – A doc is long because it has more content. Raw TF is inaccurate – Document length varies – “Repeated occurrences” are less informative than the “first occurrence” – Information about semantics does not increase proportionally with the number of term occurrences. Generally penalize long documents, but avoid over-penalizing – Pivoted length normalization.

41 TF normalization. [Plot: normalized TF as a function of raw TF.]

42 TF normalization. [Plot: a second normalization curve of normalized TF vs. raw TF (the value 1 is marked on the axis).]
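The two normalization formulas behind these plots did not survive the transcript; two commonly used choices that such curves typically depict (an assumption, not a reconstruction of the original slides) are maximum-TF normalization and logarithmic scaling:

$$\mathrm{tf}_{\max}(t,d) = \alpha + (1-\alpha)\,\frac{c(t,d)}{\max_{t'} c(t',d)}, \qquad \mathrm{tf}_{\log}(t,d) = \begin{cases} 1 + \log c(t,d) & \text{if } c(t,d) > 0 \\ 0 & \text{otherwise} \end{cases}$$

where $c(t,d)$ is the raw count of term $t$ in document $d$ and $\alpha \in [0,1]$ is a smoothing constant.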

43 Document frequency. Idea: a term is more discriminative if it occurs in fewer documents.

44 Inverse document frequency. (Annotations from the missing formula: “total number of docs in collection”, “non-linear scaling”.)
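The IDF formula itself was an image and is missing here; a standard definition consistent with the two annotations above (the exact variant on the original slide may differ) is

$$\mathrm{IDF}(t) = 1 + \log\frac{N}{\mathrm{df}(t)}$$

where $N$ is the total number of documents in the collection, $\mathrm{df}(t)$ is the number of documents containing term $t$, and the logarithm provides the non-linear scaling.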

45 Why document frequency? Table 1. Example total term frequency vs. document frequency in the Reuters-RCV1 collection.

Word         ttf      df
try          10422    8760
insurance    10440    3997

46 TF-IDF weighting. “Salton was perhaps the leading computer scientist working in the field of information retrieval during his time.” - Wikipedia. The Gerard Salton Award is the highest achievement award in IR.
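The weighting formula on this slide was an image; the TF-IDF weight it refers to is simply the product of the two heuristics introduced earlier (with TF and IDF defined as on the preceding slides):

$$w(t,d) = \mathrm{TF}(t,d) \times \mathrm{IDF}(t)$$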

47 How to define a good similarity metric? Euclidean distance? [Figure: documents D1–D6 in TF-IDF space with axes Sports, Education, and Finance.]

48 How to define a good similarity metric?
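The metric discussed on this slide is missing from the transcript; presumably it is the Euclidean distance over TF-IDF vectors questioned on the previous slide, i.e.

$$\mathrm{dist}(d_i, d_j) = \sqrt{\sum_{t \in V} \bigl(w(t,d_i) - w(t,d_j)\bigr)^2}$$

which, as the next slides argue, motivates moving to an angle-based similarity instead.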

49 From distance to angle. Angle: how much the vectors overlap – Cosine similarity: the projection of one vector onto another. [Figure: documents D1, D2, D6 in TF-IDF space (Sports vs. Finance axes), contrasting the choice of angle with the choice of Euclidean distance.]

50 Cosine similarity. Computed on the TF-IDF vectors; equivalently, the dot product of the corresponding unit vectors. [Figure: D1, D2, D6 in TF-IDF space with Sports and Finance axes.]
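The cosine-similarity formula did not survive the transcript; the standard definition over the TF-IDF vectors is

$$\cos(d_i, d_j) = \frac{\vec{d_i} \cdot \vec{d_j}}{\lVert \vec{d_i} \rVert \, \lVert \vec{d_j} \rVert}$$

i.e. the dot product of the two vectors after normalizing each to unit length.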

51 Advantages of the VS model. Empirically effective! Intuitive. Easy to implement. Well studied and extensively evaluated. The SMART system – Developed at Cornell: 1960-1999 – Still widely used. Warning: many variants of TF-IDF!

52 Disadvantages of the VS model. Assumes term independence. Lacks “predictive adequacy” – Arbitrary term weighting – Arbitrary similarity measure. Lots of parameter tuning!

53 Text Document Representation & Language Model. 1. How to represent a document? – Make it computable. 2. How to infer the relationship among documents or identify the structure within a document? – Knowledge discovery. 3. Language models and N-grams.

54 What is a statistical LM? A model specifying a probability distribution over word sequences – p(“Today is Wednesday”) ≈ 0.001 – p(“Today Wednesday is”) ≈ 0.0000000000001 – p(“The eigenvalue is positive”) ≈ 0.00001. It can be regarded as a probabilistic mechanism for “generating” text, and is thus also called a “generative” model.

55 Why is a LM useful? It provides a principled way to quantify the uncertainties associated with natural language. It allows us to answer questions like: – Given that we see “John” and “feels”, how likely will we see “happy” as opposed to “habit” as the next word? (speech recognition) – Given that we observe “baseball” three times and “game” once in a news article, how likely is it about “sports” vs. “politics”? (text categorization) – Given that a user is interested in sports news, how likely would the user use “baseball” in a query? (information retrieval)

56 Measure the fluency of documents.

57 Source-Channel framework [Shannon 48]. A source emits X with P(X); the transmitter (encoder) sends it through a noisy channel characterized by P(Y|X); the receiver (decoder) observes Y and recovers X’ by estimating P(X|Y) via Bayes’ rule. When X is text, p(X) is a language model. Many examples: speech recognition (X = word sequence, Y = speech signal), machine translation (X = English sentence, Y = Chinese sentence), OCR error correction (X = correct word, Y = erroneous word), information retrieval (X = document, Y = query), summarization (X = summary, Y = document).
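The decoding equation on this slide is garbled in the transcript; the standard source-channel formulation it refers to is

$$X' = \arg\max_X P(X \mid Y) = \arg\max_X \frac{P(Y \mid X)\,P(X)}{P(Y)} = \arg\max_X P(Y \mid X)\,P(X)$$

where $P(X)$ is the language model and $P(Y \mid X)$ is the channel model.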

58 Basic concepts of probability. Random experiment – An experiment with an uncertain outcome (e.g., tossing a coin, picking a word from text). Sample space (S) – All possible outcomes of an experiment, e.g., tossing 2 fair coins: S = {HH, HT, TH, TT}. Event (E) – E ⊆ S; E happens iff the outcome is in E, e.g., E = {HH} (all heads), E = {HH, TT} (same face) – Impossible event ({}), certain event (S). Probability of an event – 0 ≤ P(E) ≤ 1.

59 Recap: how to assign weights? Important! Why? – Corpus-wise: some terms carry more information about the document content – Document-wise: not all terms are equally important. How? – Two basic heuristics: TF (Term Frequency) = within-document frequency; IDF (Inverse Document Frequency).

60 Sampling with replacement. Pick a random shape, then put it back in the bag.

61 Essential probability concepts.

62 Essential probability concepts.
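The formulas on these two slides are missing; the essential concepts they presumably covered (a hedged reconstruction of standard definitions, not the original slide content) are joint and conditional probability, independence, and Bayes' rule:

$$P(A \mid B) = \frac{P(A,B)}{P(B)}, \qquad P(A,B) = P(A \mid B)\,P(B), \qquad A \perp B \iff P(A,B) = P(A)\,P(B), \qquad P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$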

63 Language model for text. Chain rule: from conditional probability to the joint probability of a sentence. The average English sentence length is 14.3 words, and there are 475,000 main headwords in Webster's Third New International Dictionary. How large is this model? We need independence assumptions!
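The chain-rule equation itself is missing from the transcript; for a word sequence $w_1 \dots w_n$ it reads

$$p(w_1 w_2 \cdots w_n) = p(w_1)\,p(w_2 \mid w_1)\,p(w_3 \mid w_1 w_2) \cdots p(w_n \mid w_1 \cdots w_{n-1})$$

With a 475,000-word vocabulary and sentences around 14 words long, the unconstrained joint distribution would need on the order of $475{,}000^{14}$ parameters, which is why independence assumptions are needed.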

64 Unigram language model. The simplest and most popular choice!
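The unigram factorization (missing from the transcript) assumes each word is generated independently from the same distribution:

$$p(w_1 w_2 \cdots w_n \mid \theta) = \prod_{i=1}^{n} p(w_i \mid \theta)$$

so the model is fully specified by a single multinomial distribution $p(w \mid \theta)$ over the vocabulary.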

65 More sophisticated LMs.

66 Why just unigram models? Difficulty in moving toward more complex models – They involve more parameters, so they need more data to estimate – They increase the computational complexity significantly, both in time and space. Capturing word order or structure may not add much value for “topical inference”. But using more sophisticated models can still be expected to improve performance.

67 Generative view of text documents. A (unigram) language model θ specifies p(w|θ). Topic 1 (text mining): text 0.2, mining 0.1, association 0.01, clustering 0.02, …, food 0.00001, … Topic 2 (health): text 0.01, food 0.25, nutrition 0.1, healthy 0.05, diet 0.02, … Documents are generated by sampling words from these distributions: a text-mining document from Topic 1, a food/nutrition document from Topic 2.

68 How to generate text from an N-gram language model?
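The worked example on this slide did not survive the transcript; a small illustrative sketch of sampling text from a unigram language model (the probabilities below are made-up values for illustration) could look like this:

```python
import random

# Toy unigram language model p(w | theta); the probabilities sum to 1 (illustrative values).
unigram_lm = {"text": 0.2, "mining": 0.1, "clustering": 0.02,
              "association": 0.01, "the": 0.3, "is": 0.2, "useful": 0.17}

def generate(lm, length):
    words = list(lm.keys())
    probs = list(lm.values())
    # Each word is drawn independently from the same distribution (the unigram assumption).
    return " ".join(random.choices(words, weights=probs, k=length))

print(generate(unigram_lm, 10))
```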

69 Generating text from language models. Under a unigram language model:

70 Generating text from language models. Under a unigram language model, any reordering of the same words has the same likelihood!

71 N-gram language models will help. Unigram – Months the my and issue of year foreign new exchange’s september were recession exchange new endorsed a q acquire to six executives. Bigram – Last December through the way to preserve the Hudson corporation N.B.E.C. Taylor would seem to complete the major central planners one point five percent of U.S.E. has already told M.X. corporation of living on information such as more frequently fishing to keep her. Trigram – They also point to ninety nine point six billon dollars from two hundred four oh six three percent of the rates of interest stores as Mexico and Brazil on market conditions. (Generated from language models trained on the New York Times.)

72 Turing test: generating Shakespeare. [Figure: four text samples labeled A, B, C, D.] See also SCIgen - An Automatic CS Paper Generator.

73 Estimation of language models. A “text mining” paper (total #words = 100) with word counts: text 10, mining 5, association 3, database 3, algorithm 2, …, query 1, efficient 1. Estimate the unigram language model θ: p(w|θ) = ? for text, mining, association, database, …, query, …
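The estimator worked out on the following slides is maximum likelihood, which for a unigram model is just the relative frequency:

$$\hat{p}(w \mid \theta) = \frac{c(w,d)}{|d|} = \frac{c(w,d)}{\sum_{w'} c(w',d)}$$

e.g. $\hat{p}(\text{text} \mid \theta) = 10/100$ and $\hat{p}(\text{mining} \mid \theta) = 5/100$, matching the numbers given on slide 77.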

74 Sampling with replacement. Pick a random shape, then put it back in the bag.

75 Recap: language model for text. Chain rule: from conditional probability to the joint probability of a sentence. The average English sentence length is 14.3 words, and there are 475,000 main headwords in Webster's Third New International Dictionary. How large is this model? We need independence assumptions!

76 Recap: how to generate text from an N-gram language model?

77 Recap: estimation of language models. Maximum likelihood estimation. A “text mining” paper (total #words = 100) with word counts: text 10, mining 5, association 3, database 3, algorithm 2, …, query 1, efficient 1. The estimated unigram language model θ: p(text|θ) = 10/100, p(mining|θ) = 5/100, p(association|θ) = 3/100, …, p(query|θ) = 1/100, …

78 Parameter estimation.

79 Maximum likelihood vs. Bayesian. Maximum likelihood estimation – “Best” means “the data likelihood reaches its maximum” – Issue: small sample size. Bayesian estimation – “Best” means being consistent with our “prior” knowledge and explaining the data well – a.k.a. maximum a posteriori (MAP) estimation – Issue: how to define the prior? ML: the frequentist’s point of view; MAP: the Bayesian’s point of view.

80 Illustration of Bayesian estimation. Prior: p(θ). Likelihood: p(X|θ), with X = (x_1, …, x_N). Posterior: p(θ|X) ∝ p(X|θ) p(θ). [Figure: curves over θ marking the prior mode, the ML estimate θ_ML, and the posterior mode θ_MAP.]

81 Maximum likelihood estimation. Maximize the data likelihood subject to the requirement from probability that the word probabilities sum to one; setting the partial derivatives to zero yields the ML estimate.
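The equations of this derivation were lost; a reconstruction of the standard Lagrange-multiplier argument for the unigram model (consistent with the slide's hints) is:

$$\hat\theta = \arg\max_{\theta} \sum_{w \in V} c(w,d)\,\log\theta_w \quad \text{s.t.} \quad \sum_{w \in V} \theta_w = 1$$

Setting $\frac{\partial}{\partial \theta_w}\bigl[\sum_{w} c(w,d)\log\theta_w + \lambda\bigl(1 - \sum_{w}\theta_w\bigr)\bigr] = 0$ gives $\theta_w = c(w,d)/\lambda$; since the probabilities must sum to one, $\lambda = \sum_{w} c(w,d) = |d|$, and hence $\hat\theta_w = c(w,d)/|d|$.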

82 Maximum likelihood estimation. (The denominator of the estimate is the length of the document, or the total number of words in a corpus.)

83 Problem with MLE. Unseen events – We estimated a model on 440K word tokens, but: only 30,000 unique words occurred, and only 0.04% of all possible bigrams occurred. This means any word/N-gram that does not occur in the training data has zero probability! No future documents could then contain those unseen words/N-grams. [Figure: Zipf plot of word frequency vs. word rank in Wikipedia (Nov 27, 2006).]

84 Smoothing. If we want to assign non-zero probabilities to unseen words – Unseen words = new words, new N-grams – Discount the probabilities of observed words. General procedure: 1. Reserve some probability mass of the words seen in a document/corpus. 2. Re-allocate it to unseen words.

85 Illustration of N-gram language model smoothing. [Figure: for each word w, the maximum likelihood estimate vs. the smoothed LM; probability mass is discounted from the seen words and re-assigned as nonzero probabilities to the unseen words.]

86 Smoothing methods. Additive smoothing – Add a constant δ to the counts of each word (“add one”, i.e., δ = 1, gives Laplace smoothing). Unigram language model as an example (annotations from the missing formula: counts of w in d, length of d (total counts), vocabulary size). Problems? Hint: are all words equally important?
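The additive-smoothing formulas referenced above (reconstructed from their annotations; standard forms) are, for the unigram case here and the add-one bigram case on the next slide:

$$p_{\delta}(w \mid d) = \frac{c(w,d) + \delta}{|d| + \delta\,|V|}, \qquad p_{+1}(w_i \mid w_{i-1}) = \frac{c(w_{i-1} w_i) + 1}{c(w_{i-1}) + |V|}$$

where $c(\cdot)$ denotes counts, $|d|$ is the document length, and $|V|$ is the vocabulary size.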

87 Add-one smoothing for bigrams.

88 After smoothing. It gives too much probability mass to the unseen events.

89 Refine the idea of smoothing. Should all unseen words get equal probabilities? We can use a reference language model to discriminate among unseen words; the smoothed estimate combines a discounted ML estimate for seen words with the reference language model for unseen words.

90 Smoothing methods. Linear interpolation – Use (N−1)-gram probabilities to smooth N-gram probabilities. We never see the trigram “Bob was reading”, but we do see the bigram “was reading”.
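The interpolation formula on this slide is missing; the standard linear-interpolation form for the trigram example (a hedged reconstruction) is

$$p_{\mathrm{interp}}(w_i \mid w_{i-2} w_{i-1}) = \lambda_3\,\hat p(w_i \mid w_{i-2} w_{i-1}) + \lambda_2\,\hat p(w_i \mid w_{i-1}) + \lambda_1\,\hat p(w_i), \qquad \lambda_1 + \lambda_2 + \lambda_3 = 1$$

so the unseen trigram “Bob was reading” still receives probability through the observed bigram “was reading” and the unigram “reading”.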

91 Smoothing methods. Linear interpolation – Use (N−1)-gram probabilities to smooth N-gram probabilities – Further generalize it: recursively interpolate the ML N-gram estimate with the smoothed (N−1)-gram estimate to obtain the smoothed N-gram estimate, and so on.

92 Smoothing methods. We will come back to this later.

93 Smoothing methods. [Formula: the smoothed N-gram estimate is defined in terms of the smoothed (N−1)-gram estimate.]

94 What you should know. Basic ideas of the vector space model. The procedure for constructing a VS representation of a document. Two important heuristics in the bag-of-words representation – TF – IDF. Similarity metrics for the VS model.

