N-Grams and Corpus Linguistics


1 N-Grams and Corpus Linguistics
Lecture 8, August 3, 2005

2 Simple N-Grams
Assume a language has V word types in its lexicon: how likely is word x to follow word y?
Simplest model of word probability: 1/V
Alternative 1: estimate the likelihood of x occurring in new text from its general frequency of occurrence in a corpus (unigram probability); popcorn is more likely to occur than unicorn
Alternative 2: condition the likelihood of x on the previous words (bigrams, trigrams, …); mythical unicorn is more likely than mythical popcorn

3 N-grams
A simple model of language that computes a probability for observed input: the likelihood that the observation was generated by the same source as the training data.
Such a model is often called a language model.

4 Computing the Probability of a Word Sequence
P(w1, …, wn) = P(w1) P(w2|w1) P(w3|w1,w2) … P(wn|w1, …, wn-1)
P(the mythical unicorn) = P(the) P(mythical|the) P(unicorn|the mythical)
The longer the sequence, the less likely we are to find it in a training corpus:
P(Most biologists and folklore specialists believe that in fact the mythical unicorn horns derived from the narwhal)
Solution: approximate using n-grams

5 Bigram Model
Approximate P(unicorn|the mythical) by P(unicorn|mythical)
Markov assumption: the probability of a word depends only on a limited history, i.e. P(wn | w1, …, wn-1) ≈ P(wn | wn-1) for bigrams
Generalization: the probability of a word depends only on the n previous words
trigrams, 4-grams, …
the higher n is, the more data is needed to train
backoff models

6 Using N-Grams
For N-gram models: P(wn-1, wn) = P(wn | wn-1) P(wn-1)
By the Chain Rule we can decompose a joint probability, e.g. P(w1, w2, w3):
P(w1, w2, ..., wn) = P(w1|w2, w3, ..., wn) P(w2|w3, ..., wn) … P(wn-1|wn) P(wn)
For bigrams, then, the probability of a sequence is just the product of the conditional probabilities of its bigrams:
P(the, mythical, unicorn) = P(unicorn|mythical) P(mythical|the) P(the|<start>)
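
To make the bigram product concrete, here is a minimal Python sketch; the probability values in it are illustrative placeholders, not estimates from a real corpus.

```python
# Minimal sketch: scoring a word sequence with a bigram model.
# The probabilities below are illustrative placeholders, not corpus estimates.

bigram_prob = {
    ("<start>", "the"): 0.25,
    ("the", "mythical"): 0.01,
    ("mythical", "unicorn"): 0.02,
}

def bigram_sequence_prob(words, probs):
    """P(w1..wn) approximated as the product of P(wi | wi-1), starting from <start>."""
    p = 1.0
    prev = "<start>"
    for w in words:
        p *= probs.get((prev, w), 0.0)  # unseen bigrams get probability 0 (no smoothing yet)
        prev = w
    return p

print(bigram_sequence_prob(["the", "mythical", "unicorn"], bigram_prob))  # 0.25 * 0.01 * 0.02 = 5e-05
```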

7 N-gram Parameter Values
How many possible distinct probabilities will be needed?
Total number of word tokens in the training data
Total number of unique words (word types) is our vocabulary size

8 N-gram Parameter Sizes
Vocabulary V, of size |V|
P(Wi = x): how many different values for Wi? (|V|)
P(Wi = x | Wj = y): how many different values for the pair Wi, Wj? (|V|²)
P(Wi = x | Wk = z, Wj = y): how many different values for Wi, Wj, Wk? (|V|³)
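
To make the growth concrete, assume (purely for illustration) a vocabulary of |V| = 20,000 word types: a unigram model then needs 2 × 10^4 probabilities, a bigram model |V|² = 4 × 10^8, and a trigram model |V|³ = 8 × 10^12, which is why higher-order models require far more training data.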

9 Training and Testing
N-gram probabilities come from a training corpus
an overly narrow corpus: probabilities don't generalize
an overly general corpus: probabilities don't reflect the task or domain
A separate test corpus is used to evaluate the model, typically using standard metrics
held-out test set; development test set
cross validation
results tested for statistical significance

10 A Simple Example
P(I want to eat Chinese food) =
P(I | <start>) P(want | I) P(to | want) P(eat | to) P(Chinese | eat) P(food | Chinese)

11 A Bigram Grammar Fragment from BERP
Bigram probabilities P(w | eat), where recoverable from the transcript:
eat on .16
eat lunch .06
eat dinner .05
eat Indian .04
eat today .03
eat Mexican .02
eat tomorrow .01
eat dessert .007
eat British .001
eat a, eat at, eat Chinese, eat in, eat breakfast, eat some, eat Thai (values not recoverable)

12 A Bigram Grammar Fragment from BERP (cont'd)
<start> I .25
<start> I'd .06
<start> Tell
<start> I'm
I want .32
I would .29
I don't .08
I have .04
Want to .65
Want a .05
Want some
Want Thai
To eat .26
To have .14
To spend .09
To be .02
British food .60
British restaurant .15
British lunch .01
British cuisine

13 P(I want to eat British food)
= P(I|<start>) P(want|I) P(to|want) P(eat|to) P(British|eat) P(food|British)
= .25 × .32 × .65 × .26 × .001 × .60 ≈ .0000081
vs. P(I want to eat Chinese food) = .00015
The probabilities seem to capture "syntactic" facts and "world knowledge":
eat is often followed by an NP
British food is not too popular
N-gram models can be trained by counting and normalization
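
The arithmetic above is easy to reproduce; the following sketch uses only the bigram probabilities quoted on these slides.

```python
# Sketch: the sentence probability computed from the BERP bigram
# probabilities quoted on the preceding slides.

p_british = 0.25 * 0.32 * 0.65 * 0.26 * 0.001 * 0.60
print(p_british)            # ~8.1e-06

# The slides quote P(I want to eat Chinese food) = .00015,
# roughly 20 times more likely than the British-food sentence.
print(0.00015 / p_british)  # ~18.5
```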

14 Bigram Counts
Bigram count table from the BERP corpus over the words I, want, to, eat, Chinese, food, lunch (rows = wn-1, columns = wn); recoverable entries include C(I want) = 1087, C(want to) = 786, C(to eat) = 860, C(eat lunch) = 52, C(Chinese food) = 120.

15 Bigram Probabilities
Normalization: divide each row's counts by the appropriate unigram count for wn-1
Computing the bigram probability of I I: C(I, I) / C(all I), so p(I|I) = 8 / 3437 = .0023
Maximum Likelihood Estimation (MLE): the relative frequency of the bigram with respect to its first word's count
Unigram counts: I 3437, Want 1215, To 3256, Eat 938, Chinese 213, Food 1506, Lunch 459
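
A minimal sketch of the MLE computation, using the unigram and bigram counts quoted on these slides (all other counts are omitted).

```python
# Sketch: MLE bigram probabilities, p(w2 | w1) = C(w1, w2) / C(w1),
# using counts quoted on these slides; unlisted bigrams default to count 0.

unigram_counts = {"I": 3437, "want": 1215, "to": 3256, "eat": 938,
                  "Chinese": 213, "food": 1506, "lunch": 459}
bigram_counts = {("I", "I"): 8, ("I", "want"): 1087, ("want", "to"): 786}

def mle_bigram_prob(w1, w2):
    return bigram_counts.get((w1, w2), 0) / unigram_counts[w1]

print(round(mle_bigram_prob("I", "I"), 4))      # 0.0023
print(round(mle_bigram_prob("I", "want"), 4))   # 0.3163
print(round(mle_bigram_prob("want", "to"), 4))  # 0.6469
```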

16 What do we learn about the language?
What's being captured with...
P(want | I) = .32
P(to | want) = .65
P(eat | to) = .26
P(food | Chinese) = .56
P(lunch | eat) = .055
What about...
P(I | I) = .0023
P(I | want) = .0025
P(I | food) = .013

17 P(I | I) = .0023: "I I I I want"
P(I | want) = .0025: "I want I want"
P(I | food) = .013: "the kind of food I want is ..."

18 Approximating Shakespeare
As we increase the value of N, the accuracy of the n-gram model increases, since the choice of the next word becomes increasingly constrained
Generating sentences with random unigrams...
Every enter now severally so, let
Hill he late speaks; or! a more to leg less first you enter
With bigrams...
What means, sir. I confess she? then all sorts, he is trim, captain.
Why dost stand forth thy canopy, forsooth; he is this palpable hit the King Henry.
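
The generation procedure behind these samples can be sketched as follows; the training text here is a toy placeholder, not the Shakespeare corpus, and the sampling follows simple relative frequency.

```python
import random
from collections import defaultdict

# Sketch: generating text by sampling from bigram counts.
# The "corpus" below is a toy placeholder, not Shakespeare.

text = "the king is dead long live the king the queen is here".split()

continuations = defaultdict(list)
for w1, w2 in zip(text, text[1:]):
    continuations[w1].append(w2)        # keep duplicates so sampling follows relative frequency

def generate(start, max_len=8):
    out = [start]
    for _ in range(max_len - 1):
        options = continuations.get(out[-1])
        if not options:                 # dead end: no observed continuation
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```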

19 With trigrams...
Sweet prince, Falstaff shall die.
This shall forbid it should be branded, if renown made it empty.
With quadrigrams...
What! I will go seek the traitor Gloucester.
Will you not tell me who I am?

20 There are 884,647 tokens, with 29,066 word form types, in the roughly one-million-word Shakespeare corpus
Shakespeare produced about 300,000 bigram types out of 844 million possible bigrams: 99.96% of the possible bigrams were never seen (have zero entries in the table)
Quadrigrams are worse: what comes out looks like Shakespeare because it is Shakespeare

21 N-Gram Training Sensitivity
If we repeated the Shakespeare experiment but trained our n-grams on a Wall Street Journal corpus, what would we get?
This has major implications for corpus selection or design

22 What's a word anyway?
I have a can opener; but I can't open these cans. How many words?
Word form: the inflected form as it appears in the text; can and cans are different word forms
Lemma: a set of lexical forms having the same stem, the same POS, and the same meaning; can and cans have the same lemma
Word token: an occurrence of a word; "I have a can opener; but I can't open these cans." has 11 word tokens (not counting punctuation)
Word type: a distinct word in the text; the same sentence has 10 word types (not counting punctuation)
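
A small sketch of the token/type count for this sentence; the tokenization is a simplification (lowercased, punctuation stripped, can't kept as one token), chosen to match the counts above.

```python
import re

# Sketch: counting word tokens and word types in the example sentence.
# Tokenization is deliberately simple: lowercase, keep apostrophes, drop punctuation.

sentence = "I have a can opener; but I can't open these cans."
tokens = re.findall(r"[a-z']+", sentence.lower())

print(len(tokens))       # 11 word tokens
print(len(set(tokens)))  # 10 word types ("i" occurs twice)
```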

23 Another example
Mark Twain's Tom Sawyer: 71,370 word tokens, 8,018 word types, tokens/type ratio = 8.9 (an indication of text complexity)
The complete works of Shakespeare: 884,647 word tokens, 29,066 word types, tokens/type ratio = 30.4

24 Some Useful Empirical Observations
A small number of events occur with high frequency
A large number of events occur with low frequency
You can quickly collect statistics on the high-frequency events
You might have to wait an arbitrarily long time to get valid statistics on low-frequency events
Some of the zeros in the table are really zeros
But others are simply low-frequency events you haven't seen yet. How do we address this?

25 Common words in Tom Sawyer
but words in NL have an uneven distribution…

26 Frequency of frequencies
Most words are rare: 3,993 (50%) of the word types appear only once; they are called hapax legomena ("read only once")
But common words are very common: 100 words account for 51% of all tokens (of all text)

27 Zipf's Law
Count the frequency of each word type in a large corpus
List the word types in order of their frequency
Let f = the frequency of a word type and r = its rank in the list
Zipf's Law says: f ∝ 1/r
In other words, there exists a constant k such that f × r = k
So the 50th most common word should occur with 3 times the frequency of the 150th most common word.
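
A quick way to check this empirically is to compute f × r for the most frequent words of a corpus; the file name below is a placeholder, not a resource from the lecture.

```python
import re
from collections import Counter

# Sketch: checking Zipf's law (f * r roughly constant) on a plain-text corpus.
# "corpus.txt" is a placeholder path.

with open("corpus.txt", encoding="utf-8") as fh:
    words = re.findall(r"[a-z']+", fh.read().lower())

ranked = Counter(words).most_common()
for rank, (word, freq) in enumerate(ranked[:20], start=1):
    print(f"{rank:>4}  {word:<12} f={freq:<8} f*r={freq * rank}")
```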

28 Word counts are interesting...
As an indication of a text's style
As an indication of a text's author
But, because most words appear very infrequently, it is hard to predict much about the behavior of words (if they do not occur often in a corpus) --> Zipf's Law

29 Zipf's Law on Tom Sawyer
k is roughly constant, except for the 3 most frequent words and for words of frequency ≈ 100

30 Plot of Zipf's Law
On chapters 1-3 of Tom Sawyer (≠ numbers from p. 25 & 26): f × r = k

31 Plot of Zipf's Law (cont'd)
On chapters 1-3 of Tom Sawyer: f × r = k ⟹ log(f × r) = log(k) ⟹ log(f) + log(r) = log(k)
So on a log-log plot, the rank-frequency curve is a straight line.

32 Zipf's Law, so what?
There are:
a few very common words
a medium number of medium-frequency words
a large number of infrequent words
Principle of Least Effort: a tradeoff between the speaker's and the hearer's effort
the speaker communicates with a small vocabulary of common words (less effort)
the hearer disambiguates messages through a large vocabulary of rare words (less effort)
Significance of Zipf's Law for us:
for most words, our data about their use will be very sparse
only for a few words will we have a lot of examples

33 Smoothing Techniques
Every n-gram training matrix is sparse, even for very large corpora (Zipf's law)
Solution: estimate the likelihood of unseen n-grams
Problem: how do you adjust the rest of the counts to accommodate these 'phantom' n-grams?

34 Add-one Smoothing
For unigrams:
Add 1 to every word (type) count
Normalize by N (tokens) / (N (tokens) + V (types))
The smoothed count (adjusted for additions to N) is c_i* = (c_i + 1) × N / (N + V)
Normalize by N to get the new unigram probability: p_i* = (c_i + 1) / (N + V)
For bigrams:
Add 1 to every bigram count: c(wn-1 wn) + 1
Increment the unigram count by the vocabulary size: c(wn-1) + V
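
A minimal sketch of the bigram version; the counts are the BERP values quoted earlier, and the vocabulary size is an assumption chosen so the smoothed p(to|want) matches the .28 quoted on the next slide.

```python
# Sketch: add-one (Laplace) smoothing for bigrams,
# p*(wn | wn-1) = (c(wn-1, wn) + 1) / (c(wn-1) + V).

V = 1616                                  # assumed vocabulary size (not stated on these slides)
unigram_counts = {"want": 1215}
bigram_counts = {("want", "to"): 786}

def add_one_prob(w1, w2):
    return (bigram_counts.get((w1, w2), 0) + 1) / (unigram_counts[w1] + V)

print(round(add_one_prob("want", "to"), 2))   # 0.28, down from the unsmoothed 0.65
print(round(add_one_prob("want", "the"), 6))  # an unseen bigram now gets a small nonzero probability
```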

35 But this changes counts drastically:
Discount: the ratio of new counts to old (e.g. add-one smoothing changes the BERP bigram count c(to|want) from 786 to 331, a discount of dc = .42, and p(to|want) from .65 to .28)
Too much weight is given to unseen n-grams
In practice, unsmoothed bigrams often work better!

36 Witten-Bell Discounting
A zero n-gram is just an n-gram you haven't seen yet… but every n-gram in the corpus was unseen once… so:
How many times did we see an n-gram for the first time? Once for each n-gram type (T)
Estimate the total probability of unseen bigrams as T / (N + T)
View the training corpus as a series of events, one for each token (N) and one for each new type (T)
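
A sketch of the simplest (unconditioned) form of this idea: seen n-grams keep probability c / (N + T), and the mass T / (N + T) is divided equally among the Z unseen n-grams. The counts and the number of possible bigrams below are illustrative.

```python
# Sketch: unconditioned Witten-Bell discounting.
# Seen n-grams: p = c / (N + T); unseen n-grams share the mass T / (N + T) equally.

def witten_bell(ngram_counts, num_possible_ngrams):
    N = sum(ngram_counts.values())      # n-gram tokens
    T = len(ngram_counts)               # n-gram types (first-time events)
    Z = num_possible_ngrams - T         # unseen n-grams
    p_each_unseen = T / (Z * (N + T)) if Z > 0 else 0.0
    p_seen = {ng: c / (N + T) for ng, c in ngram_counts.items()}
    return p_seen, p_each_unseen

counts = {("I", "want"): 1087, ("want", "to"): 786, ("to", "eat"): 860}
seen, unseen = witten_bell(counts, num_possible_ngrams=1616 ** 2)  # illustrative vocabulary of 1616 words
print(seen[("want", "to")], unseen)
```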

37 Discount values for Witten-Bell are much more reasonable than for Add-One
We can divide the reserved probability mass equally among the unseen bigrams… or we can condition the probability of an unseen bigram on the first word of the bigram

38 Good-Turing Discounting
Re-estimate the amount of probability mass for zero (or low count) n-grams by looking at n-grams with higher counts
Estimate the adjusted count as c* = (c + 1) × N_{c+1} / N_c, where N_c is the number of n-grams occurring c times
E.g. N0's adjusted count is a function of the count of n-grams that occur once, N1
Assumes word bigrams follow a binomial distribution
We know the number of unseen bigrams (V × V − seen)
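
A sketch of the adjusted-count formula; the counts are toy values, and the fallback when N_{c+1} = 0 is a simplification of what practical implementations do.

```python
from collections import Counter

# Sketch: Good-Turing adjusted counts, c* = (c + 1) * N_{c+1} / N_c,
# where N_c is the number of n-gram types seen exactly c times.

ngram_counts = {"a b": 3, "b c": 2, "c d": 1, "d e": 1, "e f": 1}  # toy counts

N = sum(ngram_counts.values())                    # total n-gram tokens
count_of_counts = Counter(ngram_counts.values())  # N_c

def good_turing_count(c):
    if count_of_counts.get(c + 1, 0) == 0:
        return float(c)       # simplistic fallback when no higher count exists
    return (c + 1) * count_of_counts[c + 1] / count_of_counts[c]

print(good_turing_count(1))          # adjusted count for n-grams seen once: 2 * 1/3 ≈ 0.67
print(count_of_counts[1] / N)        # mass reserved for unseen n-grams, N1 / N = 3/8
```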

39 Backoff methods (e.g. Katz '87)
For e.g. a trigram model:
Compute unigram, bigram and trigram probabilities
In use: where the trigram is unavailable, back off to the bigram if available, otherwise to the unigram probability
E.g. "an omnivorous unicorn"
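
A deliberately simplified backoff sketch: it falls back from trigram to bigram to unigram relative frequencies, but omits the discounting and alpha backoff weights that Katz's method requires for the probabilities to sum to one. All counts are toy values.

```python
# Simplified backoff sketch (not Katz's full method: no discounting, no alpha weights).

def backoff_prob(w1, w2, w3, tri_counts, bi_counts, uni_counts):
    if bi_counts.get((w1, w2), 0) > 0 and (w1, w2, w3) in tri_counts:
        return tri_counts[(w1, w2, w3)] / bi_counts[(w1, w2)]      # trigram estimate
    if uni_counts.get(w2, 0) > 0 and (w2, w3) in bi_counts:
        return bi_counts[(w2, w3)] / uni_counts[w2]                # back off to bigram
    return uni_counts.get(w3, 0) / sum(uni_counts.values())        # back off to unigram

# Toy counts, purely for illustration:
uni = {"an": 5, "omnivorous": 1, "unicorn": 2}
bi = {("an", "omnivorous"): 1}
tri = {}

print(backoff_prob("an", "omnivorous", "unicorn", tri, bi, uni))   # no trigram/bigram evidence -> unigram: 0.25
```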

40 Summary
N-gram probabilities can be used to estimate the likelihood:
of a word occurring in a context of the N-1 previous words
of a sentence occurring at all
Smoothing techniques deal with the problem of unseen words in a corpus

