1
LING/C SC 581: Advanced Computational Linguistics Lecture Notes Jan 20th
2
Today's Topics
1. LR(k) grammar contd.
   Homework 1
   – Due by midnight before the next lecture, i.e. Tuesday 26th before midnight
   – One PDF file writeup: email to sandiway@email.arizona.edu
2. N-gram models and "Colorless green ideas sleep furiously"
3
Recap: dotted rule notation
notation
– the "dot" is used to track the progress of a parse through a phrase structure rule
– examples:
  vp --> v. np    means we've seen v and are predicting an np
  np --> . dt nn  means we're predicting a dt (followed by nn)
  vp --> vp pp.   means we've completed the RHS of a vp
4
Recap: Parse State
state
– a set of dotted rules encodes the state of the parse
– set of dotted rules = name of the state
kernel:
  vp --> v. np
  vp --> v.
completion (of the predicted np):
  np --> . dt nn
  np --> . nnp
  np --> . np sbar
5
Recap: Shift and Reduce Actions
two main actions
– Shift: move a word from the input onto the stack
  Example: np --> . dt nn
– Reduce: build a new constituent
  Example: np --> dt nn.
6
Recap: LR State Machine
Built by advancing the dot over terminals and nonterminals.
Start state 0:
– SS --> . S $
– complete this state
Shift action: LHS --> . POS …
1. move the word with that POS tag from the input queue onto the stack
2. goto the new state indicated by (state on top of stack) x POS
Reduce action: LHS --> RHS.
1. pop |RHS| items off the stack
2. wrap [ LHS ..RHS.. ] and put it back onto the stack
3. goto the new state indicated by (state on top of stack) x LHS
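To make the two actions concrete, here is a minimal Prolog sketch (illustrative only, not the course's lr0.pl). The goto/3 facts are hypothetical table entries in the spirit of the machine on the next slide, and for simplicity the input is taken to be a list of POS tags rather than words:

% hypothetical table entries
goto(0, d, 2).     % in state 0, seeing d leads to state 2
goto(2, n, 12).    % in state 2, seeing n leads to state 12
goto(0, np, 4).    % in state 0, a completed np leads to state 4

% shift: consume the next POS from the input queue, push the new state
shift([S|Ss], [POS|Input], [S1,S|Ss], Input) :-
    goto(S, POS, S1).

% reduce with rule LHS --> RHS. : pop |RHS| states, then goto on LHS
reduce(LHS, RHS, Stack0, [S1,S|Rest]) :-
    length(RHS, N),
    length(Popped, N),
    append(Popped, [S|Rest], Stack0),
    goto(S, LHS, S1).

% e.g. ?- shift([0], [d,n], S1, I1), shift(S1, I1, S2, _), reduce(np, [d,n], S2, S3).
%      gives S3 = [4,0]: d and n are shifted, then reduced to np.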
7
LR State Machine Example
(state diagram; each state is shown with its dotted-rule item set, and arcs between states are labelled with the terminal or nonterminal the dot advances over: d, n, v, p, np, vp, pp, s, $)
State 0:  ss --> . s $,  s --> . np vp,  np --> . np pp,  np --> . n,  np --> . d n
State 1:  ss --> s . $
State 2:  np --> d . n
State 3:  np --> n .
State 4:  s --> np . vp,  np --> np . pp,  vp --> . v np,  vp --> . v,  vp --> . vp pp,  pp --> . p np
State 5:  np --> np pp .
State 6:  pp --> p . np,  np --> . np pp,  np --> . n,  np --> . d n
State 7:  vp --> v . np,  vp --> v .,  np --> . np pp,  np --> . n,  np --> . d n
State 8:  s --> np vp .,  vp --> vp . pp,  pp --> . p np
State 9:  vp --> vp pp .
State 10: vp --> v np .,  np --> np . pp,  pp --> . p np
State 11: pp --> p np .,  np --> np . pp,  pp --> . p np
State 12: np --> d n .
State 13: ss --> s $ .
8
Prolog Code
Files on webpage:
1. grammar0.pl
2. lr0.pl
3. parse.pl
4. lr1.pl
5. parse1.pl
9
LR(k) in the Chomsky Hierarchy
Definition: a grammar is said to be LR(k) for some k = 0, 1, 2, … if the LR state machine for that grammar is unambiguous
– i.e. there are no conflicts, only one possible action…
(diagram: nested sets: Context-Free Languages ⊃ LR(1) ⊃ LR(0) ⊃ RL; RL = Regular Languages)
10
LR(k) in the Chomsky Hierarchy
If there is ambiguity, we can still use the LR machine:
1. pick one action, and use backtracking for the alternative actions, or
2. run the actions in parallel
11
grammar0.pl
rule(ss,[s,$]).
rule(s,[np,vp]).
rule(np,[dt,nn]).
rule(np,[nnp]).
rule(np,[np,pp]).
rule(vp,[vbd,np]).
rule(vp,[vbz]).
rule(vp,[vp,pp]).
rule(pp,[in,np]).
lexicon(the,dt).    lexicon(a,dt).
lexicon(man,nn).    lexicon(boy,nn).
lexicon(limp,nn).   lexicon(telescope,nn).
lexicon(john,nnp).
lexicon(saw,vbd).   lexicon(runs,vbz).
lexicon(with,in).
12
grammar0.pl
nonT(ss). nonT(s). nonT(np). nonT(vp). nonT(pp).
term(nnp). term(nn).
term(vbd). term(vbz).
term(in). term(dt).
term($).
start(ss).
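For orientation, a couple of example queries against these facts after consulting grammar0.pl; the answers follow directly from the clauses listed above (output shown as it would appear in a typical Prolog top level):

?- rule(np, RHS).
RHS = [dt, nn] ;
RHS = [nnp] ;
RHS = [np, pp].

?- lexicon(saw, POS).
POS = vbd.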
13
Some useful Prolog Primitives:
– tell(Filename): redirect output to Filename
– told: close the file and stop redirecting output
Example:
– tell('machine.pl'), goal, told.
– means: run goal and capture all of its output in a file called machine.pl
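A minimal, self-contained illustration, with write/1 standing in for whatever goal produces the output you want to capture:

?- tell('machine.pl'), write(foo), nl, told.
true.

% machine.pl now contains a single line: foo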
14
lr0.pl Example:
15
lr0.pl
16
action(State#, CStack, Input, ParseStack, CStack', Input', ParseStack')
17
parse.pl
18
lr0.pl
19
lr1.pl and parse1.pl
Similar code for LR(1) – 1 symbol of lookahead
20
lr1.pl and parse1.pl
Similar code for LR(1) – 1 symbol of lookahead
21
parse1.pl
22
Homework 1
Question 1:
– How many states are built for the LR(0) and the LR(1) machines?
23
Homework 1
Question 2:
– Examine the action predicate built by LR(0)
– Assume there is no possible conflict between two shift actions, e.g. shift dt or nnp
– Is grammar0.pl LR(0)? Explain.
Question 3:
– Is grammar0.pl LR(1)? Explain.
24
Homework 1
Question 4:
– run the sentence: John saw the boy with the telescope
– on both the LR(0) and LR(1) machines
– How many states are visited to parse both sentences completely in the two machines?
– Is the LR(1) machine any more efficient than the LR(0) machine?
25
Homework 1
Question 5:
– run the sentence: John saw the boy with a limp with the telescope
– on both the LR(0) and LR(1) machines
– How many parses are obtained?
– How many states are visited to parse the sentence completely in the two machines?
26
Homework 1
Question 6:
– Compare these two states in the LR(1) machine:
– Can we merge these two states? Explain why or why not.
– How could you test your answer?
27
Break …
28
Language Models and N-grams
given a word sequence
– w1 w2 w3 ... wn
chain rule
– how to compute the probability of a sequence of words
– p(w1 w2) = p(w1) p(w2|w1)
– p(w1 w2 w3) = p(w1) p(w2|w1) p(w3|w1 w2)
– ...
– p(w1 w2 w3 ... wn) = p(w1) p(w2|w1) p(w3|w1 w2) ... p(wn|w1 ... wn-2 wn-1)
note
– it's not easy to collect (meaningful) statistics on p(wn|wn-1 wn-2 ... w1) for all possible word sequences
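In standard notation, the chain rule on this slide can be written compactly as the following identity (a LaTeX rendering of the same formula):

\[
P(w_1 w_2 \dots w_n) \;=\; \prod_{i=1}^{n} P(w_i \mid w_1 \dots w_{i-1})
\]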
29
Language Models and N-grams
Given a word sequence
– w1 w2 w3 ... wn
Bigram approximation
– just look at the previous word only (not all the preceding words)
– Markov Assumption: finite length history
– 1st order Markov Model
– p(w1 w2 w3 ... wn) = p(w1) p(w2|w1) p(w3|w1 w2) ... p(wn|w1 ... wn-3 wn-2 wn-1)
– p(w1 w2 w3 ... wn) ≈ p(w1) p(w2|w1) p(w3|w2) ... p(wn|wn-1)
note
– p(wn|wn-1) is a lot easier to collect data for (and thus estimate well) than p(wn|w1 ... wn-2 wn-1)
30
Language Models and N-grams
Trigram approximation
– 2nd order Markov Model
– just look at the preceding two words only
– p(w1 w2 w3 w4 ... wn) = p(w1) p(w2|w1) p(w3|w1 w2) p(w4|w1 w2 w3) ... p(wn|w1 ... wn-3 wn-2 wn-1)
– p(w1 w2 w3 ... wn) ≈ p(w1) p(w2|w1) p(w3|w1 w2) p(w4|w2 w3) ... p(wn|wn-2 wn-1)
note
– p(wn|wn-2 wn-1) is a lot easier to estimate well than p(wn|w1 ... wn-2 wn-1), but harder than p(wn|wn-1)
31
Language Models and N-grams
estimating from corpora
– how to compute bigram probabilities
– p(wn|wn-1) = f(wn-1 wn) / Σw f(wn-1 w)   (where w is any word)
– since Σw f(wn-1 w) = f(wn-1), the unigram frequency of wn-1
– p(wn|wn-1) = f(wn-1 wn) / f(wn-1)   (relative frequency)
Note:
– the technique of estimating (true) probabilities using a relative frequency measure over a training corpus is known as maximum likelihood estimation (MLE)
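A minimal Prolog sketch of the MLE estimate; the count/2 facts below are hypothetical stand-ins for frequencies collected from a corpus, not part of the course code:

% hypothetical corpus counts: count(NGram, Frequency)
count([the], 100).
count([the, man], 7).

% MLE bigram probability: p(W2|W1) = f(W1 W2) / f(W1)
bigram_prob(W1, W2, P) :-
    count([W1, W2], F12),
    count([W1], F1),
    F1 > 0,
    P is F12 / F1.

% ?- bigram_prob(the, man, P).   gives P = 0.07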
32
Motivation for smoothing
Smoothing: avoid zero probability estimates
Consider what happens when any individual probability component is zero
– arithmetic multiplication law: 0 × X = 0
– very brittle!
– even in a very large corpus, many possible n-grams over the vocabulary space will have zero frequency
– particularly so for larger n-grams
p(w1 w2 w3 ... wn) ≈ p(w1) p(w2|w1) p(w3|w2) ... p(wn|wn-1)
33
Language Models and N-grams
Example (the slide shows tables of unigram frequencies, bigram frequencies f(wn-1 wn), and bigram probabilities p(wn|wn-1)):
– sparse matrix: zeros render probabilities unusable
– (we'll need to add fudge factors, i.e. do smoothing)
34
Smoothing and N-grams
sparse dataset means zeros are a problem
– Zero probabilities are a problem
  p(w1 w2 w3 ... wn) ≈ p(w1) p(w2|w1) p(w3|w2) ... p(wn|wn-1)   (bigram model)
  one zero and the whole product is zero
– Zero frequencies are a problem
  p(wn|wn-1) = f(wn-1 wn) / f(wn-1)   (relative frequency)
  bigram f(wn-1 wn) doesn't exist in the dataset
smoothing
– refers to ways of assigning zero-probability n-grams a non-zero value
35
Smoothing and N-grams
Add-One Smoothing (4.5.1 Laplace Smoothing)
– add 1 to all frequency counts
– simple and no more zeros (but there are better methods)
unigram
– p(w) = f(w)/N   (before Add-One)   N = size of corpus
– p(w) = (f(w)+1)/(N+V)   (with Add-One)   V = number of distinct words in corpus
– f*(w) = (f(w)+1)·N/(N+V)   (with Add-One)
  N/(N+V) is a normalization factor adjusting for the effective increase in corpus size caused by Add-One
bigram
– p(wn|wn-1) = f(wn-1 wn)/f(wn-1)   (before Add-One)
– p(wn|wn-1) = (f(wn-1 wn)+1)/(f(wn-1)+V)   (after Add-One)
– f*(wn-1 wn) = (f(wn-1 wn)+1)·f(wn-1)/(f(wn-1)+V)   (after Add-One)
  must rescale so that total probability mass stays at 1
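The bigram case as a small Prolog sketch; the count/2 facts and vocab_size/1 below are hypothetical placeholders for corpus statistics (V = 1616 is borrowed from the textbook example on the next slide), not course code:

% hypothetical corpus statistics
count([the], 100).
count([the, unicorn], 0).    % an unseen bigram
vocab_size(1616).            % V, the number of distinct words

% add-one smoothed bigram probability: (f(W1 W2)+1) / (f(W1)+V)
addone_bigram_prob(W1, W2, P) :-
    count([W1, W2], F12),
    count([W1], F1),
    vocab_size(V),
    P is (F12 + 1) / (F1 + V).

% ?- addone_bigram_prob(the, unicorn, P).   gives P = 1/1716 ≈ 0.00058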
36
Smoothing and N-grams
Add-One Smoothing
– add 1 to all frequency counts
bigram
– p(wn|wn-1) = (f(wn-1 wn)+1)/(f(wn-1)+V)
– f*(wn-1 wn) = (f(wn-1 wn)+1)·f(wn-1)/(f(wn-1)+V)
frequencies (the slide shows the bigram frequency tables from textbook figures 6.4 and 6.8)
Remarks:
– perturbation problem
– add-one causes large changes in some frequencies due to the relative size of V (1616)
– e.g. want to: 786 → 338
37
Smoothing and N-grams
Add-One Smoothing
– add 1 to all frequency counts
bigram
– p(wn|wn-1) = (f(wn-1 wn)+1)/(f(wn-1)+V)
– f*(wn-1 wn) = (f(wn-1 wn)+1)·f(wn-1)/(f(wn-1)+V)
probabilities (the slide shows the bigram probability tables from textbook figures 6.5 and 6.7)
Remarks:
– perturbation problem
– similar changes in probabilities
38
Smoothing and N-grams let’s illustrate the problem – take the bigram case: – w n-1 w n – p(w n |w n-1 ) = f(w n-1 w n )/f(w n-1 ) – suppose there are cases – w n-1 w zero 1 that don’t occur in the corpus probability mass f(w n-1 ) f(w n-1 w n ) f(w n-1 w zero 1 )=0 f(w n-1 w zero m )=0...
39
Smoothing and N-grams
add-one
– "give everyone 1"
(diagram: within the probability mass f(wn-1), each seen bigram now counts f(wn-1 wn)+1, and each unseen bigram counts f(wn-1 wzero1) = 1, ..., f(wn-1 wzerom) = 1)
40
Smoothing and N-grams
add-one
– "give everyone 1"
– V = |{wi}|, the vocabulary size
redistribution of probability mass
– p(wn|wn-1) = (f(wn-1 wn)+1)/(f(wn-1)+V)
(diagram: same picture as the previous slide, showing the redistribution of probability mass)
41
Smoothing and N-grams
Good-Turing Discounting (4.5.2)
– Nc = number of things (= n-grams) that occur c times in the corpus
– N = total number of things seen
– Formula: smoothed c for Nc given by c* = (c+1)Nc+1/Nc
– Idea: use the frequency of things seen once to estimate the frequency of things we haven't seen yet
– estimate N0 in terms of N1 … and so on
– but if Nc = 0, smooth that first using something like log(Nc) = a + b·log(c)
– Formula: P*(things with zero frequency) = N1/N
– smaller impact than Add-One
Textbook Example:
– Fishing in a lake with 8 species: bass, carp, catfish, eel, perch, salmon, trout, whitefish
– Sample data (6 out of 8 species): 10 carp, 3 perch, 2 whitefish, 1 trout, 1 salmon, 1 eel
– P(unseen new fish, i.e. bass or catfish) = N1/N = 3/18 = 0.17
– P(next fish = trout) = 1/18 (but we have reassigned probability mass, so we need to recalculate this from the smoothing formula…)
– revised count for trout: c*(trout) = 2·N2/N1 = 2·(1/3) = 0.67 (discounted from 1)
– revised P(next fish = trout) = 0.67/18 = 0.037
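A short Prolog sketch of the discounting formula, with n_count/2 facts taken from the fishing sample above (the predicate names are made up for illustration):

% Nc facts from the fishing sample
n_count(1, 3).     % three species seen once: trout, salmon, eel
n_count(2, 1).     % one species seen twice: whitefish
n_count(10, 1).    % one species seen ten times: carp

% Good-Turing discounted count: c* = (c+1) * N_{c+1} / N_c
gt_discount(C, CStar) :-
    C1 is C + 1,
    n_count(C, Nc),
    n_count(C1, Nc1),
    CStar is C1 * Nc1 / Nc.

% ?- gt_discount(1, X).   gives X = 0.666..., matching c*(trout) ≈ 0.67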
42
Language Models and N-grams
N-gram models
– data is easy to obtain: any unlabeled corpus will do
– they're technically easy to compute: count frequencies and apply the smoothing formula
– but just how good are these n-gram language models?
– and what can they show us about language?
43
Language Models and N-grams
approximating Shakespeare
– generate random sentences using n-grams
– Corpus: Complete Works of Shakespeare
– Unigram (pick random, unconnected words) and Bigram example sentences shown on the slide
44
Language Models and N-grams
Approximating Shakespeare
– generate random sentences using n-grams
– Corpus: Complete Works of Shakespeare
– Trigram and Quadrigram example sentences shown on the slide
Remarks: dataset size problem
– the training set is small: 884,647 words, 29,066 different words
– 29,066² = 844,832,356 possible bigrams
– for the random sentence generator, this means very limited choices for possible continuations, which means the program can't be very innovative for higher n
45
Language Models and N-grams
A limitation:
– produces ungrammatical sequences
Treebank:
– potential to be a better language model
– structural information: contains frequency information about syntactic rules
– we should be able to generate sequences that are closer to "English"…
46
Language Models and N-grams Aside: http://hemispheresmagazine.com/contests/2004/intro.htm
47
Language Models and N-grams
N-gram models + smoothing
– one consequence of smoothing is that
– every possible concatenation or sequence of words has a non-zero probability
48
Colorless green ideas
examples
– (1) colorless green ideas sleep furiously
– (2) furiously sleep ideas green colorless
Chomsky (1957):
– "... It is fair to assume that neither sentence (1) nor (2) (nor indeed any part of these sentences) has ever occurred in an English discourse. Hence, in any statistical model for grammaticalness, these sentences will be ruled out on identical grounds as equally 'remote' from English. Yet (1), though nonsensical, is grammatical, while (2) is not."
idea
– (1) is syntactically valid, (2) is word salad
Statistical Experiment (Pereira 2002)
49
Colorless green ideas
examples
– (1) colorless green ideas sleep furiously
– (2) furiously sleep ideas green colorless
Statistical Experiment (Pereira 2002)
– a bigram language model over wi-1 wi (results shown on the slide)
50
Interesting things to Google
example
– colorless green ideas sleep furiously
– second hit (shown on the slide)
51
Interesting things to Google
example
– colorless green ideas sleep furiously
first hit – compositional semantics
– a green idea is, according to well established usage of the word "green", one that is an idea that is new and untried.
– again, a colorless idea is one without vividness, dull and unexciting.
– so it follows that a colorless green idea is a new, untried idea that is without vividness, dull and unexciting.
– to sleep is, among other things, to be in a state of dormancy or inactivity, or in a state of unconsciousness.
– to sleep furiously may seem a puzzling turn of phrase but one reflects that the mind in sleep often indeed moves furiously with ideas and images flickering in and out.