Sequence Models
With slides by me, Joshua Goodman, Fei Xia
Outline
- Language Modeling
- N-gram Models
- Hidden Markov Models
  - Supervised Parameter Estimation
  - Probability of a sequence
  - Viterbi (or decoding)
  - Baum-Welch
A bad language model
(Example slides; the images are not captured in this transcript.)
What is a language model?
A language model is a distribution that assigns a probability to language utterances. For example:
- P_LM(“zxcv./,mwea afsido”) is zero;
- P_LM(“mat cat on the sat”) is tiny;
- P_LM(“Colorless green ideas sleeps furiously”) is bigger;
- P_LM(“A cat sat on the mat.”) is bigger still.
What’s a language model for?
- Information Retrieval
- Handwriting recognition
- Speech Recognition
- Spelling correction
- Optical character recognition
- Machine translation
- …
Example Language Model Application
Speech Recognition: convert an acoustic signal (a sound wave recorded by a microphone) to a sequence of words (a text file).
Straightforward model: estimate P(words | acoustic signal) directly. But this can be hard to train effectively (although see CRFs later).
Example Language Model Application
Speech Recognition: convert an acoustic signal (a sound wave recorded by a microphone) to a sequence of words (a text file).
Traditional solution: apply Bayes’ Rule. The term that doesn’t matter for picking a good text is ignored; what remains is an acoustic model (easier to train) times a language model.
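The slide’s equation is not captured in the transcript; under the standard noisy-channel formulation it would be (a sketch, with a the acoustic signal and w a candidate word sequence):

\hat{w} = \arg\max_w P(w \mid a) = \arg\max_w \frac{P(a \mid w)\, P(w)}{P(a)} = \arg\max_w P(a \mid w)\, P(w)

Here P(a) is the term that can be ignored (it is the same for every candidate w), P(a | w) is the acoustic model, and P(w) is the language model.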
Importance of Sequence
So far, we’ve been making the exchangeability, or bag-of-words, assumption: the order of words is not important.
It turns out that’s actually not true (duh!):
- “cat mat on the sat” ≠ “the cat sat on the mat”
- “Mary loves John” ≠ “John loves Mary”
Language Models with Sequence Information
Problem: How can we define a model such that
- it assigns probability to sequences of words (a language model),
- the probability depends on the order of the words, and
- the model can be trained and computed tractably?
Outline
- Language Modeling
- N-gram Models
- Hidden Markov Models
  - Supervised parameter estimation
  - Probability of a sequence (decoding)
  - Viterbi (best hidden state sequence)
  - Baum-Welch
- Conditional Random Fields
Smoothing: Kneser-Ney
P(Francisco | eggplant) vs. P(stew | eggplant)
- “Francisco” is common, so backoff and interpolated methods say it is likely.
- But it only occurs in the context of “San”.
- “Stew” is common, and occurs in many contexts.
- Idea: weight the backoff distribution by the number of contexts a word occurs in.
Kneser-Ney smoothing (cont.)
Interpolation and backoff forms (the slide’s equations are not captured in the transcript; a standard form is sketched below).
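For reference, a standard statement of interpolated Kneser-Ney for bigrams (a sketch; not necessarily the exact notation used on the slide) is:

P_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1} w_i) - D,\; 0)}{c(w_{i-1})} + \lambda(w_{i-1})\, P_{cont}(w_i)

P_{cont}(w_i) = \frac{|\{ w' : c(w' w_i) > 0 \}|}{|\{ (w', w) : c(w' w) > 0 \}|}, \qquad \lambda(w_{i-1}) = \frac{D}{c(w_{i-1})}\, |\{ w : c(w_{i-1} w) > 0 \}|

The continuation probability P_cont counts how many distinct contexts a word appears in, which is exactly the “weight backoff by number of contexts” idea from the previous slide. The backoff variant uses the discounted bigram estimate when c(w_{i-1} w_i) > 0 and falls back to P_cont otherwise.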
Outline
- Language Modeling
- N-gram Models
- Hidden Markov Models
  - Supervised parameter estimation
  - Probability of a sequence (decoding)
  - Viterbi (best hidden state sequence)
  - Baum-Welch
- Conditional Random Fields
The Hidden Markov Model
A dynamic Bayes net (dynamic because the size can change).
The O_i nodes are called observed nodes; the S_i nodes are called hidden nodes.
(Figure: a chain of hidden nodes S_1, S_2, …, S_n, each emitting an observed node O_1, O_2, …, O_n.)
HMMs and Language Processing
HMMs have been used in a variety of applications, but especially:
- Speech recognition (hidden nodes are text words, observations are spoken words)
- Part-of-speech tagging (hidden nodes are parts of speech, observations are words)
HMM Independence Assumptions
HMMs assume that:
- S_i is independent of S_1 through S_{i-2}, given S_{i-1} (Markov assumption)
- O_i is independent of all other nodes, given S_i
- P(S_i | S_{i-1}) and P(O_i | S_i) do not depend on i
Not very realistic assumptions about language, but HMMs are often good enough, and very convenient.
HMM Formula
An HMM predicts that the probability of observing a sequence o = <o_1, …, o_n> with a particular set of hidden states s = <s_1, …, s_n> is given by the formula sketched after this list. To calculate it, we need:
- Prior: P(s_1) for all values of s_1
- Observation: P(o_i | s_i) for all values of o_i and s_i
- Transition: P(s_i | s_{i-1}) for all values of s_i and s_{i-1}
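The equation itself is not captured in the transcript; in standard first-order HMM notation the joint probability factorizes as (a sketch):

P(o, s) = P(s_1)\, P(o_1 \mid s_1) \prod_{i=2}^{n} P(s_i \mid s_{i-1})\, P(o_i \mid s_i)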
HMM: Pieces
1) A set of hidden states H = {h_1, …, h_N}: the values which hidden nodes may take.
2) A vocabulary, or set of observation symbols, V = {v_1, …, v_M}: the values which an observed node may take.
3) Initial probabilities P(s_1 = h_i) for all i, written as a vector of N initial probabilities, called π.
4) Transition probabilities P(s_t = h_i | s_{t-1} = h_j) for all i, j, written as an N x N ‘transition matrix’ A.
5) Observation probabilities P(o_t = v_j | s_t = h_i) for all j, i, written as an M x N ‘observation matrix’ B.
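As a concrete illustration (a minimal sketch of my own, not from the slides), the pieces can be stored as NumPy arrays. Note the layout assumptions: rows of A and B index the current (previous) state, so this B is N x M rather than the slide’s M x N, and A[i, j] is P(next = h_j | previous = h_i).

import numpy as np

N, M = 3, 5                    # hypothetical sizes: N hidden states, M vocabulary items
pi = np.full(N, 1.0 / N)       # initial probabilities P(s_1 = h_i), shape (N,)
A = np.full((N, N), 1.0 / N)   # A[i, j] = P(s_t = h_j | s_{t-1} = h_i); each row sums to 1
B = np.full((N, M), 1.0 / M)   # B[i, j] = P(o_t = v_j | s_t = h_i); each row sums to 1

# Sanity check: every row of A and B is a probability distribution.
assert np.allclose(A.sum(axis=1), 1.0) and np.allclose(B.sum(axis=1), 1.0)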
HMM for POS Tagging
1) S = {DT, NN, VB, IN, …}, the set of all POS tags.
2) V = the set of all words in English.
3) Initial probabilities π_i: the probability that POS tag i can start a sentence.
4) Transition probabilities A_ij: the probability that one tag can follow another.
5) Observation probabilities B_ij: the probability that a tag will generate a particular word.
Outline
- Graphical Models
- Hidden Markov Models
  - Supervised parameter estimation
  - Probability of a sequence
  - Viterbi: what’s the best hidden state sequence?
  - Baum-Welch: unsupervised parameter estimation
- Conditional Random Fields
Supervised Parameter Estimation
Given an observation sequence and the corresponding hidden states, find the HMM model (π, A, and B) that is most likely to produce the sequence. For example, POS-tagged data from the Penn Treebank.
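A minimal sketch of what supervised estimation looks like in practice (my own illustrative code, not from the slides): count how often each tag starts a sentence, follows another tag, and emits each word, then normalize the counts into probabilities.

from collections import Counter, defaultdict

def estimate_hmm(tagged_sentences):
    """tagged_sentences: list of sentences, each a list of (word, tag) pairs.
    Returns (pi, A, B) as nested dicts of probabilities."""
    pi_counts = Counter()
    trans_counts = defaultdict(Counter)   # trans_counts[prev_tag][tag]
    emit_counts = defaultdict(Counter)    # emit_counts[tag][word]

    for sent in tagged_sentences:
        tags = [tag for _, tag in sent]
        pi_counts[tags[0]] += 1
        for prev, curr in zip(tags, tags[1:]):
            trans_counts[prev][curr] += 1
        for word, tag in sent:
            emit_counts[tag][word] += 1

    def normalize(counter):
        total = sum(counter.values())
        return {key: count / total for key, count in counter.items()}

    pi = normalize(pi_counts)
    A = {prev: normalize(c) for prev, c in trans_counts.items()}
    B = {tag: normalize(c) for tag, c in emit_counts.items()}
    return pi, A, B

# Example: estimate_hmm([[("the", "DT"), ("cat", "NN"), ("sat", "VB")]])

In practice these relative-frequency estimates would also be smoothed, for the same reasons discussed in the n-gram section.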
Bayesian Parameter Estimation
(The slide’s formulas and figure are not captured in this transcript.)
Outline
- Graphical Models
- Hidden Markov Models
  - Supervised parameter estimation
  - Probability of a sequence
  - Viterbi
  - Baum-Welch
- Conditional Random Fields
What’s the probability of a sentence?
Suppose I asked you, ‘What’s the probability of seeing a sentence w_1, …, w_T on the web?’
If we have an HMM model of English, we can use it to estimate the probability. (In other words, HMMs can be used as language models.)
Conditional Probability of a Sentence If we knew the hidden states that generated each word in the sentence, it would be easy:
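The slide’s equation is not in the transcript; given the HMM independence assumptions, the conditional probability factorizes as (a sketch):

P(w_1, \ldots, w_T \mid s_1, \ldots, s_T) = \prod_{t=1}^{T} P(w_t \mid s_t)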
Probability of a Sentence
Via marginalization over the hidden states, we have the sum sketched below.
Unfortunately, if there are N possible values for each hidden state, then there are N^T possible state sequences s_1, …, s_T. Brute-force computation of this sum is intractable.
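The marginalization itself is not captured in the transcript; in standard notation it is (a sketch):

P(w_1, \ldots, w_T) = \sum_{s_1, \ldots, s_T} P(s_1)\, P(w_1 \mid s_1) \prod_{t=2}^{T} P(s_t \mid s_{t-1})\, P(w_t \mid s_t)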
Forward Procedure
The HMM’s special structure gives us an efficient solution using dynamic programming.
Intuition: the probability of the first t observations is the same for all possible t+1-length state sequences.
Define:
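The slide’s definitions are shown only as images; the standard forward variable and recursion are (a sketch):

\alpha_t(i) = P(o_1, \ldots, o_t,\; s_t = h_i)

\alpha_1(i) = P(s_1 = h_i)\, P(o_1 \mid s_1 = h_i)

\alpha_{t+1}(j) = \Big( \sum_{i=1}^{N} \alpha_t(i)\, P(s_{t+1} = h_j \mid s_t = h_i) \Big)\, P(o_{t+1} \mid s_{t+1} = h_j)

P(o_1, \ldots, o_T) = \sum_{i=1}^{N} \alpha_T(i)

Each step costs O(N^2), so the whole computation is O(N^2 T) rather than O(N^T).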
Backward Procedure
The probability of the rest of the observations, given the current state.
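The backward definitions are likewise only on the slide images; the standard form is (a sketch):

\beta_t(i) = P(o_{t+1}, \ldots, o_T \mid s_t = h_i)

\beta_T(i) = 1

\beta_t(i) = \sum_{j=1}^{N} P(s_{t+1} = h_j \mid s_t = h_i)\, P(o_{t+1} \mid s_{t+1} = h_j)\, \beta_{t+1}(j)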
Decoding Solution
- Forward procedure
- Backward procedure
- Combination
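The three formulas are not captured in the transcript; the standard identities say that the sequence probability can be computed from either quantity alone, or by combining them at any time step t (a sketch):

P(o_1, \ldots, o_T) = \sum_{i=1}^{N} \alpha_T(i) = \sum_{i=1}^{N} P(s_1 = h_i)\, P(o_1 \mid s_1 = h_i)\, \beta_1(i) = \sum_{i=1}^{N} \alpha_t(i)\, \beta_t(i)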
Outline
- Graphical Models
- Hidden Markov Models
  - Supervised parameter estimation
  - Probability of a sequence
  - Viterbi: what’s the best hidden state sequence?
  - Baum-Welch
- Conditional Random Fields
Best State Sequence
Find the hidden state sequence that best explains the observations: the Viterbi algorithm.
Viterbi Algorithm
The state sequence which maximizes the probability of seeing the observations to time t-1, landing in state j, and seeing the observation at time t.
Viterbi Algorithm
Recursive computation.
Viterbi Algorithm
Compute the most likely state sequence by working backwards.
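Since the slide’s recurrences are shown only as images, here is a minimal sketch of the Viterbi algorithm (my own illustrative code; pi, A, B are nested dicts in the same format as the supervised-estimation sketch above):

def viterbi(obs, states, pi, A, B):
    """Return the most likely hidden state sequence for the observations obs."""
    # delta[t][s]: probability of the best state sequence ending in state s at time t
    # back[t][s]:  the predecessor state that achieves that probability
    delta = [{s: pi.get(s, 0.0) * B.get(s, {}).get(obs[0], 0.0) for s in states}]
    back = [{}]

    for t in range(1, len(obs)):
        delta.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (delta[t - 1][p] * A.get(p, {}).get(s, 0.0) * B.get(s, {}).get(obs[t], 0.0), p)
                for p in states
            )
            delta[t][s] = prob
            back[t][s] = prev

    # Work backwards from the best final state, following the stored predecessors.
    best_last = max(delta[-1], key=delta[-1].get)
    path = [best_last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

In practice this would be done with log probabilities to avoid numerical underflow on long sequences.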
Outline
- Graphical Models
- Hidden Markov Models
  - Supervised parameter estimation
  - Probability of a sequence
  - Viterbi
  - Baum-Welch: unsupervised parameter estimation
- Conditional Random Fields
Unsupervised Parameter Estimation
Given an observation sequence, find the model that is most likely to produce that sequence.
There is no analytic method. Instead, given a model and an observation sequence, update the model parameters to better fit the observations.
Parameter Estimation
- Probability of traversing an arc
- Probability of being in state i
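These two quantities appear on the slide only as images; in terms of the forward and backward probabilities, the standard definitions are (a sketch):

\xi_t(i, j) = \frac{\alpha_t(i)\, P(s_{t+1} = h_j \mid s_t = h_i)\, P(o_{t+1} \mid s_{t+1} = h_j)\, \beta_{t+1}(j)}{P(o_1, \ldots, o_T)}

\gamma_t(i) = \frac{\alpha_t(i)\, \beta_t(i)}{P(o_1, \ldots, o_T)}

Here ξ_t(i, j) is the probability of traversing the arc from state i to state j at time t, and γ_t(i) is the probability of being in state i at time t.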
Parameter Estimation
Now we can compute the new estimates of the model parameters.
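The re-estimation formulas are again only on the slide images; the standard Baum-Welch updates are (a sketch):

\hat{\pi}_i = \gamma_1(i)

\hat{P}(h_j \mid h_i) = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_t(i)} \qquad \hat{P}(v_k \mid h_i) = \frac{\sum_{t \,:\, o_t = v_k} \gamma_t(i)}{\sum_{t=1}^{T} \gamma_t(i)}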
Parameter Estimation
Guarantee: P(o_{1:T} | A, B, π) ≤ P(o_{1:T} | Â, B̂, π̂).
In other words, by repeating this procedure, we can gradually improve how well the HMM fits the unlabeled data. There is no guarantee that this will converge to the best possible HMM, however (it is only guaranteed to find a local maximum).
The Most Important Thing
We can use the special structure of this model to do a lot of neat math and solve problems that are otherwise not tractable.