Slide 1
Lecture 16, CS567
Hidden Markov Models (“Carnivals with High Walls”)
–States (“Stalls”)
–Emission probabilities (“Odds”)
–Transitions (“Routes”)
–Sequences (“Prize kabobs”)
General question: Given a subset of the above, determine another subset
General algorithmic principles: Probabilistic, Optimality, Iterative
Slide 2
Lecture 16, CS567
“How lucky was your trip to the carnival?”
Given: HMM consisting of
–Set of states {k}
–Emission probabilities e_b(k): probability of emitting symbol b from state k
–Transition probabilities t_kl: probability of making a transition from state k to state l (1st-order Markov chain)
AND
–Path R_s: sequence of states visited in generating sequence s of symbols
Question:
–What is P(s), the probability of observing sequence s along this path?
Ans:
–P(s | R_s, θ) = t_{0,k_1} · Π_i [ e_{b_i}(k_i) · t_{k_i,k_{i+1}} ], where k_i is the state at position i and b_i is the symbol emitted there
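As a concrete illustration (not from the slides), here is a minimal Python sketch of this computation for a hypothetical two-state “Fair”/“Loaded” coin HMM; all parameter values, names, and the example sequence are made up.

```python
# A sketch of computing the probability of a sequence along a KNOWN path:
# P(s | R_s, theta) = t_{0,k1} * prod_i e_{b_i}(k_i) * t_{k_i,k_{i+1}}.
# The Fair/Loaded coin HMM below is a hypothetical example.

start = {"F": 0.5, "L": 0.5}                      # t_{0k}: begin -> state k
trans = {"F": {"F": 0.9, "L": 0.1},               # t_{kl}
         "L": {"F": 0.2, "L": 0.8}}
emit  = {"F": {"H": 0.5, "T": 0.5},               # e_b(k)
         "L": {"H": 0.9, "T": 0.1}}

def path_probability(symbols, path):
    """Multiply the initial transition, each emission, and each transition."""
    p = start[path[0]]
    for i, (sym, state) in enumerate(zip(symbols, path)):
        p *= emit[state][sym]
        if i + 1 < len(path):
            p *= trans[state][path[i + 1]]
    return p

print(path_probability("HHTH", "FFLL"))
```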
Slide 3
Lecture 16, CS567
“I want to win the same things. I need to figure out the route you took!”
Given: HMM consisting of
–{k}
–e_b(k)
–t_kl
AND
–Sequence s
Question:
–What is R_s^max, the most probable path that was used to generate sequence s?
Ans:
–R_s^max = argmax_R P(s | R, θ)
–Viterbi decoding algorithm (dynamic programming)
–Decoding = figuring out the underlying states from the symbols
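A sketch of Viterbi decoding for the same hypothetical coin HMM; working in log space is a standard trick to avoid numerical underflow on long sequences, and the function name and parameter values are illustrative only.

```python
import math

# Viterbi: dynamic programming over v[k] = log prob of the best path that
# emits the prefix so far and ends in state k, with backpointers for traceback.

start = {"F": 0.5, "L": 0.5}
trans = {"F": {"F": 0.9, "L": 0.1}, "L": {"F": 0.2, "L": 0.8}}
emit  = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

def viterbi(symbols):
    """Return (log probability of the best path, best path as a string)."""
    states = list(start)
    v = {k: math.log(start[k]) + math.log(emit[k][symbols[0]]) for k in states}
    back = []                                      # back[i][l] = best predecessor of l
    for sym in symbols[1:]:
        v_new, ptr = {}, {}
        for l in states:
            best_k = max(states, key=lambda k: v[k] + math.log(trans[k][l]))
            v_new[l] = v[best_k] + math.log(trans[best_k][l]) + math.log(emit[l][sym])
            ptr[l] = best_k
        v, back = v_new, back + [ptr]
    last = max(states, key=lambda k: v[k])         # best final state
    path = [last]
    for ptr in reversed(back):                     # trace the pointers backwards
        path.append(ptr[path[-1]])
    return v[last], "".join(reversed(path))

print(viterbi("HHHHTTTT"))
```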
Slide 4
Lecture 16, CS567
“What are the general odds of winning this prize kabob?”
Given: HMM consisting of
–{k}
–e_b(k)
–t_kl
AND
–Sequence s
Question:
–What is the probability of observing sequence s, no matter which path was taken?
Ans:
–P(s | θ) = Σ_i P(s | R_{s,i}, θ), where i indexes the paths that can result in s
–Forward algorithm (dynamic programming)
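A sketch of the forward algorithm for the same hypothetical HMM. The table entry f_k(i) below denotes the probability of emitting the first i symbols and ending in state k; summing the last column gives P(s | θ) without enumerating paths.

```python
# Forward algorithm: f_k(i) = e_{b_i}(k) * sum_l f_l(i-1) * t_{lk},
# and P(s | theta) = sum_k f_k(L). Parameters are hypothetical.

start = {"F": 0.5, "L": 0.5}
trans = {"F": {"F": 0.9, "L": 0.1}, "L": {"F": 0.2, "L": 0.8}}
emit  = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

def forward(symbols):
    """Return (P(s | theta), forward table), summed over all paths."""
    states = list(start)
    f = [{k: start[k] * emit[k][symbols[0]] for k in states}]
    for sym in symbols[1:]:
        f.append({k: emit[k][sym] * sum(f[-1][l] * trans[l][k] for l in states)
                  for k in states})
    return sum(f[-1].values()), f

p, _ = forward("HHTH")
print(p)
```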
Slide 5
Lecture 16, CS567
“Did he get the Kohinoor from stall k?”
Given: HMM consisting of
–{k}
–e_b(k)
–t_kl
AND
–Sequence s
Question:
–What is the probability that the symbol at position i of sequence s was emitted by state k? (More generally, “I wonder which stall he got the Kohinoor from…”)
Ans:
–P(k_i = k | s, θ) = probability of the path going through state k at the i-th position of the sequence
 = P(b_1, b_2, …, b_i, k_i = k) · P(b_{i+1}, b_{i+2}, …, b_L | k_i = k) / P(s | θ)
–Posterior probability (“Now that I have the sequence…”), estimated by the backward algorithm in combination with the forward algorithm (dynamic programming)
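A sketch of posterior decoding for the same hypothetical HMM. The two factors in the formula above are exactly the forward entry f_k(i) and the backward entry b_k(i) = P(b_{i+1}…b_L | k_i = k); all names below are illustrative.

```python
# Posterior: P(k_i = k | s, theta) = f_k(i) * b_k(i) / P(s | theta).

start = {"F": 0.5, "L": 0.5}
trans = {"F": {"F": 0.9, "L": 0.1}, "L": {"F": 0.2, "L": 0.8}}
emit  = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

def forward(symbols):
    states = list(start)
    f = [{k: start[k] * emit[k][symbols[0]] for k in states}]
    for sym in symbols[1:]:
        f.append({k: emit[k][sym] * sum(f[-1][l] * trans[l][k] for l in states)
                  for k in states})
    return f

def backward(symbols):
    states = list(start)
    b = [{k: 1.0 for k in states}]                 # b_k(L) = 1 by convention
    for sym in reversed(symbols[1:]):              # fill the table right to left
        b.insert(0, {k: sum(trans[k][l] * emit[l][sym] * b[0][l] for l in states)
                     for k in states})
    return b

def posterior(symbols, i, state):
    """Probability that position i (0-based) was emitted by `state`."""
    f, b = forward(symbols), backward(symbols)
    p_s = sum(f[-1].values())                      # P(s | theta)
    return f[i][state] * b[i][state] / p_s

print(posterior("HHTH", 2, "L"))
```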
Slide 6
Lecture 16, CS567
“Guys, show me your kabobs, and I shall tell you how to win the best ones”
Given: HMM consisting of
–{k} BUT unknown e_b(k) and t_kl
AND
–Set of sequences {s} and their respective paths {R_s}
Question:
–What are the t_kl and e_b(k)? (“The arrangement of stalls in the carnival and the odds of winning at each stall”)
Ans:
–Maximum likelihood (“frequentist”) approach (with the usual déjà vu caveat)
 e_{b=x}(k) = N_{k,b=x} / Σ_b N_{k,b}
 t_kl = N_{kl} / Σ_j N_{kj}
–Pseudocounts recommended if the data set is small
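A sketch of these counting formulas in Python, with a pseudocount of 1 as the slide recommends for small data sets; the training pairs are made up.

```python
from collections import Counter

# ML estimation with known paths: e_{b=x}(k) = N_{k,x} / sum_b N_{k,b}
# and t_kl = N_{kl} / sum_j N_{kj}, with pseudocounts added to every count.

training = [("HHTH", "FFLL"), ("THHH", "FLLL")]   # hypothetical (symbols, path) pairs
PSEUDO = 1
states, alphabet = "FL", "HT"

emit_counts, trans_counts = Counter(), Counter()
for symbols, path in training:
    emit_counts.update(zip(path, symbols))         # N_{k,b}: state emitted symbol
    trans_counts.update(zip(path, path[1:]))       # N_{kl}: state k followed by l

emit = {k: {x: (emit_counts[(k, x)] + PSEUDO) /
               (sum(emit_counts[(k, b)] for b in alphabet) + PSEUDO * len(alphabet))
            for x in alphabet} for k in states}
trans = {k: {l: (trans_counts[(k, l)] + PSEUDO) /
                (sum(trans_counts[(k, j)] for j in states) + PSEUDO * len(states))
             for l in states} for k in states}
print(emit, trans, sep="\n")
```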
Slide 7
Lecture 16, CS567
“You mean you got drunk and don’t remember where you got what?!”
Given: HMM consisting of
–Set of states {k}
–BUT unknown e_b(k) and t_kl
AND
–Set of sequences {s}
–[i.e., a truly hidden Markov model]
Question:
–What are the t_kl and e_b(k)? (“The arrangement of stalls in the carnival and the odds of winning at each stall”)
Ans:
–A. Maximum likelihood (“frequentist”) approach (with the usual déjà vu caveat), counting over the current guesses for the paths:
 e_{b=x}(k) = N_{k,b=x} / Σ_b N_{k,b}
 t_kl = N_{kl} / Σ_j N_{kj}
Slide 8
Lecture 16, CS567
“You mean you got drunk and don’t remember where you got what?!”
Ans (contd.):
–B. Now use the estimates of the transition and emission probabilities to calculate the most probable path for each sequence
–Iterate between A and B until the parameters converge
–(Pseudocounts/Dirichlet priors recommended if the data set is small)
–Baum–Welch expectation-maximization (EM) algorithm
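The A/B loop can be sketched as below, reusing the ideas from the earlier snippets on a hypothetical data set. One caveat worth flagging: iterating between hard counts along the single most probable path (A) and re-decoding (B), as described here, is usually called Viterbi training; full Baum–Welch EM replaces the hard counts in step A with expected counts computed from the forward and backward tables.

```python
import math
from collections import Counter

# Iterative training sketch (Viterbi-training variant) for the hypothetical
# Fair/Loaded coin HMM. Sequences and starting parameters are made up.

STATES, ALPHABET, PSEUDO = "FL", "HT", 1

def viterbi(seq, start, trans, emit):
    """Step B: most probable path under the current parameters (log space)."""
    v = {k: math.log(start[k]) + math.log(emit[k][seq[0]]) for k in STATES}
    back = []
    for sym in seq[1:]:
        v_new, ptr = {}, {}
        for l in STATES:
            k = max(STATES, key=lambda k: v[k] + math.log(trans[k][l]))
            v_new[l] = v[k] + math.log(trans[k][l]) + math.log(emit[l][sym])
            ptr[l] = k
        v, back = v_new, back + [ptr]
    last = max(STATES, key=lambda k: v[k])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return "".join(reversed(path))

def estimate(pairs):
    """Step A: pseudocounted ML estimates from (sequence, path) pairs."""
    ec, tc = Counter(), Counter()
    for seq, path in pairs:
        ec.update(zip(path, seq))
        tc.update(zip(path, path[1:]))
    emit = {k: {x: (ec[(k, x)] + PSEUDO) /
                   (sum(ec[(k, b)] for b in ALPHABET) + PSEUDO * len(ALPHABET))
                for x in ALPHABET} for k in STATES}
    trans = {k: {l: (tc[(k, l)] + PSEUDO) /
                    (sum(tc[(k, j)] for j in STATES) + PSEUDO * len(STATES))
                 for l in STATES} for k in STATES}
    return trans, emit

def train(seqs, trans, emit, iters=50):
    start = {k: 1 / len(STATES) for k in STATES}   # kept fixed for simplicity
    for _ in range(iters):                          # iterate B then A
        decoded = [viterbi(s, start, trans, emit) for s in seqs]
        trans, emit = estimate(list(zip(seqs, decoded)))
    return trans, emit

seqs = ["HHHHHTHT", "TTHTHHHH"]                     # hypothetical unlabeled data
trans0 = {"F": {"F": 0.8, "L": 0.2}, "L": {"F": 0.2, "L": 0.8}}
emit0 = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.8, "T": 0.2}}
print(train(seqs, trans0, emit0))
```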