Hidden Markov Model
Dongfang Xu, School of Information
Outline
- Markov Model: concept discrimination
- Hidden Markov Model: notation & example; three basic problems
- Part-of-speech tag example: goal & idea
Markov Model: Concept
A Markov model is a stochastic model of a linear sequence of events that satisfies the Markov property.
Markov property: the conditional probability distribution of future states depends only on the present state, not on the sequence of events that preceded it.
Markov Model: Four Kinds
Markov models are classified by whether the system state is fully or partially observable, and whether the system is autonomous or controlled:
- Fully observable, autonomous: Markov Chain
- Partially observable, autonomous: Hidden Markov Model
- Fully observable, controlled: Markov Decision Process
- Partially observable, controlled: Partially Observable Markov Decision Process
Markov Model: Markov Chain (Visible Markov Model)
Two states: 'Rain' and 'Dry'.
Transition probabilities: P('Rain'|'Rain') = 0.3, P('Dry'|'Rain') = 0.7, P('Rain'|'Dry') = 0.2, P('Dry'|'Dry') = 0.8.
Initial probabilities: say P('Rain') = 0.4, P('Dry') = 0.6.
The entire state of the Markov model is visible at every step.
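Because the whole state is visible, the probability of any state sequence is just the initial probability times a chain of transition probabilities. A minimal Python sketch of this weather example (the function name `chain_prob` is my own, not from the slides):

```python
# Markov chain for the weather example: states 'Rain' and 'Dry'.
# Initial and transition probabilities are taken from the slide.
init = {"Rain": 0.4, "Dry": 0.6}
trans = {
    ("Rain", "Rain"): 0.3, ("Rain", "Dry"): 0.7,
    ("Dry", "Rain"): 0.2, ("Dry", "Dry"): 0.8,
}

def chain_prob(seq):
    """P(s1, ..., sT) = P(s1) * product of P(s_t | s_{t-1})."""
    p = init[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        p *= trans[(prev, cur)]
    return p

# P('Dry', 'Dry', 'Rain') = 0.6 * 0.8 * 0.2 = 0.096
print(chain_prob(["Dry", "Dry", "Rain"]))
```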
Markov Model: Hidden Markov Model
Hidden states: 'Low' and 'High'; observations: 'Rain' and 'Dry'.
1. The states themselves are hidden (unobserved).
2. Each observation is probabilistically related to the hidden state.
Transition probabilities: P('Low'|'Low') = 0.3, P('High'|'Low') = 0.7, P('Low'|'High') = 0.2, P('High'|'High') = 0.8.
Emission probabilities: P('Rain'|'Low') = 0.6, P('Dry'|'Low') = 0.4, P('Rain'|'High') = 0.4, P('Dry'|'High') = 0.6.
Hidden Markov Model: Problem Notation
Set of states: S = {s1, ..., sN}.
Output alphabet: K = {v1, ..., vM}.
State transition probabilities: A = {aij}, aij = P(sj | si), i, j ∈ S.
Symbol emission probabilities: B = {bi(vm)}, bi(vm) = P(vm | si).
Initial state probabilities: Π = (πi), πi = P(si).
State sequence: X = (X1, ..., XT+1), Xt : S → {1, ..., N}.
Output sequence: O = (o1, ..., oT), ot ∈ K.
Suppose we want to calculate the probability of a sequence of observations in our example, {'Dry', 'Rain'}.
Hidden Markov Model: Calculation of Observation Sequence Probability
Consider all possible hidden state sequences:
P({'Dry','Rain'}) = P({'Dry','Rain'}, {'Low','Low'}) + P({'Dry','Rain'}, {'Low','High'}) + P({'Dry','Rain'}, {'High','Low'}) + P({'Dry','Rain'}, {'High','High'})
where the first term is:
P({'Dry','Rain'}, {'Low','Low'}) = P({'Dry','Rain'} | {'Low','Low'}) P({'Low','Low'})
= P('Low') P('Dry'|'Low') P('Low'|'Low') P('Rain'|'Low')
= 0.4 × 0.4 × 0.3 × 0.6 = 0.0288
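This enumeration can be checked with a short brute-force sketch in Python, using the initial, transition, and emission probabilities from the example (the function name `obs_prob` is my own):

```python
from itertools import product

# Weather HMM from the slides: hidden states 'Low'/'High', observations 'Dry'/'Rain'.
init = {"Low": 0.4, "High": 0.6}
trans = {("Low", "Low"): 0.3, ("Low", "High"): 0.7,
         ("High", "Low"): 0.2, ("High", "High"): 0.8}
emit = {("Low", "Dry"): 0.4, ("Low", "Rain"): 0.6,
        ("High", "Dry"): 0.6, ("High", "Rain"): 0.4}

def obs_prob(obs):
    """Sum the joint probability over every possible hidden state sequence."""
    total = 0.0
    for path in product(init, repeat=len(obs)):
        p = init[path[0]] * emit[(path[0], obs[0])]
        for t in range(1, len(obs)):
            p *= trans[(path[t - 1], path[t])] * emit[(path[t], obs[t])]
        total += p
    return total

# 0.0288 + 0.0448 + 0.0432 + 0.1152 = 0.232
print(obs_prob(["Dry", "Rain"]))
```

Enumerating all N^T hidden paths is exponential in the sequence length; the forward algorithm computes the same sum in O(N²T) time, which is why it is used in practice.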
Hidden Markov Model: Three Basic Problems
Evaluation problem. Given the HMM M = (A, B, Π) and the observation sequence O = o1 o2 ... oT, calculate the probability that model M has generated sequence O.
Decoding problem. Given the HMM M = (A, B, Π) and the observation sequence O = o1 o2 ... oT, find the most likely sequence of hidden states that produced the observation sequence O.
Learning problem. Given some training observation sequences and the general structure of the HMM (numbers of hidden and visible states), determine the HMM parameters M = (A, B, Π) that best fit the training data.
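The decoding problem is typically solved with the Viterbi algorithm, a dynamic program that keeps only the best path into each state at each time step. A minimal sketch using the weather HMM from the earlier slides (not code from the deck):

```python
# Weather HMM from the earlier slides.
init = {"Low": 0.4, "High": 0.6}
trans = {("Low", "Low"): 0.3, ("Low", "High"): 0.7,
         ("High", "Low"): 0.2, ("High", "High"): 0.8}
emit = {("Low", "Dry"): 0.4, ("Low", "Rain"): 0.6,
        ("High", "Dry"): 0.6, ("High", "Rain"): 0.4}

def viterbi(obs):
    """Most likely hidden state sequence for obs (the decoding problem)."""
    states = list(init)
    # delta[s]: probability of the best path ending in state s
    delta = {s: init[s] * emit[(s, obs[0])] for s in states}
    back = []  # backpointers, one dict per time step after the first
    for o in obs[1:]:
        prev, delta, ptr = delta, {}, {}
        for s in states:
            best = max(states, key=lambda r: prev[r] * trans[(r, s)])
            ptr[s] = best
            delta[s] = prev[best] * trans[(best, s)] * emit[(s, o)]
        back.append(ptr)
    last = max(delta, key=delta.get)
    path = [last]
    for ptr in reversed(back):  # follow backpointers from the end
        path.append(ptr[path[-1]])
    return path[::-1], delta[last]

# For {'Dry','Rain'} the best path is High, High with joint probability 0.1152
print(viterbi(["Dry", "Rain"]))
```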
POS Tag Example: Goal & Idea
Goal: find the most probable tag sequence for a sequence of words.
Two assumptions about the words:
- words are independent of each other;
- a word's identity depends only on its own tag.
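Under these two assumptions, the goal reduces to the standard bigram-HMM tagging objective (a standard formulation, not copied from the slides; t_0 is a dummy start tag):

```latex
\hat{t}_{1,\ldots,n}
  = \operatorname*{arg\,max}_{t_1,\ldots,t_n} P(t_1,\ldots,t_n \mid w_1,\ldots,w_n)
  = \operatorname*{arg\,max}_{t_1,\ldots,t_n} \prod_{i=1}^{n} P(w_i \mid t_i)\, P(t_i \mid t_{i-1})
```

Here the tags play the role of hidden states and the words play the role of observations.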
POS Tag Example: Simplified Formulation
Observation wi: the word at position i in the corpus.
State ti: the tag of wi.
State transition probabilities: P(tk | tj) = C(tj, tk) / C(tj).
Symbol emission probabilities: P(wl | tj) = C(wl : tj) / C(tj).
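These count-based (maximum likelihood) estimates can be sketched on a tiny hypothetical tagged corpus; the words, tags, and counts below are illustrative and not from the slides:

```python
from collections import Counter

# Hypothetical tagged corpus: (word, tag) pairs in running order.
corpus = [("the", "DT"), ("dog", "NN"), ("runs", "VB"),
          ("the", "DT"), ("cat", "NN"), ("sleeps", "VB")]

tag_count = Counter(tag for _, tag in corpus)                  # C(tj)
bigram_count = Counter((corpus[i][1], corpus[i + 1][1])        # C(tj, tk)
                       for i in range(len(corpus) - 1))
word_tag_count = Counter(corpus)                               # C(wl : tj)

def p_trans(tk, tj):
    """P(tk | tj) = C(tj, tk) / C(tj)"""
    return bigram_count[(tj, tk)] / tag_count[tj]

def p_emit(w, tj):
    """P(w | tj) = C(w : tj) / C(tj)"""
    return word_tag_count[(w, tj)] / tag_count[tj]

print(p_trans("NN", "DT"))  # C(DT, NN) / C(DT) = 2 / 2 = 1.0
print(p_emit("dog", "NN"))  # C(dog : NN) / C(NN) = 1 / 2 = 0.5
```

Raw relative frequencies like these assign zero probability to unseen events, which is exactly the data sparsity issue raised on the next slide.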
POS Tag Example: Discussion Points
- Bigram model: P(tk | tj)
- Data sparsity: unseen and rare words in the corpus
- Training corpus size and computational complexity
- A combination of HMM and visible Markov model

References:
Manning, C. D., & Schütze, H. (1999). Foundations of Statistical Natural Language Processing. MIT Press.
David D. (2003). Introduction to Hidden Markov Models [PowerPoint slides].
Q&A Thank you!