1
Hidden Markov Models Part 2: Algorithms
CSE 6363 - Machine Learning
Vassilis Athitsos
Computer Science and Engineering Department
University of Texas at Arlington
2
Hidden Markov Model
An HMM consists of:
- A set of states $s_1, \dots, s_K$.
- An initial state probability function $\pi_k = p(z_1 = s_k)$.
- A state transition matrix $A$, of size $K \times K$, where $A_{j,k} = p(z_n = s_k \mid z_{n-1} = s_j)$.
- Observation probability functions, also called emission probabilities, defined as: $\varphi_k(x) = p(x_n = x \mid z_n = s_k)$.
3
The Basic HMM Problems
We will next see algorithms for the four problems that we usually want to solve when using HMMs.
Problem 1: learn an HMM using training data. This involves:
- Learning the initial state probabilities $\pi_k$.
- Learning the state transition matrix $A$.
- Learning the observation probability functions $\varphi_k(x)$.
Problems 2, 3, 4 assume we have already learned an HMM.
Problem 2: given a sequence of observations $X = (x_1, \dots, x_N)$, compute $p(X \mid \text{HMM})$: the probability of $X$ given the model.
Problem 3: given an observation sequence $X = (x_1, \dots, x_N)$, find the hidden state sequence $Z = (z_1, \dots, z_N)$ maximizing $p(Z \mid X, \text{HMM})$.
Problem 4: given an observation sequence $X = (x_1, \dots, x_N)$, compute, for any $n$ and $k$, the probability $p(z_n = s_k \mid X, \text{HMM})$.
4
The Basic HMM Problems
Problem 1: learn an HMM using training data.
Problem 2: given a sequence of observations $X = (x_1, \dots, x_N)$, compute $p(X \mid \text{HMM})$: the probability of $X$ given the model.
Problem 3: given an observation sequence $X = (x_1, \dots, x_N)$, find the hidden state sequence $Z = (z_1, \dots, z_N)$ maximizing $p(Z \mid X, \text{HMM})$.
Problem 4: given an observation sequence $X = (x_1, \dots, x_N)$, compute, for any $n$ and $k$, the probability $p(z_n = s_k \mid X, \text{HMM})$.
We will first look at algorithms for problems 2, 3, 4. Last, we will look at the standard algorithm for learning an HMM using training data.
5
Probability of Observations
Problem 2: given an HMM model $\theta$, and a sequence of observations $X = (x_1, \dots, x_N)$, compute $p(X \mid \theta)$: the probability of $X$ given the model.
Inputs:
- $\theta = (\pi, A, \varphi_k)$: a trained HMM, with probability functions $\pi$, $A$, $\varphi_k$ specified.
- A sequence of observations $X = (x_1, \dots, x_N)$.
Output: $p(X \mid \theta)$.
6
Probability of Observations
Problem 2: given a sequence of observations $X = (x_1, \dots, x_N)$, compute $p(X \mid \theta)$.
Why do we care about this problem? What would be an example application?
7
Probability of Observations
Problem 2: given a sequence of observations $X = (x_1, \dots, x_N)$, compute $p(X \mid \theta)$.
Why do we care about this problem? We need to compute $p(X \mid \theta)$ to classify $X$.
Suppose we have multiple models $\theta_1, \dots, \theta_C$. For example, we can have one model for digit 2, one model for digit 3, one model for digit 4, and so on.
A Bayesian classifier classifies $X$ by finding the $\theta_c$ that maximizes $p(\theta_c \mid X)$.
Using Bayes rule, we must maximize $\frac{p(X \mid \theta_c)\, p(\theta_c)}{p(X)}$.
Therefore, we need to compute $p(X \mid \theta_c)$ for each $c$.
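The following Python sketch (not part of the original slides) illustrates this classification rule. The function `sequence_likelihood` is a hypothetical placeholder for a routine that returns $p(X \mid \theta_c)$; it can be implemented with the forward algorithm presented later.

```python
import numpy as np

def classify_sequence(models, priors, X, sequence_likelihood):
    """Bayesian classification of an observation sequence X with per-class HMMs.

    models: list of per-class HMMs theta_1, ..., theta_C
    priors: list of class priors p(theta_c)
    sequence_likelihood(model, X): hypothetical helper returning p(X | theta_c)
    Returns the class index c maximizing p(X | theta_c) * p(theta_c), which is
    equivalent to maximizing p(theta_c | X), since p(X) does not depend on c.
    """
    scores = [sequence_likelihood(model, X) * prior
              for model, prior in zip(models, priors)]
    return int(np.argmax(scores))
```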
8
The Sum Rule
The sum rule, which we saw earlier in the course, states that:
$p(X) = \sum_{y \in \mathcal{Y}} p(X, Y = y)$
We can use the sum rule to compute $p(X \mid \theta)$ as follows:
$p(X \mid \theta) = \sum_{Z} p(X, Z \mid \theta)$
According to this formula, we can compute $p(X \mid \theta)$ by summing $p(X, Z \mid \theta)$ over all possible state sequences $Z$.
9
The Sum Rule
$p(X \mid \theta) = \sum_{Z} p(X, Z \mid \theta)$
According to this formula, we can compute $p(X \mid \theta)$ by summing $p(X, Z \mid \theta)$ over all possible state sequences $Z$.
An HMM with parameters $\theta$ specifies a joint distribution $p(X, Z \mid \theta)$ as:
$p(X, Z \mid \theta) = p(z_1 \mid \theta) \left[\prod_{n=2}^{N} p(z_n \mid z_{n-1}, \theta)\right] \left[\prod_{n=1}^{N} p(x_n \mid z_n, \theta)\right]$
Therefore:
$p(X \mid \theta) = \sum_{Z} p(z_1 \mid \theta) \left[\prod_{n=2}^{N} p(z_n \mid z_{n-1}, \theta)\right] \left[\prod_{n=1}^{N} p(x_n \mid z_n, \theta)\right]$
10
The Sum Rule
$p(X \mid \theta) = \sum_{Z} p(z_1 \mid \theta) \left[\prod_{n=2}^{N} p(z_n \mid z_{n-1}, \theta)\right] \left[\prod_{n=1}^{N} p(x_n \mid z_n, \theta)\right]$
According to this formula, we can compute $p(X \mid \theta)$ by summing $p(X, Z \mid \theta)$ over all possible state sequences $Z$.
What would be the time complexity of doing this computation directly? It would be linear in the number of possible state sequences $Z$, which is typically exponential in $N$ (the length of $X$).
Luckily, there is a polynomial-time algorithm for computing $p(X \mid \theta)$ that uses dynamic programming.
11
Dynamic Programming for $p(X \mid \theta)$
$p(X \mid \theta) = \sum_{Z} p(z_1 \mid \theta) \left[\prod_{n=2}^{N} p(z_n \mid z_{n-1}, \theta)\right] \left[\prod_{n=1}^{N} p(x_n \mid z_n, \theta)\right]$
To compute $p(X \mid \theta)$, we use a dynamic programming algorithm called the forward algorithm.
We define a 2D array of problems, of size $N \times K$:
- $N$: length of observation sequence $X$.
- $K$: number of states in the HMM.
Problem $(n, k)$: compute $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
12
The Forward Algorithm - Initialization
$p(X \mid \theta) = \sum_{Z} p(z_1 \mid \theta) \left[\prod_{n=2}^{N} p(z_n \mid z_{n-1}, \theta)\right] \left[\prod_{n=1}^{N} p(x_n \mid z_n, \theta)\right]$
Problem $(n, k)$: compute $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
Problem $(1, k)$: ???
13
The Forward Algorithm - Initialization
$p(X \mid \theta) = \sum_{Z} p(z_1 \mid \theta) \left[\prod_{n=2}^{N} p(z_n \mid z_{n-1}, \theta)\right] \left[\prod_{n=1}^{N} p(x_n \mid z_n, \theta)\right]$
Problem $(n, k)$: compute $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
Problem $(1, k)$: compute $\alpha[1,k] = p(x_1, z_1 = s_k \mid \theta)$.
Solution: ???
14
The Forward Algorithm - Initialization
$p(X \mid \theta) = \sum_{Z} p(z_1 \mid \theta) \left[\prod_{n=2}^{N} p(z_n \mid z_{n-1}, \theta)\right] \left[\prod_{n=1}^{N} p(x_n \mid z_n, \theta)\right]$
Problem $(n, k)$: compute $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
Problem $(1, k)$: compute $\alpha[1,k] = p(x_1, z_1 = s_k \mid \theta)$.
Solution: $\alpha[1,k] = p(x_1, z_1 = s_k \mid \theta) = \pi_k\, \varphi_k(x_1)$
15
The Forward Algorithm - Main Loop
Problem $(n, k)$: compute $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
Solution:
16
The Forward Algorithm - Main Loop
Problem $(n, k)$: compute $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
Solution:
$p(x_1, \dots, x_n, z_n = s_k \mid \theta)$
$= \sum_{j=1}^{K} p(x_1, \dots, x_n, z_{n-1} = s_j, z_n = s_k \mid \theta)$
$= \varphi_k(x_n) \sum_{j=1}^{K} p(x_1, \dots, x_{n-1}, z_{n-1} = s_j, z_n = s_k \mid \theta)$
$= \varphi_k(x_n) \sum_{j=1}^{K} p(x_1, \dots, x_{n-1}, z_{n-1} = s_j \mid \theta)\, A_{j,k}$
17
The Forward Algorithm - Main Loop
Problem $(n, k)$: compute $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
Solution: $p(x_1, \dots, x_n, z_n = s_k \mid \theta)$
18
The Forward Algorithm - Main Loop
Problem $(n, k)$: compute $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
Solution:
$p(x_1, \dots, x_n, z_n = s_k \mid \theta) = \varphi_k(x_n) \sum_{j=1}^{K} \alpha[n-1,j]\, A_{j,k}$
Thus, $\alpha[n,k] = \varphi_k(x_n) \sum_{j=1}^{K} \alpha[n-1,j]\, A_{j,k}$.
$\alpha[n,k]$ is easy to compute once all $\alpha[n-1,j]$ values have been computed.
19
The Forward Algorithm
Our aim was to compute $p(X \mid \theta)$.
Using the dynamic programming algorithm we just described, we can compute $\alpha[N,k] = p(X, z_N = s_k \mid \theta)$.
How can we use those $\alpha[N,k]$ values to compute $p(X \mid \theta)$? We use the sum rule:
$p(X \mid \theta) = \sum_{k=1}^{K} p(X, z_N = s_k \mid \theta) = \sum_{k=1}^{K} \alpha[N,k]$
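As an illustration (not code from the slides), here is a minimal NumPy sketch of the forward algorithm. It assumes the emission probabilities for the given sequence have been precomputed into a matrix `phi`, with `phi[n, k]` holding $\varphi_k(x_n)$ (rows indexed from 0).

```python
import numpy as np

def forward(pi, A, phi):
    """Forward algorithm sketch.

    pi : (K,) initial state probabilities, pi[k] = p(z_1 = s_k)
    A  : (K, K) transition matrix, A[j, k] = p(z_n = s_k | z_{n-1} = s_j)
    phi: (N, K) precomputed emissions, phi[n, k] = phi_k(x_n)
    Returns alpha (N x K array) and p(X | theta).
    """
    N, K = phi.shape
    alpha = np.zeros((N, K))
    alpha[0] = pi * phi[0]                 # alpha[1, k] = pi_k * phi_k(x_1)
    for n in range(1, N):
        # alpha[n, k] = phi_k(x_n) * sum_j alpha[n-1, j] * A[j, k]
        alpha[n] = phi[n] * (alpha[n - 1] @ A)
    return alpha, alpha[-1].sum()          # p(X | theta) = sum_k alpha[N, k]
```

In practice, implementations typically work with log probabilities (or rescale each row of `alpha`) to avoid numerical underflow on long sequences.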
20
Problem 3: Find the Best $Z$
Inputs: a trained HMM $\theta$, and an observation sequence $X = (x_1, \dots, x_N)$.
Output: the state sequence $Z = (z_1, \dots, z_N)$ maximizing $p(Z \mid X, \theta)$.
Why do we care?
Sometimes, we want to use HMMs for classification. In those cases, we do not really care about maximizing $p(Z \mid X, \theta)$; we just want to find the model $\theta$ that maximizes $p(X \mid \theta)$.
Sometimes, we want to use HMMs to figure out the most likely values of the hidden states given the observations. In the tree ring example, our goal was to figure out the average temperature for each year. Those average temperatures were the hidden states. Solving problem 3 provides the most likely sequence of average temperatures.
21
Problem 3: Find the Best $Z$
Input: an observation sequence $X = (x_1, \dots, x_N)$.
Output: the state sequence $Z = (z_1, \dots, z_N)$ maximizing $p(Z \mid X, \theta)$.
We want to maximize $p(Z \mid X, \theta)$. Using the definition of conditional probabilities:
$p(Z \mid X, \theta) = \frac{p(X, Z \mid \theta)}{p(X \mid \theta)}$
Since $X$ is known, $p(X \mid \theta)$ is the same over all possible state sequences $Z$.
Therefore, to find the $Z$ that maximizes $p(Z \mid X, \theta)$, it suffices to find the $Z$ that maximizes $p(X, Z \mid \theta)$.
22
Problem 3: Find the Best $Z$
Input: an observation sequence $X = (x_1, \dots, x_N)$.
Output: the state sequence $Z = (z_1, \dots, z_N)$ maximizing $p(Z \mid X, \theta)$, which is the same as maximizing $p(X, Z \mid \theta)$.
Solution: (again) dynamic programming.
Problem $(n, k)$: compute $G[n,k]$ and $H[n,k]$, where:
$G[n,k] = \operatorname{argmax}_{z_1, \dots, z_{n-1}} p(x_1, \dots, x_n, z_1, \dots, z_{n-1}, z_n = s_k \mid \theta)$
Note that:
- We optimize over all possible values $z_1, \dots, z_{n-1}$. However, $z_n$ is constrained to be equal to $s_k$.
- Values $x_1, \dots, x_n$ are known, and thus they are fixed.
23
Problem 3: Find the Best $Z$
Input: an observation sequence $X = (x_1, \dots, x_N)$.
Output: the state sequence $Z = (z_1, \dots, z_N)$ maximizing $p(Z \mid X, \theta)$, which is the same as maximizing $p(X, Z \mid \theta)$.
Solution: (again) dynamic programming.
Problem $(n, k)$: compute $G[n,k]$ and $H[n,k]$, where:
$H[n,k] = p(x_1, \dots, x_n, G[n,k], z_n = s_k \mid \theta)$
$H[n,k]$ is the joint probability of:
- the first $n$ observations $x_1, \dots, x_n$,
- the state sequence we stored in $G[n,k]$,
- and the event $z_n = s_k$.
24
Problem 3: Find the Best $Z$
Input: an observation sequence $X = (x_1, \dots, x_N)$.
Output: the state sequence $Z = (z_1, \dots, z_N)$ maximizing $p(Z \mid X, \theta)$, which is the same as maximizing $p(X, Z \mid \theta)$.
Solution: (again) dynamic programming.
Problem $(n, k)$: compute $G[n,k]$ and $H[n,k]$, where:
$H[n,k] = p(x_1, \dots, x_n, G[n,k], z_n = s_k \mid \theta)$
$H[n,k]$ is the joint probability of:
- the first $n$ observations $x_1, \dots, x_n$,
- the state sequence we stored in $G[n,k]$,
- and the event $z_n = s_k$.
$G[n,k]$ stores a sequence of state values. $H[n,k]$ stores a number (a probability).
25
The Viterbi Algorithm
Problem $(n, k)$: compute $G[n,k]$ and $H[n,k]$, where:
$G[n,k] = \operatorname{argmax}_{z_1, \dots, z_{n-1}} p(x_1, \dots, x_n, z_1, \dots, z_{n-1}, z_n = s_k \mid \theta)$
$H[n,k] = p(x_1, \dots, x_n, G[n,k], z_n = s_k \mid \theta)$
We use a dynamic programming algorithm to compute $G[n,k]$ and $H[n,k]$ for all $n, k$ such that $1 \le n \le N$, $1 \le k \le K$.
26
The Viterbi Algorithm - Initialization
Problem $(1, k)$: compute $G[1,k]$ and $H[1,k]$, where:
$G[1,k] = \operatorname{argmax}_{z_1, \dots, z_{n-1}} p(x_1, z_1 = s_k \mid \theta)$
$H[1,k] = p(x_1, z_1 = s_k \mid \theta)$
What is $G[1,k]$? It is the empty sequence. We must maximize over the previous states $z_1, \dots, z_{n-1}$, but $n = 1$, so there are no previous states.
What is $H[1,k]$? $H[1,k] = p(x_1, z_1 = s_k \mid \theta) = \pi_k\, \varphi_k(x_1)$.
27
The Viterbi Algorithm - Main Loop
Problem $(n, k)$: compute $G[n,k]$ and $H[n,k]$, where:
$G[n,k] = \operatorname{argmax}_{z_1, \dots, z_{n-1}} p(x_1, \dots, x_n, z_1, \dots, z_{n-1}, z_n = s_k \mid \theta)$
$H[n,k] = p(x_1, \dots, x_n, G[n,k], z_n = s_k \mid \theta)$
How do we find $G[n,k]$? The last element of $G[n,k]$ is some state $s_j$.
Therefore, $G[n,k] = G^*[n,k] \oplus s_j$ for some sequence $G^*[n,k]$ and some $j$, where $\oplus$ denotes appending a state to the end of a sequence.
28
The Viterbi Algorithm - Main Loop
Problem $(n, k)$: compute $G[n,k]$ and $H[n,k]$, where:
$G[n,k] = \operatorname{argmax}_{z_1, \dots, z_{n-1}} p(x_1, \dots, x_n, z_1, \dots, z_{n-1}, z_n = s_k \mid \theta)$
$H[n,k] = p(x_1, \dots, x_n, G[n,k], z_n = s_k \mid \theta)$
$G[n,k] = G^*[n,k] \oplus s_j$ for some $G^*[n,k]$ and some $j$.
We can prove that $G^*[n,k] = G[n-1, j]$, for that $j$. The proof is very similar to the proof that DTW finds the optimal alignment.
29
The Viterbi Algorithm - Main Loop
Problem $(n, k)$: compute $G[n,k]$ and $H[n,k]$, where:
$G[n,k] = \operatorname{argmax}_{z_1, \dots, z_{n-1}} p(x_1, \dots, x_n, z_1, \dots, z_{n-1}, z_n = s_k \mid \theta)$
$H[n,k] = p(x_1, \dots, x_n, G[n,k], z_n = s_k \mid \theta)$
We know that $G[n,k] = G[n-1, j] \oplus s_j$ for some $j$. Let's define $j^*$ as:
$j^* = \operatorname{argmax}_{j = 1, \dots, K} H[n-1, j]\, A_{j,k}\, \varphi_k(x_n)$
Then: $G[n,k] = G[n-1, j^*] \oplus s_{j^*}$, and the corresponding probability is $H[n,k] = H[n-1, j^*]\, A_{j^*,k}\, \varphi_k(x_n)$.
30
The Viterbi Algorithm - Output
Our goal is to find the $Z$ maximizing $p(Z \mid X, \theta)$, which is the same as finding the $Z$ maximizing $p(X, Z \mid \theta)$.
The Viterbi algorithm computes $G[n,k]$ and $H[n,k]$, where:
$G[n,k] = \operatorname{argmax}_{z_1, \dots, z_{n-1}} p(x_1, \dots, x_n, z_1, \dots, z_{n-1}, z_n = s_k \mid \theta)$
$H[n,k] = p(x_1, \dots, x_n, G[n,k], z_n = s_k \mid \theta)$
Let's define $k^*$ as: $k^* = \operatorname{argmax}_{k = 1, \dots, K} H[N, k]$.
Then the output is $Z = G[N, k^*] \oplus s_{k^*}$, whose joint probability with the observations is $H[N, k^*]$.
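Here is an illustrative NumPy sketch of the Viterbi algorithm (not code from the slides). Instead of storing the full sequences $G[n,k]$, it stores, for each $(n,k)$, only the winning previous state $j^*$ and recovers the best state sequence by backtracking at the end, which is equivalent and uses less memory. It reuses the precomputed `phi` matrix convention of the forward-algorithm sketch.

```python
import numpy as np

def viterbi(pi, A, phi):
    """Viterbi algorithm sketch following the G/H formulation above.

    pi : (K,) initial state probabilities
    A  : (K, K) transition matrix
    phi: (N, K) precomputed emissions, phi[n, k] = phi_k(x_n)
    Returns the most likely state sequence (as 0-based state indices)
    and its joint probability H[N, k*] with the observations.
    """
    N, K = phi.shape
    H = np.zeros((N, K))          # H[n, k]: best joint probability ending in state k
    back = np.zeros((N, K), int)  # back[n, k]: the j* chosen at step n (encodes G)
    H[0] = pi * phi[0]
    for n in range(1, N):
        # scores[j, k] = H[n-1, j] * A[j, k] * phi_k(x_n)
        scores = H[n - 1][:, None] * A * phi[n][None, :]
        back[n] = scores.argmax(axis=0)
        H[n] = scores.max(axis=0)
    # Backtrack: k* = argmax_k H[N, k], then follow the stored j* pointers.
    k = int(H[-1].argmax())
    Z = [k]
    for n in range(N - 1, 0, -1):
        k = int(back[n, k])
        Z.append(k)
    return Z[::-1], H[-1].max()
```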
31
State Probabilities at Specific Times
Problem 4:
Inputs: a trained HMM $\theta$, and an observation sequence $X = (x_1, \dots, x_N)$.
Output: an array $\gamma$, of size $N \times K$, where $\gamma[n,k] = p(z_n = s_k \mid X, \theta)$.
In words: given an observation sequence $X$, we want to compute, for any moment in time $n$ and any state $s_k$, the probability that the hidden state at that moment is $s_k$.
32
State Probabilities at Specific Times
Problem 4:
Inputs: a trained HMM $\theta$, and an observation sequence $X = (x_1, \dots, x_N)$.
Output: an array $\gamma$, of size $N \times K$, where $\gamma[n,k] = p(z_n = s_k \mid X, \theta)$.
We have seen, in our solution for problem 2, that the forward algorithm computes an array $\alpha$, of size $N \times K$, where $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
We can also define another array $\beta$, of size $N \times K$, where $\beta[n,k] = p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k)$.
In words, $\beta[n,k]$ is the probability of all observations after moment $n$, given that the state at moment $n$ is $s_k$.
33
State Probabilities at Specific Times
Problem 4:
Inputs: a trained HMM $\theta$, and an observation sequence $X = (x_1, \dots, x_N)$.
Output: an array $\gamma$, of size $N \times K$, where $\gamma[n,k] = p(z_n = s_k \mid X, \theta)$.
The forward algorithm computes an array $\alpha$, of size $N \times K$, where $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
We can also define another array $\beta$, of size $N \times K$, where $\beta[n,k] = p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k)$.
Then:
$\alpha[n,k] \cdot \beta[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta) \cdot p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k) = p(X, z_n = s_k \mid \theta)$
Therefore:
$\gamma[n,k] = p(z_n = s_k \mid X, \theta) = \frac{\alpha[n,k] \cdot \beta[n,k]}{p(X \mid \theta)}$
34
State Probabilities at Specific Times
The forward algorithm computes an array $\alpha$, of size $N \times K$, where $\alpha[n,k] = p(x_1, \dots, x_n, z_n = s_k \mid \theta)$.
We define another array $\beta$, of size $N \times K$, where $\beta[n,k] = p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k)$.
Then, $\gamma[n,k] = p(z_n = s_k \mid X, \theta) = \frac{\alpha[n,k] \cdot \beta[n,k]}{p(X \mid \theta)}$.
In the above equation, $\alpha[n,k]$ and $p(X \mid \theta)$ are computed by the forward algorithm. We still need to compute $\beta[n,k]$.
We compute $\beta[n,k]$ in a way very similar to the forward algorithm, but going backwards this time.
The resulting combination of the forward and backward algorithms is called (not surprisingly) the forward-backward algorithm.
35
The Backward Algorithm
We want to compute the values of an array $\beta$, of size $N \times K$, where $\beta[n,k] = p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k)$.
Again, we will use dynamic programming. Problem $(n, k)$ is to compute value $\beta[n,k]$. However, this time we start from the end of the observations.
First, compute $\beta[N,k] = p(\{\} \mid \theta, z_N = s_k)$. In this case, the observation sequence $x_{n+1}, \dots, x_N$ is empty, because $n = N$.
What is the probability of an empty sequence?
36
Backward Algorithm - Initialization
We want to compute the values of an array $\beta$, of size $N \times K$, where $\beta[n,k] = p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k)$.
Again, we will use dynamic programming. Problem $(n, k)$ is to compute value $\beta[n,k]$. However, this time we start from the end of the observations.
First, compute $\beta[N,k] = p(\{\} \mid \theta, z_N = s_k)$. In this case, the observation sequence $x_{n+1}, \dots, x_N$ is empty, because $n = N$.
What is the probability of an empty sequence? $\beta[N,k] = 1$.
37
Backward Algorithm - Initialization
Next: compute $\beta[N-1, k]$.
$\beta[N-1,k] = p(x_N \mid \theta, z_{N-1} = s_k)$
$= \sum_{j=1}^{K} p(x_N, z_N = s_j \mid \theta, z_{N-1} = s_k)$
$= \sum_{j=1}^{K} p(x_N \mid z_N = s_j)\, p(z_N = s_j \mid \theta, z_{N-1} = s_k)$
38
Backward Algorithm - Initialization
Next: compute $\beta[N-1, k]$.
$\beta[N-1,k] = p(x_N \mid \theta, z_{N-1} = s_k)$
$= \sum_{j=1}^{K} p(x_N \mid z_N = s_j)\, p(z_N = s_j \mid \theta, z_{N-1} = s_k)$
$= \sum_{j=1}^{K} \varphi_j(x_N)\, A_{k,j}$
39
Backward Algorithm - Main Loop
Next: compute $\beta[n,k]$, for $n < N - 1$.
$\beta[n,k] = p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k)$
$= \sum_{j=1}^{K} p(x_{n+1}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k)$
$= \sum_{j=1}^{K} p(x_{n+1} \mid z_{n+1} = s_j)\, p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k)$
40
Backward Algorithm - Main Loop
Next: compute $\beta[n,k]$, for $n < N - 1$.
$\beta[n,k] = p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k)$
$= \sum_{j=1}^{K} p(x_{n+1} \mid z_{n+1} = s_j)\, p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k)$
$= \sum_{j=1}^{K} \varphi_j(x_{n+1})\, p(x_{n+2}, \dots, x_N \mid \theta, z_{n+1} = s_j)\, A_{k,j}$
We will take a closer look at the last step...
41
Backward Algorithm - Main Loop
$\sum_{j=1}^{K} p(x_{n+1} \mid z_{n+1} = s_j)\, p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k) = \sum_{j=1}^{K} \varphi_j(x_{n+1})\, p(x_{n+2}, \dots, x_N \mid \theta, z_{n+1} = s_j)\, A_{k,j}$
$p(x_{n+1} \mid z_{n+1} = s_j) = \varphi_j(x_{n+1})$, by definition of $\varphi_j$.
42
Backward Algorithm - Main Loop
$\sum_{j=1}^{K} p(x_{n+1} \mid z_{n+1} = s_j)\, p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k) = \sum_{j=1}^{K} \varphi_j(x_{n+1})\, p(x_{n+2}, \dots, x_N \mid \theta, z_{n+1} = s_j)\, A_{k,j}$
$p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k) = p(x_{n+2}, \dots, x_N \mid \theta, z_n = s_k, z_{n+1} = s_j)\, p(z_{n+1} = s_j \mid z_n = s_k)$
by application of the chain rule: $p(A, B) = p(A \mid B)\, p(B)$, which can also be written as $p(A, B \mid C) = p(A \mid B, C)\, p(B \mid C)$.
43
Backward Algorithm - Main Loop
$\sum_{j=1}^{K} p(x_{n+1} \mid z_{n+1} = s_j)\, p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k) = \sum_{j=1}^{K} \varphi_j(x_{n+1})\, p(x_{n+2}, \dots, x_N \mid \theta, z_{n+1} = s_j)\, A_{k,j}$
$p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k)$
$= p(x_{n+2}, \dots, x_N \mid \theta, z_n = s_k, z_{n+1} = s_j)\, p(z_{n+1} = s_j \mid z_n = s_k)$
$= p(x_{n+2}, \dots, x_N \mid \theta, z_n = s_k, z_{n+1} = s_j)\, A_{k,j}$ (by definition of $A_{k,j}$)
44
Backward Algorithm - Main Loop
$\sum_{j=1}^{K} p(x_{n+1} \mid z_{n+1} = s_j)\, p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k) = \sum_{j=1}^{K} \varphi_j(x_{n+1})\, p(x_{n+2}, \dots, x_N \mid \theta, z_{n+1} = s_j)\, A_{k,j}$
$p(x_{n+2}, \dots, x_N, z_{n+1} = s_j \mid \theta, z_n = s_k)$
$= p(x_{n+2}, \dots, x_N \mid \theta, z_n = s_k, z_{n+1} = s_j)\, p(z_{n+1} = s_j \mid z_n = s_k)$
$= p(x_{n+2}, \dots, x_N \mid \theta, z_n = s_k, z_{n+1} = s_j)\, A_{k,j}$
$= p(x_{n+2}, \dots, x_N \mid \theta, z_{n+1} = s_j)\, A_{k,j}$ ($x_{n+2}, \dots, x_N$ are conditionally independent of $z_n$ given $z_{n+1}$)
45
Backward Algorithm - Main Loop
In the previous slides, we showed that:
$\beta[n,k] = p(x_{n+1}, \dots, x_N \mid \theta, z_n = s_k) = \sum_{j=1}^{K} \varphi_j(x_{n+1})\, p(x_{n+2}, \dots, x_N \mid \theta, z_{n+1} = s_j)\, A_{k,j}$
Note that $p(x_{n+2}, \dots, x_N \mid \theta, z_{n+1} = s_j)$ is $\beta[n+1, j]$. Consequently, we obtain the recurrence:
$\beta[n,k] = \sum_{j=1}^{K} \varphi_j(x_{n+1})\, \beta[n+1,j]\, A_{k,j}$
46
Backward Algorithm - Main Loop
$\beta[n,k] = \sum_{j=1}^{K} \varphi_j(x_{n+1})\, \beta[n+1,j]\, A_{k,j}$
Thus, we can easily compute all values $\beta[n,k]$ using this recurrence. The key thing is to proceed in decreasing order of $n$, since values $\beta[n, \cdot]$ depend on values $\beta[n+1, \cdot]$.
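A short NumPy sketch of this backward recurrence (illustrative, not code from the slides), using the same precomputed `phi` convention as the forward-algorithm sketch:

```python
import numpy as np

def backward(A, phi):
    """Backward algorithm sketch.

    A  : (K, K) transition matrix, A[k, j] = p(z_{n+1} = s_j | z_n = s_k)
    phi: (N, K) precomputed emissions, phi[n, j] = phi_j(x_n)
    Returns beta, an N x K array with beta[n, k] = p(x_{n+1},...,x_N | theta, z_n = s_k).
    """
    N, K = phi.shape
    beta = np.zeros((N, K))
    beta[-1] = 1.0                                 # beta[N, k] = 1
    for n in range(N - 2, -1, -1):                 # decreasing order of n
        # beta[n, k] = sum_j phi_j(x_{n+1}) * beta[n+1, j] * A[k, j]
        beta[n] = A @ (phi[n + 1] * beta[n + 1])
    return beta
```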
47
The Forward-Backward Algorithm
Inputs:
- A trained HMM $\theta$.
- An observation sequence $X = (x_1, \dots, x_N)$.
Output: an array $\gamma$, of size $N \times K$, where $\gamma[n,k] = p(z_n = s_k \mid X, \theta)$.
Algorithm:
- Use the forward algorithm to compute $\alpha[n,k]$.
- Use the backward algorithm to compute $\beta[n,k]$.
- For $n = 1$ to $N$, for $k = 1$ to $K$:
  $\gamma[n,k] = p(z_n = s_k \mid X, \theta) = \frac{\alpha[n,k] \cdot \beta[n,k]}{p(X \mid \theta)}$
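Combining the two passes gives a sketch of the forward-backward computation of $\gamma$ (illustrative, not from the slides); it assumes the `forward` and `backward` functions from the earlier sketches:

```python
import numpy as np

def state_probabilities(pi, A, phi):
    """Forward-backward sketch.

    Returns gamma, an N x K array with gamma[n, k] = p(z_n = s_k | X, theta).
    Assumes forward() and backward() are defined as in the earlier sketches.
    """
    alpha, p_X = forward(pi, A, phi)      # forward pass and p(X | theta)
    beta = backward(A, phi)               # backward pass
    gamma = alpha * beta / p_X            # gamma[n, k] = alpha[n, k] * beta[n, k] / p(X | theta)
    return gamma
```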
48
Problem 1: Training an HMM
Goal: estimate parameters $\theta = (\pi_k, A_{j,k}, \varphi_k)$.
The training data can consist of multiple observation sequences $X_1, X_2, X_3, \dots, X_M$.
We denote the length of training sequence $X_m$ as $N_m$.
We denote the elements of each observation sequence $X_m$ as: $X_m = (x_{m,1}, x_{m,2}, \dots, x_{m, N_m})$.
Before we start training, we need to decide on:
- The number of states.
- The transitions that we will allow.
49
Problem 1: Training an HMM
While we are given $X_1, X_2, X_3, \dots, X_M$, we are not given the corresponding hidden state sequences $Z_1, Z_2, Z_3, \dots, Z_M$.
We denote the elements of each hidden state sequence $Z_m$ as: $Z_m = (z_{m,1}, z_{m,2}, \dots, z_{m, N_m})$.
The training algorithm is called the Baum-Welch algorithm. It is an Expectation-Maximization (EM) algorithm, similar to the algorithm we saw for learning Gaussian mixtures.
50
Expectation-Maximization
When we wanted to learn a mixture of Gaussians, we had the following problem:
- If we knew the probability of each object belonging to each Gaussian, we could estimate the parameters of each Gaussian.
- If we knew the parameters of each Gaussian, we could estimate the probability of each object belonging to each Gaussian.
- However, we know neither of these pieces of information.
The EM algorithm resolved this problem using:
- An initialization of the Gaussian parameters to some random or non-random values.
- A main loop where:
  - The current values of the Gaussian parameters are used to estimate new membership weights of every training object to every Gaussian.
  - The current estimated membership weights are used to estimate new parameters (mean and covariance matrix) for each Gaussian.
51
Expectation-Maximization
When we want to learn an HMM model using observation sequences as training data, we have the following problem:
- If we knew, for each observation sequence, the probabilities of the hidden state values, we could estimate the parameters $\theta$.
- If we knew the parameters $\theta$, we could estimate the probabilities of the hidden state values.
- However, we know neither of these pieces of information.
The Baum-Welch algorithm resolves this problem using EM:
- At initialization, parameters $\theta$ are given (mostly) random values.
- In the main loop, these two steps are performed repeatedly:
  - The current values of parameters $\theta$ are used to estimate new probabilities for the hidden state values.
  - The current probabilities for the hidden state values are used to estimate new values for parameters $\theta$.
52
Baum-Welch: Initialization
As we said before, before we start training we need to decide on:
- The number of states.
- The transitions that we will allow.
When we start training, we initialize $\theta = (\pi_k, A_{j,k}, \varphi_k)$ to random values, with these exceptions:
- For any $s_k$ that we do not want to allow to ever be an initial state, we set the corresponding $\pi_k$ to 0. The rest of the training will keep these $\pi_k$ values always equal to 0.
- For any $s_j, s_k$ such that we do not want to allow transitions from $s_j$ to $s_k$, we set the corresponding $A_{j,k}$ to 0. The rest of the training will keep these $A_{j,k}$ values always equal to 0.
53
Baum-Welch: Initialization
When we start training, we initialize $\theta = (\pi_k, A_{j,k}, \varphi_k)$ to random values, with these exceptions:
- For any $s_k$ that we do not want to allow to ever be an initial state, we set the corresponding $\pi_k$ to 0. The rest of the training will keep these $\pi_k$ values always equal to 0.
- For any $s_j, s_k$ such that we do not want to allow transitions from $s_j$ to $s_k$, we set the corresponding $A_{j,k}$ to 0. The rest of the training will keep these $A_{j,k}$ values always equal to 0.
These initial choices constrain the topology of the resulting HMM model. For example, they can force the model to be fully connected, to be a forward model, or to be some other variation.
54
Baum-Welch: Expectation Step
Before we start the expectation step, we have some current values for the parameters $\theta = (\pi_k, A_{j,k}, \varphi_k)$.
The goal of the expectation step is to use those values to estimate two arrays:
$\gamma[m,n,k] = p(z_{m,n} = s_k \mid \theta, X_m)$
This is the same $\gamma[n,k]$ that we computed earlier, using the forward-backward algorithm. Here we just need to make the array three-dimensional, because we have multiple observation sequences $X_m$. We can compute $\gamma[m, \cdot, \cdot]$ by running the forward-backward algorithm with $\theta$ and $X_m$ as inputs.
$\xi[m,n,j,k] = p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta, X_m)$
55
Baum-Welch: Expectation Step
To facilitate our computations, we will also extend arrays $\alpha$ and $\beta$ to three dimensions, in the same way that we extended array $\gamma$:
$\alpha[m,n,k] = p(x_{m,1}, \dots, x_{m,n}, z_{m,n} = s_k \mid \theta)$
$\beta[m,n,k] = p(x_{m,n+1}, \dots, x_{m,N_m} \mid \theta, z_{m,n} = s_k)$
Where needed, values $\alpha[m, \cdot, \cdot]$, $\beta[m, \cdot, \cdot]$, $\gamma[m, \cdot, \cdot]$ can be computed by running the forward-backward algorithm with inputs $\theta$ and $X_m$.
56
Computing the ξ Values
Using Bayes rule:
$\xi[m,n,j,k] = p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta, X_m) = \frac{p(X_m \mid \theta, z_{m,n-1} = s_j, z_{m,n} = s_k)\; p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta)}{p(X_m \mid \theta)}$
$p(X_m \mid \theta)$ can be computed using the forward algorithm. We will see how to simplify and compute:
$p(X_m \mid \theta, z_{m,n-1} = s_j, z_{m,n} = s_k)$ and $p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta)$
57
Computing the ξ Values
$\xi[m,n,j,k] = p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta, X_m) = \frac{p(X_m \mid \theta, z_{m,n-1} = s_j, z_{m,n} = s_k)\; p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta)}{p(X_m \mid \theta)}$
$p(X_m \mid \theta, z_{m,n-1} = s_j, z_{m,n} = s_k) = p(x_{m,1}, \dots, x_{m,n-1} \mid \theta, z_{m,n-1} = s_j)\; p(x_{m,n}, \dots, x_{m,N_m} \mid \theta, z_{m,n} = s_k)$
Why?
- Earlier observations $x_{m,1}, \dots, x_{m,n-1}$ are conditionally independent of state $z_{m,n}$ given state $z_{m,n-1}$.
- Later observations $x_{m,n}, \dots, x_{m,N_m}$ are conditionally independent of state $z_{m,n-1}$ given state $z_{m,n}$.
58
Computing the ξ Values
$\xi[m,n,j,k] = p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta, X_m) = \frac{p(X_m \mid \theta, z_{m,n-1} = s_j, z_{m,n} = s_k)\; p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta)}{p(X_m \mid \theta)}$
$p(X_m \mid \theta, z_{m,n-1} = s_j, z_{m,n} = s_k) = p(x_{m,1}, \dots, x_{m,n-1} \mid \theta, z_{m,n-1} = s_j)\; p(x_{m,n}, \dots, x_{m,N_m} \mid \theta, z_{m,n} = s_k)$
$p(x_{m,n}, \dots, x_{m,N_m} \mid \theta, z_{m,n} = s_k) = p(x_{m,n} \mid \theta, z_{m,n} = s_k)\; p(x_{m,n+1}, \dots, x_{m,N_m} \mid \theta, z_{m,n} = s_k) = \varphi_k(x_{m,n})\; \beta[m,n,k]$
59
Computing the ξ Values
$\xi[m,n,j,k] = p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta, X_m) = \frac{p(X_m \mid \theta, z_{m,n-1} = s_j, z_{m,n} = s_k)\; p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta)}{p(X_m \mid \theta)}$
$p(X_m \mid \theta, z_{m,n-1} = s_j, z_{m,n} = s_k) = p(x_{m,1}, \dots, x_{m,n-1} \mid \theta, z_{m,n-1} = s_j)\; \varphi_k(x_{m,n})\; \beta[m,n,k]$
$= \frac{p(x_{m,1}, \dots, x_{m,n-1}, z_{m,n-1} = s_j \mid \theta)}{p(z_{m,n-1} = s_j \mid \theta)}\; \varphi_k(x_{m,n})\; \beta[m,n,k]$
$= \frac{\alpha[m,n-1,j]}{p(z_{m,n-1} = s_j \mid \theta)}\; \varphi_k(x_{m,n})\; \beta[m,n,k]$
60
Computing the ξ Values
$\xi[m,n,j,k] = \frac{\dfrac{\alpha[m,n-1,j]}{p(z_{m,n-1} = s_j \mid \theta)}\; \varphi_k(x_{m,n})\; \beta[m,n,k]\; p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta)}{p(X_m \mid \theta)}$
We can further process $p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta)$:
$p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta) = p(z_{m,n} = s_k \mid \theta, z_{m,n-1} = s_j)\; p(z_{m,n-1} = s_j \mid \theta) = A_{j,k}\; p(z_{m,n-1} = s_j \mid \theta)$
61
Computing the ξ Values
So, we get:
$\xi[m,n,j,k] = \frac{\dfrac{\alpha[m,n-1,j]}{p(z_{m,n-1} = s_j \mid \theta)}\; \varphi_k(x_{m,n})\; \beta[m,n,k]\; A_{j,k}\; p(z_{m,n-1} = s_j \mid \theta)}{p(X_m \mid \theta)} = \frac{\alpha[m,n-1,j]\; \varphi_k(x_{m,n})\; \beta[m,n,k]\; A_{j,k}}{p(X_m \mid \theta)}$
62
Computing the ξ Values
Based on the previous slides, the final formula for ξ is:
$\xi[m,n,j,k] = \frac{\alpha[m,n-1,j]\; \varphi_k(x_{m,n})\; \beta[m,n,k]\; A_{j,k}}{p(X_m \mid \theta)}$
This formula can be computed using the parameters $\theta$ and the algorithms we have covered:
- $\alpha[m,n-1,j]$ is computed with the forward algorithm.
- $\beta[m,n,k]$ is computed with the backward algorithm.
- $p(X_m \mid \theta)$ is computed with the forward algorithm.
- $\varphi_k$ and $A_{j,k}$ are specified by $\theta$.
63
Baum-Welch: Summary of E Step
Inputs:
- Training observation sequences $X_1, X_2, X_3, \dots, X_M$.
- Current values of parameters $\theta = (\pi_k, A_{j,k}, \varphi_k)$.
Algorithm:
For $m = 1$ to $M$:
- Run the forward-backward algorithm to compute:
  - $\alpha[m,n,k]$ for $n$ and $k$ such that $1 \le n \le N_m$, $1 \le k \le K$.
  - $\beta[m,n,k]$ for $n$ and $k$ such that $1 \le n \le N_m$, $1 \le k \le K$.
  - $\gamma[m,n,k]$ for $n$ and $k$ such that $1 \le n \le N_m$, $1 \le k \le K$.
  - $p(X_m \mid \theta)$.
- For $n, j, k$ such that $2 \le n \le N_m$, $1 \le j, k \le K$:
  $\xi[m,n,j,k] = \frac{\alpha[m,n-1,j]\; \varphi_k(x_{m,n})\; \beta[m,n,k]\; A_{j,k}}{p(X_m \mid \theta)}$
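An illustrative sketch of this E step for multiple training sequences (not code from the slides); it assumes the `forward` and `backward` functions from the earlier sketches, and that `phis[m][n, k]` holds $\varphi_k(x_{m,n})$ for sequence $X_m$ under the current parameters:

```python
import numpy as np

def e_step(pi, A, phis):
    """E step of Baum-Welch sketch for multiple training sequences.

    phis: list of M arrays; phis[m][n, k] = phi_k(x_{m,n}) for sequence X_m.
    Returns per-sequence gamma arrays and xi arrays, where
    xi[m][n, j, k] = p(z_{m,n-1} = s_j, z_{m,n} = s_k | theta, X_m) for n >= 2.
    Assumes forward() and backward() are defined as in the earlier sketches.
    """
    gammas, xis = [], []
    for phi in phis:
        alpha, p_X = forward(pi, A, phi)
        beta = backward(A, phi)
        gammas.append(alpha * beta / p_X)
        N, K = phi.shape
        xi = np.zeros((N, K, K))          # xi[n, j, k]; row n = 0 is unused
        for n in range(1, N):
            # xi[n, j, k] = alpha[n-1, j] * A[j, k] * phi_k(x_n) * beta[n, k] / p(X_m | theta)
            xi[n] = alpha[n - 1][:, None] * A * (phi[n] * beta[n])[None, :] / p_X
        xis.append(xi)
    return gammas, xis
```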
64
Baum-Welch: Maximization Step
Before we start the maximization step, we have some current values for:
$\gamma[m,n,k] = p(z_{m,n} = s_k \mid \theta, X_m)$
$\xi[m,n,j,k] = p(z_{m,n-1} = s_j, z_{m,n} = s_k \mid \theta, X_m)$
The goal of the maximization step is to use those values to estimate new values for $\theta = (\pi_k, A_{j,k}, \varphi_k)$. In other words, we want to estimate new values for:
- Initial state probabilities $\pi_k$.
- State transition probabilities $A_{j,k}$.
- Observation probabilities $\varphi_k$.
65
Baum-Welch: Maximization Step
Initial state probabilities $\pi_k$ are computed as:
$\pi_k = \frac{\sum_{m=1}^{M} \gamma[m,1,k]}{\sum_{m=1}^{M} \sum_{k'=1}^{K} \gamma[m,1,k']}$
In words, we compute for each $k$ the ratio of these two quantities:
- The sum of probabilities, over all observation sequences $X_m$, that state $s_k$ was the initial state for $X_m$.
- The sum of probabilities, over all observation sequences $X_m$ and all states $s_{k'}$, that state $s_{k'}$ was the initial state for $X_m$.
66
Baum-Welch: Maximization Step
State transition probabilities $A_{j,k}$ are computed as:
$A_{j,k} = \frac{\sum_{m=1}^{M} \sum_{n=2}^{N_m} \xi[m,n,j,k]}{\sum_{m=1}^{M} \sum_{n=2}^{N_m} \sum_{k'=1}^{K} \xi[m,n,j,k']}$
In words, we compute, for each $j, k$, the ratio of these two quantities:
- The sum of probabilities, over all state transitions of all observation sequences $X_m$, that state $s_j$ was followed by $s_k$.
- The sum of probabilities, over all state transitions of all observation sequences $X_m$ and all states $s_{k'}$, that state $s_j$ was followed by state $s_{k'}$.
67
Baum-Welch: Maximization Step
Computing the observation probabilities $\varphi_k$ depends on how we model those probabilities. For example, we could choose:
- Discrete probabilities.
- Histograms.
- Gaussians.
- Mixtures of Gaussians.
- ...
We will show how to compute distributions $\varphi_k$ for the cases of discrete probabilities and Gaussians.
68
Baum-Welch: Maximization Step
Computing the observation probabilities $\varphi_k$ depends on how we model those probabilities.
In all cases, the key idea is that we treat observation $x_{m,n}$ as partially assigned to distribution $\varphi_k$.
$\gamma[m,n,k] = p(z_{m,n} = s_k \mid \theta, X_m)$ is the weight of the assignment of $x_{m,n}$ to $\varphi_k$.
In other words: we don't know which hidden state $x_{m,n}$ corresponds to, but we have computed, for each state $s_k$, the probability $\gamma[m,n,k]$ that $x_{m,n}$ corresponds to $s_k$.
Thus, $x_{m,n}$ influences our estimate of distribution $\varphi_k$ with weight proportional to $\gamma[m,n,k]$.
69
Baum-Welch: Maximization Step
Suppose that the observations are discrete, and come from a finite set $Y = \{y_1, \dots, y_R\}$. In that case, each $x_{m,n}$ is an element of $Y$.
Define an auxiliary function $\operatorname{Eq}(x, y)$ as:
$\operatorname{Eq}(x, y) = \begin{cases} 1 & \text{if } x = y \\ 0 & \text{if } x \ne y \end{cases}$
Then, $\varphi_k(y_r) = p(x_{m,n} = y_r \mid z_{m,n} = s_k)$, and it can be computed as:
$\varphi_k(y_r) = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]\, \operatorname{Eq}(x_{m,n}, y_r)}{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]}$
70
Baum-Welch: Maximization Step
If observations come from a finite set $Y = \{y_1, \dots, y_R\}$:
$\varphi_k(y_r) = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]\, \operatorname{Eq}(x_{m,n}, y_r)}{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]}$
The above formula can be seen as a weighted average of the times $x_{m,n}$ is equal to $y_r$, where the weight of each $x_{m,n}$ is the probability that $x_{m,n}$ corresponds to hidden state $s_k$.
71
Baum-Welch: Maximization Step
Suppose that observations are vectors in $\mathbb{R}^D$, and that we model $\varphi_k$ as a Gaussian distribution. In that case, for each $s_k$ we need to estimate a mean $\mu_k$ and a covariance matrix $S_k$.
$\mu_k = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]\, x_{m,n}}{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]}$
Thus, $\mu_k$ is a weighted average of the values $x_{m,n}$, where the weight of each $x_{m,n}$ is the probability that $x_{m,n}$ corresponds to hidden state $s_k$.
72
Baum-Welch: Maximization Step
Suppose that observations are vectors in $\mathbb{R}^D$, and that we model $\varphi_k$ as a Gaussian distribution. In that case, for each $s_k$ we need to estimate a mean $\mu_k$ and a covariance matrix $S_k$:
$\mu_k = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]\, x_{m,n}}{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]}$
$S_k = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]\, (x_{m,n} - \mu_k)(x_{m,n} - \mu_k)^T}{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]}$
73
Baum-Welch: Summary of M Step
$\pi_k = \frac{\sum_{m=1}^{M} \gamma[m,1,k]}{\sum_{m=1}^{M} \sum_{k'=1}^{K} \gamma[m,1,k']}$
$A_{j,k} = \frac{\sum_{m=1}^{M} \sum_{n=2}^{N_m} \xi[m,n,j,k]}{\sum_{m=1}^{M} \sum_{n=2}^{N_m} \sum_{k'=1}^{K} \xi[m,n,j,k']}$
For discrete observations:
$\varphi_k(y_r) = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]\, \operatorname{Eq}(x_{m,n}, y_r)}{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]}$
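The updates above can be sketched as follows for the discrete case (illustrative, not from the slides; it consumes the outputs of the E-step sketch, and assumes observation symbols are coded as integer indices into the finite set $y_1, \dots, y_R$):

```python
import numpy as np

def m_step_discrete(gammas, xis, X_seqs, R):
    """M step of Baum-Welch sketch for discrete observations.

    gammas, xis: outputs of the E-step sketch above.
    X_seqs: list of M observation sequences, each a 1D array of symbol
            indices in {0, ..., R-1} (indexing the finite set y_1, ..., y_R).
    Returns updated (pi, A, B), where B[k, r] = phi_k(y_r).
    """
    K = gammas[0].shape[1]
    # Initial state probabilities: pi_k proportional to sum_m gamma[m, 1, k].
    pi = sum(g[0] for g in gammas)
    pi /= pi.sum()
    # Transition probabilities: A[j, k] proportional to sum_m sum_{n>=2} xi[m, n, j, k].
    A = sum(x[1:].sum(axis=0) for x in xis)
    A /= A.sum(axis=1, keepdims=True)
    # Emission probabilities: weighted counts of each symbol under each state.
    B = np.zeros((K, R))
    for g, X in zip(gammas, X_seqs):
        for n, x in enumerate(X):
            B[:, x] += g[n]
    B /= B.sum(axis=1, keepdims=True)
    return pi, A, B
```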
74
Baum-Welch: Summary of M Step
$\pi_k = \frac{\sum_{m=1}^{M} \gamma[m,1,k]}{\sum_{m=1}^{M} \sum_{k'=1}^{K} \gamma[m,1,k']}$
$A_{j,k} = \frac{\sum_{m=1}^{M} \sum_{n=2}^{N_m} \xi[m,n,j,k]}{\sum_{m=1}^{M} \sum_{n=2}^{N_m} \sum_{k'=1}^{K} \xi[m,n,j,k']}$
For Gaussian observation distributions: $\varphi_k$ is a Gaussian with mean $\mu_k$ and covariance matrix $S_k$, where:
$\mu_k = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]\, x_{m,n}}{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]}$
$S_k = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]\, (x_{m,n} - \mu_k)(x_{m,n} - \mu_k)^T}{\sum_{m=1}^{M} \sum_{n=1}^{N_m} \gamma[m,n,k]}$
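For the Gaussian case, the weighted means and covariances can be sketched as below (same assumptions as the previous sketch; each observation sequence is assumed to be an $N_m \times D$ array with one vector per row):

```python
import numpy as np

def gaussian_updates(gammas, X_seqs):
    """Weighted mean and covariance updates for Gaussian observation models (sketch).

    gammas: list of M arrays, gammas[m][n, k] = gamma[m, n, k] from the E step.
    X_seqs: list of M observation sequences, each an (N_m, D) array of vectors.
    Returns mu (K x D) and S (K x D x D), the per-state means and covariances.
    """
    X = np.concatenate(X_seqs, axis=0)            # all x_{m,n}, stacked: (T, D)
    G = np.concatenate(gammas, axis=0)            # matching weights: (T, K)
    W = G.sum(axis=0)                             # total weight per state: (K,)
    mu = (G.T @ X) / W[:, None]                   # weighted means
    K, D = mu.shape
    S = np.zeros((K, D, D))
    for k in range(K):
        diff = X - mu[k]                          # (T, D)
        S[k] = (G[:, k, None] * diff).T @ diff / W[k]   # weighted covariance
    return mu, S
```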
75
Baum-Welch Summary
Initialization: initialize $\theta = (\pi_k, A_{j,k}, \varphi_k)$ with random values, but set to 0 those $\pi_k$ and $A_{j,k}$ values that specify the network topology.
Main loop:
- E-step:
  - Compute all values $p(X_m \mid \theta)$, $\alpha[m,n,k]$, $\beta[m,n,k]$, $\gamma[m,n,k]$ using the forward-backward algorithm.
  - Compute all values $\xi[m,n,j,k] = \frac{\alpha[m,n-1,j]\; \varphi_k(x_{m,n})\; \beta[m,n,k]\; A_{j,k}}{p(X_m \mid \theta)}$.
- M-step: update $\theta = (\pi_k, A_{j,k}, \varphi_k)$, using the values $\gamma[m,n,k]$ and $\xi[m,n,j,k]$ computed at the E-step.
76
Hidden Markov Models - Recap
HMMs are widely used to model temporal sequences.
In HMMs, an observation sequence corresponds to a sequence of hidden states.
An HMM is defined by parameters $\theta = (\pi_k, A_{j,k}, \varphi_k)$, and defines a joint probability $p(X, Z \mid \theta)$.
An HMM can be used as a generative model, to produce synthetic samples from distribution $p(X, Z \mid \theta)$.
HMMs can be used for Bayesian classification:
- One HMM $\theta_c$ per class.
- Find the class $c$ that maximizes $p(\theta_c \mid X)$.
77
Hidden Markov Models - Recap
HMMs can be used for Bayesian classification:
- One HMM $\theta_c$ per class.
- Find the class $c$ that maximizes $p(\theta_c \mid X)$, using probabilities $p(X \mid \theta_c)$ calculated by the forward algorithm.
HMMs can be used to find the most likely hidden state sequence $Z$ for an observation sequence $X$. This is done using the Viterbi algorithm.
HMMs can be used to find the most likely hidden state $z_n$ for a specific element $x_n$ of an observation sequence $X$. This is done using the forward-backward algorithm.
HMMs are trained using the Baum-Welch algorithm, which is an Expectation-Maximization algorithm.