Lecture 2 Hidden Markov Model
Hidden Markov Model Motivation: We have a text partly written by Shakespeare and partly "written" by a monkey. We want to write a program that can tell which part was written by Shakespeare and which part by the monkey.
(Figure: a 21st-century human-like monkey typing)
Review of Probabilities
Probability of event X occurring: P(X)
Conditional probability: P(X|Y), the probability of X occurring given Y
Joint probability:
– P(X,Y) = P(X|Y) P(Y)
– P(X,Y|Z) = P(X|Y,Z) P(Y|Z)
Marginal probability:
– P(X) = Σ_Y P(X|Y) P(Y)
Posterior Probability
Usually we want to know the probability of an observation O given a supposition (model) M: P(O|M).
Reverse problem: given O, we want to know the probability that M is correct, the posterior probability P(M|O).
Bayes' theorem: for any two events X, Y
– P(X|Y) = P(Y|X) P(X) / P(Y)
– P(M|O) = P(O|M) P(M) / P(O)
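To make the posterior concrete, here is a minimal Python check of Bayes' theorem using the fair/loaded die example that appears later in the lecture. The prior P(M = loaded) = 0.5 and the emission values are illustrative assumptions, not numbers from this slide.

    # Bayes' theorem check: P(loaded | rolled a 6) = P(6 | loaded) P(loaded) / P(6)
    p_loaded = 0.5          # prior P(M = loaded), an assumed value
    p_fair = 1 - p_loaded   # prior P(M = fair)
    p6_given_loaded = 0.5   # P(O = 6 | loaded), illustrative
    p6_given_fair = 1 / 6   # P(O = 6 | fair)

    # Marginal probability: P(6) = sum over models of P(6 | M) P(M)
    p6 = p6_given_loaded * p_loaded + p6_given_fair * p_fair
    posterior = p6_given_loaded * p_loaded / p6
    print(posterior)        # 0.75: seeing a 6 raises the probability that the die is loaded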
Definition of Hidden Markov Model
A Hidden Markov Model (HMM) is a finite set of states, each of which is associated with a probability distribution. Transitions among the states are governed by a set of probabilities called transition probabilities. In a particular state an outcome or observation can be generated according to the associated probability distribution. Only the outcome, not the state, is visible to an external observer; the states are therefore "hidden" from the observer, hence the name Hidden Markov Model.
Examples
– Text written by Shakespeare and by a monkey
– Dice thrown by a dealer with two dice, one fair and one loaded
– A DNA sequence with coding and non-coding segments
Examples (cont'd)

Case | Observed symbols      | Hidden state
Text | alphabet              | Shakespeare / monkey
Dice | 1-6 (rolled numbers)  | fair die / loaded die
DNA  | A, C, G, T (bases)    | coding / non-coding
In order to define an HMM completely, the following elements are needed:
– The number N of states of the model, {q_i | i = 1, 2, ..., N}.
– The number M of observation symbols in the alphabet, {o_k | k = 1, 2, ..., M}.
– A set of state transition probabilities A = {a_ij}, a_ij = P(q_{t+1} = j | q_t = i), where q_t denotes the state at time t. Transition probabilities should satisfy the normal stochastic constraints a_ij ≥ 0 and Σ_j a_ij = 1.
– An emission probability distribution B = {b_j(k)} in each of the states, where b_j(k) = P(o_t = k | q_t = j) is the probability of state j emitting the k-th observation symbol of the alphabet, and o_t is the observation at time t. The following stochastic constraints must be satisfied: b_j(k) ≥ 0 and Σ_k b_j(k) = 1.
– The initial state distribution π = {π_i}, where π_i = P(q_1 = i) and Σ_i π_i = 1.
Therefore we can use the compact notation λ = (A, B, π) to denote an HMM with discrete probability distributions.
Notation
– Sequence of observations: O = o_1, o_2, ..., o_T
– Sequence of (hidden) states: Q = q_1, q_2, ..., q_T
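As a concrete picture of the compact notation λ = (A, B, π), here is a minimal Python sketch of a container for a discrete HMM; the class name and layout are assumptions for illustration, not part of the lecture.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class HMM:
        """A discrete HMM, lambda = (A, B, pi)."""
        A: np.ndarray    # A[i, j] = a_ij = P(q_{t+1} = j | q_t = i); rows sum to 1
        B: np.ndarray    # B[j, k] = b_j(k) = P(o_t = k | q_t = j); rows sum to 1
        pi: np.ndarray   # pi[i] = P(q_1 = i); sums to 1

        def check(self):
            # Verify the stochastic constraints stated above.
            assert (self.A >= 0).all() and np.allclose(self.A.sum(axis=1), 1)
            assert (self.B >= 0).all() and np.allclose(self.B.sum(axis=1), 1)
            assert (self.pi >= 0).all() and np.isclose(self.pi.sum(), 1)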
HMM scheme with K symbols (DNA 4 / protein 20) © 2001 Per Kraulis
– [M_x] Match state x. Has K emission probabilities.
– [D_x] Delete state x. Non-emitter.
– [I_x] Insert state x. Has K emission probabilities.
– [B] Begin state (for entering main model). Non-emitter.
– [E] End state (for exiting main model). Non-emitter.
– [S] Start state. Non-emitter.
– [N] N-terminal unaligned sequence state. Emits on transition with K emission probabilities.
– [C] C-terminal unaligned sequence state. Emits on transition with K emission probabilities.
– [J] Joining segment unaligned sequence state. Emits on transition with K emission probabilities.
(1) The Markov assumption
The next state depends only on the current state; the first-order transition probabilities are a_ij = P(q_{t+1} = j | q_t = i). A model with only first-order transition probabilities is called a first-order Markov model. A k-th order Markov model involves k-th order transition probabilities P(q_{t+1} = j | q_t, q_{t-1}, ..., q_{t-k+1}).
(2) The stationarity assumption
State transition probabilities are independent of time: for any t_1 and t_2,
P(q_{t_1+1} = j | q_{t_1} = i) = P(q_{t_2+1} = j | q_{t_2} = i)
(Cont'd)
(3) The output independence assumption
The current observation is statistically independent of the previous observations. Given a sequence of observations O = o_1, o_2, ..., o_T and a state sequence Q = q_1, q_2, ..., q_T, then for an HMM λ = (A, B, π) the probability of O is
P(O | Q, λ) = Π_{t=1..T} P(o_t | q_t, λ) = Π_{t=1..T} b_{q_t}(o_t)
This assumption has limited validity and in some cases may become a severe weakness of HMMs.
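A small sketch of how these assumptions translate into code: given a state path Q, the output-independence assumption gives P(O | Q, λ) as a product of emission probabilities, and multiplying in the transition probabilities gives the joint P(O, Q | λ). Function names are illustrative; states and symbols are assumed to be integer indices.

    import numpy as np

    def prob_obs_given_path(B, obs, path):
        # Output independence: P(O | Q, lambda) = prod_t b_{q_t}(o_t)
        return float(np.prod([B[q, o] for q, o in zip(path, obs)]))

    def prob_obs_and_path(A, B, pi, obs, path):
        # Joint: P(O, Q | lambda) = pi_{q_1} b_{q_1}(o_1) prod_{t>1} a_{q_{t-1} q_t} b_{q_t}(o_t)
        p = pi[path[0]] * B[path[0], obs[0]]
        for t in range(1, len(obs)):
            p *= A[path[t - 1], path[t]] * B[path[t], obs[t]]
        return p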
Three basic problems of HMMs
Given the HMM λ = (A, B, π) and the observed sequence O = o_1, o_2, ..., o_T, there are three problems of interest.
(1) The Evaluation Problem: what is the probability P(O | λ) that the observations are generated by the model?
(2) The Decoding Problem: given a model and a sequence of observations O, what is the most likely state sequence Q = q_1, q_2, ..., q_T that produced the observations?
(3) The Learning Problem: given a model and a sequence of observations O, how should we adjust the model parameters in order to maximize the probability P(O | λ)?
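For the evaluation problem, the standard solution is the forward algorithm (not derived on these slides); below is a minimal sketch, assuming observations are integer symbol indices and that the sequence is short enough not to underflow (real implementations use scaling or log-space).

    import numpy as np

    def forward(A, B, pi, obs):
        # alpha[t, i] = P(o_1 .. o_t, q_t = i | lambda);  P(O | lambda) = alpha[-1].sum()
        alpha = np.zeros((len(obs), len(pi)))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, len(obs)):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        return alpha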
Example of Decoding Problem
Have observation sequence O, find state sequence Q.
(1) Text: Shakespeare (s) or monkey (m)
O = ..aefjkuhrgnandshefoundhappinesssdmcamoe…
Q = ..mmmmmmssssssssssssssssssssssssssssmmmmmm…
(2) Dice: fair (F) or loaded (L) die
O = … …
Q = …LLLLLLLLLLLLFFFFFFFFFFFFFFFFLLLLLLLLLLLLLLLLLL…
(3) DNA: coding (C) or non-coding (N)
O = …AACCTTCCGCGCAATATAGGTAACCCCGG…
Q = …NNCCCCCCCCCCCCCCCCCNNNNNNNN…
The Viterbi Algorithm
Given a sequence O of observations and a model λ, we want to find the state sequence Q* with the maximum likelihood of observing O.
Let Q_t = q_1, q_2, ..., q_t and O_t = o_1, o_2, ..., o_t, and let Q_{t-1} be a partial state sequence that gives the maximum likelihood of observing the partial sequence O_{t-1}.
Define the quantity
δ_t(i) = max_{Q_{t-1}} P(Q_{t-1}, q_t = i, O_t | λ)
This can be computed recursively by starting with
δ_1(j) = π_j b_j(o_1), for every j
δ_{t+1}(j) = b_j(o_{t+1}) max_k (δ_t(k) a_kj), for every j
The Viterbi Algorithm (cont'd)
Keep trb_j(t+1) = argmax_k (δ_t(k) a_kj) for later traceback.
The last "best" state is given by q*_T = argmax_k (δ_T(k)).
Earlier states in the sequence are obtained by traceback: q*_{t-1} = trb_{q*_t}(t).
Then the sequence Q* giving the maximum likelihood of observing O is Q* = q*_1, q*_2, ..., q*_T.
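A minimal Python sketch of the recursion and traceback described above; it works in log-space to avoid numerical underflow (a practical choice, not part of the slides), and assumes states and symbols are integer indices.

    import numpy as np

    def viterbi(A, B, pi, obs):
        # Returns the state path Q* maximizing P(Q, O | lambda).
        T, N = len(obs), len(pi)
        logA, logB = np.log(A), np.log(B)
        delta = np.zeros((T, N))            # delta[t, j] from the previous slide
        trb = np.zeros((T, N), dtype=int)   # traceback pointers trb_j(t)
        delta[0] = np.log(pi) + logB[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + logA   # scores[k, j] = delta_{t-1}(k) + log a_kj
            trb[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + logB[:, obs[t]]
        # Last "best" state, then follow the stored pointers backwards.
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(trb[t, path[-1]]))
        return path[::-1]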
Example: Loaded Die
Two states: j = fair (F) or loaded (L) die
Symbols: k = 1, 2, 3, 4, 5, 6
Transition probabilities (for example):
– a_FF = 0.95, a_FL = 0.05, a_LF = 0.10, a_LL = 0.90
Emission probabilities:
– b_F(k) = 1/6 for k = 1, ..., 6 (all faces equally likely)
– b_L(6) = 1/2 and b_L(k) = 1/10 for k = 1, ..., 5 (the 6 face is favored)
Testing the Viterbi Algorithm
(Figure: a sequence of 300 tosses of fair and loaded dice.)
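A sketch of such a test: sample 300 tosses from the dice HMM of the previous slide and compare the Viterbi-decoded states with the true ones. The initial distribution π and the random seed are assumptions, and the viterbi() function from the earlier sketch is assumed to be in scope.

    import numpy as np

    rng = np.random.default_rng(0)

    # Dice HMM from the previous slide: state 0 = fair (F), state 1 = loaded (L);
    # symbol k-1 stands for face k.
    A = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
    B = np.array([[1/6] * 6,
                  [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]])
    pi = np.array([0.5, 0.5])   # assumed initial distribution (not given on the slide)

    # Sample 300 tosses together with the hidden states.
    states, rolls = [], []
    q = rng.choice(2, p=pi)
    for _ in range(300):
        rolls.append(rng.choice(6, p=B[q]))
        states.append(q)
        q = rng.choice(2, p=A[q])

    # Decode with the viterbi() sketch given after the Viterbi slides.
    decoded = viterbi(A, B, pi, rolls)
    print("fraction of tosses decoded correctly:",
          np.mean(np.array(decoded) == np.array(states)))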
Training
Normally the transition probabilities are not known, and not all the emission probabilities are known. If there are data for which even the hidden states are known, then those data can be used to train the parameters in the HMM λ = (A, B, π). In the case of gene recognition in DNA sequences, we use known genes for training.
(Oversimplified) example: genes in DNA
In prokaryotic DNA we have only two kinds of regions (ignoring regulatory sequences), coding (+) and non-coding (-), and four letters, A, C, G, T.
So we have 8 states, k = A+, C+, G+, T+, A-, C-, G-, T-, and 4 observable symbols, i = A, C, G, T.
Transition probability: a_kl = E_kl / Σ_m E_km, where E_kl is the total number of k-to-l transitions in all the training sequences.
Emission probability: 0 or 1, e.g. b_A+(A) = 1, b_A+(C) = 0.
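A minimal sketch of this counting estimate, assuming the training data are given as sequences of state labels (e.g. "A+", "C-") for positions whose hidden states are known; the function name and data layout are illustrative.

    from collections import defaultdict

    def train_transitions(labeled_seqs, states):
        # a_kl = E_kl / sum_m E_km, with E_kl the number of k -> l transitions
        counts = defaultdict(lambda: defaultdict(int))
        for seq in labeled_seqs:
            for k, l in zip(seq, seq[1:]):
                counts[k][l] += 1
        a = {}
        for k in states:
            total = sum(counts[k][l] for l in states)
            a[k] = {l: (counts[k][l] / total if total else 0.0) for l in states}
        return a

For the 8-state model above, states would be the list ["A+", "C+", "G+", "T+", "A-", "C-", "G-", "T-"] and each labeled sequence a list of such labels.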
(Oversimplified) example: genes in DNA (cont'd)
For better results, remember that protein genes are coded in (three-letter) codons, and letter usage in the 1st, 2nd and 3rd positions of a codon is different.
Hence use 12 coding states A_f+, C_f+, G_f+, T_f+ (f = 1, 2, 3) together with the 4 non-coding states A-, C-, G-, T-.
Transition probabilities are trained as before.
This is the basis for gene-finding software such as GENEMARK.
Maximum Likelihood
Assume we are always using an HMM, and let λ denote the parameters (transition and emission probabilities). For an observation O, determine λ using the maximum likelihood criterion:
λ_ML = argmax_λ P(O | λ)
If λ is used to generate a set of observables {O_i}, then the log-likelihood Σ_i P(O_i | λ) log P(O_i | λ') is maximized over λ' by λ' = λ. This gives a way to find λ_ML by iteration (the Baum-Welch algorithm).
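A compact sketch of one Baum-Welch (EM) re-estimation step for a single observation sequence, without the scaling that production code needs for long sequences; the update formulas are the standard ones, which the slide does not spell out, and the forward pass repeats the evaluation sketch so the example is self-contained.

    import numpy as np

    def forward(A, B, pi, obs):
        # alpha[t, i] = P(o_1 .. o_t, q_t = i | lambda)
        alpha = np.zeros((len(obs), len(pi)))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, len(obs)):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        return alpha

    def backward(A, B, obs, N):
        # beta[t, i] = P(o_{t+1} .. o_T | q_t = i, lambda)
        beta = np.zeros((len(obs), N))
        beta[-1] = 1.0
        for t in range(len(obs) - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        return beta

    def baum_welch_step(A, B, pi, obs):
        # One EM iteration: returns re-estimated (A, B, pi); P(O | lambda) does not decrease.
        obs = np.asarray(obs)
        alpha, beta = forward(A, B, pi, obs), backward(A, B, obs, len(pi))
        p_obs = alpha[-1].sum()                          # P(O | lambda)
        gamma = alpha * beta / p_obs                     # gamma[t, i] = P(q_t = i | O, lambda)
        xi = (alpha[:-1, :, None] * A[None, :, :] *      # xi[t, i, j] = P(q_t = i, q_{t+1} = j | O, lambda)
              (B[:, obs[1:]].T * beta[1:])[:, None, :]) / p_obs
        new_pi = gamma[0]
        new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        new_B = np.array([gamma[obs == k].sum(axis=0) for k in range(B.shape[1])]).T
        new_B /= gamma.sum(axis=0)[:, None]
        return new_A, new_B, new_pi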
Maximum a posteriori probability
Suppose there is a probability distribution P(λ) of the parameters. Then from Bayes' theorem, given the observation O, the posterior probability is
P(λ | O) = P(O | λ) P(λ) / P(O)
Since P(O) is independent of λ, the best λ is given by the maximum a posteriori probability estimate
λ_MAP = argmax_λ P(O | λ) P(λ)
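One common concrete instance of the MAP estimate (an assumption; the slide does not commit to a particular prior) is a Dirichlet prior over each row of transition probabilities, which amounts to adding pseudocounts to the counts E_kl built in the training sketch above.

    def map_transitions(counts, states, pseudocount=1.0):
        # MAP estimate with a symmetric Dirichlet prior: the pseudocount plays the role of P(lambda).
        # counts is the nested dictionary of transition counts E_kl from the training sketch.
        a = {}
        for k in states:
            row = counts.get(k, {})
            total = sum(row.get(l, 0) + pseudocount for l in states)
            a[k] = {l: (row.get(l, 0) + pseudocount) / total for l in states}
        return a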
References and books
Original papers by Krogh et al.
– Krogh, A., Brown, M., Mian, I. S., Sjölander, K., & Haussler, D. (1994a). Hidden Markov models in computational biology: Applications to protein modeling. Journal of Molecular Biology, 235.
– Krogh, A., Mian, I. S., & Haussler, D. (1994b). A hidden Markov model that finds genes in E. coli DNA. Nucleic Acids Research, 22.
Book (that I find most readable)
– R. Durbin, S. R. Eddy, A. Krogh and G. Mitchison, "Biological Sequence Analysis" (Cambridge University Press, 1998)
Good websites for HMM
This lecture is partly based on the article: Cold Spring Harbor Computational Genomics Course - Profile hidden Markov models, lecture.html
The Center for Computational Biology, Washington University in St. Louis School of Medicine
Where to find software
– ech/Section6/Recognition/myers.hmm.html
– Google: Hidden Markov Model Software
– GeneMark: opal.biology.gatech.edu/GeneMark/