
1 Lecture 2 Hidden Markov Model

2 Hidden Markov Model Motivation: We have a text partly written by Shakespeare and partly “written” by a monkey; we want to write a program that can tell which part was written by Shakespeare and which part by the monkey.

3 A 21st-century human-like monkey typing

4 Review of Probabilities
Probability of event X occurring: P(X)
Conditional probability – P(X|Y): the probability of X occurring given Y
Joint probability – P(X,Y) = P(X|Y)P(Y); P(X,Y|Z) = P(X|Y,Z)P(Y|Z)
Marginal probability – P(X) = Σ_Y P(X|Y)P(Y)
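Not part of the original slides, but the identities above can be checked numerically. A minimal Python sketch, using the fair/loaded-die probabilities that appear later on slide 19; the 50/50 prior over the two dice is an assumption made only for this illustration.

```python
# Hidden variable Y = which die (F or L), observed variable X = rolled face.
P_Y = {"F": 0.5, "L": 0.5}                                   # assumed prior over the dice
P_X_given_Y = {"F": {k: 1 / 6 for k in range(1, 7)},         # fair die
               "L": {**{k: 1 / 10 for k in range(1, 6)}, 6: 1 / 2}}  # loaded die

# Joint probability: P(X, Y) = P(X | Y) P(Y)
P_joint = {(x, y): P_X_given_Y[y][x] * P_Y[y] for y in P_Y for x in range(1, 7)}

# Marginal probability: P(X) = sum over Y of P(X | Y) P(Y)
P_X = {x: sum(P_joint[(x, y)] for y in P_Y) for x in range(1, 7)}
print(P_X[6])   # 0.5 * 1/6 + 0.5 * 1/2 = 1/3
```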

5 Posterior Probability
Usually we want to know the probability of an observation O given a supposition (model) M: P(O|M).
Reverse problem: given O, we want to know the probability that M is correct, the posterior probability P(M|O).
Bayes’ theorem: for any two events X, Y
– P(X|Y) = P(Y|X)P(X)/P(Y)
– P(M|O) = P(O|M)P(M)/P(O)
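A small sketch of Bayes’ theorem in the Shakespeare/monkey setting. The likelihood values and the 50/50 prior below are made up purely to show the arithmetic; they are not from the slides.

```python
# Hypothetical likelihoods of the same text O under the two models, and an assumed prior.
P_O_given_M = {"shakespeare": 1e-12, "monkey": 4e-12}
P_M = {"shakespeare": 0.5, "monkey": 0.5}

P_O = sum(P_O_given_M[m] * P_M[m] for m in P_M)               # marginal P(O)
posterior = {m: P_O_given_M[m] * P_M[m] / P_O for m in P_M}   # Bayes: P(M|O) = P(O|M)P(M)/P(O)
print(posterior)   # {'shakespeare': 0.2, 'monkey': 0.8}
```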

6 Definition of Hidden Markov Model
A Hidden Markov Model (HMM) is a finite set of states, each of which is associated with a probability distribution. Transitions among the states are governed by a set of probabilities called transition probabilities. In a particular state an outcome or observation can be generated according to the associated probability distribution. Only the outcome, not the state, is visible to an external observer, so the states are “hidden” from the observer; hence the name Hidden Markov Model.

7 Examples
Text written by Shakespeare and monkey
Dice thrown by a dealer with two dice, one fair and one loaded
A DNA sequence with coding and non-coding segments

8 Examples (cont’d)
Case   Observed symbols        Hidden state
Text   alphabet                Shakespeare / monkey
Dice   rolled numbers 1-6      fair die / loaded die
DNA    bases A, C, G, T        coding / non-coding

9 In order to define an HMM completely, the following elements are needed:
The number of states N of the model, {q_i | i = 1, 2, ..., N}.
The number of observation symbols M in the alphabet, {o_k | k = 1, 2, ..., M}.
A: a set of state transition probabilities, a_ij = P(q_{t+1} = j | q_t = i), where q_t denotes the current state at time t. The transition probabilities must satisfy the normal stochastic constraints a_ij ≥ 0 and Σ_j a_ij = 1.

10 B: an emission probability distribution in each of the states, b_j(k) = P(o_t = v_k | q_t = j), where v_k denotes the k-th observation symbol in the alphabet and o_t the observation at time t; b_j(k) is the probability of state j emitting the symbol v_k. The following stochastic constraints must be satisfied: b_j(k) ≥ 0 and Σ_k b_j(k) = 1.

11 π: the initial state distribution, π_i = P(q_1 = i), with Σ_i π_i = 1.
Therefore we can use the compact notation λ = (A, B, π) to denote an HMM with discrete probability distributions.
Notation
Sequence of observations: O = o_1, o_2, ..., o_T
Sequence of (hidden) states: Q = q_1, q_2, ..., q_T
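To make the notation concrete, here is a minimal sketch of λ = (A, B, π) as NumPy arrays, together with a toy generator that produces a hidden state sequence and an observation sequence. The transition and emission numbers are taken ahead from the fair/loaded-die example on slide 19; the uniform initial distribution π and the sample() helper are assumptions added only for illustration.

```python
import numpy as np

# lambda = (A, B, pi) for the fair (state 0) / loaded (state 1) die HMM of slide 19.
A = np.array([[0.95, 0.05],        # a_FF, a_FL
              [0.10, 0.90]])       # a_LF, a_LL
B = np.array([[1 / 6] * 6,                    # fair die: all faces equally likely
              [1 / 10] * 5 + [1 / 2]])        # loaded die: the six is favored
pi = np.array([0.5, 0.5])                     # assumed initial distribution (not given on the slides)

def sample(A, B, pi, T, rng=np.random.default_rng(0)):
    """Generate (states, observations) of length T from the HMM lambda = (A, B, pi)."""
    q = rng.choice(len(pi), p=pi)
    states, obs = [], []
    for _ in range(T):
        states.append(q)
        obs.append(rng.choice(B.shape[1], p=B[q]))   # emit a symbol 0..5 (face value - 1)
        q = rng.choice(A.shape[0], p=A[q])           # move to the next hidden state
    return states, obs

states, obs = sample(A, B, pi, 20)
```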

12 HMM scheme with K (DNA 4 / protein 20) symbols
[M_x] Match state x. Has K emission probabilities.
[D_x] Delete state x. Non-emitter.
[I_x] Insert state x. Has K emission probabilities.
[B] Begin state (for entering main model). Non-emitter.
[E] End state (for exiting main model).
[S] Start state. Non-emitter.
[N] N-terminal unaligned sequence state. Emits on transition with K emission probabilities.
[C] C-terminal unaligned sequence state. Emits on transition with K emission probabilities.
[J] Joining segment unaligned sequence state. Emits on transition with K emission probabilities.
© 2001 Per Kraulis

13 (1) The Markov assumption: the first-order transition probabilities are a_ij = P(q_{t+1} = j | q_t = i), i.e. the next state depends only on the current state. A model with only 1st-order transition probabilities is called a 1st-order Markov model; a k-th order Markov model involves k-th order transition probabilities P(q_{t+1} = j | q_t, q_{t-1}, ..., q_{t-k+1}).
(2) The stationarity assumption: state transition probabilities are independent of time, i.e. for any t_1 and t_2, P(q_{t_1+1} = j | q_{t_1} = i) = P(q_{t_2+1} = j | q_{t_2} = i).

14 (Cont’d) (3) The output independence assumption: the current observation is statistically independent of the previous observations. Given a sequence of observations O = o_1, o_2, ..., o_T and a state sequence Q = q_1, q_2, ..., q_T, then for an HMM set λ = (A, B, π) the probability for O to happen is P(O | Q, λ) = Π_{t=1..T} P(o_t | q_t, λ) = Π_{t=1..T} b_{q_t}(o_t). This assumption has limited validity and in some cases may become a severe weakness of the HMM.
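The product formula above translates directly into code. The sketch below (not from the slides) also includes the joint probability P(O, Q | λ) = π_{q_1} b_{q_1}(o_1) Π_{t≥2} a_{q_{t-1} q_t} b_{q_t}(o_t), a standard result that the slide leaves implicit; both functions take the (A, B, π) arrays, such as those defined after slide 11, as arguments.

```python
import numpy as np

def prob_obs_given_path(B, obs, path):
    """P(O | Q, lambda) = prod_t b_{q_t}(o_t), the output-independence formula above."""
    return np.prod([B[q, o] for q, o in zip(path, obs)])

def prob_obs_and_path(A, B, pi, obs, path):
    """Joint P(O, Q | lambda): initial state and emission, then transition * emission at each step."""
    p = pi[path[0]] * B[path[0], obs[0]]
    for t in range(1, len(obs)):
        p *= A[path[t - 1], path[t]] * B[path[t], obs[t]]
    return p
```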

15 Three basic problems of HMMs
Given the HMM set λ = (A, B, π) and the observed sequence O = o_1, o_2, ..., o_T, there are three problems of interest.
(1) The Evaluation Problem: what is the probability P(O | λ) that the observations are generated by the model?
(2) The Decoding Problem: given a model λ and a sequence of observations O, what is the most likely state sequence Q = q_1, q_2, ..., q_T that produced the observations?
(3) The Learning Problem: given a model λ and a sequence of observations O, how should we adjust the model parameters in order to maximize the probability P(O | λ)?
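The slides do not spell out how the evaluation problem is solved; the standard answer is the forward algorithm, sketched minimally below. For long sequences one would work with scaled or log probabilities to avoid underflow.

```python
import numpy as np

def forward_likelihood(A, B, pi, obs):
    """Evaluation problem: P(O | lambda) via the forward algorithm.
    alpha[i] = P(o_1..o_t, q_t = i | lambda), updated one observation at a time."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # alpha_{t+1}(j) = sum_i alpha_t(i) a_ij * b_j(o_{t+1})
    return alpha.sum()
```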

16 Example of Decoding Problem Have observation sequence O, find state sequence Q. (1)Text Shakespeare (s) or monkey (m) O =..aefjkuhrgnandshefoundhappinesssdmcamoe… Q =..mmmmmmssssssssssssssssssssssssssssmmmmmm… (2) Dice fair (F) or loaded (L) dice O = …132455644366366345566116345621661124536… Q = …LLLLLLLLLLLLFFFFFFFFFFFFFFFFLLLLLLLLLLLLLLLLLL … (3) DNA coding (C) or non-coding (N) O = …AACCTTCCGCGCAATATAGGTAACCCCGG… Q = …NNCCCCCCCCCCCCCCCCCNNNNNNNN…

17 The Viterbi Algorithm
Given a sequence O of observations and a model λ, we want to find the state sequence Q* with the maximum likelihood of observing O. Let Q_t = q_1, q_2, ..., q_t and O_t = o_1, o_2, ..., o_t. Suppose Q_{t-1} is a partial state sequence that gives the maximum likelihood for observing the partial sequence O_{t-1}. Define the quantity
δ_t(i) = max over Q_{t-1} of P(Q_{t-1}, q_t = i, O_t | λ)
This can be computed recursively by starting with
δ_1(j) = π_j b_j(o_1), for every j
δ_{t+1}(j) = b_j(o_{t+1}) max_k (δ_t(k) a_kj), for every j

18 The Viterbi Algorithm (cont’d)
Keep trb_j(t+1) = argmax_k (δ_t(k) a_kj) for later traceback. The last “best” state is given by q*_T = argmax_k (δ_T(k)). Earlier states in the sequence are obtained by traceback: q*_{t-1} = trb_{q*_t}(t). Then the sequence Q* giving the maximum likelihood of observing O is Q* = q*_1, q*_2, ..., q*_T.
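A minimal Python sketch of the recursion and traceback of slides 17-18. It works in log space so that long sequences do not underflow, which is a standard implementation choice rather than something stated on the slides.

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Most likely state path Q* for the observation sequence obs (symbol indices)."""
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, N))            # delta[t, j] = log delta_{t+1}(j) of the slides
    trb = np.zeros((T, N), dtype=int)   # traceback pointers trb_j(t+1)
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA        # scores[k, j] = log(delta_t(k) a_kj)
        trb[t] = scores.argmax(axis=0)               # best predecessor k for each state j
        delta[t] = logB[:, obs[t]] + scores.max(axis=0)
    # Traceback: last best state, then follow the stored pointers backwards.
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = trb[t, path[t]]
    return path
```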

19 Example: Loaded Die
Two states: j = fair (F) or loaded (L) die
Symbols: k = 1, 2, 3, 4, 5, 6
Transition probabilities (for example):
– a_FF = 0.95, a_FL = 0.05, a_LF = 0.10, a_LL = 0.90
Emission probabilities:
– b_F(k) = 1/6 for k = 1, ..., 6 (all faces equally likely)
– b_L(6) = 1/2 and b_L(k) = 1/10 for k = 1, ..., 5 (the six is favored)
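Running the viterbi() sketch above on this model might look as follows; the roll sequence and the uniform initial distribution π are made up for illustration, since the slide does not give them.

```python
import numpy as np

# Parameters of slide 19 (state 0 = fair, state 1 = loaded; symbol k-1 = face k).
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])
B = np.array([[1 / 6] * 6,
              [1 / 10] * 5 + [1 / 2]])
pi = np.array([0.5, 0.5])                               # assumed initial distribution

rolls = [3, 1, 6, 6, 2, 6, 6, 6, 5, 6, 6, 1, 2, 3, 4]   # made-up observation sequence
obs = [r - 1 for r in rolls]                            # faces 1..6 -> symbol indices 0..5

path = viterbi(A, B, pi, obs)                           # viterbi() from the sketch after slide 18
print("".join("FL"[s] for s in path))                   # decoded fair/loaded label for each roll
```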

20

21 Testing the Viterbi Algorithm A sequence of 300 tosses of fair and loaded dice

22 Training
Normally, the transition probabilities are not known, and not all the emission probabilities are known. If there are data for which even the hidden states are known, then the data can be used to train the parameters in the HMM set λ = (A, B, π). In the case of gene recognition in DNA sequences, we use known genes for training.

23 (Oversimplified) example: genes in DNA
In prokaryotic DNA we have only two kinds of regions (ignoring regulatory sequences), coding (+) and non-coding (-), and four letters: A, C, G, T.
So we have 8 states, k = A+, C+, G+, T+, A-, C-, G-, T-, and 4 observable symbols, i = A, C, G, T.
Transition probabilities: a_kl = E_kl / (Σ_m E_km), where E_kl is the total number of k-to-l transitions in all the training sequences.
Emission probabilities are 0 or 1, e.g. b_A+(A) = 1, b_A+(C) = 0.
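A small sketch of this counting-based training, assuming each training sequence has already been converted to its labelled states (A+, C+, ..., T-); the helper name and the tiny training set are illustrative only.

```python
from collections import defaultdict

def estimate_transitions(labelled_seqs):
    """Estimate a_kl = E_kl / sum_m E_km from sequences whose hidden states are known."""
    counts = defaultdict(lambda: defaultdict(int))     # counts[k][l] = E_kl
    for states in labelled_seqs:
        for k, l in zip(states, states[1:]):
            counts[k][l] += 1
    a = {}
    for k, row in counts.items():
        total = sum(row.values())                      # sum_m E_km
        a[k] = {l: e / total for l, e in row.items()}
    return a

# Tiny made-up training example: a coding stretch followed by a non-coding stretch.
train = [["A+", "T+", "G+", "C+", "A-", "A-", "T-", "G-"]]
print(estimate_transitions(train)["C+"])   # {'A-': 1.0} in this toy example
```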

24 (Oversimplified) example: genes in DNA (cont’d)
For better results, remember that protein genes are coded in (three-letter) codons, and letter usage in the 1st, 2nd and 3rd positions of a codon is different. Hence the coding states are expanded to 12: A_f+, C_f+, G_f+, T_f+ with f = 1, 2, 3, alongside the non-coding states A-, C-, G-, T-.
Transition probabilities are trained as before.
This is the basis for gene-finding software such as GENEMARK.

25 Maximum Likelihood
Assume we are always using an HMM, and let θ denote the parameters (transition and emission probabilities). For an observation O, determine θ using the maximum likelihood criterion: θ_ML = argmax_θ P(O|θ).
If a parameter set θ0 is used to generate a set of observables {O_i}, then the log-likelihood Σ_{O_i} P(O_i | θ0) log P(O_i | θ) is maximized by θ = θ0. This gives a way to find θ_ML by iteration (the Baum-Welch algorithm).
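The slides leave the Baum-Welch update itself implicit. Below is a sketch of one re-estimation step for a discrete HMM using scaled forward-backward variables; it illustrates the standard algorithm and is not code from the lecture.

```python
import numpy as np

def forward_scaled(A, B, pi, obs):
    """Scaled forward pass: each alpha[t] sums to 1, and P(O|lambda) = prod_t scale[t]."""
    T, N = len(obs), len(pi)
    alpha, scale = np.zeros((T, N)), np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    return alpha, scale

def backward_scaled(A, B, obs, scale):
    """Backward pass using the same scale factors as the forward pass."""
    T, N = len(obs), A.shape[0]
    beta = np.zeros((T, N)); beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    return beta

def baum_welch_step(A, B, pi, obs):
    """One EM (Baum-Welch) re-estimation of (A, B, pi) from a single observation sequence."""
    obs = np.asarray(obs)
    T, N, M = len(obs), A.shape[0], B.shape[1]
    alpha, scale = forward_scaled(A, B, pi, obs)
    beta = backward_scaled(A, B, obs, scale)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)          # gamma[t, i] = P(q_t = i | O, lambda)
    xi = np.zeros((T - 1, N, N))                        # xi[t, i, j] = P(q_t = i, q_{t+1} = j | O, lambda)
    for t in range(T - 1):
        x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi[t] = x / x.sum()
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(M):
        new_B[:, k] = gamma[obs == k].sum(axis=0)       # expected emissions of symbol k per state
    new_B /= gamma.sum(axis=0)[:, None]
    return new_A, new_B, new_pi
```

Iterating baum_welch_step() until P(O|λ), the product of the scale factors returned by forward_scaled(), stops increasing converges to a local maximum of the likelihood.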

26 Maximum a posteriori probability
Suppose there is a probability distribution P(θ) over the parameters. Then from Bayes’ theorem, given the observation O, the posterior probability is P(θ|O) = P(O|θ) P(θ) / P(O). Since P(O) is independent of θ, the best θ is given by the maximum a posteriori probability estimate θ_MAP = argmax_θ P(O|θ) P(θ).
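As a small illustration (not from the slides) of how a prior P(θ) changes the estimate, consider one die’s emission probabilities with a symmetric Dirichlet prior: with a Dirichlet(α) prior the MAP estimate is (n_k + α_k - 1) / (N + Σ_j α_j - K), compared with the ML estimate n_k / N. The counts and prior below are made up.

```python
import numpy as np

counts = np.array([3, 2, 1, 4, 2, 8])   # made-up face counts from 20 rolls
alpha = np.full(6, 2.0)                 # symmetric Dirichlet prior (one pseudo-observation per face)

theta_ml = counts / counts.sum()        # argmax_theta P(O | theta)
theta_map = (counts + alpha - 1) / (counts.sum() + alpha.sum() - len(counts))  # argmax_theta P(O | theta) P(theta)

print(theta_ml.round(3))    # [0.15  0.1   0.05  0.2   0.1   0.4 ]
print(theta_map.round(3))   # pulled toward the uniform distribution by the prior
```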

27

28 References and books
Original papers by Krogh et al.:
– Krogh, A., Brown, M., Mian, I. S., Sjölander, K., & Haussler, D. (1994a). Hidden Markov models in computational biology: Applications to protein modeling. Journal of Molecular Biology, 235, 1501-1531.
– Krogh, A., Mian, I. S., & Haussler, D. (1994b). A hidden Markov model that finds genes in E. coli DNA. Nucleic Acids Research, 22, 4768-4778.
Book (that I find most readable):
– R. Durbin, S. R. Eddy, A. Krogh and G. Mitchison, “Biological Sequence Analysis” (Cambridge University Press, 1998).

29 Good websites for HMM
This lecture is partly based on the article: www.marypat.org/stuff/random/markov.html
Cold Spring Harbor – Computational Genomics Course, Profile hidden Markov models: www.people.virginia.edu/~wrp/cshl97/hmm-lecture.html
The Center for Computational Biology, Washington University in St. Louis School of Medicine: www.ccb.wustl.edu

30 Where to find software
www.speech.cs.cmu.edu/comp.speech/Section6/Recognition/myers.hmm.html
www.netid.com/html/hmmpro.html
Google: Hidden Markov Model Software
GeneMark: opal.biology.gatech.edu/GeneMark/

