1
CS 6243 Machine Learning Markov Chain and Hidden Markov Models
2
Outline Background on probability Hidden Markov models –Algorithms –Applications
3
Probability Basics Definition (informal) –Probabilities are numbers assigned to events that indicate “how likely” it is that the event will occur when a random experiment is performed –A probability law for a random experiment is a rule that assigns probabilities to the events in the experiment –The sample space S of a random experiment is the set of all possible outcomes
4
Probabilistic Calculus
–All probabilities are between 0 and 1
–If A and B are mutually exclusive: P(A ∪ B) = P(A) + P(B)
–Thus: P(not(A)) = P(A^c) = 1 – P(A)
5
Conditional probability
–The joint probability of two events A and B, P(A ∩ B), or simply P(A, B), is the probability that A and B occur at the same time
–The conditional probability P(A|B) is the probability that A occurs given that B occurred
–P(A | B) = P(A ∩ B) / P(B)
–P(A ∩ B) = P(A | B) P(B)
–P(A ∩ B) = P(B | A) P(A)
6
Example
–Roll a die. If I tell you the number is less than 4, what is the probability of an even number?
–P(d = even | d < 4) = P(d = even ∩ d < 4) / P(d < 4) = P(d = 2) / P(d = 1, 2, or 3) = (1/6) / (3/6) = 1/3
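A quick check of this conditional probability by enumerating the sample space; this is a minimal sketch, not part of the slides, and the variable names are illustrative only.

```python
from fractions import Fraction

outcomes = range(1, 7)                      # fair six-sided die
given = [d for d in outcomes if d < 4]      # conditioning event: d < 4
both = [d for d in given if d % 2 == 0]     # even AND d < 4

print(Fraction(len(both), len(given)))      # P(even | d < 4) = 1/3
```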
7
Independence
–A and B are independent iff: P(A ∩ B) = P(A) P(B)
–Therefore, if A and B are independent: P(A | B) = P(A ∩ B) / P(B) = P(A)
–These two constraints are logically equivalent
8
Examples
–Are P(d = even) and P(d < 4) independent?
 P(d = even and d < 4) = 1/6; P(d = even) = ½; P(d < 4) = ½; ½ * ½ = ¼ > 1/6, so they are not independent
–If your die actually has 8 faces, will P(d = even) and P(d < 5) be independent?
–Are P(even in first roll) and P(even in second roll) independent?
–For a playing card, are the suit and rank independent?
9
Theorem of total probability
–Let B_1, B_2, …, B_N be mutually exclusive events whose union equals the sample space S. We refer to these sets as a partition of S.
–An event A can be represented as: A = (A ∩ B_1) ∪ (A ∩ B_2) ∪ … ∪ (A ∩ B_N)
–Since B_1, B_2, …, B_N are mutually exclusive: P(A) = P(A ∩ B_1) + P(A ∩ B_2) + … + P(A ∩ B_N)
–Therefore P(A) = P(A|B_1)*P(B_1) + P(A|B_2)*P(B_2) + … + P(A|B_N)*P(B_N) = Σ_i P(A | B_i) * P(B_i)
–Also called exhaustive conditionalization or marginalization
10
Example
–A loaded die: P(6) = 0.5, P(1) = … = P(5) = 0.1
–Probability of an even number?
 P(even) = P(even | d < 6) * P(d < 6) + P(even | d = 6) * P(d = 6) = 2/5 * 0.5 + 1 * 0.5 = 0.7
11
Another example
–A box of dice: 99% fair, 1% loaded (P(6) = 0.5, P(1) = … = P(5) = 0.1)
–Randomly pick a die and roll it. What is P(6)?
 P(6) = P(6 | F) * P(F) + P(6 | L) * P(L) = 1/6 * 0.99 + 0.5 * 0.01 ≈ 0.17
12
Bayes theorem
–P(A ∩ B) = P(B) * P(A | B) = P(A) * P(B | A)
–Therefore: P(B | A) = P(A | B) * P(B) / P(A)
 posterior probability = conditional probability (likelihood) * prior of B / normalizing constant (prior of A)
–This is known as Bayes Theorem or Bayes Rule, and is (one of) the most useful relations in probability and statistics
–Bayes Theorem is definitely the fundamental relation in Statistical Pattern Recognition
13
Bayes theorem (cont’d)
–Given B_1, B_2, …, B_N, a partition of the sample space S. Suppose that event A occurs; what is the probability of event B_j?
–P(B_j | A) = P(A | B_j) * P(B_j) / P(A) = P(A | B_j) * P(B_j) / Σ_i P(A | B_i) * P(B_i)
 posterior probability = likelihood * prior of B_j / normalizing constant (theorem of total probability)
–B_j: different models / hypotheses
–Having observed A, should you choose the model that maximizes P(B_j | A) or P(A | B_j)? It depends on how much you know about B_j!
14
Example
–A test for a rare disease reports positive for 99.5% of people with the disease, and negative 99.9% of the time for those without. The disease is present in the population at 1 in 100,000.
–What is P(disease | positive test)? P(D|+) = P(+|D)P(D)/P(+) ≈ 0.01
–What is P(disease | negative test)? P(D|–) = P(–|D)P(D)/P(–) ≈ 5e-8
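A minimal sketch (not from the slides) that plugs the slide's numbers into Bayes rule and the theorem of total probability; the variable names are illustrative only.

```python
p_d = 1e-5            # P(disease): 1 in 100,000
p_pos_d = 0.995       # P(+ | disease)
p_neg_nd = 0.999      # P(- | no disease)

p_pos = p_pos_d * p_d + (1 - p_neg_nd) * (1 - p_d)      # total probability of +
p_neg = (1 - p_pos_d) * p_d + p_neg_nd * (1 - p_d)      # total probability of -

print(p_pos_d * p_d / p_pos)         # P(disease | +) ~ 0.0098, i.e. about 0.01
print((1 - p_pos_d) * p_d / p_neg)   # P(disease | -) ~ 5e-8
```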
15
Another example
–Recall the casino’s box of dice: 99% fair, 1% loaded (probability 0.5 of rolling a six)
–We saw that if we randomly pick a die and roll it, we have a 17% chance of getting a six
–If we get three sixes in a row, what is the chance that the die is loaded? How about five sixes in a row?
16
P(loaded | 666) = P(666 | loaded) * P(loaded) / P(666) = 0.5^3 * 0.01 / (0.5^3 * 0.01 + (1/6)^3 * 0.99) = 0.21
P(loaded | 66666) = P(66666 | loaded) * P(loaded) / P(66666) = 0.5^5 * 0.01 / (0.5^5 * 0.01 + (1/6)^5 * 0.99) = 0.71
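A small sketch of this posterior as a function of the number of consecutive sixes; the function name is hypothetical, the numbers are the slide's.

```python
def p_loaded(k, prior_loaded=0.01):
    """Posterior P(loaded | k sixes in a row) for the 99%/1% box of dice."""
    like_loaded = 0.5 ** k          # P(k sixes | loaded)
    like_fair = (1 / 6) ** k        # P(k sixes | fair)
    num = like_loaded * prior_loaded
    return num / (num + like_fair * (1 - prior_loaded))

print(p_loaded(3))   # ~0.21
print(p_loaded(5))   # ~0.71
```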
17
Simple probabilistic models for DNA sequences
Assume nature generates a type of DNA sequence as follows:
1.A box of dice, each with four faces: {A, C, G, T}
2.Select a die suitable for the type of DNA
3.Roll it, and append the symbol to a string
4.Repeat 3 until all symbols have been generated
Given a string, say X = “GATTCCAA…”, and two dice:
–M1 has the distribution pA = pC = pG = pT = 0.25
–M2 has the distribution pA = pT = 0.20, pC = pG = 0.30
What is the probability of the sequence being generated by M1 or M2?
18
Model selection by maximum likelihood criterion
X = GATTCCAA
P(X | M1) = P(x_1, x_2, …, x_n | M1) = Π_{i=1..n} P(x_i | M1) = 0.25^8 = 1.53e-5
P(X | M2) = P(x_1, x_2, …, x_n | M2) = Π_{i=1..n} P(x_i | M2) = 0.2^5 * 0.3^3 = 8.64e-6
P(X|M1) / P(X|M2) = Π_i P(x_i|M1) / P(x_i|M2) = (0.25/0.2)^5 * (0.25/0.3)^3
Log likelihood ratio (LLR) = Σ_i log(P(x_i|M1) / P(x_i|M2)) = n_A S_A + n_C S_C + n_G S_G + n_T S_T = 5 * log(1.25) + 3 * log(0.833) = 0.57
where S_i = log(P(i | M1) / P(i | M2)), i = A, C, G, T
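A minimal sketch (not from the slides) reproducing these numbers in code; natural logs are used so the LLR matches the 0.57 above.

```python
import math

X = "GATTCCAA"
p1 = dict(A=0.25, C=0.25, G=0.25, T=0.25)   # M1
p2 = dict(A=0.20, C=0.30, G=0.30, T=0.20)   # M2

like1 = math.prod(p1[x] for x in X)          # 0.25**8 ~ 1.53e-5
like2 = math.prod(p2[x] for x in X)          # 0.2**5 * 0.3**3 ~ 8.64e-6
llr = sum(math.log(p1[x] / p2[x]) for x in X)
print(like1, like2, llr)                     # llr ~ 0.57
```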
19
Model selection by maximum a posteriori probability criterion
–Take the prior probabilities of M1 and M2 into consideration if known
–log(P(M1|X) / P(M2|X)) = LLR + log(P(M1)) – log(P(M2)) = n_A S_A + n_C S_C + n_G S_G + n_T S_T + log(P(M1)) – log(P(M2))
–If P(M1) ≈ P(M2), results will be similar to the LLR test
20
Markov models for DNA sequences We have assumed independence of nucleotides in different positions - unrealistic in biology
21
Example: CpG islands
–CpG: 2 adjacent nucleotides on the same strand (not a base pair; the “p” stands for the phosphodiester bond of the DNA backbone)
–In mammalian promoter regions, CpG is more frequent than in other regions of the genome
–CpG islands often mark gene-rich regions
22
CpG islands
–More CpG than elsewhere
–More C & G than elsewhere, too
–Typical length: a few hundred to a few thousand bp
Questions
–Is a short sequence (say, 200 bp) a CpG island or not?
–Given a long sequence (say, 10-100 kb), find the CpG islands?
23
Markov models
–A sequence of random variables is a k-th order Markov chain if, for all i, the i-th value is independent of all but the previous k values:
 P(x_i | x_{i-1}, x_{i-2}, …, x_1) = P(x_i | x_{i-1}, …, x_{i-k})
–First order (k=1): P(x_i | x_{i-1}, …, x_1) = P(x_i | x_{i-1})
–Second order (k=2): P(x_i | x_{i-1}, …, x_1) = P(x_i | x_{i-1}, x_{i-2})
–0-th order: P(x_i | x_{i-1}, …, x_1) = P(x_i) (independence)
24
First order Markov model
25
A 1st order Markov model for CpG islands
–Essentially a finite state automaton (FSA) with probabilistic (instead of deterministic) transitions
–4 states: A, C, G, T
–16 transitions: a_st = P(x_i = t | x_{i-1} = s)
–Plus Begin/End states
26
Probability of emitting sequence x
P(x) = P(x_1) Π_{i=2..n} P(x_i | x_{i-1}) = a_{B x_1} Π_{i=2..n} a_{x_{i-1} x_i}
27
Probability of a sequence
–What’s the probability of ACGGCTA in this model?
 P(A) * P(C|A) * P(G|C) * … * P(A|T) = a_BA a_AC a_CG … a_TA
–Equivalently: follow the path in the automaton, and multiply the transition probabilities on the path
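A minimal sketch of this computation for a generic first-order chain; `start` and `trans` are hypothetical dictionaries holding the begin-state and transition probabilities, not parameters from the slides.

```python
def chain_prob(seq, start, trans):
    """P(x) under a 1st-order Markov chain: a_{B,x1} * prod a_{x_{i-1} x_i}."""
    p = start[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        p *= trans[prev][cur]
    return p
```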
28
Training
–Estimate the parameters of the model
–CpG+ model: count the transition frequencies from known CpG islands
–CpG– model: count the transition frequencies from sequences without CpG islands
–a_st = #(s→t) / Σ_t' #(s→t'), estimated separately to give a+_st and a–_st
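A sketch of this counting-and-normalizing step; the function name, the pseudo-count default, and the assumption that sequences contain only ACGT are mine, not the slides'.

```python
from collections import defaultdict

def estimate_transitions(seqs, alphabet="ACGT", pseudo=1.0):
    """ML estimate a_st = #(s->t) / sum_t' #(s->t'), with optional pseudo-counts."""
    counts = {s: defaultdict(lambda: pseudo) for s in alphabet}
    for seq in seqs:
        for prev, cur in zip(seq, seq[1:]):
            counts[prev][cur] += 1
    return {s: {t: counts[s][t] / sum(counts[s][u] for u in alphabet)
                for t in alphabet}
            for s in alphabet}
```

Running this once on known CpG islands and once on non-island sequences gives the a+ and a– tables used below.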
29
Discrimination / Classification
–Given a sequence, is it a CpG island or not?
–Log likelihood ratio (LLR): S(x) = Σ_i log2(a+_{x_{i-1} x_i} / a–_{x_{i-1} x_i}) = Σ_i β_{x_{i-1} x_i}
–e.g. β_CG = log2(a+_CG / a–_CG) = log2(0.274/0.078) = 1.812
–β_BA = log2(a+_BA / a–_BA) = log2(0.591/1.047) = -0.825
30
Example
X = ACGGCGACGTCG
S(X) = β_BA + β_AC + β_CG + β_GG + β_GC + β_CG + β_GA + β_AC + β_CG + β_GT + β_TC + β_CG
     = β_BA + 2β_AC + 4β_CG + β_GG + β_GC + β_GA + β_GT + β_TC
     = -0.825 + 2*0.419 + 4*1.812 + 0.313 + 0.461 - 0.624 - 0.730 + 0.573 = 7.25
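A minimal sketch (not from the slides) of the same score as a function; `a_plus` and `a_minus` are hypothetical nested dicts of trained transition probabilities, and the begin-state term β_BA is omitted for simplicity.

```python
import math

def cpg_score(x, a_plus, a_minus):
    """Sum of log2 odds ratios over the transitions of x (no begin-state term)."""
    return sum(math.log2(a_plus[s][t] / a_minus[s][t])
               for s, t in zip(x, x[1:]))
```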
31
CpG island scores
Figure 3.2 (Durbin et al.): histogram of length-normalized scores for all the sequences. CpG islands are shown in dark grey and non-CpG sequences in light grey.
32
Questions
–Q1: Given a short sequence, is it more likely from the CpG+ model or the CpG– model?
–Q2: Given a long sequence, where are the CpG islands (if any)?
 Approach 1: score (e.g.) 100 bp windows. Pro: simple. Con: arbitrary, fixed length, inflexible
 Approach 2: combine the +/– models
33
Combined model Given a long sequence, predict which state each position is in. (states are hidden: Hidden Markov model)
34
Hidden Markov Model (HMM)
–Introduced in the 70’s for speech recognition
–Have been shown to be good models for biosequences: alignment, gene prediction, protein domain analysis, …
–An observed sequence is modeled by a Markov chain whose state path is unknown; model parameters may be known or unknown
–Observed data: emission sequence X = (x_1 x_2 … x_n)
–Hidden data: state sequence Π = (π_1 π_2 … π_n)
35
Hidden Markov model (HMM)
Definition: A hidden Markov model (HMM) is a five-tuple:
–Alphabet = { b_1, b_2, …, b_M }
–Set of states Q = { 1, …, K }
–Transition probabilities between any two states: a_ij = transition probability from state i to state j, with a_i1 + … + a_iK = 1 for all states i = 1…K
–Start probabilities a_0i, with a_01 + … + a_0K = 1
–Emission probabilities within each state: e_k(b) = P(x_i = b | π_i = k), with e_k(b_1) + … + e_k(b_M) = 1 for all states k = 1…K
36
HMM for the Dishonest Casino
A casino has two dice:
–Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
–Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2
The casino player switches back and forth between the fair and loaded die once in a while
37
The dishonest casino model
–Transition probabilities: a_FF = 0.95, a_FL = 0.05, a_LF = 0.05, a_LL = 0.95
–Emission probabilities (Fair): e_F(1) = … = e_F(6) = 1/6
–Emission probabilities (Loaded): e_L(1) = … = e_L(5) = 1/10, e_L(6) = 1/2
38
Simple scenario
–You don’t know the probabilities
–The casino player lets you observe which die he/she uses every time, so the “state” of each roll is known
–Training (parameter estimation): how often does the casino player switch dice? How “loaded” is the loaded die?
–Simply count the frequency with which each face appeared and the frequency of die switching
–May add pseudo-counts if the number of observations is small
39
More complex scenarios The “state” of each roll is unknown: –You are given the results of a series of rolls –You don’t know which number is generated by which die You may or may not know the parameters –How “loaded” is the loaded die –How frequently the casino player switches dice
40
The three main questions on HMMs
1.Decoding
 GIVEN an HMM M and a sequence x,
 FIND the sequence of states π that maximizes P(x, π | M)
2.Evaluation
 GIVEN an HMM M and a sequence x,
 FIND P(x | M) [or P(x) for simplicity]
3.Learning
 GIVEN an HMM M with unspecified transition/emission probabilities and a sequence x,
 FIND parameters θ = (e_i(.), a_ij) that maximize P(x | θ), sometimes written as P(x, θ) for simplicity
41
Question # 1 – Decoding
GIVEN an HMM with its parameters, and a sequence of rolls by the casino player:
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: What portion of the sequence was generated with the fair die, and what portion with the loaded die?
This is the DECODING question in HMMs
42
A parse of a sequence
–Given a sequence x = x_1 … x_N and an HMM with K states, a parse of x is a sequence of states π = π_1, …, π_N
–(Figure: trellis with K states per position x_1, x_2, x_3, …)
43
Probability of a parse
Given a sequence x = x_1 … x_N and a parse π = π_1, …, π_N, how likely is the parse (given our HMM)?
P(x, π) = P(x_1, …, x_N, π_1, …, π_N)
 = P(x_N, π_N | π_{N-1}) P(x_{N-1}, π_{N-1} | π_{N-2}) … P(x_2, π_2 | π_1) P(x_1, π_1)
 = P(x_N | π_N) P(π_N | π_{N-1}) … P(x_2 | π_2) P(π_2 | π_1) P(x_1 | π_1) P(π_1)
 = a_{0 π_1} a_{π_1 π_2} … a_{π_{N-1} π_N} e_{π_1}(x_1) … e_{π_N}(x_N)
44
Example
What’s the probability of π = Fair, Fair, Fair, Fair, Load, Load, Load, Load, Fair, Fair and X = 1, 2, 1, 5, 6, 2, 1, 6, 2, 4?
(Model: a_FF = a_LL = 0.95, a_FL = a_LF = 0.05; P(s|F) = 1/6 for s = 1…6; P(s|L) = 1/10 for s = 1…5, P(6|L) = 1/2)
45
Example
What’s the probability of π = Fair, Fair, Fair, Fair, Load, Load, Load, Load, Fair, Fair and X = 1, 2, 1, 5, 6, 2, 1, 6, 2, 4?
P = ½ * P(1|F) P(F|F) … P(5|F) P(L|F) P(6|L) P(L|L) … P(4|F)
  = ½ * 0.95^7 * 0.05^2 * (1/6)^6 * (1/10)^2 * (1/2)^2 ≈ 5 x 10^-11
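A minimal sketch (not from the slides) that multiplies out this joint probability directly, assuming equal start probabilities for the two dice as on the slide.

```python
rolls  = [1, 2, 1, 5, 6, 2, 1, 6, 2, 4]
states = list("FFFFLLLLFF")

a = {("F", "F"): 0.95, ("F", "L"): 0.05, ("L", "L"): 0.95, ("L", "F"): 0.05}
e = {"F": {s: 1 / 6 for s in range(1, 7)},
     "L": {**{s: 0.1 for s in range(1, 6)}, 6: 0.5}}

p = 0.5 * e[states[0]][rolls[0]]            # start: assume P(F) = P(L) = 1/2
for i in range(1, len(rolls)):
    p *= a[(states[i - 1], states[i])] * e[states[i]][rolls[i]]
print(p)                                     # ~5e-11
```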
46
Decoding
Parse (path) is unknown. What to do?
Alternative algorithms:
–Most probable single path (Viterbi algorithm)
–Sequence of most probable states (Forward-backward algorithm)
47
The Viterbi algorithm
Goal: to find π* = argmax_π P(x, π)
This is equivalent to finding π* = argmax_π log P(x, π)
48
The Viterbi algorithm
Find a path with the following objective:
–Maximize the product of transition and emission probabilities, i.e. maximize the sum of log probabilities
–View the sequence as a graph: a begin node B, then one F node and one L node per position. Edge weights are log transition probabilities (symbol independent); node weights are log emission probabilities (depend on the symbol at that position)
–P(s|F) = 1/6 for s in [1..6]; P(s|L) = 1/10 for s in [1..5]; P(6|L) = 1/2
49
The Viterbi algorithm
V_F(i+1) = weight of the best parse of (x_1 … x_{i+1}), with x_{i+1} emitted by state F
V_L(i+1) = weight of the best parse of (x_1 … x_{i+1}), with x_{i+1} emitted by state L
V_F(i+1) = r_F(x_{i+1}) + max { V_F(i) + w_FF, V_L(i) + w_LF }
V_L(i+1) = r_L(x_{i+1}) + max { V_F(i) + w_FL, V_L(i) + w_LL }
where w_FF = log(a_FF), etc., and r_F(x_{i+1}) = log(e_F(x_{i+1}))
50
Recursion from FSA directly
In probability space: a_FF = a_LL = 0.95, a_FL = a_LF = 0.05; P(s|F) = 1/6 for s = 1…6; P(s|L) = 1/10 for s = 1…5, P(6|L) = 1/2
 P_F(i+1) = e_F(x_{i+1}) max { P_L(i) a_LF, P_F(i) a_FF }
 P_L(i+1) = e_L(x_{i+1}) max { P_L(i) a_LL, P_F(i) a_FL }
In log space: w_FF = w_LL = -0.05, w_FL = w_LF = -3.00; r_F(s) = -1.8 for s = 1…6; r_L(s) = -2.3 for s = 1…5, r_L(6) = -0.7
 V_F(i+1) = r_F(x_{i+1}) + max { V_L(i) + w_LF, V_F(i) + w_FF }
 V_L(i+1) = r_L(x_{i+1}) + max { V_L(i) + w_LL, V_F(i) + w_FL }
51
In general: more states / symbols
–Alphabet = { b_1, b_2, …, b_M }
–Set of states Q = { 1, …, K }
–States are completely connected: K^2 transition probabilities (some may be 0); each state has M emission probabilities (some may be 0)
52
The Viterbi Algorithm
–Similar to “aligning” a set of states to a sequence: fill in a K x N dynamic programming matrix of values V_l(i)
–Time: O(K^2 N)
–Space: O(K N)
53
The Viterbi Algorithm (in log space)
Input: x = x_1 … x_N
Initialization:
 V_0(0) = 0 (the zero subscript is the start state)
 V_l(0) = -inf, for all l > 0 (the 0 in parentheses is the imaginary first position)
Iteration:
 for each i
  for each l
   V_l(i) = r_l(x_i) + max_k (w_kl + V_k(i-1))   // r_l(x_i) = log(e_l(x_i)), w_kl = log(a_kl)
   Ptr_l(i) = argmax_k (w_kl + V_k(i-1))
Termination:
 Prob(x, π*) = exp{ max_k V_k(N) }
Traceback:
 π_N* = argmax_k V_k(N)
 π_{i-1}* = Ptr_{π_i*}(i)
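A sketch of this recursion in Python, assuming the model is given as dictionaries (start probabilities a0, transitions a, emissions e) and that all referenced transition/emission probabilities are nonzero; these names and the casino usage example below are illustrative, not a definitive implementation.

```python
import math

def viterbi(x, states, a0, a, e):
    """Return (max_k V_k(N), most probable state path) in log space."""
    V = [{k: (math.log(a0[k]) + math.log(e[k][x[0]])) if a0[k] > 0 else -math.inf
          for k in states}]
    ptr = [{}]
    for i in range(1, len(x)):
        V.append({})
        ptr.append({})
        for l in states:
            best_k = max(states, key=lambda k: V[i - 1][k] + math.log(a[k][l]))
            V[i][l] = math.log(e[l][x[i]]) + V[i - 1][best_k] + math.log(a[best_k][l])
            ptr[i][l] = best_k
    last = max(states, key=lambda k: V[-1][k])
    path = [last]
    for i in range(len(x) - 1, 0, -1):        # traceback
        path.append(ptr[i][path[-1]])
    return V[-1][last], path[::-1]

# Dishonest casino parameters from the slides, run on a prefix of the roll sequence:
states = ["F", "L"]
a0 = {"F": 0.5, "L": 0.5}
a  = {"F": {"F": 0.95, "L": 0.05}, "L": {"F": 0.05, "L": 0.95}}
e  = {"F": {s: 1 / 6 for s in "123456"},
      "L": {**{s: 0.1 for s in "12345"}, "6": 0.5}}
score, path = viterbi("124552646214614613613666166466163661", states, a0, a, e)
print(score, "".join(path))
```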
54
The Viterbi Algorithm (in prob space) Input: x = x 1 ……x N Initialization: P 0 (0) = 1 (zero in subscript is the start state.) P l (0) = 0, for all l > 0(0 in parenthesis is the imaginary first position) Iteration: for each i for each l P l (i) = e l (x i ) max k (a kl P k (i-1)) Ptr l (i) = argmax k (a kl P k (i-1)) end Termination: Prob(x, *) = max k P k (N) Traceback: N * = argmax k P k (N) i-1 * = Ptr i (i)
56
CpG islands
Data: 41 human sequences, including 48 CpG islands of about 1 kbp each
Viterbi:
–Found 46 of 48
–plus 121 “false positives”
Post-processing (merge islands within 500 bp; discard islands < 500 bp):
–Found 46/48
–67 false positives
57
Problems with Viterbi decoding Most probable path not necessarily the only interesting one –Single optimal vs multiple sub-optimal What if there are many sub-optimal paths with slightly lower probabilities? –Global optimal vs local optimal What’s best globally may not be the best for each individual
58
Example The dishonest casino Say x = 12341623162616364616234161221341 Most probable path: = FF……F However: marked letters more likely to be L than unmarked letters Another way to interpret the problem –With Viterbi, every position is assigned a single label –Confidence level for each assignment?
59
Posterior decoding
–Viterbi finds the path with the highest probability
–We want to know P(π_i = k | x), for k = 1…K
–In order to do posterior decoding, we need to know P(x) and P(π_i = k, x), since P(π_i = k | x) = P(π_i = k, x) / P(x)
–Computing P(x) and P(x, π_i = k) is called the evaluation problem
–The solution: the Forward-backward algorithm
60
Probability of a sequence
–P(X | M): the probability that X can be generated by M; sometimes simply written as P(X)
–May be written as P(X | M, θ) or P(X | θ) to emphasize that we are looking for the θ that optimizes the likelihood (discussed later in learning)
–Not equal to the probability of a path P(X, π): many possible paths can generate X, each with its own probability
–P(X) = Σ_π P(X, π) = Σ_π P(X | π) P(π)
–How to compute this without summing over all (exponentially many) possible paths? Dynamic programming
61
The forward algorithm
Define f_k(i) = P(x_1 … x_i, π_i = k)
–Implicitly: sum over all possible paths for x_1 … x_{i-1}
63
The forward algorithm
(Figure: f_k(i) at position x_i, state k, accumulates contributions from all states at position i-1)
64
We can compute f_k(i) for all k, i, using dynamic programming!
Initialization:
 f_0(0) = 1
 f_k(0) = 0, for all k > 0
Iteration:
 f_k(i) = e_k(x_i) Σ_j f_j(i-1) a_jk
Termination:
 Prob(x) = Σ_k f_k(N)
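A sketch of the forward recursion in probability space, reusing the dictionary conventions of the Viterbi sketch above; for long sequences one would rescale or work in log space to avoid underflow (not shown).

```python
def forward(x, states, a0, a, e):
    """Return (forward table f, total probability P(x))."""
    f = [{k: a0[k] * e[k][x[0]] for k in states}]                 # f_k(1)
    for i in range(1, len(x)):
        f.append({k: e[k][x[i]] * sum(f[i - 1][j] * a[j][k] for j in states)
                  for k in states})
    return f, sum(f[-1][k] for k in states)                       # P(x)
```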
65
Relation between Forward and Viterbi
VITERBI (in prob space)
 Initialization: P_0(0) = 1; P_k(0) = 0, for all k > 0
 Iteration: P_k(i) = e_k(x_i) max_j P_j(i-1) a_jk
 Termination: Prob(x, π*) = max_k P_k(N)
FORWARD
 Initialization: f_0(0) = 1; f_k(0) = 0, for all k > 0
 Iteration: f_k(i) = e_k(x_i) Σ_j f_j(i-1) a_jk
 Termination: Prob(x) = Σ_k f_k(N)
66
Posterior decoding
–Viterbi finds the path with the highest probability
–We want to know P(π_i = k | x) = P(π_i = k, x) / P(x), for k = 1…K
–We have just shown how to compute P(x) (the forward algorithm); we still need to know how to compute P(π_i = k, x)
67
68
The backward algorithm
Define b_k(i) = P(x_{i+1} … x_N | π_i = k)
–Implicitly: sum over all possible paths for x_{i+1} … x_N
69
Initialization: b_k(N) = 1, for all k
Iteration: b_k(i) = Σ_l a_kl e_l(x_{i+1}) b_l(i+1)
Note that b_k(i) does not include the emission probability of x_i
70
The forward-backward algorithm
–Compute f_k(i) for each state k and position i
–Compute b_k(i) for each state k and position i
–Compute P(x) = Σ_k f_k(N)
–Compute P(π_i = k | x) = f_k(i) * b_k(i) / P(x)
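A sketch of the backward recursion and the posterior computation, building on the hypothetical `forward()` from the sketch above; again probability space only, no rescaling.

```python
def backward(x, states, a, e):
    """Backward table b, with b_k(N) = 1."""
    b = [{k: 1.0 for k in states}]
    for i in range(len(x) - 2, -1, -1):
        b.insert(0, {k: sum(a[k][l] * e[l][x[i + 1]] * b[0][l] for l in states)
                     for k in states})
    return b

def posterior(x, states, a0, a, e):
    """P(pi_i = k | x) = f_k(i) * b_k(i) / P(x) for every position i and state k."""
    f, px = forward(x, states, a0, a, e)
    b = backward(x, states, a, e)
    return [{k: f[i][k] * b[i][k] / px for k in states} for i in range(len(x))]
```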
71
–f_k(i) * b_k(i) is the probability of x with the constraint that x_i was generated by state k; dividing by P(x) gives P(π_i = k | x)
–Space: O(KN); Time: O(K^2 N)
72
What’s P(π_i = k | x) good for?
–For each position, you can assign a probability (in [0, 1]) to the states that the system might be in at that point – a confidence level
–Assign each symbol to the most likely state according to this probability, rather than the state on the most probable path – posterior decoding:
 π̂_i = argmax_k P(π_i = k | x)
73
Posterior decoding for the dishonest casino If P(fair) > 0.5, the roll is more likely to be generated by a fair die than a loaded die
74
Posterior decoding for another dishonest casino In this example, Viterbi predicts that all rolls were from the fair die.
75
CpG islands again
Data: 41 human sequences, including 48 CpG islands of about 1 kbp each
Viterbi: found 46 of 48, plus 121 “false positives”; after post-processing: 46/48, 67 false positives
Posterior decoding: same 2 false negatives (46/48), plus 236 false positives; after post-processing: 46/48, 83 false positives
Post-processing: merge islands within 500 bp; discard islands < 500 bp
76
What if a new genome comes?
–We just sequenced the porcupine genome
–We know CpG islands play the same role in this genome
–However, we do not have many known CpG islands for porcupines
–We suspect the frequency and characteristics of CpG islands are quite different in porcupines
–How do we adjust the parameters in our model? LEARNING
77
Learning When the state path is known –We’ve already done that –Estimate parameters from labeled data (known CpG and non-CpG) –“Supervised” learning When the state path is unknown –Estimate parameters without labeled data –“unsupervised” learning
78
Basic idea 1.Estimate our “best guess” on the model parameters θ 2.Use θ to predict the unknown labels 3.Re-estimate a new set of θ 4.Repeat 2 & 3 Multiple ways
79
Viterbi training
1.Estimate our “best guess” of the model parameters θ
2.Find the Viterbi path using the current θ
3.Re-estimate a new set of θ based on the Viterbi path: count transitions/emissions on those paths, getting a new θ
4.Repeat 2 & 3 until convergence
80
Baum-Welch training
1.Estimate our “best guess” of the model parameters θ
2.Find P(π_i = k | x, θ) using the forward-backward algorithm
3.Re-estimate a new set of θ based on all possible paths
 For example, according to Viterbi, position i is in state k and position (i+1) is in state l; this contributes 1 count towards the frequency with which the transition k→l is used
 In Baum-Welch, position i has some probability of being in state k and position (i+1) has some probability of being in state l; the transition is counted only partially, according to the probability of this transition
4.Repeat 2 & 3 until convergence
81
Probability that a transition is used
P(π_i = k, π_{i+1} = l | x) = f_k(i) a_kl e_l(x_{i+1}) b_l(i+1) / P(x)
82
Estimated # of k→l transitions
A_kl = Σ_i P(π_i = k, π_{i+1} = l | x)
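A sketch of this E-step quantity using the hypothetical `forward()` and `backward()` from the earlier sketches; in full Baum-Welch the resulting expected counts would be normalized to give the new transition probabilities.

```python
def expected_transitions(x, states, a0, a, e):
    """A[k][l] = sum_i P(pi_i = k, pi_{i+1} = l | x)."""
    f, px = forward(x, states, a0, a, e)
    b = backward(x, states, a, e)
    A = {k: {l: 0.0 for l in states} for k in states}
    for i in range(len(x) - 1):
        for k in states:
            for l in states:
                A[k][l] += f[i][k] * a[k][l] * e[l][x[i + 1]] * b[i + 1][l] / px
    return A
```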
83
Viterbi vs Baum-Welch training Viterbi training –Returns a single path –Each position labeled with a fixed state –Each transition counts one –Each emission also counts one Baum-Welch training –Does not return a single path –Considers the prob that each transition is used and the prob that a symbol is generated by a certain state –They only contribute partial counts
84
Viterbi vs Baum-Welch training
–Both are guaranteed to converge
–Baum-Welch improves the likelihood of the data in each iteration, P(X): true EM (expectation-maximization)
–Viterbi improves the probability of the most probable path in each iteration, P(X, π*): EM-like
85
Expectation-maximization (EM) Baum-Welch algorithm is a special case of the expectation-maximization (EM) algorithm, a widely used technique in statistics for learning parameters from unlabeled data E-step: compute the expectation (e.g. prob for each pos to be in a certain state) M-step: maximum-likelihood parameter estimation Recall: clustering
87
HMM summary Viterbi – best single path Forward – sum over all paths Backward – similar Baum-Welch – training via EM and forward-backward Viterbi training – another “EM”, but Viterbi- based
88
Modular design of HMM
–HMMs can be designed modularly
–Each module has its own begin/end states (silent, i.e. no emission)
–Each module communicates with other modules only through its begin/end states
89
Combined CpG model: a CpG+ module (states A+, C+, G+, T+ with begin/end states B+, E+) and a CpG– module (states A–, C–, G–, T– with begin/end states B–, E–)
HMM modules and non-HMM modules can be mixed
90
HMM applications Gene finding Character recognition Speech recognition: a good tutorial on course website Machine translation Many others
91
Word recognition example (1)
–Typed word recognition; assume all characters are separated
–The character recognizer outputs the probability of the image being a particular character, P(image | character) (the figure shows example recognizer scores 0.5, 0.03, 0.005, …, 0.31 for the characters a, b, c, …, z)
–Hidden state: character; Observation: image
http://www.cedar.buffalo.edu/~govind/cs661
92
Hidden states of HMM = characters. Observations = typed images of characters segmented from the image. Note that there is an infinite number of observations Observation probabilities = character recognizer scores. Transition probabilities will be defined differently in two subsequent models. Word recognition example(2). http://www.cedar.buffalo.edu/~govind/cs661
93
Word recognition example (3)
–If a lexicon is given, we can construct a separate HMM model for each lexicon word, e.g. a chain of character states for “Amherst” (a-m-h-e-r-s-t) and for “Buffalo” (b-u-f-f-a-l-o)
–Here recognition of the word image is equivalent to the problem of evaluating a few HMM models
–This is an application of the Evaluation problem
http://www.cedar.buffalo.edu/~govind/cs661
94
Word recognition example (4)
–We can construct a single HMM for all words
–Hidden states = all characters in the alphabet
–Transition probabilities and initial probabilities are calculated from a language model
–Observations and observation probabilities are as before
–Here we have to determine the best sequence of hidden states, the one that most likely produced the word image
–This is an application of the Decoding problem
http://www.cedar.buffalo.edu/~govind/cs661
95
Character recognition with HMM example
–The structure of hidden states is chosen
–Observations are feature vectors extracted from vertical slices
–Probabilistic mapping from hidden state to feature vectors: 1. use a mixture of Gaussian models, or 2. quantize the feature vector space
http://www.cedar.buffalo.edu/~govind/cs661
96
Exercise: character recognition with HMM (1)
–The structure of hidden states: s1 → s2 → s3 (left to right)
–Observation = number of islands in the vertical slice (1, 2, or 3)
HMM for character ‘A’:
 Transition probabilities {a_ij} = [.8 .2 0; 0 .8 .2; 0 0 1]
 Observation probabilities {b_jk} = [.9 .1 0; .1 .8 .1; .9 .1 0]
HMM for character ‘B’:
 Transition probabilities {a_ij} = [.8 .2 0; 0 .8 .2; 0 0 1]
 Observation probabilities {b_jk} = [.9 .1 0; 0 .2 .8; .6 .4 0]
http://www.cedar.buffalo.edu/~govind/cs661
97
Suppose that after character image segmentation the following sequence of island numbers in 4 slices was observed: { 1, 3, 2, 1} What HMM is more likely to generate this observation sequence, HMM for ‘A’ or HMM for ‘B’ ? Exercise: character recognition with HMM(2) http://www.cedar.buffalo.edu/~govind/cs661
98
Exercise: character recognition with HMM (3)
Consider the likelihood of generating the given observation for each possible sequence of hidden states:
HMM for character ‘A’:
 s1 s1 s2 s3: transitions .8 x .2 x .2, observations .9 x 0 x .8 x .9 = 0
 s1 s2 s2 s3: transitions .2 x .8 x .2, observations .9 x .1 x .8 x .9 = 0.0020736
 s1 s2 s3 s3: transitions .2 x .2 x 1, observations .9 x .1 x .1 x .9 = 0.000324
 Total = 0.0023976
HMM for character ‘B’:
 s1 s1 s2 s3: transitions .8 x .2 x .2, observations .9 x 0 x .2 x .6 = 0
 s1 s2 s2 s3: transitions .2 x .8 x .2, observations .9 x .8 x .2 x .6 = 0.0027648
 s1 s2 s3 s3: transitions .2 x .2 x 1, observations .9 x .8 x .4 x .6 = 0.006912
 Total = 0.0096768
http://www.cedar.buffalo.edu/~govind/cs661
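A brute-force check of this exercise (not from the slides), summing over hidden state sequences exactly as above; it assumes, as the slide's enumeration implies, that paths start in s1 and end in s3.

```python
from itertools import product

obs = [1, 3, 2, 1]                              # island counts from the slide
trans  = [[.8, .2, 0], [0, .8, .2], [0, 0, 1]]  # same for 'A' and 'B'
emit_A = [[.9, .1, 0], [.1, .8, .1], [.9, .1, 0]]
emit_B = [[.9, .1, 0], [0, .2, .8], [.6, .4, 0]]

def total_prob(emit):
    total = 0.0
    for path in product(range(3), repeat=len(obs)):
        if path[0] != 0 or path[-1] != 2:       # paths run from s1 to s3
            continue
        p = emit[path[0]][obs[0] - 1]
        for i in range(1, len(obs)):
            p *= trans[path[i - 1]][path[i]] * emit[path[i]][obs[i] - 1]
        total += p
    return total

print(total_prob(emit_A))   # ~0.0023976
print(total_prob(emit_B))   # ~0.0096768
```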
99
HMM for gene finding Foundation for most gene finders Include many knowledge-based fine-tunes and GHMM extensions We’ll only discuss basic ideas
100
Gene structure
–Intergenic DNA → (transcription) → pre-mRNA → (splicing) → mature mRNA → (translation) → protein
–A gene consists of exons (exon1, exon2, exon3) separated by introns (intron1, intron2), read 5' to 3'
–Exon: coding; Intron: non-coding; Intergenic: non-coding
101
Transcription
Coding strand (where genetic information is stored):  5’-ACGTAGACGTATAGAGCCTAG-3’
Template strand (for making mRNA):                    3’-TGCATCTGCATATCTCGGATC-5’
mRNA:                                                 5’-ACGUAGACGUAUAGAGCCUAG-3’
The coding strand and mRNA have the same sequence, except that T’s in DNA are replaced by U’s in mRNA.
DNA-RNA pairing: A=U, C=G, T=A, G=C
102
Translation
The sequence of codons is translated to a sequence of amino acids
 Gene:    -GCT TGT TTA CGA ATT-
 mRNA:    -GCU UGU UUA CGA AUU-
 Peptide: - Ala - Cys - Leu - Arg - Ile -
–Start codon: AUG (also codes for Met)
–Stop codons: UGA, UAA, UAG
103
The Genetic Code (codon table, organized by first, second, and third letter)
104
Finding genes
(Example genomic sequence: GATCGGTCGAGCGTAAGCTAGCTAG ATCGATGATCGATCGGCCATATATC ACTAGAGCTAGAATCGATAATCGAT CGATATAGCTATAGCTATAGCCTAT)
Human, Fugu, worm, E. coli: as the coding/non-coding length ratio decreases, exon prediction becomes more complex
105
Gene prediction in prokaryotes
–Finding long ORFs (open reading frames): an ORF does not contain stop codons (except at its end)
–In random sequence, the average ORF length is 64/3 ≈ 21 codons; expect one 300 bp ORF per 36 kbp
–Actual ORF length of real genes ~ 1000 bp
–Codon biases: some triplets are used more frequently than others; codon third-position biases
106
HMM for eukaryote gene finding
–Basic idea is the same: the distribution of nucleotides is different in exons and other regions
–Alone this won’t work very well; more signals are needed
–How to combine all the signals together?
–Relevant signals: promoter, 5’-UTR, start codon ATG, splice donor GT, splice acceptor AG, stop codon, 3’-UTR, poly-A
107
Simplest model
–States: intergenic, exon, intron; intergenic and intron use 4 emission probabilities, exon uses 64 triplet emission probabilities
–Exon length may not be an exact multiple of 3; basically we have to triple the number of states to remember the excess number of bases in the previous state
–Actually more accurate at the di-amino-acid level, i.e. 2 codons
–Many methods use a 5th-order Markov model for all regions
108
More detailed model
States: intergenic, init exon, internal exon, term exon, single exon, intron
109
Sub-models
–START and STOP are PWMs, including the start and stop codons and surrounding bases
–Init exon = 5’-UTR + START + CDS; Term exon = CDS + STOP + 3’-UTR
–CDS: coding sequence
110
Sub-model for intron
–Intron = splice donor + intron body + splice acceptor
–Sequence logos: an informative display of PWMs
–Within each column, relative height represents probability; the height of each column reflects “information content”
112
Duration modeling For any sub-path, the probability consists of two components –The product of emission probabilities Depend on symbols and state path –The product of transition probabilities Depend on state path
113
Duration modeling
–Model a stretch of DNA for which the distribution does not change for a certain length
–Duration: the number of times that a state is used consecutively without visiting other states
–The simplest model (a single state with self-loop probability p) implies that P(length = L) = p^(L-1) (1-p), i.e. length follows a geometric distribution
–Not always appropriate
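A tiny sketch (not from the slides) of the geometric duration distribution implied by a self-loop probability p; the function name is illustrative.

```python
def geometric_duration(L, p):
    """P(length = L) = p**(L-1) * (1-p); mean duration is 1/(1-p)."""
    return p ** (L - 1) * (1 - p)

print(geometric_duration(10, 0.95))   # ~0.0315
print(1 / (1 - 0.95))                 # mean duration = 20
```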
114
Duration models
–Negative binomial: a chain of states, each with self-loop probability p
–Minimum length, then geometric: a chain of states connected by probability-1 transitions, followed by a state with self-loop probability p
115
Explicit duration modeling
–Generalized HMM (GHMM); often used in gene finders
–Explicitly model the length distribution P(L) of a state (e.g. the empirical intron length distribution) instead of the implicit geometric distribution
–Emission example for the intron state: P(A|I) = 0.3, P(C|I) = 0.2, P(G|I) = 0.2, P(T|I) = 0.3
116
Explicit duration modeling
–For each position j and each state i, need to consider transitions from all previous positions
–Time: O(N^2 K^2); N can be 10^8
117
Speedup of GHMM
–Restrict the maximum duration length to L: O(L N K^2). However, intergenic regions and introns can be quite long (L can be 10^5)
–Compromise: explicit duration for exons only, geometric for all other states
–Pre-compute all possible starting points of ORFs: for init exons, ATG; for internal/terminal exons, the splice donor signal (GT)
118
GeneScan model
119
Approaches to gene finding Homology –BLAST, BLAT, etc. Ab initio –Genscan, Glimmer, Fgenesh, GeneMark, etc. –Each one has been tuned towards certain organisms Hybrids –Twinscan, SLAM –Use pair-HMM, or pre-compute score for potential coding regions based on alignment None are perfect, never used alone in practice
120
Current status More accurate on internal exons Determining boundaries of init and term exons is hard Biased towards multiple-exon genes Alternative splicing is hard Non-coding RNA is hard
121
State of the Art: –predictions ~ 60% similar to real proteins –~80% if database similarity used –lab verification still needed, still expensive
122
HMM wrap up We’ve talked about –Probability, mainly Bayes Theorem –Markov models –Hidden Markov models –HMM parameter estimation given state path –Decoding given HMM and parameters Viterbi F-B –Learning Baum-Welch (Expectation-Maximization) Viterbi
123
HMM wrap up We’ve also talked about –Extension to gHMMs –gHMM for gene finding We did not talk about –Higher-order Markov models –How to escape from local optima in learning