CSCI2950-C Lecture 2, September 11, 2007
Comparative Genomic Hybridization (CGH): Measuring Mutations in Cancer

CGH Analysis (1)
Divide the genome into segments of equal copy number (a copy number profile).
[Figure: copy number profile, copy number vs. genome coordinate]

CGH Analysis (1)
Divide the genome into segments of equal copy number.
[Figure: log2(R/G) vs. genomic position, with a deletion and an amplification marked]

CGH Analysis (2)
Identify aberrations common to multiple samples.
A common deletion is located in 3p21, and a common amplification in 3q.
[Figure: chromosome 3 of 26 lung tumor samples (one row per sample) on a mid-density cDNA array]

CGH Analysis (1)
Divide the genome into segments of equal copy number (a copy number profile).
Segmentation: numerous methods (e.g., clustering, hidden Markov models, Bayesian approaches).
Input: X_i = log2(T_i / R_i) for clones i = 1, …, N.
Output: an assignment s(X_i) ∈ {S_1, …, S_K}, where the S_i represent copy number states.
[Figure: copy number vs. genome coordinate]

Interval Score
Assume the X_i are independent and normally distributed; let µ and σ denote the mean and standard deviation of the normal genomic data.
Given an interval I spanning k probes, define its score as
S(I) = (Σ_{i∈I} X_i − kµ) / (σ√k).
Lipson et al., J. Computational Biology, 2006.
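A direct transcription of this definition into Python (a minimal sketch; the function name and argument layout are mine, not from the paper):

```python
import math

def interval_score(x, i, j, mu, sigma):
    """Score of the interval [i, j] (inclusive): how far the sum of its
    entries deviates from the expected k*mu, normalized by sigma*sqrt(k)."""
    k = j - i + 1
    s = sum(x[i:j + 1])
    return (s - k * mu) / (sigma * math.sqrt(k))
```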

Significance of Interval Score
Assume X_i ~ N(µ, σ).
Lipson et al., J. Computational Biology, 2006.

The MaxInterval Problem
Input: a vector X = (X_1, …, X_n).
Output: an interval I ⊆ [1…n] that maximizes S(I).
Other intervals with high scores may be found by calling this procedure recursively.
Exhaustive algorithm: O(n²).
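A sketch of the exhaustive O(n²) baseline, using prefix sums so each interval score is computed in constant time (names are mine):

```python
import math

def max_interval_exhaustive(x, mu, sigma):
    """Score every interval of x and return the best one with its score."""
    n = len(x)
    prefix = [0.0]                     # prefix[i] = sum of x[:i]
    for v in x:
        prefix.append(prefix[-1] + v)
    best_score, best = float("-inf"), None
    for i in range(n):
        for j in range(i, n):
            k = j - i + 1
            s = prefix[j + 1] - prefix[i]
            score = (s - k * mu) / (sigma * math.sqrt(k))
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score
```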

MaxInterval Algorithm I: LookAhead
Assume we are given:
m: an upper bound on the value of a single element X_i
t: a lower bound on the maximum score
For data normalized so that µ = 0 and σ = 1, the interval I = [i, …, i+k−1] with sum s = Σ_{j∈I} X_j has score s/√k, and the score of any extension I' = [i, …, i+k+r−1] is bounded by (s + mr)/√(k + r).
Solve for the first r at which S(I') may exceed t, and skip directly ahead to it.
Complexity: expected O(n^1.5) (unproved).
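A sketch of the skip computation under the same normalization; the iterative search below stands in for solving the inequality directly, and the function name is mine:

```python
import math

def lookahead_skip(s, k, m, t):
    """Smallest extension r >= 1 at which the optimistic bound
    (s + m*r) / sqrt(k + r) reaches the threshold t; shorter extensions
    cannot beat the current best score and can be skipped.
    Requires m > 0, otherwise the bound never grows."""
    assert m > 0
    r = 1
    while (s + m * r) / math.sqrt(k + r) < t:
        r += 1
    return r
```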

MaxInterval Algorithm II: Geometric Family Approximation (GFA)
For a parameter ρ > 1, define a geometric family of intervals whose lengths k_j grow as powers of ρ, with start positions spaced proportionally to their length.
Theorem: let I* be the optimal scoring interval, and let J be the leftmost longest interval of the family fully contained in I*. Then S(J) ≥ S(I*)/α, where α = (1 − ρ⁻²)⁻¹.
Complexity: O(n).

Benchmarking
Benchmarking results of the Exhaustive, LookAhead, and GFA algorithms on synthetic vectors of varying lengths: linear regression suggests that their complexities are O(n²), O(n^1.5), and O(n), respectively.

Applications: Single Samples
[Figure, top: chromosome 16 of the HCT116 colon carcinoma cell line on a high-density oligo array (n = 5,464); detected intervals include FRA16B and A2BP1. Figure, bottom: chromosome 17 of several breast carcinoma cell lines on a mid-density cDNA array (n = 364), with the ERBB2 amplification. Axes: log2(ratio) vs. position in Mbp.]

A Different Approach to CGH Segmentation
Circular Binary Segmentation (CBS), Olshen et al. 2004.
Use a hypothesis test to compare the means of two intervals with a t-statistic.
[Figure: log2 ratios vs. genomic position, with a deletion and an amplification marked]
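CBS itself maximizes a statistic over all arcs of the circularized data and assesses it by permutation; as a simplified sketch, the core two-sample comparison might look like this (names mine, Welch-style variance):

```python
import math

def split_statistic(x, i, j):
    """t-like statistic comparing the mean of x[i:j] against the mean of
    the rest of the chromosome; both parts need at least 2 probes."""
    inside = x[i:j]
    outside = x[:i] + x[j:]
    n1, n2 = len(inside), len(outside)
    m1, m2 = sum(inside) / n1, sum(outside) / n2
    v1 = sum((v - m1) ** 2 for v in inside) / (n1 - 1)
    v2 = sum((v - m2) ** 2 for v in outside) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
```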

Another Approach to CGH Segmentation
Use a hidden Markov model (HMM) to “parse” the sequence of probes into copy number states.
[Figure: log2 ratios vs. genomic position, with a deletion and an amplification marked]

Hidden Markov Models
[Figure: HMM trellis, a column of states 1…K at each position, emitting x_1, x_2, x_3, …]

Example: The Dishonest Casino
A casino has two dice:
Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2
The casino player switches back and forth between the fair and loaded die about once every 20 turns.
Game:
1. You bet $1
2. You roll (always with a fair die)
3. Casino player rolls (maybe with fair die, maybe with loaded die)
4. Highest number wins $2
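A small simulation of this process (a sketch; "switches about once every 20 turns" is modeled here as a switch probability of 0.05 per roll):

```python
import random

def roll_sequence(n, switch_prob=0.05):
    """Simulate n rolls: the fair die emits 1..6 uniformly; the loaded die
    emits 6 with prob 1/2 and each of 1..5 with prob 1/10; after each roll
    the dealer switches dice with probability switch_prob."""
    state = "F"
    rolls, states = [], []
    for _ in range(n):
        if state == "F":
            rolls.append(random.randint(1, 6))
        else:
            rolls.append(6 if random.random() < 0.5 else random.randint(1, 5))
        states.append(state)
        if random.random() < switch_prob:
            state = "L" if state == "F" else "F"
    return rolls, states
```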

Question # 1 – Evaluation
GIVEN: a sequence of rolls by the casino player.
QUESTION: How likely is this sequence, given our model of how the casino works?
This is the EVALUATION problem in HMMs.
[Slide shows an example roll sequence with Prob = 1.3 × 10⁻³⁵]

Question # 2 – Decoding
GIVEN: a sequence of rolls by the casino player.
QUESTION: What portion of the sequence was generated with the fair die, and what portion with the loaded die?
This is the DECODING question in HMMs. This is what we want to solve for CGH analysis.
[Slide annotates the example rolls FAIR, then LOADED, then FAIR]

Question # 3 – Learning
GIVEN: a sequence of rolls by the casino player.
QUESTION: How “loaded” is the loaded die? How “fair” is the fair die? How often does the casino player change from fair to loaded, and back?
This is the LEARNING question in HMMs.
[Slide shows an estimated Prob(6) = 64% for the loaded die]

The dishonest casino model
FAIR state: P(1|F) = P(2|F) = P(3|F) = P(4|F) = P(5|F) = P(6|F) = 1/6
LOADED state: P(1|L) = P(2|L) = P(3|L) = P(4|L) = P(5|L) = 1/10, P(6|L) = 1/2
[Figure: two-state diagram; each state stays put with probability 0.95 and switches with probability 0.05]

Definition of a hidden Markov model
Definition: a hidden Markov model (HMM) consists of
Alphabet Σ = {b_1, b_2, …, b_M}
Set of states Q = {1, …, K}
Transition probabilities between any two states: a_ij = transition probability from state i to state j, with a_i1 + … + a_iK = 1 for all states i = 1…K
Start probabilities a_0i, with a_01 + … + a_0K = 1
Emission probabilities within each state: e_k(b) = P(x_i = b | π_i = k), with e_k(b_1) + … + e_k(b_M) = 1 for all states k = 1…K
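For concreteness, the dishonest-casino HMM written in this notation as plain Python dictionaries (a representation chosen here for illustration; the 0.95/0.05 transitions match the switching rate used in the worked examples below):

```python
states = ["F", "L"]                       # Q: fair, loaded
a0 = {"F": 0.5, "L": 0.5}                 # start probabilities a_0i
a = {"F": {"F": 0.95, "L": 0.05},         # transition probabilities a_ij
     "L": {"F": 0.05, "L": 0.95}}
e = {"F": {s: 1 / 6 for s in range(1, 7)},               # emissions e_k(b)
     "L": {**{s: 1 / 10 for s in range(1, 6)}, 6: 1 / 2}}
```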

An HMM is memoryless
At each time step t, the only thing that affects future states is the current state π_t:
P(π_{t+1} = k | “whatever happened so far”) = P(π_{t+1} = k | π_1, π_2, …, π_t, x_1, x_2, …, x_t) = P(π_{t+1} = k | π_t)

A parse of a sequence
Given a sequence x = x_1 … x_N, a parse of x is a sequence of states π = π_1, …, π_N.
[Figure: HMM trellis with one state chosen at each position]

Likelihood of a Parse
Simply multiply all the arrows along the path: transition probabilities and emission probabilities.
[Figure: HMM trellis with the chosen path highlighted in orange]

Likelihood of a parse
Given a sequence x = x_1 … x_N and a parse π = π_1, …, π_N, how likely is the parse (given our HMM)?
P(x, π) = P(x_1, …, x_N, π_1, …, π_N)
= P(x_N, π_N | π_{N−1}) P(x_{N−1}, π_{N−1} | π_{N−2}) … P(x_2, π_2 | π_1) P(x_1, π_1)
= P(x_N | π_N) P(π_N | π_{N−1}) … P(x_2 | π_2) P(π_2 | π_1) P(x_1 | π_1) P(π_1)
= a_{0π_1} a_{π_1π_2} … a_{π_{N−1}π_N} e_{π_1}(x_1) … e_{π_N}(x_N)
A compact way to write a_{0π_1} a_{π_1π_2} … a_{π_{N−1}π_N} e_{π_1}(x_1) … e_{π_N}(x_N): number all the parameters a_ij and e_i(b) as θ_1, …, θ_n (example: a_{0,Fair} = θ_1; a_{0,Loaded} = θ_2; …; e_Loaded(6) = θ_18). Then count in x and π the number of times each parameter j = 1, …, n occurs: F(j, x, π) = # of times parameter θ_j occurs in (x, π) (call F(·,·,·) the feature counts). Then
P(x, π) = Π_{j=1…n} θ_j^{F(j, x, π)} = exp[ Σ_{j=1…n} log(θ_j) · F(j, x, π) ]
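The product of arrows translates directly into code; a minimal sketch using the dictionary representation above (function name is mine):

```python
def parse_likelihood(x, pi, a0, a, e):
    """P(x, pi) = a0[pi_1] * e_{pi_1}(x_1) * prod over t > 1 of
    a[pi_{t-1}][pi_t] * e_{pi_t}(x_t), exactly the product derived above."""
    p = a0[pi[0]] * e[pi[0]][x[0]]
    for t in range(1, len(x)):
        p *= a[pi[t - 1]][pi[t]] * e[pi[t]][x[t]]
    return p
```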

Example: the dishonest casino
Let the sequence of rolls be x = 1, 2, 1, 5, 6, 2, 1, 5, 2, 4.
Then, what is the likelihood of π = Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair? (Say the initial probabilities are a_{0,Fair} = ½, a_{0,Loaded} = ½.)
½ × P(1 | Fair) P(Fair | Fair) P(2 | Fair) P(Fair | Fair) … P(4 | Fair)
= ½ × (1/6)^10 × (0.95)^9 ≈ 5.2 × 10⁻⁹

Example: the dishonest casino
So, the likelihood that the die is fair for this whole run is ≈ 5.2 × 10⁻⁹.
OK, but what is the likelihood of π = Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded?
½ × P(1 | Loaded) P(Loaded | Loaded) … P(4 | Loaded)
= ½ × (1/10)^9 × (1/2)^1 × (0.95)^9 ≈ 0.16 × 10⁻⁹
Therefore, it is somewhat more likely that all the rolls were made with the fair die than that they were all made with the loaded die.

Example: the dishonest casino
Now let the sequence of rolls be x = 1, 6, 6, 5, 6, 2, 6, 6, 3, 6.
What is the likelihood of π = F, F, …, F?
½ × (1/6)^10 × (0.95)^9 ≈ 5.2 × 10⁻⁹, same as before.
What is the likelihood of π = L, L, …, L?
½ × (1/10)^4 × (1/2)^6 × (0.95)^9 ≈ 0.5 × 10⁻⁶
So, it is about 100 times more likely that the die is loaded.
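Using the model dictionaries and parse_likelihood sketched above, these two numbers can be checked in a couple of lines:

```python
x = [1, 6, 6, 5, 6, 2, 6, 6, 3, 6]
print(parse_likelihood(x, ["F"] * 10, a0, a, e))  # ~5.2e-09
print(parse_likelihood(x, ["L"] * 10, a0, a, e))  # ~4.9e-07, about 100x larger
```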

HMM Model for CGH data
Fridlyand et al. (2004)
[Figure: four-state HMM with states S1, S2, S3, S4]

A model for CGH data
K states correspond to copy numbers; emissions are Gaussians, state S_k emitting N(µ_k, σ_k):
S1: homozygous deletion (copy = 0)
S2: heterozygous deletion (copy = 1)
S3: normal (copy = 2)
S4: duplication (copy > 2)
[Figure: copy number vs. genome coordinate, colored by state]
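One way to write these states down (a sketch; the mean and standard deviation values here are illustrative placeholders on the log2-ratio scale, not fitted values, which would be estimated from data):

```python
cgh_states = {
    "S1": {"copy": 0, "mu": -3.00, "sigma": 0.5},  # homozygous deletion
    "S2": {"copy": 1, "mu": -1.00, "sigma": 0.3},  # heterozygous deletion: log2(1/2)
    "S3": {"copy": 2, "mu":  0.00, "sigma": 0.2},  # normal
    "S4": {"copy": 3, "mu":  0.58, "sigma": 0.3},  # duplication: log2(3/2)
}
```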

CGH Segmentation: Model Selection
How many copy number states K? A larger K gives a better fit to the data, but requires estimating more model parameters.
Let θ = (A, B, π) be the parameters of the HMM. For each candidate K = 1, …, K_max:
Compute L(θ | O) by dynamic programming (the forward-backward algorithm).
Calculate Φ(K) = −log(L(θ | O)) + q_K · D(N)/N, where N = number of probes, q_K = number of parameters, and D(N) = 2 (AIC) or D(N) = log(N) (BIC).
Choose K = argmin_K Φ(K).
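The selection step itself is a one-liner once the models are fitted; a sketch of the criterion as written on this slide (the fitting code that produces the log-likelihoods is assumed, not shown):

```python
import math

def choose_K(log_liks, q, N, D=None):
    """Pick K minimizing Phi(K) = -logL_K + q_K * D(N) / N.
    log_liks[K]: fitted log-likelihood for the K-state model (from
    forward-backward); q[K]: its parameter count; D defaults to log(N)
    (BIC-style); pass D=2 for the AIC-style penalty."""
    if D is None:
        D = math.log(N)
    phi = {K: -log_liks[K] + q[K] * D / N for K in log_liks}
    return min(phi, key=phi.get)
```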

Problems with the HMM model
The length of a run emitted from a fixed state is geometrically distributed:
P(state stays at k for n consecutive steps) = P(π_{t+1} = k | π_t = k)^n
This is a defect of HMMs, compensated for by their ease of analysis and computation.

CGH Segmentation: Transitions
What about transitions between states S_i and S_j? They affect:
the average length of a segment
the average separation between two segments of the same copy number
[Figure: two states X and Y, with self-transition probabilities p and q and cross transitions 1−p and 1−q]
Length distribution of region X:
P[l_X = 1] = 1 − p
P[l_X = 2] = p(1 − p)
…
P[l_X = k] = p^{k−1}(1 − p)
A geometric distribution, with mean E[l_X] = 1/(1 − p).
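Inverting E[l_X] = 1/(1 − p) gives a simple recipe for setting the self-transition probability from a desired mean segment length (a sketch; the function name is mine):

```python
def self_transition_for_mean_length(mean_len):
    """Self-transition probability p giving an expected run of mean_len
    probes in one state, since E[l] = 1 / (1 - p)."""
    return 1.0 - 1.0 / mean_len

p = self_transition_for_mean_length(20)  # 0.95: runs average 20 probes
```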

The three main questions on HMMs
1. Evaluation
GIVEN an HMM M and a sequence x,
FIND Prob[x | M]
2. Decoding
GIVEN an HMM M and a sequence x,
FIND the sequence π of states that maximizes P[x, π | M]
3. Learning
GIVEN an HMM M with unspecified transition/emission probabilities, and a sequence x,
FIND parameters θ = (e_i(·), a_ij) that maximize P[x | θ]
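The decoding question is classically answered by the Viterbi algorithm; as a preview, a compact sketch in the dictionary representation used in the earlier snippets (assumes all probabilities are strictly positive so logs are defined):

```python
import math

def viterbi(x, states, a0, a, e):
    """Return the state path pi maximizing P(x, pi), by dynamic programming
    in log space with backpointers."""
    V = [{k: math.log(a0[k]) + math.log(e[k][x[0]]) for k in states}]
    ptr = []  # ptr[t][k] = best predecessor of state k at time t+1
    for t in range(1, len(x)):
        V.append({})
        ptr.append({})
        for k in states:
            prev, score = max(
                ((j, V[t - 1][j] + math.log(a[j][k])) for j in states),
                key=lambda kv: kv[1])
            V[t][k] = score + math.log(e[k][x[t]])
            ptr[-1][k] = prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for back in reversed(ptr):          # trace backpointers to the start
        path.append(back[path[-1]])
    return path[::-1]
```

On the casino sequence 1, 6, 6, 5, 6, 2, 6, 6, 3, 6 with the model defined above, this recovers an all-Loaded parse, matching the likelihood comparison worked out earlier.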