CSCI2950-C Lecture 2, September 11, 2007. Comparative Genomic Hybridization (CGH): Measuring Mutations in Cancer.


1 CSCI2950-C Lecture 2 September 11, 2007

2 Comparative Genomic Hybridization (CGH) Measuring Mutations in Cancer

3 CGH Analysis (1) Divide genome into segments of equal copy number. [Figure: copy number profile; axes: copy number vs. genome coordinate]

4 CGH Analysis (1) Divide genome into segments of equal copy number (segmentation). Numerous methods exist (e.g. clustering, hidden Markov models, Bayesian approaches). [Figure: segmented copy number profile; axes: copy number vs. genome coordinate]

5 CGH Analysis (1) Divide genome into segments of equal copy number. [Figure: Log2(R/G) vs. genomic position, range -0.5 to 0.5, with a deletion and an amplification marked]

6 CGH Analysis (2) Identify aberrations common to multiple samples. Chromosome 3 of 26 lung tumor samples on a mid-density cDNA array: a common deletion is located in 3p21 and a common amplification in 3q. [Figure: samples vs. chromosome 3 position]

7 CGH Analysis (1) Divide genome into segments of equal copy number (segmentation). Numerous methods exist (e.g. clustering, hidden Markov models, Bayesian approaches).
Input: $X_i = \log_2(T_i / R_i)$ for clones $i = 1, \dots, N$.
Output: an assignment $s(X_i) \in \{S_1, \dots, S_K\}$, where the $S_j$ represent copy number states. [Figure: copy number vs. genome coordinate]

8 Interval Score. Assume the $X_i$ are independent and normally distributed, and let $\mu$ and $\sigma$ denote the mean and standard deviation of the normal genomic data. Given an interval $I$ spanning $k$ probes, we define its score as
$$S(I) = \frac{1}{\sigma \sqrt{k}} \sum_{i \in I} (X_i - \mu).$$
(Lipson et al., J. Computational Biology, 2006)
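To make the score concrete, here is a minimal Python sketch of the score as reconstructed above (the function name and the toy baseline parameters are illustrative, not from the paper):

```python
import numpy as np

def interval_score(X, i, j, mu, sigma):
    """Score of interval I = [i, j): the interval's total deviation from
    the baseline mean, normalized by sigma * sqrt(k)."""
    k = j - i                                  # number of probes spanned by I
    return (X[i:j].sum() - k * mu) / (sigma * np.sqrt(k))

# A toy profile with an amplified stretch at probes 2..4:
X = np.array([0.05, -0.1, 0.8, 0.9, 0.85, 0.0])
print(interval_score(X, 2, 5, mu=0.0, sigma=0.2))  # large positive score
```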

9 Significance of Interval Score. Assume $X_i \sim N(\mu, \sigma)$. Under this null model, $S(I)$ is standard normal, so the significance of a high-scoring interval can be assessed from the normal tail, corrected for the number of intervals tested. (Lipson et al., J. Computational Biology, 2006)
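One rough way to attach a p-value is a normal tail bound with a Bonferroni correction over all ~n²/2 candidate intervals. This is only a hedged sketch of the idea, not the sharper approximation derived in the paper:

```python
from math import erf, sqrt

def normal_tail(s):
    """Upper-tail probability P(Z > s) for Z ~ N(0, 1)."""
    return 0.5 * (1.0 - erf(s / sqrt(2.0)))

def interval_p_value(score, n):
    """Conservative significance of an interval score over a vector of
    length n: Bonferroni bound across all n(n-1)/2 candidate intervals."""
    return min(1.0, n * (n - 1) / 2 * normal_tail(score))
```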

10 The MaxInterval Problem
Input: a vector $X = (X_1, \dots, X_n)$
Output: an interval $I \subseteq [1 \dots n]$ that maximizes $S(I)$
Exhaustive algorithm: $O(n^2)$. Other intervals with high scores may be found by recursively calling this procedure.
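A direct implementation of the exhaustive $O(n^2)$ scan, using prefix sums so each interval is scored in constant time (names are illustrative):

```python
import numpy as np

def max_interval_exhaustive(X, mu=0.0, sigma=1.0):
    """Return (best_score, (i, j)) over all intervals I = [i, j)."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    prefix = np.concatenate([[0.0], np.cumsum(X - mu)])  # prefix sums of centered data
    best, best_ij = -np.inf, None
    for i in range(n):
        for j in range(i + 1, n + 1):
            s = (prefix[j] - prefix[i]) / (sigma * np.sqrt(j - i))
            if s > best:
                best, best_ij = s, (i, j)
    return best, best_ij
```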

11 MaxInterval Algorithm I: LookAhead
Assume we are given:
m : an upper bound on the value of a single element $X_i$
t : a lower bound on the maximum score
Consider the current interval $I = [i, \dots, i+k-1]$ with sum $s = \sum_{j \in I} X_j$ and length $k$. Any extension $I' = [i, \dots, i+k+r-1]$ has sum at most $s + mr$ and length $k + r$, which bounds its score. Solve for the first $r$ at which $S(I')$ may exceed $t$, and skip ahead by that amount.
Complexity: expected $O(n^{1.5})$ (unproved).
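The heart of LookAhead is computing how far the scan can safely jump. A sketch of that step, with $\sigma$ folded into $t$ and assuming $m > 0$; the closed form comes from squaring $(s + mr) \ge t\sqrt{k + r}$:

```python
import math

def lookahead_skip(s, k, m, t):
    """Smallest r >= 1 at which an interval of length k and sum s, extended
    by r elements each bounded above by m, could reach score t; i.e. the
    first r with (s + m*r) / sqrt(k + r) >= t.  Squaring gives the quadratic
    m^2 r^2 + (2ms - t^2) r + (s^2 - t^2 k) >= 0 in r."""
    a, b, c = m * m, 2 * m * s - t * t, s * s - t * t * k
    disc = b * b - 4 * a * c
    if disc < 0:                     # the bound can never reach t from here
        return None
    root = (-b + math.sqrt(disc)) / (2 * a)
    return max(1, math.ceil(root))
```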

12 MaxInterval Algorithm II: Geometric Family Approximation (GFA)
For $\delta > 0$, define a geometric family of intervals whose lengths $k_j$ grow geometrically with $j$. [Figure: tiers of overlapping intervals of geometrically increasing length]
Theorem: Let $I^*$ be the optimal scoring interval, and let $J$ be the leftmost longest interval of the family fully contained in $I^*$. Then $S(J) \ge S(I^*)/\alpha$, for a constant $\alpha$ depending only on $\delta$.
Complexity: $O(n)$.
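The family itself is cheap to build. One plausible construction, under the assumption that tier $j$ uses intervals of length about $(1+\delta)^j$ with left endpoints stepping by about $\delta$ times the length (the paper's exact tiling may differ):

```python
import math

def geometric_family(n, delta=0.1):
    """Tiers of intervals with geometrically increasing lengths; within a
    tier, left endpoints step by ~delta * length, so every sufficiently
    long interval nearly contains some family member.  The family size is
    linear in n for fixed delta."""
    family = []
    length = 1
    while length <= n:
        step = max(1, math.floor(delta * length))
        for start in range(0, n - length + 1, step):
            family.append((start, start + length))   # interval [start, start+length)
        length = max(length + 1, math.floor(length * (1 + delta)))
    return family
```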

13 Benchmarking. Benchmarking results of the Exhaustive, LookAhead and GFA algorithms on synthetic vectors of varying lengths. Linear regression suggests that their complexities are $O(n^2)$, $O(n^{1.5})$, and $O(n)$, respectively. [Figure: running time vs. vector length]

14 Applications: Single Samples. Chromosome 16 of the HCT116 colon carcinoma cell line on a high-density oligo array (n = 5,464), with the FRA16B and A2BP1 loci marked; chromosome 17 of several breast carcinoma cell lines on a mid-density cDNA array (n = 364), with the ERBB2 locus marked. [Figures: Log2(ratio) vs. genomic position, 0-75 Mbp]

15 A Different Approach to CGH Segmentation. Circular Binary Segmentation (CBS), Olshen et al. 2004: use a hypothesis test (a t-test) to compare the means of two candidate intervals, splitting recursively. [Figure: log ratio vs. genomic position, with a deletion and an amplification marked]

16 Another Approach to CGH Segmentation. Use a Hidden Markov Model (HMM) to "parse" the sequence of probes into copy number states. [Figure: log ratio vs. genomic position, with a deletion and an amplification marked]

17 Hidden Markov Models. [Figure: HMM trellis; a column of states 1, 2, …, K at each position, emitting observations $x_1, x_2, x_3, \dots$]

18 Example: The Dishonest Casino. A casino has two dice:
Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2
The casino player switches back and forth between the fair and loaded die about once every 20 turns.
Game:
1. You bet $1
2. You roll (always with a fair die)
3. The casino player rolls (maybe with the fair die, maybe with the loaded die)
4. Highest number wins $2
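A quick simulation of this process (switching is modeled as a per-turn probability of 0.05, i.e. once every 20 turns on average, matching the model a few slides ahead):

```python
import random

def roll_sequence(n_rolls, switch_prob=0.05, seed=0):
    """Simulate the casino player's die rolls and the hidden fair/loaded state."""
    rng = random.Random(seed)
    faces = [1, 2, 3, 4, 5, 6]
    loaded_weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]   # loaded die favors 6
    state, rolls, states = "F", [], []
    for _ in range(n_rolls):
        if rng.random() < switch_prob:                # switch dice?
            state = "L" if state == "F" else "F"
        if state == "F":
            rolls.append(rng.choice(faces))           # fair: uniform over faces
        else:
            rolls.append(rng.choices(faces, weights=loaded_weights)[0])
        states.append(state)
    return rolls, states
```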

19 Question # 1 – Evaluation
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: How likely is this sequence, given our model of how the casino works? This is the EVALUATION problem in HMMs.
Answer: Prob = $1.3 \times 10^{-35}$

20 Question # 2 – Decoding
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: What portion of the sequence was generated with the fair die, and what portion with the loaded die? This is the DECODING question in HMMs. This is what we want to solve for CGH analysis.
Answer: FAIR, then LOADED, then FAIR.

21 Question # 3 – Learning
GIVEN: A sequence of rolls by the casino player
1245526462146146136136661664661636616366163616515615115146123562344
QUESTION: How "loaded" is the loaded die? How "fair" is the fair die? How often does the casino player change from fair to loaded, and back? This is the LEARNING question in HMMs.
Answer (estimated): Prob(6) = 64%

22 The dishonest casino model
[Figure: two-state HMM; FAIR and LOADED states, each with self-transition probability 0.95 and switch probability 0.05]
P(1|F) = P(2|F) = P(3|F) = P(4|F) = P(5|F) = P(6|F) = 1/6
P(1|L) = P(2|L) = P(3|L) = P(4|L) = P(5|L) = 1/10, P(6|L) = 1/2

23 Definition of a hidden Markov model
Definition: A hidden Markov model (HMM) consists of:
Alphabet $\Sigma = \{b_1, b_2, \dots, b_M\}$
Set of states $Q = \{1, \dots, K\}$
Transition probabilities between any two states: $a_{ij}$ = transition probability from state $i$ to state $j$, with $a_{i1} + \dots + a_{iK} = 1$ for all states $i = 1 \dots K$
Start probabilities $a_{0i}$, with $a_{01} + \dots + a_{0K} = 1$
Emission probabilities within each state: $e_k(b) = P(x_i = b \mid \pi_i = k)$, with $e_k(b_1) + \dots + e_k(b_M) = 1$ for all states $k = 1 \dots K$
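The casino model instantiated in this notation (a plain-Python sketch; the uniform start probabilities are an assumption carried over from the worked example a few slides ahead):

```python
STATES = ["F", "L"]                            # Q = {Fair, Loaded}
ALPHABET = [1, 2, 3, 4, 5, 6]                  # die faces

a0 = {"F": 0.5, "L": 0.5}                      # start probabilities a_0i
a = {"F": {"F": 0.95, "L": 0.05},              # transition probabilities a_ij
     "L": {"F": 0.05, "L": 0.95}}
e = {"F": {b: 1 / 6 for b in ALPHABET},        # fair-die emissions
     "L": {b: 0.5 if b == 6 else 0.1 for b in ALPHABET}}

# Each distribution must sum to 1, as the definition requires.
assert abs(sum(a0.values()) - 1) < 1e-12
for k in STATES:
    assert abs(sum(a[k].values()) - 1) < 1e-12
    assert abs(sum(e[k].values()) - 1) < 1e-12
```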

24 An HMM is memory-less. At each time step $t$, the only thing that affects future states is the current state $\pi_t$:
$$P(\pi_{t+1} = k \mid \text{whatever happened so far}) = P(\pi_{t+1} = k \mid \pi_1, \pi_2, \dots, \pi_t, x_1, x_2, \dots, x_t) = P(\pi_{t+1} = k \mid \pi_t)$$

25 A parse of a sequence. Given a sequence $x = x_1 \dots x_N$, a parse of $x$ is a sequence of states $\pi = \pi_1, \dots, \pi_N$. [Figure: HMM trellis with one state chosen per position]

26 Likelihood of a Parse. Simply multiply all the transition and emission probabilities along the path (the orange arrows in the trellis figure).

27 Likelihood of a parse
Given a sequence $x = x_1 \dots x_N$ and a parse $\pi = \pi_1, \dots, \pi_N$, the likelihood of the parse under our HMM is
$$P(x, \pi) = P(x_1, \dots, x_N, \pi_1, \dots, \pi_N) = P(x_N, \pi_N \mid \pi_{N-1})\, P(x_{N-1}, \pi_{N-1} \mid \pi_{N-2}) \cdots P(x_2, \pi_2 \mid \pi_1)\, P(x_1, \pi_1)$$
$$= P(x_N \mid \pi_N)\, P(\pi_N \mid \pi_{N-1}) \cdots P(x_2 \mid \pi_2)\, P(\pi_2 \mid \pi_1)\, P(x_1 \mid \pi_1)\, P(\pi_1) = a_{0\pi_1} a_{\pi_1\pi_2} \cdots a_{\pi_{N-1}\pi_N}\, e_{\pi_1}(x_1) \cdots e_{\pi_N}(x_N)$$
A compact way to write $a_{0\pi_1} a_{\pi_1\pi_2} \cdots a_{\pi_{N-1}\pi_N}\, e_{\pi_1}(x_1) \cdots e_{\pi_N}(x_N)$: number all parameters $a_{ij}$ and $e_i(b)$ as $\theta_1, \dots, \theta_n$. Example: $a_{0,\text{Fair}} = \theta_1$; $a_{0,\text{Loaded}} = \theta_2$; …; $e_{\text{Loaded}}(6) = \theta_{18}$. Then count in $(x, \pi)$ the number of times each parameter $j = 1, \dots, n$ occurs: $F(j, x, \pi)$ = number of times $\theta_j$ occurs in $(x, \pi)$ (call these the feature counts). Then
$$P(x, \pi) = \prod_{j=1}^{n} \theta_j^{F(j, x, \pi)} = \exp\Big[\sum_{j=1}^{n} \log(\theta_j)\, F(j, x, \pi)\Big]$$
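The product form translates directly into code. A minimal sketch that reuses the a0, a, e dictionaries defined earlier (working in log space avoids underflow for long sequences):

```python
import math

def parse_likelihood(x, pi, a0, a, e):
    """P(x, pi) = a0[pi_1] e[pi_1](x_1) * prod_t a[pi_{t-1}][pi_t] e[pi_t](x_t)."""
    logp = math.log(a0[pi[0]]) + math.log(e[pi[0]][x[0]])
    for t in range(1, len(x)):
        logp += math.log(a[pi[t - 1]][pi[t]]) + math.log(e[pi[t]][x[t]])
    return math.exp(logp)

# Reproduces the worked example on the next slide:
x = [1, 2, 1, 5, 6, 2, 1, 5, 2, 4]
print(parse_likelihood(x, ["F"] * 10, a0, a, e))   # ~5.2e-9 (all fair)
print(parse_likelihood(x, ["L"] * 10, a0, a, e))   # ~1.6e-10 (all loaded)
```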

28 Example: the dishonest casino
Let the sequence of rolls be: x = 1, 2, 1, 5, 6, 2, 1, 5, 2, 4.
Then, what is the likelihood of $\pi$ = Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair, Fair? (Say initial probabilities $a_{0,\text{Fair}} = a_{0,\text{Loaded}} = 1/2$.)
$\frac{1}{2} \times P(1 \mid F)\, P(F \mid F)\, P(2 \mid F)\, P(F \mid F) \cdots P(4 \mid F) = \frac{1}{2} \times (1/6)^{10} \times (0.95)^9 = 0.00000000521158647211 \approx 5.2 \times 10^{-9}$

29 Example: the dishonest casino
So, the likelihood that the die is fair throughout this run is about $5.2 \times 10^{-9}$.
OK, but what is the likelihood of $\pi$ = Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded, Loaded?
$\frac{1}{2} \times P(1 \mid L)\, P(L \mid L) \cdots P(4 \mid L) = \frac{1}{2} \times (1/10)^{9} \times (1/2)^{1} \times (0.95)^9 = 0.00000000015756235243 \approx 1.6 \times 10^{-10}$
Therefore, it is about 33 times more likely that all the rolls were made with the fair die than with the loaded die.

30 Example: the dishonest casino
Now let the sequence of rolls be: x = 1, 6, 6, 5, 6, 2, 6, 6, 3, 6.
What is the likelihood of $\pi$ = F, F, …, F?
$\frac{1}{2} \times (1/6)^{10} \times (0.95)^9 \approx 5.2 \times 10^{-9}$, same as before.
What is the likelihood of $\pi$ = L, L, …, L?
$\frac{1}{2} \times (1/10)^{4} \times (1/2)^{6} \times (0.95)^9 = 0.00000049238235134735 \approx 4.9 \times 10^{-7}$
So here it is roughly 100 times more likely that the die is loaded.

31 HMM Model for CGH data (Fridlyand et al. 2004). [Figure: HMM with states S1, S2, S3, S4]

32 A model for CGH data
The K states represent copy numbers: homozygous deletion (copy = 0), heterozygous deletion (copy = 1), normal (copy = 2), and duplication (copy > 2).
Emissions are Gaussian: state $S_k$ emits with parameters $(\mu_k, \sigma_k)$. [Figure: states $S_1, \dots, S_4$; copy number vs. genome coordinate]

33 CGH Segmentation: Model Selection
How many copy number states K? Larger K fits the data better, but requires estimating more model parameters.
Let $\theta = (A, B, \pi)$ be the parameters of the HMM. For each $k = 1, \dots, K_{max}$:
Compute $L(\theta \mid O)$ by dynamic programming (the forward-backward algorithm).
Calculate $\Phi(k) = -\log L(\theta \mid O) + q_k D(N)/N$, where N = number of probes, $q_k$ = number of parameters, and $D(N) = 2$ (AIC) or $D(N) = \log(N)$ (BIC).
Choose $K = \arg\min_k \Phi(k)$.
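A sketch of this selection loop, assuming the third-party hmmlearn package for fitting and scoring; the parameter count $q_k = k^2 + 2k - 1$ (transitions, start probabilities, means, variances) is our own accounting, not necessarily the original paper's:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM          # third-party; pip install hmmlearn

def select_num_states(X, K_max=6, criterion="BIC"):
    """Fit Gaussian HMMs for k = 1..K_max and pick the k minimizing
    Phi(k) = -log L + q_k * D(N) / N."""
    X = np.asarray(X, dtype=float).reshape(-1, 1)   # hmmlearn expects 2-D input
    N = len(X)
    D = np.log(N) if criterion == "BIC" else 2.0
    best_k, best_phi = None, np.inf
    for k in range(1, K_max + 1):
        model = GaussianHMM(n_components=k, covariance_type="diag").fit(X)
        log_L = model.score(X)                # log-likelihood via the forward algorithm
        q_k = k * k + 2 * k - 1               # free parameters (our accounting)
        phi = -log_L + q_k * D / N
        if phi < best_phi:
            best_k, best_phi = k, phi
    return best_k
```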

34 Problems with the HMM model
The length of a sequence emitted from a fixed state is geometrically distributed:
$$P(\underbrace{k\, k\, \cdots\, k}_{n}) = P(\pi_{t+1} = k \mid \pi_t = k)^{n}$$
This is a defect of HMMs, compensated for by their ease of analysis and computation.

35 CGH Segmentation: Transitions
What about transitions between states $S_i$ and $S_j$? They affect:
the average length of a segment
the average separation between two segments of the same copy number
[Figure: two states X and Y; X stays with probability p and moves to Y with probability 1-p; Y stays with probability q and moves to X with probability 1-q]
Length distribution of region X:
$P[l_X = 1] = 1-p$
$P[l_X = 2] = p(1-p)$
…
$P[l_X = k] = p^{k-1}(1-p)$
$E[l_X] = 1/(1-p)$
A geometric distribution, with mean $1/(1-p)$.
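A quick empirical check of the geometric mean (illustrative only):

```python
import random

def mean_run_length(p, trials=100_000, seed=0):
    """Average length of a run that stays in state X with probability p;
    should approach E[l_X] = 1 / (1 - p)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        length = 1
        while rng.random() < p:
            length += 1
        total += length
    return total / trials

print(mean_run_length(0.95))   # ~20 = 1 / (1 - 0.95)
```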

36 The three main questions on HMMs
1. Evaluation: GIVEN an HMM M and a sequence x, FIND Prob[x | M]
2. Decoding: GIVEN an HMM M and a sequence x, FIND the sequence $\pi$ of states that maximizes P[x, $\pi$ | M]
3. Learning: GIVEN an HMM M with unspecified transition/emission probabilities and a sequence x, FIND parameters $\theta = (e_i(\cdot), a_{ij})$ that maximize P[x | $\theta$]
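For the decoding question, the standard dynamic program is the Viterbi algorithm. A compact sketch in the notation above; it plugs into the a0, a, e dictionaries from the casino example:

```python
import math

def viterbi(x, a0, a, e):
    """Most probable state path: argmax_pi P(x, pi | M), computed in log space."""
    states = list(a0)
    V = [{k: math.log(a0[k]) + math.log(e[k][x[0]]) for k in states}]
    back = []                                 # back[t][k] = best predecessor of k at time t+1
    for t in range(1, len(x)):
        V.append({})
        back.append({})
        for k in states:
            prev, score = max(((j, V[t - 1][j] + math.log(a[j][k])) for j in states),
                              key=lambda item: item[1])
            V[t][k] = score + math.log(e[k][x[t]])
            back[t - 1][k] = prev
    path = [max(V[-1], key=V[-1].get)]        # best final state
    for ptr in reversed(back):                # trace predecessors back to t = 0
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Example: decode the casino rolls into "F"/"L" labels.
print(viterbi([1, 6, 6, 5, 6, 2, 6, 6, 3, 6], a0, a, e))
```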

