
1 Maximum Likelihood (ML) Parameter Estimation, with applications to inferring phylogenetic trees. Comput. Genomics, lecture 7a. Presentation partially taken from Dan Geiger, modified by Benny Chor. Background reading: Durbin et al., Chapter 8.

2 Our Probabilistic Model (Reminder) Now we don't know the states at the internal node(s), nor the edge parameters p_e1, p_e2, p_e3. A single edge is a fairly boring tree… [Figure: a tree with observed leaf sequences XXYXY, YXYXX, YYYYX, edges labeled p_e1, p_e2, p_e3, and unknown internal states.]

3 Maximum Likelihood Maximize the likelihood (over the edge parameters), while averaging over the states of the unknown internal node(s). [Figure: the same tree, with leaf sequences XXYXY, YXYXX, YYYYX and edge parameters p_e1, p_e2, p_e3.]

4 Maximum Likelihood (2) Consider the phylogenetic tree to be a stochastic process. The probability of a transition from character a to character b along edge e is given by the parameter p_e. Given the complete tree, the likelihood of the data is determined by the values of the p_e's. [Figure: a tree whose leaf labels are observed and whose internal labels are unobserved.]

5 Maximum Likelihood (3) We assume each site evolves independently of the others. This allows us to decompose the likelihood of the data (the sequences at the leaves) into a product over sites, given the (same) tree and edge probabilities: Pr(D | Tree, θ) = ∏_i Pr(D^(i) | Tree, θ). This is the first key to an efficient DP algorithm for the tiny ML problem (Felsenstein, 1981). We will now show how Pr(D^(i) | Tree, θ) is efficiently computed.
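In code, the decomposition simply turns the product over sites into a sum of per-site log-likelihoods. A minimal Python sketch; site_likelihood is a placeholder for the per-site computation developed on the next slides:

    import math

    def log_likelihood(sites, site_likelihood):
        # Site independence: Pr(D | Tree, theta) is the product over sites
        # of Pr(D^(i) | Tree, theta), so the log-likelihood is a sum.
        return sum(math.log(site_likelihood(s)) for s in sites)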

6 Computing the Likelihood Let T be a binary tree with subtrees T_1 and T_2, reached from the root by edges with parameters p_1 and p_2. Let L_X(D | T, θ) be the likelihood of T with X at T's root. Define L_Y(D | T, θ) similarly.

7 Computing the Likelihood (2) By the definition of likelihood (summing over internal assignments), L(D | T, θ) = L_X(D | T, θ) + L_Y(D | T, θ). This is the second key to an efficient DP algorithm for the tiny ML problem (Felsenstein, 1981).

8 Computing L_X(D | Tree, θ) [Figure: a root labeled X, joined by edges with parameters p_1 and p_2 to tree 1 and tree 2, whose roots may each be X or Y.]

9 Computing L_X(D | Tree, θ) (2) L_X(D | Tree, θ) = ( L_X(D | Tree 1, θ)(1 - p_1) + L_Y(D | Tree 1, θ) p_1 ) * ( L_X(D | Tree 2, θ)(1 - p_2) + L_Y(D | Tree 2, θ) p_2 )
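This recursion transcribes directly into code. A minimal Python sketch (the function name and encoding are ours, not the lecture's):

    def combine(L1, L2, p1, p2):
        # L1 and L2 are the (L_X, L_Y) pairs already computed for tree 1
        # and tree 2; p1 and p2 are the substitution probabilities on the
        # edges leading to them.
        Lx1, Ly1 = L1
        Lx2, Ly2 = L2
        # Root labeled X: each child edge either keeps X (prob 1 - p) or
        # flips it to Y (prob p); the two subtrees multiply.
        Lx = (Lx1 * (1 - p1) + Ly1 * p1) * (Lx2 * (1 - p2) + Ly2 * p2)
        # Root labeled Y: the symmetric case.
        Ly = (Ly1 * (1 - p1) + Lx1 * p1) * (Ly2 * (1 - p2) + Lx2 * p2)
        return Lx, Ly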

10 The Dynamic Programming Algorithm The algorithm starts at the leaves and proceeds up towards the root. For each subtree visited, keep both L_X(D | subtree, θ) and L_Y(D | subtree, θ). This enables computing each of the L_X and L_Y likelihoods w.r.t. T using 5 multiplications and 2 additions.

11 The Dynamic Programming Algorithm (2) The algorithm thus takes O(1) floating point operations per internal node of the tree. If there are n leaves, the number of internal nodes (in a rooted binary tree) is n - 1, so the overall complexity is O(n).

12 What About Initialization? Well, this is easy. If T is a leaf labeled X, then L_X(D | T, θ) = 1 and L_Y(D | T, θ) = 0. (The case where T is a leaf labeled Y is left as a bonus assignment.)
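Putting the recursion and the initialization together gives the whole bottom-up pass. A sketch in Python, reusing combine from above; the tuple encoding of trees is our own assumption:

    def site_likelihood(tree):
        # A leaf is the string 'X' or 'Y'; an internal node is a tuple
        # (left, p_left, right, p_right). Returns (L_X, L_Y) for the subtree.
        if tree == 'X':
            return 1.0, 0.0   # leaf labeled X
        if tree == 'Y':
            return 0.0, 1.0   # leaf labeled Y
        left, p1, right, p2 = tree
        return combine(site_likelihood(left), site_likelihood(right), p1, p2)

    # Tiny example; the likelihood sums over the two root states (slide 7).
    Lx, Ly = site_likelihood((('X', 0.1, 'Y', 0.2), 0.05, 'X', 0.1))
    print(Lx + Ly)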

13 A Few More Question Marks What if the tree is not binary? (This would not really affect the complexity.) What if the tree is unrooted? One can show that the symmetry of the substitution probabilities implies the likelihood is invariant under the choice of root. Numerical questions (underflow, stability). Non-binary alphabets.

14 From the Two-State to the Four-State Model Maximize the likelihood (over edge parameters), while averaging over the states of the unknown internal node(s). But what do the edge probabilities mean now? [Figure: the same tree with DNA leaf sequences ACCGT, AAGTT, CGGCT, edge parameters p_e1, p_e2, p_e3, and unknown internal states.]

15 From the Two-State to the Four-State Model (2)
- So far, our models consisted of a "regular" tree where, in addition, edges are assigned substitution probabilities.
- For simplicity, we assumed our "DNA" has only two states, say X and Y.
- If edge e is assigned probability p_e, this means that the probability of a substitution (X ↔ Y) across e is p_e.
- Now a single p_e can no longer express all 16 - 4 = 12 possible substitution probabilities.

16 From the Two-State to the Four-State Model (3)
- Now a single p_e can no longer express all 16 - 4 = 12 possible substitution probabilities.
- The most general model will indeed have 12 independent parameters per edge, e.g. p_e(C→A), p_e(T→A), etc. It need not be symmetric.
- Still, the most popular models are symmetric, and use far fewer parameters per edge.
- For example, the Jukes-Cantor substitution model assumes an equal substitution probability for any unequal pair of nucleotides (across each edge separately).

17 The Jukes-Cantor model (1969) Jukes and Cantor assume an equal probability of change, giving the per-edge substitution matrix

           A       G       C       T
    A    1-3α      α       α       α
    G      α     1-3α      α       α
    C      α       α     1-3α      α
    T      α       α       α     1-3α
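In code, the whole matrix is determined by the single parameter α. A small sketch (the function name is ours):

    def jc_matrix(alpha):
        # Probability alpha for each of the three possible changes,
        # 1 - 3*alpha for staying the same.
        return [[alpha if a != b else 1 - 3 * alpha for b in range(4)]
                for a in range(4)]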

18 Tiny ML on Four States: Like Before, Only More Cases Can handle DNA substitution models, AA substitution models, ... The constant (per node) depends on the alphabet size. [Figure: at a node, each child contributes a sum over the four states A, C, G, T, with terms of the form P(G→C) · P_C(left subtree).]
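A sketch of the four-state pruning step in Python, assuming a 4x4 per-edge matrix with M[a][b] = P(a→b) (the encoding is ours):

    STATES = range(4)  # 0..3 standing for A, G, C, T

    def combine4(L1, L2, M1, M2):
        # L1 and L2 are length-4 likelihood vectors of the two subtrees;
        # M1 and M2 are the 4x4 substitution matrices of the edges leading
        # to them. Returns the likelihood vector at this node.
        return [sum(M1[a][b] * L1[b] for b in STATES) *
                sum(M2[a][b] * L2[b] for b in STATES)
                for a in STATES]

Each node now costs a constant number of operations that grows with the alphabet size, matching the remark above.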

19 Kimura's K2P model (1980) The Jukes-Cantor model does not take into account that the transition rates (between purines, A ↔ G, and between pyrimidines, C ↔ T) differ from the transversion rates (A ↔ C, A ↔ T, C ↔ G, G ↔ T). Kimura's two-parameter model uses a different substitution matrix, with transitions at rate α and transversions at rate β:

           A       G       C       T
    A   1-α-2β     α       β       β
    G      α    1-α-2β     β       β
    C      β       β    1-α-2β     α
    T      β       β       α    1-α-2β
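The matrix builder changes accordingly. A sketch (again our naming; A, G, C, T are indexed 0..3 as above):

    TRANSITIONS = {(0, 1), (1, 0), (2, 3), (3, 2)}  # A<->G and C<->T

    def k2p_matrix(alpha, beta):
        # alpha for transitions, beta for transversions,
        # 1 - alpha - 2*beta on the diagonal.
        def entry(a, b):
            if a == b:
                return 1 - alpha - 2 * beta
            return alpha if (a, b) in TRANSITIONS else beta
        return [[entry(a, b) for b in range(4)] for a in range(4)]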

20 Kimura's K2P model (Cont) Using similar methods, this leads to the standard K2P distance estimate

    d = -(1/2) ln(1 - 2P - Q) - (1/4) ln(1 - 2Q)

where P is the observed proportion of sites differing by a transition and Q is the observed proportion of sites differing by a transversion.
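A quick numeric check of this formula (the example proportions are made up):

    import math

    def k2p_distance(P, Q):
        # P: proportion of transitions; Q: proportion of transversions.
        return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)

    print(k2p_distance(0.10, 0.04))  # ~0.158 substitutions per site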

21 Additional Models There are yet more involved DNA substitution models, responding to phenomena observed in real DNA. Some of the models (like Jukes-Cantor, Kimura's two-parameter model, and others) exhibit a "group-like" structure that helps the analysis. The most general of these is a matrix where all rates of change are distinct (12 parameters). For AA (protein) models, there is typically less structure. Further discussion is out of scope for this course; please refer to the Molecular Evolution course (life sciences).

22 Back to the Two-State Model We showed an efficient solution to the tiny ML problem. We now want to efficiently solve the tiny AML problem. [Figure: the tree with leaf sequences XXYXY, YXYXX, YYYYX, edge parameters p_e1, p_e2, p_e3, and unknown internal states.]

23 Two Ways to Go In the second version (maximizing over the states of internal nodes) we are looking for the "most likely" ancestral states. This is called ancestral maximum likelihood (AML). In some sense AML is "between" MP (having ancestral states) and ML (because the goal is still to maximize likelihood).

24 Two Ways to Go (2) In some sense AML is "between" MP (having ancestral states) and ML (because the goal is still to maximize likelihood). The tiny AML algorithm will be like Fitch's small MP algorithm: it goes up to the root, then back down to the leaves.

25 Computing the Ancestral Likelihood Let T be a binary tree with subtrees T_1 and T_2. Let L_E(D | T, θ) be the ancestral likelihood of T with E (X or Y) at T's father node, which is connected to T's root by an edge with parameter p.

26 Computing the Ancestral Likelihood (2) By the definition of ancestral likelihood (maximizing over internal assignments),

    L_X(D | T, θ) = max( (1-p) L_X(D | tree 1, θ) · L_X(D | tree 2, θ),
                         p L_Y(D | tree 1, θ) · L_Y(D | tree 2, θ) )

This is the key to an efficient DP algorithm for the tiny AML problem (Pupko et al., 2000).

27 Computing the Ancestral Likelihood (3) Boundary conditions: At a leaf, L_X(D | T, θ) = 1-p if the leaf label is X, and p otherwise. At the root, we pick the label E (X or Y) that maximizes L_E(D | tree 1, θ) · L_E(D | tree 2, θ). We then go down the tree: at each node we pick the label that maximizes the likelihood, given the (now known) label of its father. The total run time is O(n).
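Both passes fit in a short Python sketch. The tree encoding is the same tuple form as before; all names are our own:

    def up(tree, p):
        # Returns (L, choice, kids): L[E] is the ancestral likelihood of
        # `tree` given label E at its father (edge probability p), choice[E]
        # is the maximizing label at tree's own root, and kids holds the
        # children's results for the downward pass.
        if tree in ('X', 'Y'):
            return ({E: (1 - p) if E == tree else p for E in 'XY'},
                    {E: tree for E in 'XY'}, None)
        left, p1, right, p2 = tree
        u1, u2 = up(left, p1), up(right, p2)
        L, choice = {}, {}
        for E in 'XY':
            # Maximize over this node's own label s.
            cands = {s: ((1 - p) if E == s else p) * u1[0][s] * u2[0][s]
                     for s in 'XY'}
            choice[E] = max(cands, key=cands.get)
            L[E] = cands[choice[E]]
        return L, choice, (u1, u2)

    def tiny_aml(root):
        # Root choice, then the downward pass; returns labels in preorder.
        left, p1, right, p2 = root
        u1, u2 = up(left, p1), up(right, p2)
        root_label = max('XY', key=lambda E: u1[0][E] * u2[0][E])
        labels = [root_label]
        def down(u, father):
            L, choice, kids = u
            labels.append(choice[father])
            if kids:
                down(kids[0], choice[father])
                down(kids[1], choice[father])
        down(u1, root_label)
        down(u2, root_label)
        return labels

    print(tiny_aml((('X', 0.1, 'Y', 0.2), 0.05, 'X', 0.1)))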

