CSE 5290: Algorithms for Bioinformatics Fall 2011


CSE 5290: Algorithms for Bioinformatics, Fall 2011
Suprakash Datta (datta@cse.yorku.ca)
Office: CSEB 3043; phone: 416-736-2100 ext 77875
Course page: http://www.cse.yorku.ca/course/5290

Next: Clustering. Some of the following slides are based on slides by the authors of our text.

Clustering
- Arises in many different domains
- No single "best algorithm" exists
- Cluster shapes, sizes, and heterogeneity depend on the data semantics
- Complexities of algorithms vary
- Dimensionality is a BIG problem
- Distance metrics used are critical
- Objective function? Validation criteria?

Clustering as learning
Machine learning:
- Supervised: classifiers, function learning
- Unsupervised: clustering, some neural nets, model fitting

Applications of Clustering
Viewing and analyzing vast amounts of biological data as a whole set can be perplexing. It is easier to interpret the data if they are partitioned into clusters combining similar data points.

Inferring Gene Functionality
- Researchers want to know the functions of newly sequenced genes
- Simply comparing a new gene sequence to known DNA sequences often does not reveal the function of the gene
- For 40% of sequenced genes, functionality cannot be ascertained by comparison to sequences of other known genes alone
- Microarrays allow biologists to infer gene function even when sequence similarity alone is insufficient

Microarrays and Expression Analysis
- Microarrays measure the activity (expression level) of genes under varying conditions/time points
- Expression level is estimated by measuring the amount of mRNA for that particular gene:
  - A gene is active if it is being transcribed
  - More mRNA usually indicates more gene activity

Microarray Experiments
- Produce cDNA from mRNA (more stable)
- Attach phosphors to the cDNA to see when a particular gene is expressed; different color phosphors are available to compare many samples at once
- Hybridize the cDNA over the microarray
- Scan the microarray with a phosphor-illuminating laser; illumination reveals transcribed genes
- Scan the microarray multiple times for the different color phosphors

Using Microarrays
- Track the sample over a period of time to see gene expression over time
- Track two different samples under the same conditions to see differences in gene expression
- Each box represents one gene's expression over time

Using Microarrays (cont'd)
- Green: expressed only in the control
- Red: expressed only in the experimental cell
- Yellow: equally expressed in both samples
- Black: NOT expressed in either the control or the experimental cells

Microarray Data
Microarray data are usually transformed into an intensity matrix: rows are genes, columns are time points, and each entry is the intensity (expression level) of a gene at the measured time. The intensity matrix allows biologists to make correlations between different genes (even if they are dissimilar) and to understand how gene functions might be related.
[Example intensity matrix: genes 1-5 (rows) at times X, Y, Z (columns); e.g., gene 3 has expression levels 4, 8.6, and 3.]

Clustering of Microarray Data
- Plot each datum as a point in N-dimensional space
- Make a distance matrix for the distance between every two gene points in the N-dimensional space
- Genes with a small distance share the same expression characteristics and might be functionally related or similar
- Clustering reveals groups of functionally related genes

Clustering of Microarray Data (cont'd) [figure: the data points partitioned into clusters]

Homogeneity and Separation Principles
- Homogeneity: elements within a cluster are close to each other
- Separation: elements in different clusters are further apart from each other
...clustering is not an easy task! Given these points, a clustering algorithm might make two distinct clusters as follows

Bad Clustering
This clustering violates both the Homogeneity and Separation principles:
- close distances between points in separate clusters
- far distances between points in the same cluster

Good Clustering
This clustering satisfies both the Homogeneity and Separation principles.

Clustering Techniques
- Agglomerative: start with every element in its own cluster, and iteratively join clusters together
- Divisive: start with one cluster and iteratively divide it into smaller clusters
- Hierarchical: organize elements into a tree; leaves represent genes, and the lengths of the paths between leaves represent the distances between genes. Similar genes lie within the same subtrees.

Hierarchical Clustering

Hierarchical Clustering: Example
[Figures: a sequence of slides showing the successive merges]

Hierarchical Clustering (cont'd)
Hierarchical clustering is often used to reveal evolutionary history.

Hierarchical clustering
- Can be top-down or bottom-up
- Well-known algorithms (available in R): top-down: DIANA; bottom-up: AGNES
- Graph-based algorithms

Hierarchical Clustering Algorithm
The algorithm takes an n x n distance matrix d of pairwise distances between points as input.

HierarchicalClustering(d, n)
  Form n clusters, each with one element
  Construct a graph T by assigning one vertex to each cluster
  while there is more than one cluster
    Find the two closest clusters C1 and C2
    Merge C1 and C2 into a new cluster C with |C1| + |C2| elements
    Compute the distance from C to all other clusters
    Add a new vertex C to T and connect it to vertices C1 and C2
    Remove the rows and columns of d corresponding to C1 and C2
    Add a row and column to d corresponding to the new cluster C
  return T

Note: different ways to define distances between clusters may lead to different clusterings.

Hierarchical Clustering: Recomputing Distances
- dmin(C, C*) = min d(x, y) over all elements x in C and y in C*: the distance between two clusters is the smallest distance between any pair of their elements
- davg(C, C*) = (1 / (|C| |C*|)) ∑ d(x, y) over all x in C and y in C*: the distance between two clusters is the average distance between all pairs of their elements
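The pseudocode above translates almost directly into Python. The sketch below is illustrative rather than from the lecture: clusters are frozensets of point indices, and the linkage parameter switches between the dmin and davg rules just defined.

def hierarchical_clustering(d, linkage="min"):
    """Agglomerative clustering over a symmetric n x n distance matrix d.
    Returns the list of merges as (cluster_a, cluster_b, distance) triples."""
    clusters = {frozenset([i]) for i in range(len(d))}

    def dist(c1, c2):
        # recompute the cluster-to-cluster distance from the point distances
        pair_dists = [d[x][y] for x in c1 for y in c2]
        if linkage == "min":
            return min(pair_dists)                      # d_min rule
        return sum(pair_dists) / (len(c1) * len(c2))    # d_avg rule

    merges = []
    while len(clusters) > 1:
        # find the two closest clusters
        c1, c2 = min(((a, b) for a in clusters for b in clusters if a != b),
                     key=lambda pair: dist(*pair))
        merges.append((set(c1), set(c2), dist(c1, c2)))
        clusters -= {c1, c2}
        clusters.add(c1 | c2)
    return merges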

Squared Error Distortion
Given a data point v and a set of points X, define the distance from v to X, d(v, X), as the (Euclidean) distance from v to the closest point of X. Given a set of n data points V = {v1, ..., vn} and a set of k points X, define the squared error distortion
d(V, X) = ∑(1 ≤ i ≤ n) d(vi, X)^2 / n
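As a concrete check of the definition, here is a small Python version (names illustrative; points are tuples of coordinates):

def squared_error_distortion(V, X):
    """d(V, X) = sum over v in V of d(v, X)^2, divided by n."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    # d(v, X)^2 is the squared distance from v to its closest point in X
    return sum(min(sq_dist(v, x) for x in X) for v in V) / len(V)

# four 1-D points, two centers: every point is 0.5 from its center -> 0.25
print(squared_error_distortion([(0,), (1,), (9,), (10,)], [(0.5,), (9.5,)]))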

K-Means Clustering Problem: Formulation
Input: a set V consisting of n points and a parameter k
Output: a set X consisting of k points (cluster centers) that minimizes the squared error distortion d(V, X) over all possible choices of X

1-Means Clustering Problem: an Easy Case
Input: a set V consisting of n points
Output: a single point x (cluster center) that minimizes the squared error distortion d(V, x) over all possible choices of x

The 1-Means Clustering problem is easy: the optimal center is the centroid (mean) of the points. However, the problem becomes very difficult (NP-complete) for more than one center. An efficient heuristic method for k-Means clustering is the Lloyd algorithm.

K-Means Clustering: Lloyd Algorithm
1. Arbitrarily assign the k cluster centers
2. While the cluster centers keep changing:
   - Assign each data point to the cluster Ci corresponding to the closest cluster representative (center), 1 ≤ i ≤ k
   - After all data points are assigned, compute new cluster representatives as the center of gravity of each cluster; that is, the new center of cluster C is ∑(v in C) v / |C|
Note: this may lead to a merely locally optimal clustering.
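A minimal Python sketch of the Lloyd iteration above, assuming points are tuples of numbers; the sampling-based initialization and the function name are illustrative choices, not part of the lecture.

import random

def lloyd_kmeans(points, k, seed=0):
    """Lloyd heuristic: alternate assignment and center-of-gravity steps.
    Returns (centers, cluster index per point)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # arbitrary initial centers
    while True:
        # assignment step: each point goes to its closest center
        assign = [min(range(k),
                      key=lambda i: sum((p - c) ** 2 for p, c in zip(pt, centers[i])))
                  for pt in points]
        # update step: move each center to the centroid of its cluster
        new_centers = []
        for i in range(k):
            members = [pt for pt, a in zip(points, assign) if a == i]
            if members:
                dim = len(members[0])
                new_centers.append(tuple(sum(m[j] for m in members) / len(members)
                                         for j in range(dim)))
            else:
                new_centers.append(centers[i])   # keep old center if cluster is empty
        if new_centers == centers:               # converged (possibly a local optimum)
            return centers, assign
        centers = new_centers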

[Figures: successive iterations of the Lloyd algorithm with three cluster centers x1, x2, x3]

Conservative K-Means Algorithm
- The Lloyd algorithm is fast, but in each iteration it moves many data points, not necessarily improving convergence
- A more conservative method is to move one point at a time, and only if doing so improves the overall clustering cost
- The smaller the clustering cost of a partition of data points, the better that clustering
- Different measures (e.g., the squared error distortion) can be used for this clustering cost

K-Means "Greedy" Algorithm

ProgressiveGreedyK-Means(k)
  Select an arbitrary partition P into k clusters
  while forever
    bestChange ← 0
    for every cluster C
      for every element i not in C
        if moving i to cluster C reduces its clustering cost
          if cost(P) - cost(P(i → C)) > bestChange
            bestChange ← cost(P) - cost(P(i → C))
            i* ← i
            C* ← C
    if bestChange > 0
      Change partition P by moving i* to C*
    else
      return P
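A direct Python rendering of this greedy scheme (an illustrative sketch; here cost() is taken to be the total squared error against cluster centroids, one of the possible cost measures mentioned above):

def progressive_greedy_kmeans(points, k, init_assign):
    """Move one point at a time, only if it lowers the total cost;
    stop when no single move helps."""
    def sq(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    def cost(assign):
        total = 0.0
        for i in range(k):
            members = [pt for pt, a in zip(points, assign) if a == i]
            if not members:
                continue
            centroid = tuple(sum(m[j] for m in members) / len(members)
                             for j in range(len(members[0])))
            total += sum(sq(m, centroid) for m in members)
        return total

    assign = list(init_assign)
    while True:
        best_change, best_move = 0.0, None
        base = cost(assign)
        for idx in range(len(points)):
            for c in range(k):
                if c == assign[idx]:
                    continue
                trial = assign[:]
                trial[idx] = c
                change = base - cost(trial)       # positive = improvement
                if change > best_change:
                    best_change, best_move = change, (idx, c)
        if best_move is None:
            return assign                         # no improving move: done
        assign[best_move[0]] = best_move[1]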

Clique Graphs
- A clique is a graph with every vertex connected to every other vertex
- A clique graph is a graph in which each connected component is a clique

Transforming an Arbitrary Graph into a Clique Graph
A graph can be transformed into a clique graph by adding or removing edges.

Corrupted Cliques Problem
Input: a graph G
Output: the smallest number of edge additions and removals that transforms G into a clique graph

Distance Graphs
Turn the distance matrix into a distance graph:
- Genes are represented as vertices in the graph
- Choose a distance threshold θ
- If the distance between two vertices is below θ, draw an edge between them
- The resulting graph may contain cliques; these cliques represent clusters of closely located data points!
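In code, building the distance graph is a couple of loops (a sketch; adjacency is kept as a dict of sets, names illustrative):

def distance_graph(d, theta):
    """Connect genes i and j whenever d[i][j] < theta."""
    n = len(d)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if d[i][j] < theta:
                adj[i].add(j)
                adj[j].add(i)
    return adj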

Transforming a Distance Graph into a Clique Graph
The distance graph (threshold θ = 7) is transformed into a clique graph after removing the two highlighted edges. After this transformation, the dataset is partitioned into three clusters.

Heuristics for the Corrupted Cliques Problem
The Corrupted Cliques problem is NP-hard; some heuristics exist to solve it approximately, e.g., CAST (Cluster Affinity Search Technique), a practical and fast algorithm:
- CAST is based on the notion of genes close to a cluster C or distant from C
- Distance between gene i and cluster C: d(i, C) = average distance between gene i and all genes in C
- Gene i is close to cluster C if d(i, C) < θ, and distant otherwise

Other strategies
- Parametric clustering
- Fuzzy clustering

CAST Algorithm
S = set of elements, G = distance graph, θ = distance threshold

CAST(S, G, θ)
  P ← Ø
  while S ≠ Ø
    v ← vertex of maximal degree in the distance graph G
    C ← {v}
    while a close gene i not in C or a distant gene i in C exists
      Find the nearest close gene i not in C and add it to C
      Remove the farthest distant gene i in C
    Add cluster C to partition P
    S ← S \ C
    Remove the vertices of cluster C from the distance graph G
  return P
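A Python sketch of CAST under the definitions above. Two reading choices here are mine rather than the lecture's: d(i, C) for a gene already in C is computed against C minus that gene, and ties are broken deterministically; a production version would also guard against add/remove oscillation.

def cast(S, d, theta):
    """CAST sketch. S: iterable of gene indices, d: distance matrix, theta: threshold."""
    def avg_dist(i, C):
        # d(i, C): average distance between gene i and the genes in C
        return sum(d[i][j] for j in C) / len(C)

    P, S = [], set(S)
    while S:
        # seed the cluster with a vertex of maximal degree in the remaining graph
        v = max(S, key=lambda i: sum(1 for j in S if j != i and d[i][j] < theta))
        C = {v}
        changed = True
        while changed:
            changed = False
            close = [i for i in S - C if avg_dist(i, C) < theta]
            if close:
                C.add(min(close, key=lambda i: avg_dist(i, C)))   # nearest close gene
                changed = True
            distant = [i for i in C if len(C) > 1 and avg_dist(i, C - {i}) >= theta]
            if distant:
                # farthest distant gene currently in C
                C.remove(max(distant, key=lambda i: avg_dist(i, C - {i})))
                changed = True
        P.append(C)
        S -= C
    return P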

Parametric clustering
- "Explain" the data using a mathematical model; estimate the model parameters from the data
- E.g., use a mixture of Gaussians and estimate the means, covariances, and weights of the Gaussian components
- Many packages in R, e.g., mclust

Soft vs. hard clustering
- Cluster memberships are probabilities rather than Boolean variables
- Cluster overlaps are allowed
- Hard clusters can be obtained easily

Fuzzy clustering
- Defines membership probabilities for non-parametric algorithms
- Implicit parametrization?

Next: Phylogenetic trees. Some of the following slides are based on slides by the authors of our text.

Evolution and DNA Analysis: the Giant Panda Riddle
- For roughly 100 years, scientists were unable to figure out which family the giant panda belongs to
- Giant pandas look like bears but have features that are unusual for bears and typical for raccoons, e.g., they do not hibernate
- In 1985, Steven O'Brien and colleagues solved the giant panda classification problem using DNA sequences and algorithms

Evolutionary Tree of Bears and Raccoons

Evolutionary Trees: DNA-based Approach
- 40 years ago, Emile Zuckerkandl and Linus Pauling brought reconstructing evolutionary relationships with DNA into the spotlight
- In the first few years after Zuckerkandl and Pauling proposed using DNA for evolutionary studies, the possibility of reconstructing evolutionary trees by DNA analysis was hotly debated
- Now it is a dominant approach to studying evolution

Who are closer?

Human-Chimpanzee Split?

Chimpanzee-Gorilla Split?

Three-way Split?

Out of Africa Hypothesis
Around the time the giant panda riddle was solved, a DNA-based reconstruction of the human evolutionary tree led to the Out of Africa Hypothesis, which claims that our most ancient ancestor lived in Africa roughly 200,000 years ago.

Human Evolutionary Tree (cont'd)
http://www.mun.ca/biology/scarr/Out_of_Africa2.htm

The Origin of Humans: "Out of Africa" vs. the Multiregional Hypothesis
Multiregional:
- Humans evolved over the last two million years as a single species
- Modern traits appeared independently in different areas
- Humans migrated out of Africa, mixing with other humanoids on the way
- There is genetic continuity from Neanderthals to humans
Out of Africa:
- Humans evolved in Africa ~150,000 years ago
- Humans migrated out of Africa, replacing other humanoids around the globe
- There is no direct descent from Neanderthals

mtDNA analysis supports the "Out of Africa" Hypothesis
An African origin of humans was inferred from:
- The African population was the most diverse (sub-populations had more time to diverge)
- The evolutionary tree separated one group of Africans from a group containing all five populations
- The tree was rooted on the branch between the groups of greatest difference

Evolutionary Trees
How are these trees built from DNA sequences?
- Leaves represent existing species
- Internal vertices represent ancestors
- The root represents the oldest evolutionary ancestor

Rooted and Unrooted Trees
In an unrooted tree, the position of the root ("oldest ancestor") is unknown; otherwise, they are like rooted trees.

Distances in Trees
Edges may have weights reflecting:
- the number of mutations on the evolutionary path from one species to another
- a time estimate for the evolution of one species into another
In a tree T, we often compute dij(T), the length of the path between leaves i and j: dij(T) is the tree distance between i and j.

Distance in Trees: Example
d1,4 = 12 + 13 + 14 + 17 + 12 = 68

Edit Distance vs. Tree Distance
- Given n species, we can compute the n x n distance matrix Dij
- Dij may be defined as the edit distance between a gene in species i and species j, where the gene of interest is sequenced for all n species: Dij is the edit distance between i and j
- Note the difference with dij(T), the tree distance between i and j

Fitting a Distance Matrix
- Given n species, we can compute the n x n distance matrix Dij
- Evolution of these genes is described by a tree that we don't know
- We need an algorithm to construct a tree that best fits the distance matrix Dij

Fitting a Distance Matrix
Fitting means Dij = dij(T): the edit distance between species i and j (known) equals the length of the path between leaves i and j in the (unknown) tree T.

Reconstructing a 3-Leaved Tree
Tree reconstruction for any 3 x 3 matrix is straightforward. We have 3 leaves i, j, k and a center vertex c. Observe:
dic + djc = Dij
dic + dkc = Dik
djc + dkc = Djk

Reconstructing a 3-Leaved Tree (cont'd)
Adding the first two equations:
(dic + djc) + (dic + dkc) = Dij + Dik
2dic + djc + dkc = Dij + Dik
2dic + Djk = Dij + Dik
dic = (Dij + Dik - Djk) / 2
Similarly:
djc = (Dij + Djk - Dik) / 2
dkc = (Dki + Dkj - Dij) / 2
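The three formulas in code, checked on a toy additive matrix (function name illustrative):

def three_leaf_edges(D):
    """Edge lengths for the star tree on leaves i=0, j=1, k=2 with center c."""
    dic = (D[0][1] + D[0][2] - D[1][2]) / 2
    djc = (D[0][1] + D[1][2] - D[0][2]) / 2
    dkc = (D[2][0] + D[2][1] - D[0][1]) / 2
    return dic, djc, dkc

# Example: path lengths 3+4=7, 3+5=8, 4+5=9 recover edges (3, 4, 5)
print(three_leaf_edges([[0, 7, 8], [7, 0, 9], [8, 9, 0]]))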

Trees with > 3 Leaves
- An unrooted binary tree with n leaves has 2n - 3 edges
- This means fitting a given tree to a distance matrix D requires solving a system of "n choose 2" equations with 2n - 3 variables
- This is not always possible to solve for n > 3

Additive Distance Matrices
Matrix D is ADDITIVE if there exists a tree T with dij(T) = Dij, and NON-ADDITIVE otherwise.

Distance-Based Phylogeny Problem
Goal: reconstruct an evolutionary tree from a distance matrix
Input: an n x n distance matrix Dij
Output: a weighted tree T with n leaves fitting D
If D is additive, this problem has a solution, and there is a simple algorithm to solve it.

Using Neighboring Leaves to Construct the Tree
- Find neighboring leaves i and j with parent k
- Remove the rows and columns of i and j
- Add a new row and column corresponding to k, where the distance from k to any other leaf m can be computed as Dkm = (Dim + Djm - Dij) / 2
- Compress i and j into k, and iterate the algorithm on the rest of the tree

Finding Neighboring Leaves
To find neighboring leaves we simply select a pair of closest leaves. WRONG!

Finding Neighboring Leaves
- Closest leaves aren't necessarily neighbors
- i and j are neighbors, but (dij = 13) > (djk = 12)
- Finding a pair of neighboring leaves is a nontrivial problem!

Degenerate Triples
- A degenerate triple is a set of three distinct elements 1 ≤ i, j, k ≤ n where Dij + Djk = Dik
- Element j in a degenerate triple i, j, k lies on the evolutionary path from i to k (or is attached to this path by an edge of length 0)

Looking for Degenerate Triples
- If the distance matrix D has a degenerate triple i, j, k, then j can be "removed" from D, thus reducing the size of the problem
- If D does not have a degenerate triple, one can "create" a degenerate triple in D by shortening all hanging edges (in the tree)

Shortening Hanging Edges to Produce Degenerate Triples
Shorten all "hanging" edges (edges that connect leaves) until a degenerate triple is found.

Finding Degenerate Triples
- If there is no degenerate triple, all hanging edges are reduced by the same amount δ, so all pairwise distances in the matrix are reduced by 2δ
- Eventually this process collapses one of the leaves (when δ = length of the shortest hanging edge), forming a degenerate triple i, j, k and reducing the size of the distance matrix D
- The attachment point for j can be recovered in the reverse transformations by saving Dij for each collapsed leaf

Reconstructing Trees for Additive Distance Matrices

AdditivePhylogeny Algorithm

AdditivePhylogeny(D)
  if D is a 2 x 2 matrix
    T ← tree consisting of a single edge of length D1,2
    return T
  if D is non-degenerate
    δ ← trimming parameter of matrix D
    for all 1 ≤ i ≠ j ≤ n
      Dij ← Dij - 2δ
  else
    δ ← 0

AdditivePhylogeny (cont'd)
  Find a triple i, j, k in D such that Dij + Djk = Dik
  x ← Dij
  Remove the jth row and jth column from D
  T ← AdditivePhylogeny(D)
  Add a new vertex v to T at distance x from i on the path from i to k
  Add j back to T by creating an edge (v, j) of length 0
  for every leaf l in T
    if the distance from l to v in the tree ≠ Dl,j
      output "matrix is not additive"
      return
  Extend all "hanging" edges by length δ
  return T

The Four-Point Condition
- AdditivePhylogeny provides a way to check whether a distance matrix D is additive
- An even more efficient additivity check is the "four-point condition"
- Let 1 ≤ i, j, k, l ≤ n be four distinct leaves in a tree

The Four-Point Condition (cont'd)
Compute three sums: (1) Dij + Dkl, (2) Dik + Djl, (3) Dil + Djk.
Sums 2 and 3 represent the same number: the length of all edges + the middle edge (it is counted twice).
Sum 1 represents a smaller number: the length of all edges - the middle edge.

The Four-Point Condition: Theorem
The four-point condition for the quartet i, j, k, l is satisfied if two of these sums are the same, with the third sum smaller than these first two.
Theorem: an n x n matrix D is additive if and only if the four-point condition holds for every quartet 1 ≤ i, j, k, l ≤ n.
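The theorem yields a direct additivity test over all quartets; a small Python sketch:

from itertools import combinations

def is_additive(D):
    """Four-point condition: for every quartet, the two largest of the three
    pairing sums must coincide (the third may only be smaller or equal)."""
    n = len(D)
    for i, j, k, l in combinations(range(n), 4):
        sums = sorted([D[i][j] + D[k][l], D[i][k] + D[j][l], D[i][l] + D[j][k]])
        if sums[1] != sums[2]:
            return False
    return True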

Least-Squares Distance Phylogeny Problem
If the distance matrix D is NOT additive, then we look for a tree T that best approximates D:
Squared error: ∑i,j (dij(T) - Dij)^2
The squared error measures the quality of the fit between the distance matrix and the tree; we want to minimize it.
Least-Squares Distance Phylogeny Problem: find the best approximating tree T for a non-additive matrix D (NP-hard).

UPGMA: Unweighted Pair Group Method with Arithmetic Mean
UPGMA is a clustering algorithm that:
- computes the distance between clusters using the average pairwise distance
- assigns a height to every vertex in the tree, effectively assuming the presence of a molecular clock and dating every vertex

UPGMA's Weakness
- The algorithm produces an ultrametric tree: the distance from the root to any leaf is the same
- UPGMA assumes a constant molecular clock: all species represented by the leaves in the tree are assumed to accumulate mutations (and thus evolve) at the same rate. This is a major pitfall of UPGMA.

UPGMA's Weakness: Example [figure: the correct tree vs. the tree produced by UPGMA for taxa 1-4]

Clustering in UPGMA
Given two disjoint clusters Ci, Cj of sequences:
dij = (1 / (|Ci| × |Cj|)) ∑(p in Ci, q in Cj) dpq
Note that if Ck = Ci ∪ Cj, then the distance to another cluster Cl is:
dkl = (dil |Ci| + djl |Cj|) / (|Ci| + |Cj|)

UPGMA Algorithm
Initialization:
- Assign each xi to its own cluster Ci
- Define one leaf per sequence, each at height 0
Iteration:
- Find the two clusters Ci and Cj such that dij is minimal
- Let Ck = Ci ∪ Cj
- Add a vertex connecting Ci, Cj and place it at height dij / 2
- Delete Ci and Cj
Termination:
- When a single cluster remains
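A compact Python sketch of this procedure, using the weighted distance update from the previous slide; the nested-tuple tree encoding and the function name are illustrative choices.

def upgma(D, labels):
    """UPGMA sketch. D: symmetric distance matrix, labels: leaf names.
    Returns (tree, root height); a tree is a label or a (left, right) pair."""
    # cluster id -> (tree, cluster size, height of the vertex joining it)
    clusters = {i: (labels[i], 1, 0.0) for i in range(len(labels))}
    dist = {(i, j): D[i][j] for i in clusters for j in clusters if i < j}
    next_id = len(labels)
    while len(clusters) > 1:
        i, j = min(dist, key=dist.get)            # pair Ci, Cj with minimal dij
        dij = dist[(i, j)]
        ti, ni, _ = clusters[i]
        tj, nj, _ = clusters[j]
        for l in clusters:                        # weighted average update
            if l in (i, j):
                continue
            dil = dist[tuple(sorted((i, l)))]
            djl = dist[tuple(sorted((j, l)))]
            dist[(l, next_id)] = (dil * ni + djl * nj) / (ni + nj)
        dist = {p: v for p, v in dist.items() if i not in p and j not in p}
        del clusters[i], clusters[j]
        clusters[next_id] = ((ti, tj), ni + nj, dij / 2.0)  # new vertex at height dij/2
        next_id += 1
    (tree, _, height), = clusters.values()
    return tree, height

# Example: upgma([[0, 2, 6], [2, 0, 6], [6, 6, 0]], ["A", "B", "C"])
# joins A and B at height 1, then C at height 3.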

UPGMA Algorithm (cont'd) [figure: example run on sequences 1-5]

Alignment Matrix vs. Distance Matrix
Sequencing a gene of length m nucleotides in n species generates an n x m alignment matrix. This can be transformed into an n x n distance matrix; the distance matrix CANNOT be transformed back into the alignment matrix, because information is lost in the forward transformation.

Character-Based Tree Reconstruction
- Better technique: character-based reconstruction algorithms use the n x m alignment matrix (n = # species, m = # characters) directly, instead of using a distance matrix
- GOAL: determine what character strings at the internal nodes would best explain the character strings of the n observed species

Character-Based Tree Reconstruction (cont'd)
- Characters may be nucleotides, where A, G, C, T are the states of the character. Other characters may be the number of eyes or legs, or the shape of a beak or a fin.
- By setting the length of an edge in the tree to the Hamming distance, we may define the parsimony score of the tree as the sum of the lengths (weights) of the edges

Parsimony Approach to Evolutionary Tree Reconstruction
- Applies Occam's razor: identify the simplest explanation for the data
- Assumes observed character differences resulted from the fewest possible mutations
- Seeks the tree that yields the lowest possible parsimony score: the sum of the costs of all mutations found in the tree

Parsimony and Tree Reconstruction

Small Parsimony Problem
Input: tree T with each leaf labeled by an m-character string
Output: a labeling of the internal vertices of T minimizing the parsimony score
We can assume that every leaf is labeled by a single character, because the characters in the string are independent.

Weighted Small Parsimony Problem
- A more general version of the Small Parsimony Problem
- The input includes a k x k scoring matrix describing the cost of transforming each of the k states into another one
- For the Small Parsimony Problem, the scoring matrix is based on the Hamming distance: dH(v, w) = 0 if v = w, dH(v, w) = 1 otherwise

Scoring Matrices
Small Parsimony Problem: the unit-cost matrix over {A, T, G, C} (0 on the diagonal, 1 elsewhere).
Weighted Parsimony Problem: a general cost matrix over {A, T, G, C} (the example uses entries such as 3, 4, 9, and 2).

Unweighted vs. Weighted: Small Parsimony
Scoring matrix: unit cost over {A, T, G, C}. Small parsimony score of the example tree: 5.

Unweighted vs. Weighted: Weighted Parsimony
Scoring matrix: the weighted cost matrix over {A, T, G, C}. Weighted parsimony score of the example tree: 22.

Weighted Small Parsimony Problem: Formulation
Input: tree T with each leaf labeled by elements of a k-letter alphabet, and a k x k scoring matrix (δij)
Output: a labeling of the internal vertices of T minimizing the weighted parsimony score

Sankoff's Algorithm
For every vertex, check its children and determine the minimum over them. An example follows.

Sankoff Algorithm: Dynamic Programming
- Calculate and keep track of a score for every possible label at each vertex: st(v) = minimum parsimony score of the subtree rooted at vertex v, if v has character t
- The score at each vertex is based on the scores of its children:
st(parent) = mini {si(left child) + δi,t} + minj {sj(right child) + δj,t}

Sankoff Algorithm (cont.)
Begin at the leaves:
- If the leaf has the character in question, the score is 0
- Otherwise, the score is ∞

Sankoff Algorithm (cont.)
st(v) = mini {si(u) + δi,t} + minj {sj(w) + δj,t}
Worked step for t = A at vertex v with left child u: tabulating si(u) + δi,A (with δT,A = 3, δG,A = 4, δC,A = 9, and δA,A = 0), the minimum is attained at the leaf's own character, so the left term of sA(v) is 0.
sA(v) = mini {si(u) + δi,A} + minj {sj(w) + δj,A}

Sankoff Algorithm (cont.)
For the right child w, the minimum of sj(w) + δj,A is 9, so
sA(v) = mini {si(u) + δi,A} + minj {sj(w) + δj,A} = 0 + 9 = 9

Sankoff Algorithm (cont.)
st(v) = mini {si(u) + δi,t} + minj {sj(w) + δj,t}
Repeat for T, G, and C.

Sankoff Algorithm (cont.)
Repeat for the right subtree.

Sankoff Algorithm (cont.)
Repeat for the root.

Sankoff Algorithm (cont.)
The smallest score at the root is the minimum weighted parsimony score; in this case 9, so label the root with T.

Sankoff Algorithm: Traveling down the Tree
- The scores at the root vertex have been computed by going up the tree
- After the scores at the root are computed, the Sankoff algorithm moves down the tree and assigns each vertex its optimal character
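Both passes fit in a short Python sketch. The representation choices here are mine, not the lecture's: the tree is nested pairs with each leaf being its own character, delta is a dict-of-dicts cost matrix, and ties are broken by alphabet order.

INF = float("inf")

def sankoff(tree, alphabet, delta):
    """Sankoff DP sketch. tree: nested pairs, leaves are single characters.
    delta[i][t]: cost of state i changing to state t along an edge.
    Returns (minimum weighted parsimony score, fully labeled tree)."""
    def scores(v):
        # bottom-up pass: (dict t -> s_t(v), packed node for the traceback)
        if isinstance(v, str):
            return {t: (0.0 if v == t else INF) for t in alphabet}, v
        (sl, left), (sr, right) = scores(v[0]), scores(v[1])
        s = {t: min(sl[i] + delta[i][t] for i in alphabet)
               + min(sr[j] + delta[j][t] for j in alphabet)
             for t in alphabet}
        return s, (sl, left, sr, right)

    def label(node, t):
        # top-down pass: pick the child states achieving s_t at this vertex
        if isinstance(node, str):
            return node
        sl, left, sr, right = node
        i = min(alphabet, key=lambda a: sl[a] + delta[a][t])
        j = min(alphabet, key=lambda a: sr[a] + delta[a][t])
        return (t, label(left, i), label(right, j))

    s_root, node = scores(tree)
    t = min(alphabet, key=s_root.get)      # optimal character at the root
    return s_root[t], label(node, t)

# With Hamming costs, Sankoff recovers Fitch's setting; this tree scores 2
alpha = "ATGC"
unit = {i: {j: 0 if i == j else 1 for j in alpha} for i in alpha}
print(sankoff((("A", "C"), ("T", "T")), alpha, unit))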

Sankoff Algorithm (cont.)
The root score 9 is derived from 7 + 2, so the left child is labeled T and the right child is labeled T.

Sankoff Algorithm (cont.)
And the tree is thus labeled...

Fitch's Algorithm
- Solves the Small Parsimony Problem; dynamic programming in essence
- Assigns a set of letters to every vertex in the tree
- If the two children's character sets overlap, the vertex gets their intersection (the common set)
- If not, it gets their union (the combined set)

Fitch's Algorithm (cont'd)
[Example: a tree with leaves a, c, t, a; the internal vertices get the sets {a, c} and {t, a}, and the tree is then labeled with a at every internal vertex]

Fitch Algorithm
1) Assign a set of possible letters to every vertex, traversing the tree from the leaves to the root
- Each node's set is the combination of its children's sets (leaves contain their own label)
- E.g., if the left child's set is {A, C} and the right child's set is {A, T}, the sets overlap, so the node is given the intersection {A}; disjoint sets would yield their union

Fitch Algorithm (cont.)
2) Assign labels to each vertex, traversing the tree from the root to the leaves
- Assign the root arbitrarily from its set of letters
- For every other vertex, if its parent's label is in its set of letters, assign it its parent's label
- Otherwise, choose an arbitrary letter from its set as its label
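A Python sketch of both Fitch phases, under the same nested-pair representation as the Sankoff sketch above; the "arbitrary" choices are made deterministic with min.

def fitch(tree):
    """Fitch sketch. Returns (parsimony score, labeled tree)."""
    def up(v):
        # phase 1, leaves to root: (candidate set, packed node, mutations so far)
        if isinstance(v, str):
            return {v}, v, 0
        ls, left, lm = up(v[0])
        rs, right, rm = up(v[1])
        s = ls & rs or ls | rs           # intersection if nonempty, else union
        return s, (s, left, right), lm + rm + (0 if ls & rs else 1)

    def down(node, want):
        # phase 2, root to leaves: reuse the parent's label whenever allowed
        if isinstance(node, str):
            return node
        s, left, right = node
        c = want if want in s else min(s)
        return (c, down(left, c), down(right, c))

    s, node, score = up(tree)
    return score, down(node, min(s))

# the lecture's example: leaves a, c, t, a -> sets {a,c}, {t,a}, root {a}; score 2
print(fitch((("a", "c"), ("t", "a"))))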

Fitch Algorithm (cont.)

Fitch vs. Sankoff
- Both have an O(nk) runtime
- Are they actually different? Let's compare...

Fitch: as seen previously.

Comparison of Fitch and Sankoff
As seen earlier, the scoring matrix for the Fitch algorithm is simply the unit-cost matrix over {A, T, G, C}. So let's do the same problem using the Sankoff algorithm and this scoring matrix.

Sankoff [figure: the same example solved with the Sankoff algorithm]

Sankoff vs. Fitch
- The Sankoff algorithm gives the same set of optimal labels as the Fitch algorithm
- For the Sankoff algorithm, character t is optimal for vertex v if st(v) = min(1 ≤ i ≤ k) si(v)
- Denote the set of optimal letters at vertex v as S(v):
  - If S(left child) and S(right child) overlap, S(parent) is the intersection
  - Otherwise it is the union of S(left child) and S(right child)
- This is also the Fitch recurrence: the two algorithms are identical

Large Parsimony Problem
Input: an n x m matrix M describing n species, each represented by an m-character string
Output: a tree T with n leaves labeled by the n rows of matrix M, and a labeling of the internal vertices, such that the parsimony score is minimized over all possible trees and all possible labelings of internal vertices

Large Parsimony Problem (cont.)
- The possible search space is huge, especially as n increases: (2n - 3)!! possible rooted trees and (2n - 5)!! possible unrooted trees (see the script below)
- The problem is NP-complete
- Exhaustive search is only possible for small n (< 10)
- Hence, branch-and-bound or heuristics are used
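To see how fast the search space blows up, the double factorials can be computed directly (a tiny illustrative script):

def double_factorial(m):
    """m!! = m * (m - 2) * (m - 4) * ... down to 1 or 2."""
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

# n leaves: (2n-3)!! rooted trees, (2n-5)!! unrooted trees
for n in (5, 10, 20):
    print(n, double_factorial(2 * n - 3), double_factorial(2 * n - 5))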

Nearest Neighbor Interchange: a Greedy Algorithm
- A branch-swapping algorithm
- Only evaluates a subset of all possible trees
- Defines a neighbor of a tree as one reachable by a nearest neighbor interchange: a rearrangement of the four subtrees defined by one internal edge
- Only three different rearrangements per edge

Nearest Neighbor Interchange (cont.)

Nearest Neighbor Interchange (cont.)
- Start with an arbitrary tree and check its neighbors
- Move to a neighbor if it provides the best improvement in parsimony score
- There is no way of knowing if the result is the most parsimonious tree; it could be stuck in a local optimum

Nearest Neighbor Interchange

Subtree Pruning and Regrafting
Another branch-swapping algorithm.
http://artedi.ebc.uu.se/course/BioInfo-10p-2001/Phylogeny/Phylogeny-TreeSearch/SPR.gif

Tree Bisection and Reconnection
Another branch-swapping algorithm; the most extensive swapping routine.

Homoplasy
Given:
1: CAGCAGCAG
2: CAGCAGCAG
3: CAGCAGCAGCAG
4: CAGCAGCAG
5: CAGCAGCAG
6: CAGCAGCAG
7: CAGCAGCAGCAG
Most would group 1, 2, 4, 5, and 6 as having evolved from a common ancestor, with a single mutation leading to the presence of 3 and 7.

Homoplasy
But what if this were the real tree?

Homoplasy
- 6 evolved separately from 4 and 5, but parsimony would group 4, 5, and 6 together as having evolved from a common ancestor
- Homoplasy: independent (or parallel) evolution of the same or similar characters
- Parsimony results minimize homoplasy, so if homoplasy is common, parsimony may give wrong results

Contradicting Characters
An evolutionary tree is more likely to be correct when it is supported by multiple characters, as seen below.
[Figure: tree of Frog, Lizard, Dog, Human; the MAMMALIA clade is supported by hair, a single bone in the lower jaw, lactation, etc.]
Note: in this case, tails are homoplastic.

Problems with Parsimony
- It is important to keep in mind that reliance on any single method for phylogenetic analysis provides an incomplete picture
- When different methods (parsimony, distance-based, etc.) all give the same result, it is more likely that the result is correct

Phylogenetic Analysis of the HIV Virus
- Lafayette, Louisiana, 1994: a woman claimed her ex-lover (who was a physician) injected her with HIV+ blood
- Records show the physician had drawn blood from an HIV+ patient that day
- But how to prove that the blood from that HIV+ patient ended up in the woman?

HIV Transmission
- HIV has a high mutation rate, which can be used to trace paths of transmission: two people who got the virus from two different people will have very different HIV sequences
- Three different tree reconstruction methods (including parsimony) were used to track changes in two HIV genes (gp120 and RT)

HIV Transmission
- Multiple samples were taken from the patient, the woman, and controls (unrelated HIV+ people)
- In every reconstruction, the woman's sequences were found to have evolved from the patient's sequences, indicating a close relationship between the two
- Nesting of the victim's sequences within the patient's sequences indicated that the direction of transmission was from patient to victim
- This was the first time phylogenetic analysis was used as evidence in a court case (Metzker et al., 2002)