Clustering BE203: Functional Genomics Spring 2011 Vineet Bafna and Trey Ideker Acknowledgements: Jones and Pevzner, An Introduction to Bioinformatics Algorithms, MIT Press (2004); Ron Shamir, Algorithms in Mol. Biology Lecture Notes, http://www.cs.tau.ac.il/~rshamir/algmb/algmb-archive.htm

Outline: Hierarchical Clustering; Optimal Ordering of Hierarchical Clusters; K-Means Clustering; Corrupted Cliques Problem; CAST Clustering Algorithm

Applications of Clustering Viewing and analyzing vast amounts of biological data as a whole set can be perplexing. It is easier to interpret the data if they are partitioned into clusters combining similar data points. Ideally, points within the same cluster are highly similar while points in different clusters are very different. Clustering is a staple of gene expression analysis.

Inferring Gene Functionality Researchers want to know the functions of newly sequenced genes. Simply comparing a new gene sequence to known DNA sequences often does not reveal the function of the gene. For 40% of sequenced genes, functionality cannot be ascertained by comparison to sequences of other known genes alone. Gene expression clusters allow biologists to infer gene function even when sequence similarity alone is insufficient.

Gene Expression Data Expression data are usually transformed into an intensity matrix, with one row per gene, one column per time point, and each entry giving the intensity (expression level) of that gene at the measured time. The intensity matrix allows biologists to make correlations between different genes (even if they are dissimilar) and to understand how gene functions might be related. Clustering comes into play here. [Table: example intensity matrix with rows Gene 1 to Gene 5 and columns Time 1 ... Time i ... Time N.]

Clustering of Expression Data Plot each gene as a point in N-dimensional space. Make a distance matrix for the distance between every two gene points in the N-dimensional space. Genes with a small distance share the same expression characteristics and might be functionally related or similar. Clustering reveals groups of functionally related genes.
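A minimal sketch of these two steps, using a small hypothetical intensity matrix (the values and dimensions are illustrative only, not from the slides):

```python
import numpy as np

# Hypothetical intensity matrix: 5 genes (rows) x 3 time points (columns).
expr = np.array([
    [10.0,  8.0,  9.5],
    [ 9.0,  8.5, 10.0],
    [ 4.0,  8.6,  3.0],
    [ 7.0,  7.5,  7.2],
    [ 1.0,  2.0,  1.5],
])

# Pairwise Euclidean distance matrix d[i, j] between gene i and gene j.
n = expr.shape[0]
d = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d[i, j] = np.linalg.norm(expr[i] - expr[j])

print(np.round(d, 2))  # genes with small d share similar expression profiles
```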

Graphing the intensity matrix in multi-dimensional space [Figure: data points plotted in expression space, grouped into clusters.]

The Distance Matrix, d

Homogeneity and Separation Principles Homogeneity: elements within a cluster are close to each other. Separation: elements in different clusters are further apart from each other. ...clustering is not an easy task! Given these points, a clustering algorithm might make two distinct clusters as follows.

Bad Clustering This clustering violates both the Homogeneity and Separation principles: there are close distances between points in separate clusters and far distances between points in the same cluster.

Good Clustering This clustering satisfies both the Homogeneity and Separation principles.

Clustering Techniques Agglomerative: start with every element in its own cluster, and iteratively join clusters together. Divisive: start with one cluster and iteratively divide it into smaller clusters. Hierarchical: organize elements into a tree; leaves represent genes and the length of the paths between leaves represents the distances between genes. Similar genes lie within the same subtrees.

Hierarchical Clustering Hierarchical clustering has often been applied to both sequences and expression data. Here is an example illustrating the evolution of the primates. This kind of tree has been built using both DNA sequences and gene expression profiles.

Hierarchical Clustering Algorithm
Hierarchical Clustering (d, n)
  Form n clusters, each with one element
  Construct a graph T by assigning one vertex to each cluster
  while there is more than one cluster
    Find the two closest clusters C1 and C2
    Merge C1 and C2 into a new cluster C with |C1| + |C2| elements
    Compute the distance from C to all other clusters
    Add a new vertex C to T and connect it to vertices C1 and C2
    Remove the rows and columns of d corresponding to C1 and C2
    Add a row and column to d corresponding to the new cluster C
  return T
The algorithm takes an n×n distance matrix d of pairwise distances between points as input.

Hierarchical Clustering Algorithm (same pseudocode as above). Different ways to define distances between clusters may lead to different clusterings.

Hierarchical Clustering: Computing Distances
dmin(C, C*) = min d(x,y) over all elements x in C and y in C*: the distance between two clusters is the smallest distance between any pair of their elements.
davg(C, C*) = (1 / |C||C*|) ∑ d(x,y) over all elements x in C and y in C*: the distance between two clusters is the average distance over all pairs of their elements.
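The two linkage rules translate directly into code. Below is a minimal, unoptimized sketch (function names are mine) that mirrors the pseudocode above, assuming the gene-gene distances are already collected in an n×n matrix d:

```python
from itertools import combinations

def d_min(C1, C2, d):
    """Single linkage: the smallest distance between any pair of elements, one from each cluster."""
    return min(d[x][y] for x in C1 for y in C2)

def d_avg(C1, C2, d):
    """Average linkage: the mean distance over all pairs of elements, one from each cluster."""
    return sum(d[x][y] for x in C1 for y in C2) / (len(C1) * len(C2))

def hierarchical(d, cluster_dist=d_min):
    """Naive agglomerative clustering on an n x n distance matrix d; returns the list of merges."""
    clusters = [frozenset([i]) for i in range(len(d))]
    merges = []
    while len(clusters) > 1:
        # find the two closest clusters under the chosen linkage rule
        c1, c2 = min(combinations(clusters, 2),
                     key=lambda pair: cluster_dist(pair[0], pair[1], d))
        clusters.remove(c1)
        clusters.remove(c2)
        clusters.append(c1 | c2)
        merges.append((set(c1), set(c2)))
    return merges
```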

Computing Distances (continued) However, we still need a base distance metric for pairs of genes: Euclidean distance, Manhattan distance, correlation coefficient, mutual information. What are some qualitative differences between these?

Comparison between metrics. Euclidean and Manhattan tend to perform similarly and emphasize the overall magnitude of expression. The Pearson correlation coefficient is very useful if the ‘shape’ of the expression vector is more important than its magnitude. The above metrics are less useful for identifying genes for which the expression levels are anti-correlated. One might imagine an instance in which the same transcription factor can cause both enhancement and repression of expression. In this case, the squared correlation (r²) or mutual information is sometimes used.
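A small numeric sketch of these metrics on two hypothetical expression vectors; the anti-correlated pair below has r = -1 but r² = 1, which is why r² (or mutual information) can group anti-correlated genes together:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])     # hypothetical expression profile
y = np.array([4.0, 3.0, 2.0, 1.0])     # perfectly anti-correlated with x

euclidean = np.linalg.norm(x - y)       # emphasizes overall magnitude
manhattan = np.abs(x - y).sum()         # sum of absolute coordinate differences
r = np.corrcoef(x, y)[0, 1]             # Pearson correlation: compares 'shape'
print(euclidean, manhattan, r, r**2)    # r = -1.0, r**2 = 1.0
```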

Hierarchical tree building: UPGMA

Hierarchical tree building: UPGMA
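As a practical aside (not from the slides), an average-linkage (UPGMA-style) tree can be built with SciPy; the expression matrix here is synthetic:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

expr = np.random.default_rng(0).normal(size=(6, 4))      # synthetic: 6 genes x 4 conditions
Z = linkage(expr, method='average', metric='euclidean')  # average linkage = UPGMA-style
print(Z)  # each row: [cluster i, cluster j, merge distance, new cluster size]
# scipy.cluster.hierarchy.dendrogram(Z) draws the tree (requires matplotlib)
```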

Next slides courtesy of Ziv Bar-Joseph

But how many orderings can we have? [Figure: hierarchical tree with leaves ordered 1 2 4 5 3.]

For n leaves there are n-1 internal nodes. Each flip of an internal node creates a new linear ordering of the leaves. There are therefore 2^(n-1) orderings. E.g., flipping one internal node reorders the leaves. [Figure: tree with leaves 1 2 3 4 5.]
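A small sketch (on a hypothetical 5-leaf tree) that verifies the count by enumerating all orderings reachable by flipping internal nodes:

```python
def orderings(tree):
    """Enumerate all leaf orderings obtainable by flipping internal nodes.
    A tree is either a leaf (an int) or a pair (left_subtree, right_subtree)."""
    if not isinstance(tree, tuple):
        return [[tree]]
    left, right = orderings(tree[0]), orderings(tree[1])
    result = []
    for l in left:
        for r in right:
            result.append(l + r)   # keep this node's orientation
            result.append(r + l)   # flip this node
    return result

tree = ((1, 2), ((3, 4), 5))       # hypothetical tree with n = 5 leaves, 4 internal nodes
print(len(orderings(tree)))        # 16 == 2 ** (5 - 1)
```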

Bar-Joseph et al. Bioinformatics (2001)

Squared Error Distortion Given a data point v and a set of points X, define the distance from v to X, d(v, X), as the (Euclidean) distance from v to the closest point of X. Given a set of n data points V = {v1, ..., vn} and a set of k points X, define the Squared Error Distortion d(V, X) = (1/n) ∑i=1..n d(vi, X)².
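A direct transcription of these definitions (a sketch; the elements of V and the points in X are assumed to be NumPy vectors):

```python
import numpy as np

def dist_to_set(v, X):
    """d(v, X): Euclidean distance from point v to the closest point in X."""
    return min(np.linalg.norm(v - x) for x in X)

def squared_error_distortion(V, X):
    """d(V, X) = (1/n) * sum over i of d(v_i, X)^2."""
    return sum(dist_to_set(v, X) ** 2 for v in V) / len(V)
```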

K-Means Clustering Problem: Formulation Input: a set V consisting of n points and a parameter k. Output: a set X consisting of k points (cluster centers) that minimizes the squared error distortion d(V, X) over all possible choices of X. This problem is NP-complete.

1-Means Clustering Problem: an Easy Case Input: a set V consisting of n points. Output: a single point X that minimizes d(V, X) over all possible choices of X. This problem is easy. However, it becomes very difficult for more than one center. An efficient heuristic method for k-means clustering is the Lloyd algorithm.

K-Means Clustering: Lloyd Algorithm
Lloyd Algorithm
  Arbitrarily assign the k cluster centers
  while the cluster centers keep changing
    Assign each data point to the cluster Ci corresponding to the closest cluster representative (center), 1 ≤ i ≤ k
    After all data points are assigned, compute new cluster representatives as the center of gravity of each cluster; that is, the new representative of cluster C is (∑ v) / |C| over all v in C
*This may lead to merely a locally optimal clustering.
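A minimal NumPy sketch of the Lloyd iteration (the function name and the random initialization are assumptions, not from the slides); as noted above, the result may only be a local optimum:

```python
import numpy as np

def lloyd_kmeans(points, k, rng=np.random.default_rng(0), max_iter=100):
    # arbitrarily pick k of the data points as the initial cluster centers
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # assignment step: each point goes to its closest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each center moves to the center of gravity of its cluster
        new_centers = np.array([points[labels == i].mean(axis=0) if np.any(labels == i)
                                else centers[i] for i in range(k)])
        if np.allclose(new_centers, centers):
            break                      # centers stopped changing
        centers = new_centers
    return centers, labels
```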

[Figures: successive iterations of the Lloyd algorithm with three cluster centers x1, x2, x3; at each step points are reassigned to their closest center and the centers move.]

Conservative K-Means Algorithm The Lloyd algorithm is fast, but in each iteration it moves many data points, not necessarily improving convergence. A more conservative method would be to move one point at a time, and only if it improves the overall clustering cost. The smaller the clustering cost of a partition of the data points, the better that clustering is. Different methods can be used to measure this clustering cost (for example, the squared error distortion used in the previous algorithm).

K-Means “Greedy” Algorithm
ProgressiveGreedyK-Means(k)
  Select an arbitrary partition P into k clusters
  while forever
    bestChange ← 0
    for every cluster C
      for every element i not in C
        if moving i to cluster C reduces its clustering cost
          if cost(P) − cost(P(i → C)) > bestChange
            bestChange ← cost(P) − cost(P(i → C))
            i* ← i
            C* ← C
    if bestChange > 0
      Change partition P by moving i* to C*
    else
      return P
Here P(i → C) denotes the partition obtained from P by moving element i into cluster C.
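A sketch of this greedy scheme in Python (the names and the round-robin initialization are my own; the clustering cost is the squared error distortion of each cluster against its centroid, as suggested above):

```python
import numpy as np

def partition_cost(clusters, points):
    """Squared error distortion of a partition: each cluster is scored against its centroid."""
    return sum(((points[idx] - points[idx].mean(axis=0)) ** 2).sum()
               for idx in clusters if len(idx) > 0)

def progressive_greedy_kmeans(points, k, max_iter=1000):
    # arbitrary initial partition: round-robin assignment of points to k clusters
    clusters = [list(range(i, len(points), k)) for i in range(k)]
    for _ in range(max_iter):
        current = partition_cost(clusters, points)
        best_change, best_move = 0.0, None
        for c_to in range(k):
            for c_from in range(k):
                if c_from == c_to:
                    continue
                for i in clusters[c_from]:
                    trial = [list(c) for c in clusters]
                    trial[c_from].remove(i)
                    trial[c_to].append(i)          # tentatively move point i into cluster c_to
                    change = current - partition_cost(trial, points)
                    if change > best_change:
                        best_change, best_move = change, (i, c_from, c_to)
        if best_move is None:
            return clusters                         # no single move improves the cost
        i, c_from, c_to = best_move
        clusters[c_from].remove(i)
        clusters[c_to].append(i)
    return clusters
```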

Clique Graphs A clique is a graph with every vertex connected to every other vertex. A clique graph is a graph in which each connected component is a clique.

Clique Graphs (cont’d) A graph can be transformed into a clique graph by adding or removing edges. Example: removing two edges to make a clique graph.

Corrupted Cliques Problem Input: a graph G. Output: the smallest number of edge additions and removals that will transform G into a clique graph.

Distance Graphs Turn the distance matrix into a distance graph. Choose a distance threshold θ. Genes are represented as vertices in the graph. If the distance between two vertices is below θ, draw an edge between them. The resulting graph may contain cliques. These cliques represent clusters of closely located data points!
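A short sketch of the construction (d is assumed to be the n×n gene-gene distance matrix from earlier):

```python
def distance_graph(d, theta):
    """Adjacency sets: connect genes i and j whenever d[i][j] < theta."""
    n = len(d)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if d[i][j] < theta:
                adj[i].add(j)
                adj[j].add(i)
    return adj
```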

Transforming a Distance Graph into a Clique Graph The distance graph G (created with a threshold θ = 7) is transformed into a clique graph after removing the two highlighted edges. After transforming the distance graph into the clique graph, the data are partitioned into three clusters.

Heuristics for the Corrupted Cliques Problem The Corrupted Cliques problem is NP-hard, but heuristics exist to solve it approximately. CAST (Cluster Affinity Search Technique) is a practical and fast algorithm. CAST is based on the notion of genes close to a cluster C or distant from a cluster C. Distance between gene i and cluster C: d(i, C) = average distance between gene i and all genes in C. Gene i is close to cluster C if d(i, C) < θ and distant otherwise.

CAST Algorithm
CAST(S, G, θ)
  P ← Ø
  while S ≠ Ø
    v ← vertex of maximal degree in the distance graph G
    C ← {v}
    while there exists a close gene i not in C or a distant gene i in C
      Find the nearest close gene i not in C and add it to C
      Remove the farthest distant gene i in C
    Add cluster C to partition P
    S ← S \ C
    Remove the vertices of cluster C from the distance graph G
  return P
S – set of genes, G – distance graph, θ – distance threshold, C – cluster, P – partition
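A compact Python sketch of the CAST idea (not a literal transcription of the pseudocode: the seed is an arbitrary unassigned gene rather than the vertex of maximal degree in G, the set of unassigned genes plays the role of S, and an iteration cap guards against oscillation):

```python
import numpy as np

def cast(d, theta):
    """CAST sketch: d is an n x n gene-gene distance matrix, theta the closeness threshold."""
    unassigned = set(range(len(d)))
    partition = []
    while unassigned:
        seed = next(iter(unassigned))       # simplification: arbitrary seed gene
        cluster = {seed}

        def affinity(i):
            """Average distance from gene i to the (other) members of the current cluster."""
            others = cluster - {i}
            return float(np.mean([d[i][j] for j in others])) if others else 0.0

        for _ in range(10 * len(d)):        # safety cap; CAST normally stabilizes quickly
            close = [(affinity(i), i) for i in unassigned - cluster if affinity(i) < theta]
            distant = [(affinity(i), i) for i in cluster if affinity(i) >= theta]
            if not close and not distant:
                break                       # cluster is stable
            if close:
                cluster.add(min(close)[1])          # add the nearest close gene
            if distant:
                cluster.discard(max(distant)[1])    # remove the farthest distant gene
        partition.append(cluster)
        unassigned -= cluster
    return partition
```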

Where does clustering fit in the signal detection paradigm? Ideker, Dutkowski, Hood. Cell 2011