Cluster Analysis for Gene Expression Data
Ka Yee Yeung
Center for Expression Arrays, Department of Microbiology
10/18/2002 Ka Yee Yeung, CEA

Snapshot of activities in the cell
Each chip represents an experiment:
–time course
–tissue samples (normal/cancer)
–…
A gene expression data set is an n × p matrix X (n genes, p experiments); entry X_ij is the expression level of gene i in experiment j.
What is clustering?
Group similar objects together: objects in the same cluster (group) are more similar to each other than objects in different clusters.
A data-exploration tool: finds patterns in large data sets.
An unsupervised approach: makes no use of prior knowledge about the data.
Applications of clustering gene expression data
Cluster the genes: find functionally related genes.
Cluster the experiments: discover new subtypes of tissue samples.
Cluster both genes and experiments: find sub-patterns.
Examples of clustering algorithms
Hierarchical clustering algorithms, e.g. [Eisen et al. 1998]
K-means, e.g. [Tavazoie et al. 1999]
Self-organizing maps (SOM), e.g. [Tamayo et al. 1999]
CAST [Ben-Dor, Yakhini 1999]
Model-based clustering algorithms, e.g. [Yeung et al. 2001]
Overview
Similarity/distance measures
Hierarchical clustering algorithms
–Made popular by Stanford, i.e. [Eisen et al. 1998]
K-means
–Made popular by many groups, e.g. [Tavazoie et al. 1999]
Model-based clustering algorithms [Yeung et al. 2001]
How to define similarity?
Similarity measures:
–A measure of pairwise similarity or dissimilarity
–Examples: correlation coefficient, Euclidean distance
[Figure: the n × p raw data matrix (genes × experiments) is converted into an n × n similarity matrix over all gene pairs.]
Similarity measures (for those of you who enjoy equations…)
Euclidean distance: d(X, Y) = sqrt( Σ_j (X_j − Y_j)² )
Correlation coefficient: r(X, Y) = Σ_j (X_j − X̄)(Y_j − Ȳ) / ( sqrt(Σ_j (X_j − X̄)²) · sqrt(Σ_j (Y_j − Ȳ)²) )
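Both measures can be sketched in a few lines of NumPy. The profiles x and y below are illustrative (not from the talk), chosen so the two measures disagree: identical shape, different magnitude.

```python
import numpy as np

def euclidean_distance(x, y):
    """Euclidean distance between two expression profiles."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.sum((x - y) ** 2)))

def correlation(x, y):
    """Pearson correlation coefficient between two profiles."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc)))

# Two profiles with the same shape but different magnitude:
x = np.array([1.0, 2.0, 3.0, 4.0])
y = x + 2.0                      # shifted copy of x
print(correlation(x, y))         # 1.0 — correlation sees direction only
print(euclidean_distance(x, y))  # 4.0 — distance also sees the magnitude gap
```

This is exactly the lesson of the worked example: a shifted copy of a profile has perfect correlation but a nonzero Euclidean distance.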
Example
Correlation(X, Y) = 1    Distance(X, Y) = 4
Correlation(X, Z) = −1   Distance(X, Z) = 2.83
Correlation(X, W) = 1    Distance(X, W) = 1.41
Lessons from the example
Correlation captures direction only.
Euclidean distance captures both magnitude and direction.
Array data is noisy: many experiments are needed to estimate pairwise similarity robustly.
Clustering algorithms
From pairwise similarities to groups
Inputs:
–Raw data matrix or similarity matrix
–Number of clusters or some other parameters
Hierarchical Clustering [Hartigan 1975]
Agglomerative (bottom-up)
Algorithm:
–Initialize: each item is its own cluster
–Iterate: select the two most similar clusters and merge them
–Halt: when the required number of clusters is reached
The sequence of merges can be drawn as a dendrogram.
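The initialize/iterate/halt loop can be sketched directly. This is a naive O(n³) illustration on toy 2-D points (not an optimized implementation); the linkage argument anticipates the single- and complete-link variants discussed next.

```python
import numpy as np

def agglomerative(points, k, linkage="single"):
    """Naive agglomerative clustering down to k clusters.

    linkage: 'single'   -> cluster distance = closest pair of members
             'complete' -> cluster distance = farthest pair of members
    """
    pts = np.asarray(points, float)
    clusters = [[i] for i in range(len(pts))]   # initialize: each item a cluster
    agg = min if linkage == "single" else max

    def cluster_dist(a, b):
        return agg(np.linalg.norm(pts[i] - pts[j]) for i in a for j in b)

    while len(clusters) > k:                    # iterate until k clusters remain
        # select the two most similar (closest) clusters ...
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)          # ... and merge them
    return clusters

data = [[0, 0], [0, 1], [5, 5], [5, 6]]
print(agglomerative(data, 2))  # [[0, 1], [2, 3]] — the two tight pairs
```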
Hierarchical: Single Link
Cluster similarity = similarity of the two most similar members
+ Fast
− Potentially long and skinny clusters
Example: single link
[Figure: three slides stepping through successive single-link merges.]
Hierarchical: Complete Link
Cluster similarity = similarity of the two least similar members
+ Tight clusters
− Slow
Example: complete link
[Figure: three slides stepping through successive complete-link merges.]
Hierarchical: Average Link
Cluster similarity = average similarity of all pairs
+ Tight clusters
− Slow
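All three linkage criteria are available in SciPy's scipy.cluster.hierarchy module, which builds the full merge tree (the dendrogram) and lets you cut it at any number of clusters. The toy points are illustrative, not from the talk.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Four points forming two tight pairs:
pts = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
d = pdist(pts)  # condensed pairwise Euclidean distance matrix

for method in ("single", "complete", "average"):
    Z = linkage(d, method=method)                     # full merge tree
    labels = fcluster(Z, t=2, criterion="maxclust")   # cut into 2 clusters
    print(method, labels)
```

On well-separated data like this all three linkages agree; their behavior diverges on elongated or noisy clusters, where single link tends to chain and complete/average link stay compact.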
Software: TreeView [Eisen et al. 1998]
Fig. 1 in Eisen's PNAS paper
Time course of serum stimulation of primary human fibroblasts
cDNA arrays with approx. 8600 spots
Similar to average-link
Free download at:
Overview
Similarity/distance measures
Hierarchical clustering algorithms
–Made popular by Stanford, i.e. [Eisen et al. 1998]
K-means
–Made popular by many groups, e.g. [Tavazoie et al. 1999]
Model-based clustering algorithms [Yeung et al. 2001]
Partitional: K-Means [MacQueen 1965]
[Figure panels 1–3: successive k-means iterations on a 2-D point set.]
Details of k-means
Iterate until convergence:
–Assign each data point to the closest centroid
–Recompute each centroid as the mean of its assigned points
Objective function: minimize the within-cluster sum of squared distances, Σ_k Σ_{i ∈ cluster k} ||x_i − μ_k||²
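The two iterated steps translate directly into code. A minimal sketch on toy data (this is the textbook algorithm, not the Xcluster implementation mentioned below):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: alternate assignment and centroid update."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(iters):
        # Step 1: assign each data point to the closest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: recompute each centroid as the mean of its points
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break  # converged to a local optimum of the objective
        centroids = new
    return labels, centroids

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels, centroids = kmeans(X, 2)
```

Each iteration can only decrease the within-cluster sum of squares, which is why the algorithm converges, but only to a local optimum that depends on the random initialization.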
Properties of k-means
Fast
Proved to converge to a local optimum
In practice, converges quickly
Tends to produce spherical, equal-sized clusters
Related to the model-based approach
Gavin Sherlock's Xcluster:
What we have seen so far…
Definition of clustering
Pairwise similarity:
–Correlation
–Euclidean distance
Clustering algorithms:
–Hierarchical agglomerative
–K-means
Different clustering algorithms produce different clusters
Clustering algorithms always spit out clusters, whether or not real structure exists
Which clustering algorithm should I use?
Good question
No definite answer: on-going research
Our preference: the model-based approach
Model-based clustering (MBC)
Gaussian mixture model:
–Assume each cluster is generated by a multivariate normal distribution
–Each cluster k has parameters:
  Mean vector μ_k: location of cluster k
  Covariance matrix Σ_k: volume, shape, and orientation of cluster k
Data transformations & normality assumption
More on the covariance matrix Σ_k (volume, orientation, shape)
Equal volume, spherical (EI)
Unequal volume, spherical (VI)
Equal volume, orientation, shape (EEE)
Diagonal model
Unconstrained (VVV)
Key advantage of the model-based approach: choose the model and the number of clusters
Bayesian Information Criterion (BIC) [Schwarz 1978]
–Approximates p(data | model)
A large BIC score indicates strong evidence for the corresponding model.
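Model and cluster-number selection via BIC can be sketched with scikit-learn's GaussianMixture on synthetic data. Two caveats: scikit-learn's bic() returns −2 log L plus a complexity penalty, so *smaller* is better (the opposite sign convention from the slide), and its covariance_type options are only rough analogues of the EI/VI/EEE/VVV parameterizations named above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for an expression matrix: 3 Gaussian clusters in 2-D
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ([0, 0], [4, 0], [2, 4])])

# Score every (covariance model, number of clusters) pair with BIC.
# scikit-learn's BIC is lower-is-better, so we keep the minimum.
best = None
for cov in ("spherical", "diag", "full"):   # loose analogues of EI/VI-style,
    for k in range(1, 7):                   # diagonal, and VVV models
        gmm = GaussianMixture(n_components=k, covariance_type=cov,
                              random_state=0).fit(X)
        bic = gmm.bic(X)
        if best is None or bic < best[0]:
            best = (bic, cov, k)
print("best model:", best[1], "with", best[2], "clusters")
```

On this cleanly separated synthetic data the BIC sweep recovers 3 clusters, mirroring how the BIC curve is read on the ovary data below.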
Gene expression data sets
Ovary data [Michel Schummer, Institute of Systems Biology]
–Subset of data: 235 clones (portions of genes), 24 experiments (cancer/normal tissue samples)
–The 235 clones correspond to 4 genes (the external criterion)
BIC analysis: square-root ovary data
EEE and diagonal models: first local maximum at 4 clusters
Global maximum: VI at 8 clusters
How do we know MBC is doing well?
Answer: compare to external information
Adjusted Rand index: maximum at EEE with 4 clusters (higher than CAST)
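The adjusted Rand index used for this comparison measures agreement between a clustering and external labels, corrected for chance, and is available in scikit-learn. The label vectors below are a made-up illustration, not the ovary data.

```python
from sklearn.metrics import adjusted_rand_score

# External criterion (e.g. which gene each clone belongs to) vs. a
# clustering result; label names are arbitrary, only the grouping matters.
external = [0, 0, 0, 1, 1, 2, 2, 2]
clusters = [1, 1, 1, 0, 0, 2, 2, 1]   # one item misassigned
print(adjusted_rand_score(external, clusters))
```

A perfect match scores 1.0 regardless of how the cluster labels are numbered, and a random partition scores near 0, which is what makes the index suitable for comparing algorithms with different numbers of clusters.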
Take-home messages
MBC has superior performance on:
–Quality of clusters
–Number of clusters and model chosen (BIC)
Clusters with high BIC scores tend to show high agreement with the external information
MBC tends to produce better clusters than a leading heuristic-based clustering algorithm (CAST)
Splus or R versions: