Cluster Analysis for Gene Expression Data
Ka Yee Yeung
Center for Expression Arrays, Department of Microbiology
kayee@u.washington.edu
http://staff.washington.edu/kayee/research.html
Snapshot of activities in the cell
– Each chip represents an experiment:
  – time course
  – tissue samples (normal/cancer)
  – …
– A gene expression data set is an n × p matrix: n genes (rows) by p experiments (columns), with entry X_ij giving the measured expression of gene i in experiment j.
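As a concrete illustration of this data structure (hypothetical numbers, not the data from the talk), an expression data set can simply be held as an n × p numeric matrix:

```python
import numpy as np

# Hypothetical expression matrix: n = 4 genes (rows) x p = 3 experiments (columns).
# X[i, j] is the expression level of gene i in experiment j.
X = np.array([
    [2.1, 1.9, 2.3],   # gene 1
    [0.4, 0.5, 0.3],   # gene 2
    [2.0, 2.2, 2.1],   # gene 3
    [1.1, 0.2, 1.8],   # gene 4
])
n_genes, p_experiments = X.shape
print(n_genes, p_experiments)   # 4 3
```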
What is clustering?
– Group similar objects together: objects in the same cluster (group) are more similar to each other than objects in different clusters
– A data exploration tool: used to find patterns in large data sets
– An unsupervised approach: it does not make use of prior knowledge about the data
Applications of clustering gene expression data
– Cluster the genes → groups of functionally related genes
– Cluster the experiments → discover new subtypes of tissue samples
– Cluster both genes and experiments → find sub-patterns
Examples of clustering algorithms
– Hierarchical clustering algorithms, e.g. [Eisen et al. 1998]
– K-means, e.g. [Tavazoie et al. 1999]
– Self-organizing maps (SOM), e.g. [Tamayo et al. 1999]
– CAST [Ben-Dor, Yakhini 1999]
– Model-based clustering algorithms, e.g. [Yeung et al. 2001]
Overview
– Similarity/distance measures
– Hierarchical clustering algorithms: made popular by Stanford, i.e. [Eisen et al. 1998]
– K-means: made popular by many groups, e.g. [Tavazoie et al. 1999]
– Model-based clustering algorithms [Yeung et al. 2001]
How to define similarity?
– Similarity measures: a measure of pairwise similarity or dissimilarity
– Examples: correlation coefficient, Euclidean distance
[Figure: the n × p raw data matrix (genes × experiments) is turned into an n × n similarity matrix holding the similarity of every pair of genes X and Y.]
Similarity measures (for those of you who enjoy equations…)
– Euclidean distance between expression profiles $X$ and $Y$ over $p$ experiments:
  $d(X,Y) = \sqrt{\sum_{j=1}^{p} (x_j - y_j)^2}$
– Correlation coefficient:
  $r(X,Y) = \dfrac{\sum_{j=1}^{p} (x_j - \bar{x})(y_j - \bar{y})}{\sqrt{\sum_{j=1}^{p} (x_j - \bar{x})^2}\,\sqrt{\sum_{j=1}^{p} (y_j - \bar{y})^2}}$
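A minimal NumPy sketch of the two measures (not code from the talk). The profiles X and Y below are hypothetical, chosen so that they happen to reproduce the first pair of numbers in the example that follows: the two profiles have the same shape (correlation = 1) but different magnitudes (distance > 0).

```python
import numpy as np

def euclidean_distance(x, y):
    """d(X, Y) = sqrt(sum_j (x_j - y_j)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.sum((x - y) ** 2))

def correlation(x, y):
    """Pearson correlation coefficient of two expression profiles."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))

# Hypothetical profiles: Y is X shifted up by a constant.
X = np.array([1.0, 2.0, 3.0, 4.0])
Y = X + 2.0
print(correlation(X, Y))           # 1.0
print(euclidean_distance(X, Y))    # 4.0
```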
Example
– Correlation(X, Y) = 1,  Distance(X, Y) = 4
– Correlation(X, Z) = −1,  Distance(X, Z) = 2.83
– Correlation(X, W) = 1,  Distance(X, W) = 1.41
Lessons from the example
– Correlation captures direction only
– Euclidean distance captures magnitude and direction
– Array data is noisy → many experiments are needed to estimate pairwise similarity robustly
Clustering algorithms
– From pairwise similarities to groups
– Inputs:
  – Raw data matrix or similarity matrix
  – Number of clusters or some other parameters
Hierarchical clustering [Hartigan 1975]
Agglomerative (bottom-up) algorithm:
– Initialize: each item is its own cluster
– Iterate: select the two most similar clusters and merge them
– Halt: when the required number of clusters is reached
The sequence of merges can be displayed as a dendrogram.
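A bare-bones sketch of this bottom-up loop in plain Python, using single-link distance for concreteness (a toy illustration; real analyses would use a library such as scipy, shown later):

```python
import numpy as np

def agglomerative(points, n_clusters):
    """Toy agglomerative clustering: start with one cluster per item and
    repeatedly merge the two closest clusters (single link) until
    only n_clusters remain."""
    clusters = [[i] for i in range(len(points))]   # initialize: each item its own cluster
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single link: distance between the two closest members
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]                  # merge the two most similar clusters
        del clusters[b]
    return clusters

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [9.0, 0.0]])
print(agglomerative(points, 3))   # [[0, 1], [2, 3], [4]]
```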
Hierarchical: single link
– Cluster similarity = similarity of the two most similar members
– (+) fast
– (−) potentially long, skinny clusters
Example: single link
[Figure: step-by-step single-link merging of five points (1–5), shown over three slides.]
Hierarchical: complete link
– Cluster similarity = similarity of the two least similar members
– (+) tight clusters
– (−) slow
Example: complete link
[Figure: step-by-step complete-link merging of the same five points (1–5), shown over three slides.]
Hierarchical: average link
– Cluster similarity = average similarity over all pairs of members
– (+) tight clusters
– (−) slow
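The three linkage rules above are available off the shelf; a sketch using scipy (an assumption: the talk does not prescribe this library), on the same made-up 2-D points as before:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: five 2-D points standing in for expression profiles.
points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [9.0, 0.0]])

for method in ("single", "complete", "average"):
    Z = linkage(points, method=method, metric="euclidean")  # merge history (the dendrogram)
    labels = fcluster(Z, t=3, criterion="maxclust")          # cut the tree into 3 clusters
    print(method, labels)
```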
Software: TreeView [Eisen et al. 1998]
– Fig. 1 in Eisen et al.'s PNAS paper
– Time course of serum stimulation of primary human fibroblasts
– cDNA arrays with approximately 8,600 spots
– Clustering similar to average link
– Free download at: http://rana.lbl.gov/EisenSoftware.htm
Overview
– Similarity/distance measures
– Hierarchical clustering algorithms: made popular by Stanford, i.e. [Eisen et al. 1998]
– K-means: made popular by many groups, e.g. [Tavazoie et al. 1999]
– Model-based clustering algorithms [Yeung et al. 2001]
Partitional: K-means [MacQueen 1965]
[Figure: illustration of k-means partitioning the data into three clusters (1, 2, 3).]
Details of k-means
Iterate until convergence:
– Assign each data point to the closest centroid
– Recompute each centroid as the mean of the points assigned to it
Objective function: minimize the within-cluster sum of squares
  $\sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2$
where $\mu_k$ is the centroid of cluster $C_k$.
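A minimal NumPy sketch of the two alternating steps and the objective above (illustrative only; initialization and empty-cluster handling are deliberately simple):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # k data points as initial centroids
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: each point goes to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its assigned points.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    # Objective: within-cluster sum of squared distances to the centroids.
    sse = sum(np.sum((X[labels == j] - centroids[j]) ** 2) for j in range(k))
    return labels, centroids, sse

X = np.random.default_rng(1).normal(size=(60, 2))   # hypothetical data
labels, centroids, sse = kmeans(X, k=3)
print(sse)
```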
Properties of k-means
– Fast
– Proven to converge to a local optimum; converges quickly in practice
– Tends to produce spherical, equal-sized clusters
– Related to the model-based approach
– Gavin Sherlock's Xcluster: http://genome-www.stanford.edu/~sherlock/cluster.html
What we have seen so far
– Definition of clustering
– Pairwise similarity: correlation, Euclidean distance
– Clustering algorithms: hierarchical agglomerative, k-means
– Different clustering algorithms → different clusters
– Clustering algorithms always produce clusters
Which clustering algorithm should I use?
– Good question, with no definitive answer: this is ongoing research
– Our preference: the model-based approach
Model-based clustering (MBC)
Gaussian mixture model:
– Assume each cluster is generated by a multivariate normal distribution
– Each cluster k has parameters:
  – Mean vector μ_k: the location of cluster k
  – Covariance matrix Σ_k: the volume, shape and orientation of cluster k
– Data transformations & the normality assumption
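A sketch of fitting a Gaussian mixture with scikit-learn (an assumption: the talk used the mclust software in S-PLUS/R, not this library). Each fitted component carries its own mean vector μ_k and covariance matrix Σ_k:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical data: 200 observations in 5 dimensions, drawn from two Gaussians.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, size=(100, 5)),
               rng.normal(loc=3.0, size=(100, 5))])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)           # hard cluster assignments
print(gmm.means_.shape)           # (2, 5)    -> one mean vector mu_k per cluster
print(gmm.covariances_.shape)     # (2, 5, 5) -> one covariance matrix Sigma_k per cluster
```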
More on the covariance matrix Σ_k (volume, orientation, shape)
– Equal volume, spherical (EI)
– Unequal volume, spherical (VI)
– Equal volume, orientation and shape (EEE)
– Diagonal model
– Unconstrained (VVV)
Key advantage of the model-based approach: it can choose the model and the number of clusters
– Bayesian Information Criterion (BIC) [Schwarz 1978]: approximates p(data | model)
– A large BIC score indicates strong evidence for the corresponding model
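A sketch of choosing the model family and number of clusters by BIC with scikit-learn rather than mclust. Two caveats: scikit-learn reports BIC on a lower-is-better scale, so the sign is flipped below to match the talk's larger-is-better convention, and the mapping from mclust model names to covariance_type values is only approximate (for example, there is no direct equivalent of the equal-volume spherical EI model):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, size=(100, 5)),
               rng.normal(loc=3.0, size=(100, 5))])   # hypothetical data

# Rough correspondence (assumption): spherical ~ VI, diag ~ diagonal models,
# tied ~ EEE, full ~ VVV.
best = None
for cov_type in ("spherical", "diag", "tied", "full"):
    for k in range(1, 9):
        gmm = GaussianMixture(n_components=k, covariance_type=cov_type,
                              random_state=0).fit(X)
        bic = -gmm.bic(X)   # flip the sign so that larger = stronger evidence
        if best is None or bic > best[0]:
            best = (bic, cov_type, k)

print("best model:", best[1], "with", best[2], "clusters, BIC =", round(best[0], 1))
```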
Gene expression data sets
Ovary data [Michel Schummer, Institute of Systems Biology]:
– Subset of the data: 235 clones (portions of genes) × 24 experiments (cancer/normal tissue samples)
– The 235 clones correspond to 4 genes (external criterion)
BIC analysis: square-root-transformed ovary data
– EEE and diagonal models → first local maximum at 4 clusters
– Global maximum → VI model at 8 clusters
How do we know MBC is doing well?
– Answer: compare to external information
– Adjusted Rand index: maximum at EEE with 4 clusters (higher than CAST)
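A sketch of this comparison to external information using the adjusted Rand index as implemented in scikit-learn (the labels below are hypothetical, not the ovary-data results):

```python
from sklearn.metrics import adjusted_rand_score

# External criterion: the known gene (class) of each clone.
true_genes     = [0, 0, 0, 1, 1, 2, 2, 3, 3, 3]
# Cluster labels produced by some clustering algorithm.
cluster_labels = [1, 1, 1, 0, 0, 3, 3, 2, 2, 0]

# 1.0 = perfect agreement with the external labels; ~0 = agreement expected by chance.
print(adjusted_rand_score(true_genes, cluster_labels))
```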
Take-home messages
– MBC shows superior performance on:
  – the quality of the clusters
  – the number of clusters and model chosen (via BIC)
– Clusters with high BIC scores tend to show high agreement with the external information
– MBC tends to produce better clusters than a leading heuristic-based clustering algorithm (CAST)
– S-PLUS and R versions (mclust): http://www.stat.washington.edu/fraley/mclust/