1 CZ5225: Modeling and Simulation in Biology
Lecture 3: Clustering Analysis for Microarray Data I
Prof. Chen Yu Zong
Tel: 6874-6877
Email: yzchen@cz3.nus.edu.sg
http://xin.cz3.nus.edu.sg
Room 07-24, Level 7, SOC1, NUS
2 Clustering Algorithms
Be wary - confounding computational artifacts are associated with all clustering algorithms.
- You should always understand the basic concepts behind an algorithm before using it.
Anything will cluster! Garbage in means garbage out.
3 Supervised vs. Unsupervised Learning
Supervised: there is a teacher; class labels are known
- Support vector machines
- Backpropagation neural networks
Unsupervised: no teacher; class labels are unknown
- Clustering
- Self-organizing maps
4 Gene Expression Data
Gene expression data on p genes for n samples.
Gene expression level of gene i in mRNA sample j
= log(Red intensity / Green intensity), or log(Avg. PM - Avg. MM)

Gene  sample1 sample2 sample3 sample4 sample5 ...
1       0.46    0.30    0.80    1.51    0.90  ...
2      -0.10    0.49    0.24    0.06    0.46  ...
3       0.15    0.74    0.04    0.10    0.20  ...
4      -0.45   -1.03   -0.79   -0.56   -0.32  ...
5      -0.06    1.06    1.35    1.09   -1.09  ...
5 Expression Vectors
Gene expression vectors encapsulate the expression of a gene over a set of experimental conditions or sample types.
Example numeric vector: -0.8 0.8 1.5 1.8 0.5 -1.3 -0.4 1.5
(figure: the same vector shown as a line graph and as a heatmap on a -2 to 2 scale)
6 Expression Vectors as Points in 'Expression Space'
Each gene's expression vector is a point in a space with one axis per experiment; genes with similar expression lie close together.
(figure: genes G1-G5 plotted against Experiments 1-3, with a pair of nearby points labeled "Similar Expression")
7 Cluster Analysis
Group a collection of objects into subsets or "clusters" such that objects within a cluster are more closely related to one another than to objects assigned to different clusters.
8 How can we do this?
What is closely related? A distance or similarity metric.
What is close? A clustering algorithm.
How do we minimize distance between objects within a group while maximizing distances between groups?
9 Distance Metrics
- Euclidean distance: straight-line distance between points
- Manhattan (city block): sum of the distances along each dimension
- Correlation: measures difference with respect to linear trends
(figure: the points (3.5, 4) and (5.5, 6) plotted on Gene Expression 1 vs. Gene Expression 2)
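As a minimal illustration of the three metrics (a Python sketch, not part of the original slides), using the two points from the figure:

```python
import math

def euclidean(x, y):
    # straight-line distance: sqrt of summed squared coordinate differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    # city-block distance: summed absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(x, y))

def correlation_distance(x, y):
    # 1 - Pearson correlation: sensitive to linear trend, not magnitude
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return 1 - num / den

p, q = (5.5, 6.0), (3.5, 4.0)  # the two points from the slide's figure
print(euclidean(p, q))   # sqrt(2^2 + 2^2) ~ 2.83
print(manhattan(p, q))   # 2 + 2 = 4
```

Note that correlation distance needs profiles of length 3 or more to be informative; on two coordinates it is always 0 or 2.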
10 Clustering Gene Expression Data
Cluster across the rows: group genes that behave similarly across different conditions.
Cluster across the columns: group conditions that behave similarly across most genes.
(figure: expression matrix with gene i and sample j highlighted)
11 Clustering Time Series Data
Measure gene expression on consecutive days.
Gene measurement matrix:
G1 = [1.2 4.0 5.0 1.0]
G2 = [2.0 2.5 5.5 6.0]
G3 = [4.5 3.0 2.5 1.0]
G4 = [3.5 1.5 1.2 1.5]
12 Euclidean Distance
Distance is the square root of the sum of the squared differences between coordinates.
      G1   G2   G3   G4
G1   0    5.3  4.3  5.1
G2   5.3  0    6.4  6.5
G3   4.3  6.4  0    2.3
G4   5.1  6.5  2.3  0
13 City Block or Manhattan Distance
G1 = [1.2 4.0 5.0 1.0]
G2 = [2.0 2.5 5.5 6.0]
G3 = [4.5 3.0 2.5 1.0]
G4 = [3.5 1.5 1.2 1.5]
Distance is the sum of the absolute differences between coordinates.
      G1    G2    G3    G4
G1   0     7.8   6.8   9.1
G2   7.8   0    11    11.3
G3   6.8  11     0     4.3
G4   9.1  11.3   4.3   0
14 Correlation Distance
Pearson correlation measures the degree of linear relationship between variables, range [-1, 1].
Distance is 1 - (Pearson correlation), range [0, 2].
      G1    G2    G3    G4
G1   0     0.91  0.98  1.6
G2   0.91  0     1.9   1.7
G3   0.98  1.9   0     0.22
G4   1.6   1.7   0.22  0
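The three distance matrices on slides 12-14 can be reproduced directly from the G1-G4 profiles; a stdlib-only Python sketch (not part of the original slides):

```python
import math

# the four time-series profiles from the slides
G = {
    "G1": [1.2, 4.0, 5.0, 1.0],
    "G2": [2.0, 2.5, 5.5, 6.0],
    "G3": [4.5, 3.0, 2.5, 1.0],
    "G4": [3.5, 1.5, 1.2, 1.5],
}

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def corr_dist(x, y):
    return 1 - pearson(x, y)

for name, metric in [("Euclidean", euclidean), ("Manhattan", manhattan),
                     ("Correlation", corr_dist)]:
    print(name)
    for gi in G:
        # one row of the symmetric distance matrix
        print("  ", " ".join(f"{metric(G[gi], G[gj]):5.2f}" for gj in G))
```

Note how G3 and G4 are far apart in Euclidean terms on other pairs but very close (0.22) under correlation distance: correlation rewards a shared trend, not shared magnitudes.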
15 Similarity Measurements: Pearson Correlation
For two profiles (vectors) x = (x1, ..., xn) and y = (y1, ..., yn):
r(x, y) = sum_i (x_i - mean(x)) (y_i - mean(y)) / sqrt( sum_i (x_i - mean(x))^2 * sum_i (y_i - mean(y))^2 )
r = +1: perfect positive linear relationship; r = -1: perfect negative linear relationship.
16 Similarity Measurements: Cosine Correlation
cos(x, y) = (x . y) / (|x| |y|) = sum_i x_i y_i / sqrt( sum_i x_i^2 * sum_i y_i^2 )
+1: vectors point in the same direction; -1: opposite directions. Unlike Pearson correlation, the vectors are not mean-centered first.
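The two similarity measures are closely related: Pearson correlation is the cosine similarity of the mean-centered profiles. A small Python sketch (illustrative values only, not from the slides):

```python
import math

def cosine_sim(x, y):
    # cosine of the angle between the raw vectors
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) *
                  math.sqrt(sum(b * b for b in y)))

def pearson_sim(x, y):
    # Pearson correlation = cosine similarity after mean-centering
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return cosine_sim([a - mx for a in x], [b - my for b in y])

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
print(pearson_sim(x, y))  # 1.0: perfect linear relationship
print(cosine_sim(x, y))   # 1.0: same direction from the origin

# shifting a profile leaves Pearson unchanged but lowers cosine similarity
print(pearson_sim(x, [a + 10 for a in x]))
print(cosine_sim(x, [a + 10 for a in x]))
```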
17 Hierarchical Clustering (HCL-1)
IDEA: iteratively combine genes into groups based on similar patterns of observed expression.
By combining genes with genes OR genes with groups, the algorithm produces a dendrogram of the hierarchy of relationships.
Display the data as a heatmap and dendrogram.
Cluster genes, samples, or both.
18 Hierarchical Clustering
(figures: dendrogram; Venn diagram of the clustered data)
19 Hierarchical Clustering
Merging (agglomerative): start with every measurement as a separate cluster, then combine.
Splitting (divisive): start with one large cluster, then split into smaller pieces.
What is the distance between two clusters?
20 Distance Between Clusters
- Single link: the shortest distance from any member of one cluster to any member of the other cluster.
- Complete link: the longest distance from any member of one cluster to any member of the other cluster.
- Average: distance between the averages of all points in each cluster.
- Ward: merge the pair of clusters that minimizes the increase in the total within-cluster sum of squares.
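The first three criteria can be sketched in a few lines of Python (the two toy clusters are illustrative values, not from the slides):

```python
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def single_link(c1, c2):
    # shortest pairwise distance between the two clusters
    return min(euclidean(p, q) for p in c1 for q in c2)

def complete_link(c1, c2):
    # longest pairwise distance between the two clusters
    return max(euclidean(p, q) for p in c1 for q in c2)

def average_link(c1, c2):
    # mean of all cross-cluster pairwise distances
    d = [euclidean(p, q) for p in c1 for q in c2]
    return sum(d) / len(d)

c1 = [(0.0, 0.0), (1.0, 0.0)]
c2 = [(3.0, 0.0), (5.0, 0.0)]
print(single_link(c1, c2))    # 2.0, from (1,0) to (3,0)
print(complete_link(c1, c2))  # 5.0, from (0,0) to (5,0)
print(average_link(c1, c2))   # 3.5, mean of 3, 5, 2, 4
```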
21 Hierarchical Clustering - Merging
(figure: gene expression time series clustered with Euclidean distance and average linking; dendrogram heights show the distance between clusters when combined)
22 (figure: gene expression time series clustered with Manhattan distance and average linking; dendrogram heights show the distance between clusters when combined)
23 Correlation Distance
24 Data Standardization
Data points are normalized with respect to mean and variance, "sphering" the data.
After sphering, Euclidean and correlation distance are equivalent (they rank pairs the same way).
Standardization makes sense if you are not interested in the size of the effects, but in the effect itself.
Results can be misleading for noisy data.
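The equivalence claim can be checked numerically: after sphering, the squared Euclidean distance is d^2 = 2n(1 - r), a fixed decreasing function of the correlation r. A Python sketch (profiles reused from slide 11):

```python
import math

def standardize(x):
    # "sphering": subtract the mean, divide by the (population) std. deviation
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((a - m) ** 2 for a in x) / n)
    return [(a - m) / sd for a in x]

def pearson(x, y):
    zx, zy = standardize(x), standardize(y)
    return sum(a * b for a, b in zip(zx, zy)) / len(x)

x = [1.2, 4.0, 5.0, 1.0]   # G1 from slide 11
y = [2.0, 2.5, 5.5, 6.0]   # G2 from slide 11
zx, zy = standardize(x), standardize(y)

# squared Euclidean distance between the sphered profiles
d2 = sum((a - b) ** 2 for a, b in zip(zx, zy))
print(d2, 2 * len(x) * (1 - pearson(x, y)))  # the two numbers agree
```

Since d^2 depends on the data only through r, sorting gene pairs by Euclidean distance on sphered data gives the same order as sorting by correlation.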
25 Distance Comments
Every clustering method is based SOLELY on the measure of distance or similarity.
E.g., correlation measures the linear association between two genes:
- What if the data are not properly transformed?
- What about outliers?
- What about saturation effects?
Even good data can be ruined with the wrong choice of distance metric.
26 Single Linkage Hierarchical Clustering: Worked Example
Initial data items A, B, C, D with distance matrix:
Dist   A    B    C    D
A      0   20    7    2
B     20    0   10   25
C      7   10    0    3
D      2   25    3    0

28 Step 1: the closest pair is (A, D) at distance 2; merge them into cluster AD.
Updated single-linkage distance matrix:
Dist  AD    B    C
AD     0   20    3
B     20    0   10
C      3   10    0

31 Step 2: the closest pair is (AD, C) at distance 3; merge them into cluster ADC.
Dist  ADC   B
ADC    0   10
B     10    0

34 Step 3: merge (ADC, B) at distance 10.

35 Final result: a dendrogram joining A and D at height 2, then C at height 3, then B at height 10.
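The worked example above can be reproduced by a short agglomerative loop; a Python sketch using the slide's A-D distances:

```python
# initial pairwise distances from the slides (items A, B, C, D)
dist = {
    frozenset("AB"): 20.0, frozenset("AC"): 7.0, frozenset("AD"): 2.0,
    frozenset("BC"): 10.0, frozenset("BD"): 25.0, frozenset("CD"): 3.0,
}

def d(c1, c2):
    # single linkage: minimum distance over all cross-cluster pairs
    return min(dist[frozenset({p, q})] for p in c1 for q in c2)

clusters = [{"A"}, {"B"}, {"C"}, {"D"}]
merges = []
while len(clusters) > 1:
    # find and merge the closest pair of current clusters
    i, j = min(((i, j) for i in range(len(clusters))
                for j in range(i + 1, len(clusters))),
               key=lambda ij: d(clusters[ij[0]], clusters[ij[1]]))
    h = d(clusters[i], clusters[j])
    merged = clusters[i] | clusters[j]
    merges.append((sorted(merged), h))
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

for members, h in merges:
    print(members, "merged at height", h)
# AD at 2, then ADC at 3, then ABCD at 10 - matching the slides
```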
36 Hierarchical Clustering
(animation frames, slides 36-43: genes 1-8 are merged step by step into a dendrogram)
44 Hierarchical Clustering
(figure: the completed dendrogram)
45 Hierarchical Clustering
The Leaf Ordering Problem: find the 'optimal' layout of branches for a given dendrogram architecture.
There are 2^(N-1) possible orderings of the branches.
For a small microarray dataset of 500 genes, that is about 1.6 x 10^150 branch configurations.
(figure: heatmap with genes as rows and samples as columns)
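The count follows because each of the N-1 internal nodes of a binary dendrogram can flip its two subtrees independently. A quick Python check of the slide's figure:

```python
import math

def leaf_orderings(n):
    # each of the n-1 internal nodes can swap its two subtrees
    # independently, giving 2^(n-1) possible leaf orderings
    return 2 ** (n - 1)

n = 500  # the slide's example dataset size
digits = int(math.log10(leaf_orderings(n)))  # order of magnitude
print(f"2^{n - 1} ~ 10^{digits}")  # about 1.6 * 10^150, as on the slide
```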
47 Hierarchical Clustering
Pros:
- Commonly used algorithm
- Simple and quick to calculate
Cons:
- Real genes probably do not have a hierarchical organization
48 Using Hierarchical Clustering
1. Choose which samples and genes to use in your analysis
2. Choose a similarity/distance metric
3. Choose the clustering direction
4. Choose a linkage method
5. Calculate the dendrogram
6. Choose a height/number of clusters for interpretation
7. Assess the results
8. Interpret the cluster structure
49 1. Choose which samples/genes to include
A very important step.
- Do you want housekeeping genes, or genes that didn't change, in your results?
- How do you handle replicates from the same sample? Noisy samples?
The dendrogram is a mess if everything is included in large datasets; use gene screening.
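One common screening rule is to keep only the most variable genes, which drops flat housekeeping profiles. A Python sketch (the toy expression values and gene names are hypothetical):

```python
def variance(x):
    m = sum(x) / len(x)
    return sum((a - m) ** 2 for a in x) / len(x)

def filter_genes(expr, keep=100):
    # gene screening: keep the `keep` most variable genes, discarding
    # flat profiles that would only clutter the dendrogram
    ranked = sorted(expr, key=lambda g: variance(expr[g]), reverse=True)
    return ranked[:keep]

# hypothetical toy matrix: two responsive genes, two nearly flat ones
expr = {
    "geneA": [0.1, 0.1, 0.1, 0.1],
    "geneB": [-1.0, 0.0, 1.5, 2.0],
    "geneC": [0.2, 0.2, 0.3, 0.2],
    "geneD": [2.0, -2.0, 1.0, -1.0],
}
print(filter_genes(expr, keep=2))  # the two most variable genes
```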
50 No Filtering
51 Filtering 100 relevant genes
52 2. Choose a distance metric
The metric should be a valid measure of the distance/similarity of genes.
Examples:
- Applying Euclidean distance to categorical data is invalid.
- A correlation metric applied to highly skewed data will give misleading results.
53 3. Choose the clustering direction
- Merging (agglomerative, bottom-up)
- Divisive (top-down): split so that the genes within each of the two resulting clusters are the most similar, maximizing the distance between the clusters.
54 Nearest Neighbor Algorithm
The Nearest Neighbor Algorithm is an agglomerative approach (bottom-up).
It starts with n nodes (n is the size of our sample), merges the 2 most similar nodes at each step, and stops when the desired number of clusters is reached.
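The stopping rule above can be sketched directly: keep merging the closest pair until k clusters remain. A Python sketch using single linkage as the "most similar" criterion (the 2-D points are illustrative, not from the slides):

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest_neighbor_clusters(points, k):
    # agglomerative: start with every point as its own cluster, repeatedly
    # merge the two closest clusters, stop when k clusters remain
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(p, q)
                        for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

pts = [(0.0, 0.0), (0.5, 0.0), (5.0, 5.0), (5.5, 5.0), (10.0, 0.0)]
# three clusters: the two tight pairs, plus the lone outlier (10, 0)
print(nearest_neighbor_clusters(pts, k=3))
```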
55-60 Nearest Neighbor (figures): Level 3, k = 6 clusters; Level 4, k = 5; Level 5, k = 4; Level 6, k = 3; Level 7, k = 2; Level 8, k = 1 cluster.
61 Hierarchical Clustering Keys
1. Calculate the similarity between all possible combinations of two profiles.
2. Group the two most similar clusters together to form a new cluster.
3. Calculate the similarity between the new cluster and all remaining clusters; repeat.
62 Hierarchical Clustering
Merge which pair of clusters?
(figure: three candidate clusters C1, C2, C3)
63 Hierarchical Clustering: Single Linkage
Dissimilarity between two clusters C1 and C2 = minimum dissimilarity between the members of the two clusters.
Tends to generate "long chains".

64 Hierarchical Clustering: Complete Linkage
Dissimilarity between two clusters = maximum dissimilarity between the members of the two clusters.
Tends to generate compact "clumps".

65 Hierarchical Clustering: Average Linkage
Dissimilarity between two clusters = average of the distances over all pairs of objects (one from each cluster).

66 Hierarchical Clustering: Average Group Linkage
Dissimilarity between two clusters = distance between the two cluster means.
67 Which one?
Both directions are "step-wise" optimal: at each step the optimal split or merge is performed. That doesn't mean the final result is optimal.
Merging (agglomerative):
- Computationally simple
- Precise at the bottom of the tree
- Good for many small clusters
Divisive:
- More complex, but more precise at the top of the tree
- Good for looking at large and/or few clusters
For gene expression applications, divisive makes more sense.