Lecture 10. Clustering Algorithms
The Chinese University of Hong Kong CSCI3220 Algorithms for Bioinformatics
Lecture outline
- Numeric datasets and clustering
- Some clustering algorithms: hierarchical clustering, dendrograms, k-means
- Subspace and bi-clustering algorithms
Last update: 24-Aug-2017 CSCI3220 Algorithms for Bioinformatics | Kevin Yip-cse-cuhk | Fall 2017
Part 1: Numeric Datasets and Clustering
Numeric datasets in bioinformatics
So far we have mainly studied problems about biological sequences. Sequences represent the static state of an organism, like a program and its data stored on disk. Numeric measurements represent the dynamic state, like a program and its data loaded into memory at run time.
Gene expression
Protein abundance is the most direct way to measure the activity of a protein-coding gene, but little such data is available because the experiments are difficult. mRNA levels are not ideal indicators of gene activity: mRNA level and protein level are only weakly correlated, due to mRNA degradation, translational efficiency, post-translational modifications, and so on. However, mRNA levels are very easy to measure, and high-throughput experiments can measure the mRNA levels of many genes at a time: microarrays and RNA (cDNA) sequencing.
RNA sequencing
RNA sequencing has become a standard technology for measuring mRNA levels: convert RNAs back to cDNAs, sequence them, and identify which genes they correspond to. It is "digital" in that an expression level is represented by a read count, and no prior knowledge about the sequences is needed. However, if a read's sequence is not unique to one gene, we cannot determine which gene it comes from.
RNA sequencing
Image credit: Wang et al., Nature Reviews Genetics 10(1):57-63, (2009)
Processing RNA-seq data
There are many steps, and we will not go into the details: quality check, read trimming and filtering, read mapping (BWT, suffix array, etc.), data normalization, ...
Gene expression data
The final form of data from a microarray or RNA-seq experiment is a matrix of real numbers. Each row corresponds to a gene; each column corresponds to a sample/experiment: a particular condition, or a cell type (e.g., cancer). Questions:
- Are there genes that show similar changes of their expression levels across experiments? Such genes may have related functions.
- Are there samples with a similar set of genes expressed? Such samples may be of the same type.
Clustering of gene expression data
Clustering: grouping of related objects into clusters. An object can be a gene or a sample, and usually clustering is done on both: when genes are the objects, each sample is an attribute; when samples are the objects, each gene is an attribute. Goals: similar objects end up in the same cluster, and dissimilar objects end up in different clusters. One can define a scoring function to evaluate how good a set of clusters is. Most clustering problems are NP-hard, so we will study heuristic algorithms.
Heatmap and clustering results
Color: expression level. Clustering is applied to both samples and genes. Image credit: Borries and Wang, Computational Statistics & Data Analysis 53(12), (2009); Alizadeh et al., Nature 403(6769), (2000)
Part 2: Some Clustering Algorithms
Hierarchical clustering
One of the most commonly used clustering algorithms is agglomerative hierarchical clustering (agglomerative: merging; there are also divisive hierarchical clustering algorithms). The algorithm:
1. Treat each object as a cluster by itself
2. Compute the distance between every pair of clusters
3. Merge the two closest clusters
4. Re-compute the distances between the merged cluster and each other cluster
5. Repeat steps 3 and 4 until only one cluster is left
This is the same as UPGMA, but without the phylogenetic context.
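The merge loop above can be sketched in Python. This is a minimal single-link version on 2D points; the function and variable names are ours, not from the lecture, and no attempt is made at efficiency (that topic comes later).

```python
# A minimal sketch of agglomerative hierarchical clustering (single-link),
# following steps 1-5 above. Data and names are illustrative.
from math import dist

def single_link_merges(points):
    """Merge the two closest clusters until one remains; return the merge order."""
    clusters = [[i] for i in range(len(points))]          # step 1: singletons
    merges = []
    while len(clusters) > 1:
        # steps 2-3: find the closest pair of clusters
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist(points[i], points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((sorted(clusters[a]), sorted(clusters[b]), d))
        clusters[a] = clusters[a] + clusters[b]           # step 4: merged cluster
        del clusters[b]                                   # step 5: loop again
    return merges

pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
for left, right, d in single_link_merges(pts):
    print(left, "+", right, "at distance", round(d, 3))
```

Recording the merge order and merge distances is exactly the information a dendrogram visualizes.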
Hierarchical clustering
2D illustration: each point (A-F) is a gene, and the coordinates of a point are the expression values of the gene in two samples (Sample 1 and Sample 2). The merge order is drawn as a dendrogram (similar to a phylogenetic tree).
Representing a dendrogram
Since a dendrogram is essentially a tree, we can represent it using any tree format, for example the Newick format: (((A,B),(C,(D,E))),F); The Newick string can also specify how the leaves should be ordered in a visualization: the string (F,((B,A),((D,E),C))); encodes the same merge order of the clusters but with the leaves ordered differently, so it corresponds to a dendrogram with a different left-to-right leaf order.
More details
Three questions:
1. How to compute the distance between two points?
2. How to compute the distance between two clusters?
3. How to perform these computations efficiently?
Distance
Most common: Euclidean distance
d(x_i1, x_i2) = √( Σ_{j=1..m} (x_{i1,j} − x_{i2,j})² )
where x_{i,j} is the expression level of the i-th object (say, gene) at the j-th attribute (say, sample), and m is the total number of attributes. The attributes need to be normalized.
Also common: (1 − Pearson correlation) / 2. Pearson correlation is a similarity measure with value between −1 and 1:
r(x_i1, x_i2) = Σ_{j=1..m} (x_{i1,j} − x̄_i1)(x_{i2,j} − x̄_i2) / ( √(Σ_{j=1..m} (x_{i1,j} − x̄_i1)²) · √(Σ_{j=1..m} (x_{i2,j} − x̄_i2)²) )
where x̄_i is the mean expression level of object i over all m attributes.
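Both measures translate directly from the formulas. A sketch with our own function names, taking plain Python lists of equal length:

```python
# Euclidean distance, Pearson correlation, and the (1 - r) / 2 distance.
from math import sqrt

def euclidean(x, y):
    return sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

def correlation_distance(x, y):
    # (1 - r) / 2 maps r in [-1, 1] to a distance in [0, 1]
    return (1 - pearson(x, y)) / 2

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]           # perfectly correlated but not close
print(euclidean(x, y))              # 5.477...
print(correlation_distance(x, y))   # 0.0
```

The example already hints at the next slide: x and y are far apart in Euclidean terms yet have correlation distance 0.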
Euclidean distance vs. correlation
Two points have a small Euclidean distance if their attribute values are close (but not necessarily correlated). Two points have a large Pearson correlation if their attribute values follow consistent trends (but are not necessarily close).
Which one to use?
Sometimes absolute expression values are more important. Example: when there is a set of homogeneous samples (e.g., all of a certain cancer type), and the goal is to find genes that are all highly or lowly expressed. Usually, though, the increase-decrease trend matters more than the absolute expression values. Example: when detecting changes between two sets of samples or across a number of time points.
Similarity between two clusters
Several schemes (using Euclidean distance as an example):
- Average-link: average over all pairs of points (used by UPGMA)
- Single-link: closest among all pairs of points
- Complete-link: farthest among all pairs of points
- Centroid-link: distance between the centroids
Similarity between two clusters
- Average-link: equal vote by all members of the clusters, preferring to merge clusters liked by many pairs
- Single-link: merges two clusters even if just one pair likes it very much
- Complete-link: does not merge two clusters if even one pair dislikes it
- Centroid-link: similar to average-link, but easier to compute
Similarity between two clusters
Suppose clusters C1 and C2 have already been formed, with E and I still singleton clusters.
- Average-link prefers to merge I and C2 next, as their points are close on average
- Single-link prefers to merge C1 and E next, as C and E are very close
- Complete-link prefers to merge I and C2 next, as I is not too far from F, G or H (compared with A-E, C-H, E-H, etc.)
- Centroid-link prefers to merge C1 and C2 next, as their centroids are close (and not much affected by the long distance between C and H)
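The four schemes can be sketched as functions over two clusters of points. A toy illustration with our own function names, not the lecture's code:

```python
# The four linkage schemes, each mapping two point sets to a distance.
from math import dist

def average_link(c1, c2):
    return sum(dist(p, q) for p in c1 for q in c2) / (len(c1) * len(c2))

def single_link(c1, c2):
    return min(dist(p, q) for p in c1 for q in c2)

def complete_link(c1, c2):
    return max(dist(p, q) for p in c1 for q in c2)

def centroid_link(c1, c2):
    cen = lambda c: tuple(sum(v) / len(c) for v in zip(*c))
    return dist(cen(c1), cen(c2))

c1 = [(0, 0), (0, 2)]
c2 = [(3, 0), (3, 2)]
print(single_link(c1, c2), complete_link(c1, c2))   # 3.0 3.605...
```

On the same pair of clusters the four schemes generally disagree, which is exactly why they prefer different merges in the example above.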
Updating
To determine which two clusters are most similar, we need to compute the distance between every pair of clusters. At the beginning, this involves O(n²) computations for n objects, followed by a way to find the smallest value, either:
- Linear scan, which takes O(n²) time, OR
- Sorting, which takes O(n² log n²) = O(n² log n) time
After a merge, we need to remove the distances involving the two merged clusters and add the distances between the new cluster and every other cluster: O(n) between-cluster distance calculations (assume each takes constant time for now; we will come back to this), followed by either:
- Linear scan of the new list, which takes O(n²) time, OR
- Re-sorting, which takes O(n² log n) time, OR
- Binary search plus removing/inserting distances, which takes O(n log n²) = O(n log n) time
Updating
Summary:
- At the beginning: linear scan O(n²) time, OR sorting O(n² log n) time
- After each merge: linear scan O(n²) time, OR re-sorting O(n² log n) time, OR binary search with removals/insertions O(n log n) time
- In total: linear scan O(n³) time; maintaining a sorted list O(n² log n) time
Can these be done faster?
Heap
A heap (also called a priority queue) maintains the minimum of a list of numbers without sorting. Ideas: build a binary tree structure with each node storing one of the numbers, such that the root of any sub-tree is smaller than all other nodes in that sub-tree, and store the tree in an array that allows efficient updates.
Heap
Example (each value is the distance between two clusters), shown as a tree and as the corresponding array: 1, 4, 9, 5, 6, 10, 13, 12, 8, 11. Notice that the array is not entirely sorted. If the first entry has index 0, then the children of the node at entry i are at entries 2i+1 and 2i+2, and the smallest value is always at the first entry of the array.
Constructing a heap
Starting with any input array, from the node at entry ⌊N/2⌋−1 down to the node at entry 0, iteratively swap a node with its smallest child while that child is smaller than the node. Here N is the total number of values, which equals n(n−1)/2, the number of pairs for n clusters. Example input: 13, 5, 10, 4, 11, 1, 9, 12, 8, 6
Constructing a heap (worked trace on the example input):
13 5 10 4 11 1 9 12 8 6 → 13 5 10 4 6 1 9 12 8 11 → 13 5 1 4 6 10 9 12 8 11 → 13 4 1 5 6 10 9 12 8 11 → 1 4 9 5 6 10 13 12 8 11
Constructing a heap
Input: 13, 5, 10, 4, 11, 1, 9, 12, 8, 6. Output: 1, 4, 9, 5, 6, 10, 13, 12, 8, 11 (shown as trees in the figure).
Constructing a heap
Time needed: apparently, for each of the O(N) nodes, up to O(log N) swaps are needed, so O(N log N) time in total, same as sorting. However, a careful amortized analysis shows that only O(N) time is needed. Why? Because only one node (the root) can take log N swaps, two nodes can take log N − 1 swaps, and so on. For example, for 15 nodes: N log N = 15 log₂ 15 ≈ 15(3) = 45, but the amortized bound gives 1(3) + 2(2) + 4(1) + 8(0) = 11 swaps.
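The bottom-up construction can be sketched as follows, using the 0-indexed layout from the earlier slide (children of entry i at 2i+1 and 2i+2). On the example input it reproduces the heap shown above; names are ours.

```python
# Bottom-up min-heap construction: sift each internal node down,
# starting from the last internal node and moving toward the root.
def sift_down(h, i, n):
    while True:
        small, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and h[l] < h[small]:
            small = l
        if r < n and h[r] < h[small]:
            small = r
        if small == i:
            return
        h[i], h[small] = h[small], h[i]
        i = small

def build_heap(h):
    n = len(h)
    for i in range(n // 2 - 1, -1, -1):   # leaves need no sifting
        sift_down(h, i, n)
    return h

print(build_heap([13, 5, 10, 4, 11, 1, 9, 12, 8, 6]))
# [1, 4, 9, 5, 6, 10, 13, 12, 8, 11]
```

Nodes near the bottom dominate the count but barely move, which is the intuition behind the O(N) amortized bound.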
Deleting a value
In many applications of heaps, only the value at the root is ever deleted. In our clustering application, for each cluster we keep track of the heap entries holding the distances related to it, so after two clusters are merged we remove all these distance values from the heap. In both cases, deletion is done by moving the last value into the deleted entry and then re-"heapifying".
Deleting a value
Example: deleting 4 from the heap 1, 4, 9, 5, 6, 10, 13, 12, 8, 11: the last value 11 moves into 4's entry and sifts down, giving 1, 5, 9, 8, 6, 10, 13, 12, 11. Each deletion takes O(log N) time. After each merge of two clusters, we need to remove O(n) distances from the heap: O(n log N) = O(n log n) time in total.
Inserting a new value
Add the new value to the end of the array, then iteratively swap it with its parent until the parent is smaller. Example: adding the value 3 to the heap 1, 5, 9, 8, 6, 10, 13, 12, 11 gives 1, 3, 9, 8, 5, 10, 13, 12, 11, 6. Each insertion takes O(log N) time. After each merge of two clusters, we need to insert O(n) distances into the heap: O(n log N) = O(n log n) time in total.
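Insertion and deletion at an arbitrary entry can be sketched together (names are ours). Running the two examples above — deleting 4, then inserting 3 — reproduces the arrays from the slides:

```python
# Heap insertion (sift up) and deletion at any entry: move the last value
# into the hole, then re-heapify up or down as needed.
def sift_up(h, i):
    while i > 0 and h[i] < h[(i - 1) // 2]:
        h[i], h[(i - 1) // 2] = h[(i - 1) // 2], h[i]
        i = (i - 1) // 2

def sift_down(h, i):
    n = len(h)
    while True:
        small, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and h[l] < h[small]:
            small = l
        if r < n and h[r] < h[small]:
            small = r
        if small == i:
            return
        h[i], h[small] = h[small], h[i]
        i = small

def insert(h, v):
    h.append(v)
    sift_up(h, len(h) - 1)

def delete_at(h, i):
    h[i] = h[-1]
    h.pop()
    if i < len(h):
        sift_up(h, i)      # moved value may be smaller than its parent...
        sift_down(h, i)    # ...or larger than a child; only one loop acts

h = [1, 4, 9, 5, 6, 10, 13, 12, 8, 11]
delete_at(h, 1)            # delete the value 4
print(h)                   # [1, 5, 9, 8, 6, 10, 13, 12, 11]
insert(h, 3)
print(h)                   # [1, 3, 9, 8, 5, 10, 13, 12, 11, 6]
```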
Total time and space
Space: O(N) = O(n²). Initial construction: O(N) = O(n²) time. After each merge: O(n log n) time, with O(n) merges in total. Therefore O(n² log n) time is needed overall. We now study another structure that needs O(n²) space but only O(n²) time in total.
Quad tree
Proposed in Eppstein, Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, (1998). Main idea: group the entries of the distance matrix iteratively to form a tree, with the minimum distance over all pairs in a sub-tree stored at its root. Example: distances between 9 objects
(Figure: a quad tree built on the pairwise distance matrix of the 9 objects. Each level above the matrix halves the number of rows and columns, with each cell storing the minimum of the 2×2 block below it, so the minimum of all pairwise distances appears at the top.)
Updating the quad tree
After a merge, the algorithm needs to:
1. Delete the distance values in two rows and two columns
2. Add distance values back into one row and one column (if we do not want to compact the tree, simply write them into one of the two old rows/columns)
3. Re-compute the minimum values at the upper levels
Example: merging clusters 5 and 6, given the new distances between the merged cluster and each other cluster.
Merging clusters 5 and 6
(Figure: the rows and columns of clusters 5 and 6 in the distance matrix are replaced by a single row and column for the merged cluster 5,6, filled with the new distances, and the minima on the upper levels of the quad tree are re-computed.)
Space and time analysis
Space needed: O(n²). Initial construction: O(n² + n²/4 + n²/16 + ...) = O(n²) time. After each merge, the number of values to update is O(2n + 2n/2 + 2n/4 + ...) = O(n), each taking a constant amount of time. Time needed for the whole clustering process: O(n²), more efficient than using a heap. There are data structures that require less space but more time.
Computing between-cluster distances
If Ci and Cj are merged, how do we compute d(CiCj, Ck) from d(Ci, Ck) and d(Cj, Ck)?
- Single-link: d(CiCj, Ck) = min{d(Ci,Ck), d(Cj,Ck)}
- Complete-link: d(CiCj, Ck) = max{d(Ci,Ck), d(Cj,Ck)}
- Average-link: d(CiCj, Ck) = [d(Ci,Ck)|Ci||Ck| + d(Cj,Ck)|Cj||Ck|] / [(|Ci|+|Cj|)|Ck|]
- Centroid-link: Cen(CiCj) = [Cen(Ci)|Ci| + Cen(Cj)|Cj|] / (|Ci|+|Cj|)
All of these updates can be performed in constant time.
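These update rules translate directly into constant-time functions. A sketch with our own names, where d_ik denotes d(Ci, Ck) and n_i denotes |Ci|:

```python
# Constant-time linkage updates after merging Ci and Cj.
def update_single(d_ik, d_jk):
    return min(d_ik, d_jk)

def update_complete(d_ik, d_jk):
    return max(d_ik, d_jk)

def update_average(d_ik, d_jk, n_i, n_j):
    # the |Ck| factors in the slide's formula cancel out
    return (d_ik * n_i + d_jk * n_j) / (n_i + n_j)

def update_centroid(cen_i, cen_j, n_i, n_j):
    # new centroid of the merged cluster; the distance to any Ck
    # is then computed from this centroid
    return tuple((a * n_i + b * n_j) / (n_i + n_j) for a, b in zip(cen_i, cen_j))

print(update_average(2.0, 4.0, 1, 3))                  # 3.5
print(update_centroid((0.0, 0.0), (4.0, 4.0), 1, 1))   # (2.0, 2.0)
```

This is what justifies the "constant time per between-cluster distance" assumption made in the earlier running-time analysis.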
A minor issue
What if the number of rows/columns in the distance matrix is not a power of 2? Just add extra rows to the quad tree and fill them with a value that can never be the minimum (e.g., ∞); or simply ignore these rows, which saves some space compared with the above solution.
A minor issue
(Figure: the earlier quad tree example with the distance matrix padded with extra rows/columns so that its dimension becomes a power of 2.)
K-means
K-means is another classical clustering algorithm (MacQueen, Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, (1967)). Instead of hierarchically merging clusters, k-means iteratively partitions the objects into k clusters by repeating two steps until the assignment stabilizes:
1. Determine the cluster representatives (randomly at initialization; centroids of the current members in subsequent iterations)
2. Assign each object to the cluster with the closest representative
Example (k=2)
(Figure: starting from two random representatives, the assignment and re-determination steps alternate over the points A-I until the two clusters C1 and C2 no longer change.)
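The two alternating steps can be sketched as follows. A minimal version; for reproducibility the initial representatives are taken as the first k points rather than at random as on the slides:

```python
# A minimal k-means sketch: alternate assignment and centroid updates
# until the representatives stop changing.
from math import dist

def kmeans(points, k, iters=100):
    reps = [points[i] for i in range(k)]
    clusters = []
    for _ in range(iters):
        # assignment step: each point goes to the closest representative
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist(p, reps[c]))].append(p)
        # update step: each representative becomes its cluster's centroid
        new_reps = [tuple(sum(v) / len(c) for v in zip(*c)) if c else reps[i]
                    for i, c in enumerate(clusters)]
        if new_reps == reps:      # stabilized
            break
        reps = new_reps
    return reps, clusters

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
reps, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))   # [3, 3]
```

Note the contrast with hierarchical clustering: a point assigned to one cluster here can move to the other in a later iteration.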
Hierarchical clustering vs. k-means
Hierarchical clustering:
- Advantages: provides the whole clustering tree (dendrogram), which can be cut to get any number of clusters; no need to pre-determine k
- Disadvantages: slow; high memory consumption; once assigned, an object always stays in a cluster
K-means:
- Advantages: fast; low memory consumption; an object can move to another cluster
- Disadvantages: provides only the final clusters; needs a pre-determined k
There are hundreds of other clustering algorithms proposed: more efficient, allowing other data types, considering domain knowledge, model-based, finding clusters in subspaces (coming up next), density-based, less sensitive to outliers, ...
Embedded clusters
Euclidean distance and Pearson correlation consider all attributes equally, but it is possible that for each cluster, only some attributes are relevant. Image credit: Pomeroy et al., Nature 415(6870), (2002)
Finding clusters in a subspace
One way is not to distinguish between objects and attributes, but to find a subset of rows and a subset of columns (a bicluster) such that the values inside the bicluster exhibit some coherent pattern. Here we study one bi-clustering algorithm: Cheng and Church, 8th Annual International Conference on Intelligent Systems for Molecular Biology, (2000)
Cheng and Church biclustering
Notation: I is a subset of the rows, J is a subset of the columns, and (I, J) defines a bicluster. Model: each value a_ij (at row i and column j) in a cluster is influenced by the background of the whole cluster, the effect of the i-th row, and the effect of the j-th column.
Cheng and Church biclustering
Assumption: in the ideal case, a_ij = a_iJ + a_Ij − a_IJ, where
- a_iJ = (1/|J|) Σ_{j∈J} a_ij is the mean of the cluster values at row i
- a_Ij = (1/|I|) Σ_{i∈I} a_ij is the mean of the cluster values at column j
- a_IJ = (1/(|I||J|)) Σ_{i∈I, j∈J} a_ij is the mean of all values in the cluster
The goal of the algorithm is to find I and J such that the mean squared residue
H(I, J) = (1/(|I||J|)) Σ_{i∈I, j∈J} (a_ij − a_iJ − a_Ij + a_IJ)²
is minimized.
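The mean squared residue can be computed directly from its definition. A sketch with our own function name, where a is a list-of-lists matrix and I, J are lists of row and column indices:

```python
# Mean squared residue H(I, J) of the bicluster (I, J) of matrix a.
def msr(a, I, J):
    aIJ = sum(a[i][j] for i in I for j in J) / (len(I) * len(J))
    aiJ = {i: sum(a[i][j] for j in J) / len(J) for i in I}
    aIj = {j: sum(a[i][j] for i in I) / len(I) for j in J}
    return sum((a[i][j] - aiJ[i] - aIj[j] + aIJ) ** 2
               for i in I for j in J) / (len(I) * len(J))

# Purely additive row + column effects give residue 0
a = [[r + c for c in (0, 1, 5)] for r in (0, 2, 10)]
print(msr(a, [0, 1, 2], [0, 1, 2]))   # 0.0
```

The additive test matrix illustrates the model's ideal case: every cell satisfies a_ij = a_iJ + a_Ij − a_IJ exactly, so H vanishes.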
Example
Suppose the values in a cluster are generated by additive row and column effects (figure). Then for every cell the residue is zero; for instance, a_11 − a_1J − a_I1 + a_IJ = 0. You can verify this for the other i's and j's.
Why this model?
It assumes the expression level of a gene in a particular sample is determined by three additive effects: the cluster background (e.g., the activity of the whole functional pathway), the gene (e.g., some genes are intrinsically more active), and the sample (e.g., in some samples, all the genes in the cluster are activated).
Algorithm
How to find clusters (i.e., (I, J)) with small H values? It has been proved that finding the largest cluster with H below a fixed threshold is NP-hard. Heuristic method:
1. Randomly determine I and J
2. Try all possible additions/deletions of one row/column, and accept the one that results in the smallest H (some variations allow addition or deletion only, or add/delete multiple rows/columns at a time)
3. Repeat step 2 until H no longer decreases or it falls below the threshold
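The heuristic can be sketched as a greedy loop. This is a simplification of Cheng and Church's algorithm, with our own names: it recomputes H from scratch for every candidate move, whereas the original uses faster incremental updates, and it starts from a given (I, J) rather than a random one.

```python
# Greedy bicluster refinement: toggle one row or column at a time
# while the mean squared residue H strictly decreases.
def msr(a, I, J):
    aIJ = sum(a[i][j] for i in I for j in J) / (len(I) * len(J))
    aiJ = {i: sum(a[i][j] for j in J) / len(J) for i in I}
    aIj = {j: sum(a[i][j] for i in I) / len(I) for j in J}
    return sum((a[i][j] - aiJ[i] - aIj[j] + aIJ) ** 2
               for i in I for j in J) / (len(I) * len(J))

def greedy_bicluster(a, I, J, tol=1e-9):
    while True:
        h = msr(a, I, J)
        best = (h, I, J)
        for i in range(len(a)):                  # toggle row i in/out of I
            nI = [r for r in I if r != i] if i in I else sorted(I + [i])
            if len(nI) >= 2 and msr(a, nI, J) < best[0]:
                best = (msr(a, nI, J), nI, J)
        for j in range(len(a[0])):               # toggle column j in/out of J
            nJ = [c for c in J if c != j] if j in J else sorted(J + [j])
            if len(nJ) >= 2 and msr(a, I, nJ) < best[0]:
                best = (msr(a, I, nJ), I, nJ)
        if best[0] >= h - tol:                   # no improving move
            return I, J, h
        _, I, J = best

# Rows 0-2 are purely additive; row 3 is noise and should be dropped
a = [[i + j for j in range(4)] for i in range(3)] + [[9, 0, 9, 0]]
I, J, h = greedy_bicluster(a, [0, 1, 2, 3], [0, 1, 2, 3])
print(I, round(h, 6))   # [0, 1, 2] 0.0
```

The minimum size of 2 rows and 2 columns guards against the degenerate single-row, single-column cluster mentioned on the next slide.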
More details
Obviously, if a cluster contains only one row and one column, the residue H must be 0, so one can limit the minimum number of rows/columns. A cluster of genes that do not change their expression values across samples may not be interesting, so the variance of expression values across samples can be used as a secondary score. How to find more than one cluster? After finding a cluster, replace its values with random numbers before running the algorithm again.
Some clusters found
Each line is a gene; the horizontal axis represents different time points.
Epilogue: Case Study, Summary and Further Readings
Case study: success stories
Clustering of gene expression data has led to the discovery of disease subtypes and of key genes in some biological processes. Example 1: automatic identification of the cancer subtypes acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) without prior knowledge of these classes (shown with 2 clusters and with 4 clusters). Image credit: Golub et al., Science 286(5439), (1999)
Case study: success stories
Example 2: identification of genes involved in the response to external stress. Each triangle: multiple time points after an environmental change, such as heat shock or amino acid starvation. Image credit: Gasch et al., Molecular Biology of the Cell 11(12), (2000)
Case study: success stories
Example 3: segmentation of the human genome into distinct region classes. Image credit: The ENCODE Project Consortium, Nature 489(7414):57-74, (2012)
Summary
Clustering is the process of grouping similar things into clusters. It has many applications in bioinformatics, the best-known being gene expression analysis. Classical clustering algorithms: agglomerative hierarchical clustering, k-means, and subspace/bi-clustering algorithms.
Further readings
Leonard Kaufman and Peter J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis, Wiley-Interscience, 1990. A classical reference book on cluster analysis.