Clustering (Part II) 10/07/09
Outline: affinity propagation; quality evaluation.
Affinity propagation: main idea. Each data point is either an exemplar (cluster center) or a non-exemplar point. Messages are passed between candidate exemplars and the other data points. The number of clusters is determined automatically by the algorithm.
Responsibility r(j,k). Data point j informs candidate exemplar k how well-suited k is to serve as j's exemplar, taking other candidate exemplars into account.
Availability a(j,k). Candidate exemplar k informs data point j how appropriate it would be for j to choose k as its exemplar.
Self-availability a(k,k). Candidate exemplar k evaluates how appropriate it is to serve as an exemplar, based on the support it receives from the other data points.
An iterative procedure: update r(j,k).
r(j,k) ← s(j,k) − max_{k′≠k} [ a(j,k′) + s(j,k′) ], where s(j,k) is the similarity between j and k.
An iterative procedure: update a(j,k).
a(j,k) ← min{ 0, r(k,k) + Σ_{j′∉{j,k}} max(0, r(j′,k)) }.
An iterative procedure: update a(k,k).
a(k,k) ← Σ_{j′≠k} max(0, r(j′,k)).
Step-by-step affinity propagation
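The message updates from the preceding slides can be sketched in vectorized form. This is an illustrative implementation, not the reference one: the damping factor and iteration count are assumptions added for numerical stability, and the function name is hypothetical.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=100):
    """Minimal affinity propagation on an n x n similarity matrix S,
    whose diagonal holds the 'preference' of each point to be an exemplar."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(j,k)
    A = np.zeros((n, n))  # availabilities a(j,k)
    for _ in range(iters):
        # r(j,k) <- s(j,k) - max_{k' != k} [a(j,k') + s(j,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first_max = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf          # mask the best k' per row
        second_max = AS.max(axis=1)
        Rnew = S - first_max[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second_max
        R = damping * R + (1 - damping) * Rnew

        # a(j,k) <- min{0, r(k,k) + sum_{j' not in {j,k}} max(0, r(j',k))}
        # a(k,k) <- sum_{j' != k} max(0, r(j',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))         # keep r(k,k) itself in the column sum
        Anew = Rp.sum(axis=0)[None, :] - Rp      # column sums minus own contribution
        dA = np.diag(Anew).copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew

    # Each point's exemplar maximizes a(j,k) + r(j,k).
    return np.argmax(A + R, axis=1)
```

With uniform moderate preferences on the diagonal, well-separated groups yield one exemplar per group, so the number of clusters emerges from the data rather than being fixed in advance.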
Applications: multi-exon gene detection in mouse. Expression levels at different exons within a gene are coregulated across different tissue types. 37 mouse tissues; 12 tiling arrays. (Frey et al. 2005)
“Algorithms for unsupervised classification or cluster analysis abound. Unfortunately however, algorithm development seems to be a preferred activity to algorithm evaluation among methodologists. … No consensus or clear guidelines exist to guide these decisions. Cluster analysis always produces clustering, but whether a pattern observed in the sample data characterizes a pattern present in the population remains an open question. Resampling-based methods can address this last point, but results indicate that most clusterings in microarray data sets are unlikely to reflect reproducible patterns or patterns in the overall population.” (Allison et al. 2006)
Stability of a cluster. Motivation: real clusters should be reproducible under perturbation (adding noise, omitting data, etc.). Procedure: perturb the observed data by adding noise; apply the clustering procedure to the perturbed data; repeat to generate a sample of clusterings. A global test, plus cluster-specific tests: R-index, D-index. (McShane et al. 2002)
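The perturb-and-recluster loop above can be sketched as follows. This is a hedged sketch: the function name, the fixed Gaussian noise level, and the caller-supplied `cluster_fn` are assumptions (McShane et al. tie the noise level to the data's residual variance).

```python
import numpy as np

def perturbation_sample(X, cluster_fn, n_reps=50, noise_sd=0.5, seed=0):
    """Perturb the data by adding Gaussian noise, re-cluster each
    perturbed copy, and collect the resulting label vectors."""
    rng = np.random.default_rng(seed)
    labelings = []
    for _ in range(n_reps):
        Xp = X + rng.normal(0.0, noise_sd, size=X.shape)
        labelings.append(cluster_fn(Xp))
    return labelings
```

The collected labelings are the raw material for the R- and D-indices described on the following slides.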
Where is the “truth”? “In the context of unsupervised learning, there is no such direct measure of success. It is difficult to ascertain the validity of inference drawn from the output of most unsupervised learning algorithms. One must often resort to heuristic arguments not only for motivating the algorithm, but also for judgments as to the quality of results. This uncomfortable situation has led to heavy proliferation of proposed methods, since effectiveness is a matter of opinion and cannot be verified directly.” (Hastie et al. 2001, The Elements of Statistical Learning)
Global test. Null hypothesis: the data come from a single multivariate Gaussian distribution. Procedure: consider the subspace spanned by the top principal components; estimate the distribution of nearest-neighbor distances; compare the observed data with data simulated under the null.
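The global test can be sketched as below. This is an illustrative version, not McShane et al.'s exact procedure: the function names, the number of components, the use of the mean nearest-neighbor distance as the summary statistic, and the simulation count are all assumptions.

```python
import numpy as np

def nn_distances(X):
    """Euclidean distance from each point to its nearest neighbor."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    return D.min(axis=1)

def global_gaussian_test(X, n_components=3, n_sim=200, seed=0):
    """Compare observed nearest-neighbor distances in the top-PC subspace
    with those of data simulated from a single fitted Gaussian."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # Project onto the top principal components via SVD.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Xc @ Vt[:n_components].T
    obs = np.mean(nn_distances(P))
    mean, cov = P.mean(axis=0), np.cov(P, rowvar=False)
    sims = np.array([
        np.mean(nn_distances(rng.multivariate_normal(mean, cov, size=P.shape[0])))
        for _ in range(n_sim)
    ])
    # Clustered data give unusually small average NN distance under the null.
    p_value = np.mean(sims <= obs)
    return obs, p_value
```

A small p-value indicates the observed points are packed more tightly than a single Gaussian would produce, i.e. some clustering structure is present.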
R-index. If cluster i contains n_i objects, then it contains m_i = n_i(n_i − 1)/2 pairs. Let c_i be the number of those pairs that fall in the same cluster when the perturbed data are re-clustered. r_i = c_i / m_i measures the robustness of cluster i. R-index = Σ_i c_i / Σ_i m_i measures the overall stability of the clustering.
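The definitions above translate directly into code; a minimal sketch (the function name is illustrative) given an original labeling and the labeling of the re-clustered perturbed data:

```python
import numpy as np
from itertools import combinations

def r_index(original, perturbed):
    """R-index of McShane et al. (2002): for each original cluster i with
    m_i = n_i(n_i - 1)/2 pairs, c_i counts the pairs still together after
    re-clustering the perturbed data; R = sum_i c_i / sum_i m_i."""
    original = np.asarray(original)
    perturbed = np.asarray(perturbed)
    total_pairs = 0   # sum of m_i
    together = 0      # sum of c_i
    for label in np.unique(original):
        members = np.where(original == label)[0]
        for j, k in combinations(members, 2):
            total_pairs += 1
            together += perturbed[j] == perturbed[k]
    return together / total_pairs
```

An R-index of 1 means every within-cluster pair survived the perturbation; values well below 1 flag unstable clusters.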
D-index. For each cluster in the original data, determine the closest cluster in the perturbed data. Calculate the average discrepancy between the original and perturbed clusters: omissions (original members lost) versus additions (new members gained). The D-index is the sum of the cluster-specific discrepancies.
Applications. 16 prostate cancer samples; 9 benign tumors; 6,500 genes. Hierarchical clustering is used to obtain 2, 3, and 4 clusters. Question: are these clusters reliable?
Issues with calculating the R- and D-indices. How large should the perturbation be? How should the significance level be quantified? What about nested consistency?
Biclustering
Motivation. 1D approach: to identify condition clusters, all genes are used, but probably only a few genes are differentially expressed.
Motivation. 1D approach: to identify gene clusters, all conditions are used, but a set of genes may be expressed only under a few conditions.
Motivation. Biclustering objective: to isolate genes that are co-expressed under a specific subset of conditions.
Coupled two-way clustering. An iterative procedure alternating two steps: within a cluster of conditions, search for gene clusters; using the features from a cluster of genes, search for condition clusters. (Getz et al. 2001)
SAMBA: a bipartite graph model. U = conditions, V = genes. (Tanay et al. 2002)
SAMBA: a bipartite graph model. U = conditions, V = genes; E = “respond” edges, i.e. differential expression. (Tanay et al. 2002)
SAMBA: a bipartite graph model. U = conditions, V = genes; E = “respond” edges, i.e. differential expression. A cluster is a subgraph (U′, V′, E′): a subset of coregulated genes V′ under a subset of conditions U′. (Tanay et al. 2002)
SAMBA algorithm. Goal: find the “heaviest” subgraphs H = (U′, V′, E′). (Tanay et al. 2002)
SAMBA algorithm. Goal: find the heavy subgraphs; a missing edge within H = (U′, V′, E′) lowers its weight. (Tanay et al. 2002)
SAMBA algorithm. p_{u,v}: probability of an edge between u and v expected at random; p_c: probability of an edge within a cluster (p_c > p_{u,v}). Compute a log-likelihood weight score for H = (U′, V′, E′). (Tanay et al. 2002)
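The weight on this slide can be written as a log-likelihood ratio: edges contribute log(p_c / p_{u,v}) and non-edges contribute log((1 − p_c) / (1 − p_{u,v})). A minimal sketch under that assumption (the function name and the caller-supplied `p_uv` lookup table are illustrative):

```python
import math

def subgraph_weight(U_sub, V_sub, edges, p_uv, p_c=0.9):
    """Log-likelihood-ratio weight of a bicluster H = (U', V', E').
    edges: set of (u, v) pairs present in the graph;
    p_uv: per-pair random-graph edge probability."""
    w = 0.0
    for u in U_sub:
        for v in V_sub:
            p = p_uv[(u, v)]
            if (u, v) in edges:
                w += math.log(p_c / p)       # edge more likely in a cluster
            else:
                w += math.log((1 - p_c) / (1 - p))  # missing edge is penalized
    return w
```

Since p_c > p_{u,v}, every present edge raises the weight and every missing edge lowers it, so heavy subgraphs are dense biclusters whose density is surprising under the random model.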
SAMBA algorithm. Finding the heaviest subgraph is an NP-hard problem. SAMBA uses a polynomial-time heuristic to search for heavy subgraphs H = (U′, V′, E′) efficiently. (Tanay et al. 2002)
Significance of the weight. Let H = (U′, V′, E′) be a subgraph. Fix U′ and randomly select a new gene set V″ of the same size as V′. The weights of the resulting subgraphs (U′, V″, E″) give a background distribution. Estimate a p-value by comparing log L(H) with this background distribution.
Model evaluation. Examine the p-value distribution of the top candidate clusters. If biological classification data are available, evaluate the purity of class membership within each bicluster.
Reading list. Frey and Dueck 2007: affinity propagation. McShane et al. 2002: clustering model evaluation. Tanay et al. 2002: SAMBA for biclustering.