Line detection
Assume a binary image. We use F(α, X) = 0 as the parametric equation of a curve, with a vector of parameters α = [α1, …, αm] and X = [x1, x2] the coordinates of a pixel in the image.
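The parametric test can be sketched directly for the two-parameter lines used later in these slides, of the form α1·x1 + α2·x2 = 1. The function names `F`, `on_curve`, and the tolerance `tol` are illustrative, not from the slides:

```python
# Minimal sketch of the curve test F(alpha, X) = 0 for lines of the
# form a1*x1 + a2*x2 = 1 (names and tolerance are illustrative).

def F(alpha, X):
    """F(alpha, X) = a1*x1 + a2*x2 - 1; zero when X lies on the line."""
    a1, a2 = alpha
    x1, x2 = X
    return a1 * x1 + a2 * x2 - 1.0

def on_curve(alpha, X, tol=1e-9):
    """True when pixel X lies on the line parameterized by alpha."""
    return abs(F(alpha, X)) < tol

print(on_curve((1.0, 1.0), (0.5, 0.5)))  # (0.5, 0.5) lies on x1 + x2 = 1
print(on_curve((1.0, 1.0), (0.5, 0.9)))
```

A pixel is assigned to a curve exactly when F vanishes (up to tolerance), which is what the mapping rule on the next slide exploits.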
Current clustering problem
The mathematical equations of the four lines are L1: x1 + x2 = 1, L2: x1 − x2 = 1, L3: −x1 − x2 = 1, and L4: −x1 + x2 = 1. Each line segment in the actual digital binary image consists of 100 pixels. The problem is to detect the parametric vectors (1, 1), (1, −1), (−1, −1), and (−1, 1), so that each line can be labeled according to the defined mapping rule.
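A small sketch of this setup: sample 100 pixels from each of the four lines and label every pixel by the parameter vector whose line it satisfies. The helper names `sample_line` and `label` are mine, not from the slides:

```python
import random

# Sketch of the four-line problem: a1*x1 + a2*x2 = 1 with the
# parameter vectors listed above (helper names are illustrative).
random.seed(0)  # reproducible sketch

LINES = [(1, 1), (1, -1), (-1, -1), (-1, 1)]  # L1..L4

def sample_line(alpha, n=100):
    """Sample n points on the line a1*x1 + a2*x2 = 1."""
    a1, a2 = alpha
    pts = []
    for _ in range(n):
        x1 = random.uniform(-1, 1)
        x2 = (1 - a1 * x1) / a2  # solve the line equation for x2
        pts.append((x1, x2))
    return pts

def label(X, tol=1e-9):
    """Index of the first parameter vector whose line contains X."""
    for k, (a1, a2) in enumerate(LINES):
        if abs(a1 * X[0] + a2 * X[1] - 1) < tol:
            return k
    return None

pixels = {k: sample_line(alpha) for k, alpha in enumerate(LINES)}
assert all(label(p) == k for k, pts in pixels.items() for p in pts)
```

Every sampled pixel maps back to the parameter vector of the line it came from, which is the labeling the clustering is meant to recover.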
Feature space transformation
All the pixels mapped to the same parametric vector constitute one curve; each parametric vector stands for one curve in the image. The feature space in this section refers to the parametric vector space. Thus, we transform each input sample pixel X = [x1, x2] into Z = [z1, z2] (which can be treated as a "virtual" data point in the feature space). Hence, the learning process is performed in the feature space with Z as the input stimulus.
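The slide does not spell out the transform from X to Z. One natural choice under the line model α·X = 1 (an assumption here, not taken from the slides) is to map each pixel to the minimum-norm parameter vector consistent with it, Z = X / ‖X‖²:

```python
def to_feature_space(X):
    """Map pixel X to a virtual point Z: the minimum-norm alpha
    satisfying alpha . X = 1. This concrete formula is an
    illustrative assumption; the slides omit the transform."""
    x1, x2 = X
    n2 = x1 * x1 + x2 * x2
    return (x1 / n2, x2 / n2)

# The foot of the perpendicular from the origin to x1 + x2 = 1
# maps exactly onto that line's parameter vector (1, 1):
print(to_feature_space((0.5, 0.5)))  # -> (1.0, 1.0)
```

Under this choice, pixels of one line produce virtual points concentrated around that line's parameter vector, so clustering in Z-space recovers the parameter vectors.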
Questions
About the prior knowledge:
– What if we do not know that the data points are distributed along lines?
– What if we do not transform the original feature space into the parametric space?
What results will be achieved if we apply the clustering algorithm only in the original feature space?
– Can the distance between data points reveal the "real" distance?
Discussions
Are there good solutions for automatic transformation of the feature space?
– PCA, ICA, LDA
How should we deal with a high-dimensional feature space?
– Kernel-based methods, hyperplanes
How should we measure the similarity of data points?
– Should every feature dimension contribute equally to the similarity?
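As an illustration of the first discussion point, here is a minimal 2-D PCA sketch: project the data onto the leading eigenvector of the 2×2 sample covariance matrix, computed in closed form (the function name `pca_1d` and the example data are mine):

```python
import math

# Minimal 2-D PCA sketch: project points onto the first principal
# component, using the closed-form eigendecomposition of a 2x2
# covariance matrix.

def pca_1d(points):
    """Project 2-D points onto their first principal component."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = sum((x - mx) ** 2 for x, _ in points) / (n - 1)   # var(x)
    c = sum((y - my) ** 2 for _, y in points) / (n - 1)   # var(y)
    b = sum((x - mx) * (y - my) for x, y in points) / (n - 1)  # cov
    # Leading eigenvalue of [[a, b], [b, c]]:
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    if abs(b) > 1e-12:
        vx, vy = b, lam - a          # corresponding eigenvector
    elif a >= c:
        vx, vy = 1.0, 0.0
    else:
        vx, vy = 0.0, 1.0
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    return [(x - mx) * vx + (y - my) * vy for x, y in points]

# Points along the direction (3, 1): the projection preserves their spread.
pts = [(3 * t, t) for t in (-2, -1, 0, 1, 2)]
proj = pca_1d(pts)
print(proj)
```

For these collinear points the 1-D projection keeps all of the variance, which is exactly the property that makes PCA a candidate for automatic feature-space transformation.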
Clusters' scales
There are five clusters, S1, …, S5, in the data set. Among them, S3, S4, and S5 overlap to some degree. The numbers of sample points for the five clusters are 150, 200, 300, 250, and 250, respectively, and their corresponding Gaussian variances are 0.10, 0.12, 0.15, 0.18, and 0.20.
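A sketch that synthesizes a data set matching these counts. The cluster centers are not given on the slide, so the ones below are assumptions, and for simplicity the slide's "Gaussian variances" are treated here as standard deviations:

```python
import random

# Synthesize five Gaussian clusters with the stated sizes.
# CENTERS are assumed (not given in the slides); the stated
# "variances" are used directly as standard deviations here.

SIZES = [150, 200, 300, 250, 250]
SPREADS = [0.10, 0.12, 0.15, 0.18, 0.20]
CENTERS = [(0, 0), (2, 0), (1, 2), (1.5, 1.5), (2.2, 1.8)]  # assumed

def make_data(seed=0):
    """Return (points, labels) for the five synthetic clusters."""
    rng = random.Random(seed)
    data, labels = [], []
    for k, (n, s, (cx, cy)) in enumerate(zip(SIZES, SPREADS, CENTERS)):
        for _ in range(n):
            data.append((rng.gauss(cx, s), rng.gauss(cy, s)))
            labels.append(k)
    return data, labels

data, labels = make_data()
print(len(data))  # 1150 points in total
```

The last three assumed centers sit close together, mimicking the partial overlap of S3, S4, and S5 described above.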
Current results
This demonstrates the splitting processes and learning trajectories obtained by SSCL (Self-Splitting Competitive Learning). As we can see, splitting occurred four times; therefore, five clusters were finally discovered, each associated with a prototype located at its center. According to the nearest-neighbor condition, each sample point was labeled by its nearest prototype.
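The final labeling step can be sketched on its own: once the prototypes have been placed, every sample is assigned to the nearest one. The prototype positions below are illustrative, not SSCL's actual output:

```python
# Sketch of the nearest-prototype labeling condition: each sample
# is assigned the index of its closest prototype (squared Euclidean
# distance; prototype positions below are illustrative).

def nearest_prototype(x, prototypes):
    """Return the index of the prototype closest to sample x."""
    def d2(p):
        return (x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2
    return min(range(len(prototypes)), key=lambda k: d2(prototypes[k]))

protos = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
print(nearest_prototype((0.1, -0.2), protos))  # -> 0
print(nearest_prototype((1.1, 1.9), protos))   # -> 2
```

Squared distance is used since the argmin is unchanged and it avoids the square root.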
Questions
How many clusters are there: 3 or 5?
How do we decide whether a set of data points should be split?
Must all clusters have the same scale?
If the clusters have different scales, how do we deal with that adaptively?
Discussions
Are there good evaluation methods for the number of clusters?
Divergence and convergence
Transform the data points into a …
If the clusters have different scales, how do we deal with that adaptively?
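One simple way to compare candidate clusterings (a hedged sketch of the evaluation idea, not the SSCL criterion) is to score how compact clusters are relative to how far apart their centroids sit:

```python
import math

# Hedged sketch of a cluster-validity score: mean within-cluster
# distance divided by mean centroid separation. Lower is better.
# This is an illustrative index, not the criterion from the slides.

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def separation_score(clusters):
    """within-cluster spread / centroid separation (lower is better)."""
    cents = [centroid(c) for c in clusters]
    within = sum(dist(p, cents[k])
                 for k, c in enumerate(clusters) for p in c)
    within /= sum(len(c) for c in clusters)
    between = [dist(a, b)
               for i, a in enumerate(cents) for b in cents[i + 1:]]
    return within / (sum(between) / len(between))

tight = [[(0, 0), (0.1, 0)], [(5, 5), (5, 5.1)]]
loose = [[(0, 0), (2, 2)], [(3, 3), (5, 5)]]
print(separation_score(tight) < separation_score(loose))  # True
```

Scoring such an index across candidate numbers of clusters (e.g. 3 vs. 5 in the question above) is one rough way to judge when further splitting stops paying off.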