1
WaveCluster: A Multi-Resolution Clustering Approach
- Applies a wavelet transformation to the feature space
- Both grid-based and density-based
- Input parameters:
  - Number of grid cells for each dimension
  - The wavelet
  - The number of applications of the wavelet transform
2
What Are Wavelets?
3
What Is a Wavelet Transform?
- Decomposes a signal into different frequency subbands
- Applicable to n-dimensional signals
- Data are transformed so that the relative distance between objects is preserved at different levels of resolution
- Allows natural clusters to become more distinguishable
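To make the decomposition concrete, here is a minimal sketch of one level of the 1-D Haar transform (pairwise averaging and differencing) applied to a toy bin-count signal with two dense regions. The Haar wavelet and the toy data are illustrative choices, not necessarily the wavelet or data used in WaveCluster itself.

```python
import numpy as np

def haar_dwt_1d(signal):
    """One level of the Haar wavelet transform: split a signal into a
    low-frequency (average) subband and a high-frequency (detail) subband."""
    signal = np.asarray(signal, dtype=float)
    if len(signal) % 2:                                    # pad to even length
        signal = np.append(signal, signal[-1])
    pairs = signal.reshape(-1, 2)
    approx = pairs.sum(axis=1) / np.sqrt(2)                # low-pass subband
    detail = np.diff(pairs, axis=1).ravel() / np.sqrt(2)   # high-pass subband
    return approx, detail

# Two dense regions separated by sparse noise; the low-pass subband keeps the
# coarse cluster structure, the high-pass subband captures the boundaries.
counts = np.array([0, 1, 9, 8, 10, 1, 0, 0, 7, 9, 8, 1, 0, 0, 1, 0])
approx, detail = haar_dwt_1d(counts)
print(approx)
print(detail)
```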
4
Intuition Behind Using the Wavelet Transform
- Wavelet filtering makes clusters more distinct
- Effective removal of outliers
- The multi-resolution property of the wavelet transform helps detect clusters at different levels of accuracy
- Cost-efficiency
5
Wavelet Transformation
6
Why Use the Wavelet Transform?
- Uses hat-shaped filters
  - Emphasize regions where points cluster
  - Suppress weaker information at their boundaries
- Effective removal of outliers
  - Insensitive to noise and to input order
- Multi-resolution
  - Detects arbitrarily shaped clusters at different scales
- Efficient
  - Complexity O(N)
- Only applicable to low-dimensional data
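As a rough illustration of the hat-shaped-filter intuition, the sketch below convolves a toy 1-D density histogram with a hand-written Mexican-hat (Ricker) kernel. The kernel width, the histogram values, and the threshold are hypothetical choices used only to show how cluster interiors are emphasized and boundaries suppressed.

```python
import numpy as np

def mexican_hat_kernel(width, sigma):
    """Discrete Mexican-hat (Ricker) kernel: a positive hump in the middle
    with negative lobes on both sides."""
    t = np.arange(width) - (width - 1) / 2.0
    return (1 - (t / sigma) ** 2) * np.exp(-t**2 / (2 * sigma**2))

# Toy 1-D density histogram: two clusters with noisy, sparse boundaries.
density = np.array([1, 0, 2, 8, 9, 10, 8, 2, 0, 1, 0, 2, 7, 9, 8, 2, 1, 0], float)

kernel = mexican_hat_kernel(width=7, sigma=1.5)
filtered = np.convolve(density, kernel, mode="same")

# Cluster interiors are amplified, boundary/noise bins are pushed toward
# (or below) zero, so a simple threshold separates the dense regions.
print(np.round(filtered, 1))
print(filtered > filtered.max() * 0.3)
```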
7
WaveCluster: Method
- Summarize the data by imposing a multidimensional grid structure onto the data space
  - Multidimensional spatial data objects are represented in an n-dimensional feature space
- Apply the wavelet transform to the feature space to find the dense regions
- Apply the wavelet transform multiple times
  - Results in clusters at different scales, from fine to coarse
- Developed by Dr. Aidong Zhang's group
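A minimal 2-D sketch of this pipeline under simplifying assumptions: the feature space is quantized into a grid of cell counts, one level of a 2-D Haar transform stands in for the wavelet, and a mean-plus-one-standard-deviation cutoff serves as a hypothetical density threshold. It illustrates the quantize / transform / threshold flow rather than reproducing the original implementation.

```python
import numpy as np

def quantize(points, cells_per_dim):
    """Impose a grid on the (rescaled) feature space and count the points
    falling into each cell."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    scaled = (pts - lo) / np.where(hi > lo, hi - lo, 1.0)
    idx = np.minimum((scaled * cells_per_dim).astype(int), cells_per_dim - 1)
    grid = np.zeros((cells_per_dim, cells_per_dim))
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)
    return grid, idx

def haar_2d(grid):
    """One level of a 2-D Haar transform; return the low-low subband,
    i.e. a smoothed, half-resolution view of the cell counts."""
    g = grid[: grid.shape[0] // 2 * 2, : grid.shape[1] // 2 * 2]
    return (g[0::2, 0::2] + g[1::2, 0::2] + g[0::2, 1::2] + g[1::2, 1::2]) / 2.0

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0.2, 0.05, (200, 2)),
                    rng.normal(0.8, 0.05, (200, 2)),
                    rng.uniform(0, 1, (40, 2))])        # background noise

grid, _ = quantize(points, cells_per_dim=32)
lowlow = haar_2d(grid)                 # applying haar_2d again gives a coarser scale
dense = lowlow > lowlow.mean() + lowlow.std()   # hypothetical density threshold
print(dense.astype(int))
```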
8
WaveCluster
9
Shrinking: Intuition & Purpose
- What if we could make the data points in a data set move toward the centroid of the natural subgroup they belong to?
- Naturally sparse subgroups become denser and thus easier to detect; noise is further isolated.
10
The Concept of Shrinking
- A data preprocessing technique
- Aims to optimize the inner structure of real data sets
- Each data point is "attracted" by other data points and moves in the direction in which the attraction is strongest
- Can be applied in different fields
11
Data Shrinking
- Each data point moves along the direction of the density gradient, and the data set shrinks toward the inside of the clusters.
- Points are "attracted" by their neighbors and move to create denser clusters.
- The process proceeds iteratively and is repeated until the data are stabilized or the number of iterations exceeds a threshold.
12
Applying Shrinking to Clustering
- Works in a multi-attribute hyperspace
- Shrink the naturally sparse clusters to make them much denser, facilitating the subsequent cluster-detection process.
13
Overall Structure
14
Data Shrinking (Cont'd)
- Space subdivision
  - Normalization of the data space
  - Given grid cells of side length 1/k, the normalized data space is subdivided into k^d cells.
  - Each grid cell g stores the average position of its points (the grid point) and the number of data points in it.
  - The neighboring relationship of points is grid-based.
- In each iteration, data points move toward the data centroid of the neighboring grid cells.
- Grid scale:
  - Apply different grid scales and choose the best clustering result.
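A minimal sketch of the space-subdivision step, assuming the data are already normalized to [0,1]^d; the sparse dictionary representation and the value of k are illustrative choices.

```python
import numpy as np
from collections import defaultdict

def subdivide(points, k):
    """Subdivide the normalized data space [0,1]^d into k^d cells and, for
    each non-empty cell, record its point count and average position
    (the "grid point")."""
    pts = np.asarray(points, dtype=float)
    idx = np.minimum((pts * k).astype(int), k - 1)   # cell index per point

    sums = defaultdict(lambda: np.zeros(pts.shape[1]))
    counts = defaultdict(int)
    for cell, p in zip(map(tuple, idx), pts):
        sums[cell] += p
        counts[cell] += 1

    grid_points = {cell: sums[cell] / counts[cell] for cell in counts}
    return idx, counts, grid_points

# Example: 2-D points with 10 cells per dimension (cell side length 1/10).
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (100, 2))
idx, counts, grid_points = subdivide(pts, k=10)
print(len(counts), "non-empty cells out of", 10**2)
```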
15
Data Shrinking (Cont'd)
- Multi-scale solution: choose multiple grid scales for data shrinking
  1. Determination of a proper cell size
  2. Advantages for handling clusters of various densities
16
Data Shrinking (Cont'd)
- Obtaining multiple scales
- A straightforward solution: use a sequence of grids with exponentially increasing cell sizes:
  S_min, S_min·E_g, ..., S_min·E_g^η = S_max, for some η ∈ N
- Disadvantages:
  1) S_min depends on the granularity of the data
  2) Important grid-scale candidates may be lost
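For concreteness, a tiny sketch of the exponential cell-size sequence just described; S_min, S_max, and the growth factor E_g below are placeholder values, not values from the paper.

```python
def exponential_scales(s_min, s_max, growth):
    """Cell sizes S_min, S_min*E_g, S_min*E_g^2, ... up to S_max."""
    scales, s = [], s_min
    while s <= s_max:
        scales.append(s)
        s *= growth
    return scales

# Placeholder values: note that every candidate is tied to the choice of
# S_min, which is the drawback the slide points out.
print(exponential_scales(s_min=0.01, s_max=0.5, growth=2.0))
```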
17
Data Shrinking (Cont'd)
- A histogram-based approach to obtain reasonable grid scales
  - Compute a histogram for each dimension: H = {h_1, h_2, ..., h_d}
  - Density span: a run of consecutive bins on a given dimension in which the number of data points exceeds a threshold.
  - Starting from the largest bin, extract the density spans.
  - Treat density spans of similar size as identical, and choose those with the largest frequencies as grid-scale candidates.
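A rough sketch of the density-span idea under simplifying assumptions: spans are taken as maximal runs of above-threshold bins rather than grown outward from the largest bin as the slide describes, and the bin count, count threshold, and similarity tolerance are hypothetical parameters.

```python
import numpy as np
from collections import Counter

def density_spans(values, bins=50, threshold=5):
    """Return the widths of runs of consecutive histogram bins whose counts
    exceed a threshold (the density spans) on one dimension."""
    counts, edges = np.histogram(values, bins=bins)
    dense = counts > threshold
    spans, start = [], None
    for i, d in enumerate(dense):
        if d and start is None:
            start = i
        elif not d and start is not None:
            spans.append(edges[i] - edges[start])   # span width in data units
            start = None
    if start is not None:
        spans.append(edges[-1] - edges[start])
    return spans

def scale_candidates(data, tolerance=0.02):
    """Collect span widths over all dimensions, treat widths within
    `tolerance` of each other as identical, and keep the most frequent."""
    widths = [w for dim in np.asarray(data, float).T for w in density_spans(dim)]
    rounded = Counter(round(w / tolerance) * tolerance for w in widths)
    return [w for w, _ in rounded.most_common(3)]

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0.3, 0.05, (300, 2)), rng.normal(0.7, 0.05, (300, 2))])
print(scale_candidates(data))
```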
18
Data Shrinking (Cont'd)
An example of density span processing
19
Data Shrinking (Cont'd)
An example of data movement
- Solution: treat the points in each cell as a rigid body that is pulled as a unit toward the data centroid of the surrounding cells that have more points.
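Putting the grid pieces together, a minimal sketch of one shrinking iteration under these assumptions: the 3^d surrounding cells form the neighborhood, denser neighbors are weighted by their point counts, each cell's points are shifted as a rigid body toward their centroid, and iteration stops when movement is small or a cap is reached. The step size and stopping rule are illustrative, not the paper's.

```python
import numpy as np
from itertools import product
from collections import defaultdict

def subdivide(points, k):
    """Cell index per point, point count per cell, and average position
    ("grid point") per non-empty cell, as in the earlier sketch."""
    idx = np.minimum((points * k).astype(int), k - 1)
    counts = defaultdict(int)
    sums = defaultdict(lambda: np.zeros(points.shape[1]))
    for cell, p in zip(map(tuple, idx), points):
        counts[cell] += 1
        sums[cell] += p
    return idx, counts, {c: sums[c] / counts[c] for c in counts}

def shrink_once(points, k, step=0.5):
    """One shrinking iteration: the points of each cell move as a rigid body
    toward the weighted centroid of surrounding cells holding more points."""
    idx, counts, grid_points = subdivide(points, k)
    moved = points.copy()
    offsets = [o for o in product((-1, 0, 1), repeat=points.shape[1]) if any(o)]
    for cell, count in counts.items():
        neigh = [tuple(np.add(cell, o)) for o in offsets]
        denser = [n for n in neigh if counts.get(n, 0) > count]
        if not denser:
            continue
        weights = np.array([counts[n] for n in denser], dtype=float)
        centroid = np.average([grid_points[n] for n in denser], axis=0, weights=weights)
        shift = step * (centroid - grid_points[cell])    # rigid-body displacement
        moved[np.all(idx == cell, axis=1)] += shift
    return np.clip(moved, 0.0, 1.0)

# Iterate until the data stabilize or an iteration cap is reached
# (cf. the Data Shrinking slide above).
rng = np.random.default_rng(3)
pts = np.clip(np.vstack([rng.normal(0.3, 0.08, (200, 2)),
                         rng.normal(0.7, 0.08, (200, 2))]), 0, 1)
for _ in range(5):
    new_pts = shrink_once(pts, k=10)
    if np.abs(new_pts - pts).max() < 1e-3:
        break
    pts = new_pts
```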
20
Experiments
A 2-D example: the original data set and the data set after iterations 1, 2, and 3.
21
Cluster Detection
- Neighboring dense cells are connected and a neighbor graph G of the dense cells is constructed.
- A breadth-first search is used to find the connected components of G; each component is a cluster.
- Data points are labeled with their cluster IDs.
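A minimal sketch of this detection step, assuming the dense cells are given as grid-index tuples and that the 3^d surrounding cells count as neighbors; the density threshold that produces the dense cells is outside this snippet.

```python
import numpy as np
from collections import deque
from itertools import product

def bfs_clusters(dense_cells):
    """Connected components of the neighbor graph over dense cells via BFS;
    returns a dict mapping each dense cell to a cluster id."""
    dense_cells = set(map(tuple, dense_cells))
    dim = len(next(iter(dense_cells)))
    offsets = [o for o in product((-1, 0, 1), repeat=dim) if any(o)]
    labels, next_id = {}, 0
    for start in dense_cells:
        if start in labels:
            continue
        queue = deque([start])
        labels[start] = next_id
        while queue:
            cell = queue.popleft()
            for o in offsets:
                n = tuple(c + d for c, d in zip(cell, o))
                if n in dense_cells and n not in labels:
                    labels[n] = next_id
                    queue.append(n)
        next_id += 1
    return labels

def label_points(cell_index, labels):
    """Map each data point's cell index to a cluster id (-1 = sparse/noise)."""
    return np.array([labels.get(tuple(c), -1) for c in cell_index])

# Usage: suppose `dense` is a boolean grid of dense cells from the earlier
# sketches and `idx` holds each point's cell index.
dense = np.zeros((8, 8), dtype=bool)
dense[1:3, 1:3] = True          # one dense region
dense[5:7, 5:7] = True          # another dense region
cells = list(zip(*np.nonzero(dense)))
labels = bfs_clusters(cells)
print(sorted(set(labels.values())))   # two clusters: [0, 1]
```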