Design and Evaluation of a Parallel Execution Framework for the CLEVER Clustering Algorithm
Chung Sheng CHEN, Nauful SHAIKH, Panitee CHAROENRATTANARUK, Christoph F. EICK, Nouhad RIZK and Edgar GABRIEL
Department of Computer Science, University of Houston

Talk Organization
1. Randomized Hill Climbing
2. CLEVER—A Prototype-based Clustering Algorithm which Supports Fitness Functions
3. OpenMP and CUDA Versions of CLEVER
4. Experimental Results
5. Summary
1. Randomized Hill Climbing
Randomized Hill Climbing: Sample p points randomly in the neighborhood of the currently best solution; determine the best of the p sampled points. If it is better than the current solution, make it the new current solution and continue the search; otherwise, terminate, returning the current solution.
Advantages: easy to apply, does not need many resources, usually fast.
Problems: How do I define my neighborhood? Which parameter p should I choose?
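The procedure on this slide fits in a few lines of code. Below is a minimal, hypothetical C++ rendering (the names Solution, fitness, and sample_neighbor are placeholders, not from the talk; maximization is assumed, as in the example that follows): it repeatedly samples p neighbors of the current solution and moves to the best one as long as the fitness improves.

#include <vector>

// Minimal randomized hill climbing sketch (hypothetical interfaces).
template <typename Solution, typename Fitness, typename Sampler>
Solution randomized_hill_climbing(Solution current, Fitness fitness,
                                  Sampler sample_neighbor, int p) {
    double best_f = fitness(current);
    for (;;) {
        Solution best_neighbor = current;
        double best_nf = best_f;
        // Sample p points in the neighborhood of the current solution.
        for (int i = 0; i < p; ++i) {
            Solution s = sample_neighbor(current);
            double f = fitness(s);
            if (f > best_nf) { best_nf = f; best_neighbor = s; }
        }
        if (best_nf <= best_f)      // no improvement: terminate
            return current;
        current = best_neighbor;    // improvement: continue the search
        best_f = best_nf;
    }
}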
Example: Randomized Hill Climbing
Maximize f(x,y,z) = |x-y-0.2| * |x*z-0.8| * |0.3-z*z*y| with x, y, z in [0,1].
Neighborhood Design: Create 50 solutions s, such that:
s = (min(1, max(0, x+r1)), min(1, max(0, y+r2)), min(1, max(0, z+r3)))
with r1, r2, r3 being random numbers in [-0.05, +0.05].
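Plugged into the sketch above, the slide's example looks roughly as follows (a sketch under the slide's stated design; note p = 50 here, since 50 neighbors are sampled per step):

#include <algorithm>
#include <array>
#include <cmath>
#include <random>

using Vec3 = std::array<double, 3>;

// Objective from the slide: f(x,y,z) = |x-y-0.2| * |x*z-0.8| * |0.3-z*z*y|
double f(const Vec3& v) {
    double x = v[0], y = v[1], z = v[2];
    return std::fabs(x - y - 0.2) * std::fabs(x * z - 0.8)
         * std::fabs(0.3 - z * z * y);
}

// Neighborhood design from the slide: perturb each coordinate by a
// random value in [-0.05, +0.05] and clamp the result to [0, 1].
Vec3 sample_neighbor(const Vec3& v) {
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_real_distribution<double> r(-0.05, 0.05);
    Vec3 s;
    for (int i = 0; i < 3; ++i)
        s[i] = std::min(1.0, std::max(0.0, v[i] + r(gen)));
    return s;
}

// Usage: Vec3 best = randomized_hill_climbing(Vec3{0.5, 0.5, 0.5}, f,
//                                             sample_neighbor, 50);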
2. CLEVER: Clustering with Plug-in Fitness Functions
Over the last 5 years, the UH-DMML Research Group at the University of Houston has developed families of clustering algorithms that find contiguous spatial clusters by maximizing a plug-in fitness function. This work is motivated by a mismatch between the evaluation measures of traditional clustering algorithms (such as cluster compactness) and what domain experts are actually looking for. Plug-in fitness functions allow domain experts to instruct clustering algorithms with respect to the desirable properties of "good" clusters that the algorithm should seek.
Region Discovery Framework (overview figure)
Region Discovery Framework (continued)
The algorithms we currently investigate solve the following problem:
Given:
- A dataset O with a schema R
- A distance function d defined on instances of R
- A fitness function q(X) that evaluates clusterings X = {c1, …, ck} as follows:
  q(X) = Σc∈X reward(c) = Σc∈X i(c) * size(c)^β, with β > 1
Objective: Find c1, …, ck ⊆ O such that:
1. ci ∩ cj = ∅ if i ≠ j
2. X = {c1, …, ck} maximizes q(X)
3. All clusters ci ∈ X are contiguous (each pair of objects belonging to ci has to be Delaunay-connected with respect to ci and to d)
4. c1 ∪ … ∪ ck ⊆ O
5. c1, …, ck are usually ranked based on the reward each cluster receives, and low-reward clusters are frequently not reported
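For illustration, a plug-in fitness function of this form can be sketched as follows (hypothetical types; the interestingness measure i(c) is the plug-in, and beta corresponds to the β parameter above):

#include <cmath>
#include <functional>
#include <vector>

struct Cluster { std::vector<int> members; };  // indices into dataset O

// q(X) = sum over clusters c of i(c) * size(c)^beta  (beta > 1)
double fitness_q(const std::vector<Cluster>& X,
                 const std::function<double(const Cluster&)>& interestingness,
                 double beta) {
    double q = 0.0;
    for (const Cluster& c : X)
        q += interestingness(c) * std::pow((double)c.members.size(), beta);
    return q;
}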
Example 1: Finding Regional Co-location Patterns in Spatial Data
Objective: Find co-location regions using various clustering algorithms and novel fitness functions.
Applications:
1. Finding regions on planet Mars where shallow and deep ice are co-located, using point and raster datasets. In Figure 1, regions in red have very high co-location and regions in blue have anti co-location.
2. Finding co-location patterns involving chemical concentrations with values on the wings of their statistical distribution in Texas' ground water supply. Figure 2 indicates discovered regions and their associated chemical patterns.
Figure 1: Co-location regions involving deep and shallow ice on Mars
Figure 2: Chemical co-location patterns in Texas Water Supply
Example 2: Regional Regression
Geo-regression approaches: multiple regression functions are used that vary depending on location.
Regional Regression:
I. Discover regions with strong relationships between dependent and independent variables
II. Construct a regional regression function for each region
III. When predicting the dependent variable of an object, use the regression function associated with the location of the object
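A minimal sketch of step III, assuming the regions and their fitted regression functions are already available (all names here are hypothetical, not from the talk):

#include <functional>
#include <vector>

struct Point { double x, y; };  // spatial location of an object

// One regression function per discovered region (hypothetical setup).
struct Region {
    std::function<bool(const Point&)> contains;                 // membership test
    std::function<double(const std::vector<double>&)> regress;  // fitted model
};

// Step III: predict with the regression function of the object's region.
double predict(const std::vector<Region>& regions, const Point& loc,
               const std::vector<double>& independent_vars) {
    for (const Region& r : regions)
        if (r.contains(loc))
            return r.regress(independent_vars);
    return 0.0;  // fallback; e.g., a global model could be used here
}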
Representative-based Clustering
(Figure: objects in attribute space; representatives 1-4 each define one cluster)
Objective: Find a set of objects O_R ⊆ O such that the clustering X obtained by using the objects in O_R as representatives optimizes q(X).
Characteristic: clusters are formed by assigning objects to the closest representative.
Popular Algorithms: K-means, K-medoids/PAM, CLEVER
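The characteristic assignment step can be sketched as follows (hypothetical helper names; d is the object distance function from the framework):

#include <limits>
#include <vector>

// Assign each object to its closest representative; returns, for each
// object index, the index of the winning representative (the cluster id).
std::vector<int> assign_members(
    int n,                                    // number of objects
    const std::vector<int>& representatives,  // indices of prototype objects
    double (*d)(int, int)) {                  // object distance function
    std::vector<int> cluster_of(n);
    for (int o = 0; o < n; ++o) {
        double best = std::numeric_limits<double>::max();
        for (int r = 0; r < (int)representatives.size(); ++r) {
            double dist = d(o, representatives[r]);
            if (dist < best) { best = dist; cluster_of[o] = r; }
        }
    }
    return cluster_of;
}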
CLEVER
A prototype-based clustering algorithm which supports plug-in fitness functions:
- uses a randomized hill climbing procedure to find a "good" set of prototype data objects that represent clusters; "good" means: maximizes the plug-in fitness function
- searches for the "correct" number of clusters
CLEVER is powerful but usually slow.
(Diagram: the hill climbing procedure combines three components—the plug-in fitness function, a neighboring-solutions generator, and the assignment of cluster members.)
Pseudo-Code of CLEVER
Inputs: Dataset O, k', neighborhood-size, p, q, β, object-distance-function d or distance matrix D, i-max
Outputs: Clustering X, fitness q(X), rewards for clusters in X
Algorithm:
1. Create a current solution by randomly selecting k' representatives from O.
2. If i-max iterations have been done, terminate with the current solution.
3. Create p neighbors of the current solution randomly, using the given neighborhood definition.
4. If the best neighbor improves the fitness q, it becomes the current solution. Go back to step 2.
5. If the fitness does not improve, the solution neighborhood is re-sampled by generating more neighbors (more precisely, first 2*p solutions and then (q-2)*p solutions are re-sampled). If re-sampling does not lead to a better solution, terminate, returning the current solution (however, clusters that receive a reward of 0 are considered outliers and are therefore not returned); otherwise, go back to step 2, replacing the current solution by the best solution found by re-sampling.
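A compressed, hypothetical rendering of this pseudo-code (neighbor generation, member assignment, and the fitness q are assumed available, e.g., as sketched earlier; the re-sampling schedule follows step 5 as read above):

#include <vector>

// Sketch of CLEVER's search loop. Solution holds a set of k'
// representatives; neighbors(s, m) samples m neighboring solutions;
// q evaluates the fitness of the clustering induced by a solution.
template <typename Solution, typename Q, typename Neighbors>
Solution clever(Solution current, Q q, Neighbors neighbors,
                int p, int q_factor, int i_max) {
    double best_f = q(current);
    for (int iter = 0; iter < i_max; ++iter) {           // step 2
        bool improved = false;
        // Step 3: sample p neighbors; step 5: on failure re-sample
        // first 2*p and then (q_factor-2)*p more before terminating.
        for (int batch : {p, 2 * p, (q_factor - 2) * p}) {
            std::vector<Solution> ns = neighbors(current, batch);
            for (const Solution& s : ns) {                // best neighbor wins
                double f = q(s);
                if (f > best_f) { best_f = f; current = s; improved = true; }
            }
            if (improved) break;                          // step 4
        }
        if (!improved) return current;  // re-sampling failed: terminate
    }
    return current;                     // i-max iterations done
}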
3. PAR-CLEVER: A Faster Clustering Algorithm
- OpenMP
- CUDA (GPU computing)
- MPI
- Map/Reduce
Benchmark Datasets
10Ovals — size: 3,359; fitness function: purity
Earthquake — size: 330,561; fitness function: find clusters with high variance with respect to earthquake depth
Yahoo Ads Clicks — full size: 3,009,071,396; subset: 2,910,613; fitness function: minimum intra-cluster distance
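As an example of one of these plug-in measures, the purity of a cluster (the fraction of its members carrying the majority class label) can be sketched as below; how purity is turned into the interestingness i(c) via the th and η parameters seen in the experiment slides is not spelled out here, so this shows only the purity computation itself:

#include <algorithm>
#include <unordered_map>
#include <vector>

// Purity of a cluster: fraction of members carrying the majority label.
double purity(const std::vector<int>& member_labels) {
    if (member_labels.empty()) return 0.0;
    std::unordered_map<int, int> counts;
    for (int label : member_labels) ++counts[label];
    int majority = 0;
    for (const auto& kv : counts) majority = std::max(majority, kv.second);
    return (double)majority / member_labels.size();
}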
Parallelization targets in CLEVER:
1. Assigning cluster members: O(n*k) — data-parallel and highly independent; the first priority for parallelization (see the OpenMP sketch below)
2. Fitness value calculation: ~O(n)
3. Neighboring-solution generation: ~O(p)
n := number of objects in the dataset
k := number of clusters in the current solution
p := sampling rate (how many neighbors of the current solution are sampled)
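A minimal OpenMP sketch of the data-parallel assignment step (building on the sequential assign_members sketch above; the pragma and scheduling choice are illustrative, not the talk's actual implementation):

#include <limits>
#include <vector>
#include <omp.h>

// Data-parallel member assignment: each thread handles a chunk of the
// n objects; no synchronization is needed because every object's
// assignment is independent of all the others.
std::vector<int> assign_members_omp(
    int n, const std::vector<int>& representatives,
    double (*d)(int, int)) {
    std::vector<int> cluster_of(n);
    #pragma omp parallel for schedule(static)
    for (int o = 0; o < n; ++o) {
        double best = std::numeric_limits<double>::max();
        int best_r = 0;
        for (int r = 0; r < (int)representatives.size(); ++r) {
            double dist = d(o, representatives[r]);
            if (dist < best) { best = dist; best_r = r; }
        }
        cluster_of[o] = best_r;
    }
    return cluster_of;
}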
Hardware
crill-001 to crill-016 (OpenMP):
- Processor: 4 x AMD Opteron 6174
- CPU cores: 48
- Core speed: 2200 MHz
- Memory: 64 GB
crill-101 and crill-102 (GPU computing, NVIDIA CUDA):
- Processor: 2 x AMD Opteron 6174
- CPU cores: 24
- Core speed: 2200 MHz
- Memory: 32 GB
- GPU device: 4 x Tesla M2050, 3 GB memory, 448 CUDA cores each
OpenMP results: 10Ovals dataset (size = 3,359)
p=100, q=27, k'=10, η=1.1, th=0.6, β=1.6, interestingness function = Purity

Threads                           1       6      12      24      48
Loop-level       Time (sec)   248.49   50.52   30.09   20.58   16.39
                 Speedup        1.00    4.92    8.26   12.07   15.16
                 Efficiency     1.00    0.82    0.69    0.50    0.32
Loop-level +     Time (sec)   229.88   49.43   29.99   20.28   15.61
incremental      Speedup        1.00    4.65    7.67   11.34   14.73
updating         Efficiency     1.00    0.78    0.64    0.47    0.31
Task-level       Time (sec)   248.49   41.83   21.67   11.44    6.40
                 Speedup        1.00    5.94   11.47   21.72   38.84
                 Efficiency     1.00    0.99    0.96    0.90    0.81

Iterations = 14, evaluated neighbor solutions = 15,200, k = 5, fitness = 77,187.7
OpenMP results: Earthquake dataset (size = 330,561)
p=50, q=12, k'=100, η=2, th=1.2, β=1.4, interestingness function = Variance High

Threads                           1       6      12      24      48
Loop-level       Time (hours) 185.39   35.27   23.17   12.38   10.20
                 Speedup        1.00    5.26    8.00   14.97   18.18
                 Efficiency     1.00    0.88    0.67    0.62    0.38
Loop-level +     Time (hours)  30.24    9.18    6.89    6.06    6.84
incremental      Speedup        1.00    3.29    4.39    4.99    4.42
updating         Efficiency     1.00    0.55    0.37    0.21    0.09
Task-level       Time (hours) 185.39   31.95   17.19    9.76    6.14
                 Speedup        1.00    5.80   10.79   19.00   30.18
                 Efficiency     1.00    0.97    0.90    0.79    0.63

Iterations = 216, evaluated neighbor solutions = 21,950, k = 115
OpenMP results: Yahoo Reduced dataset (size = 2,910,613)
p=48, q=7, k'=80, η=1.2, th=0, β=1.000001, interestingness function = Average Distance to Medoid

Threads                           1       6      12      24      48
Loop-level       Time (hours) 154.62   29.25   16.74   12.12    9.94
                 Speedup        1.00    5.29    9.24   12.75   15.55
                 Efficiency     1.00    0.88    0.77    0.53    0.32
Loop-level +     Time (hours)  28.30    8.15    6.71    5.55    5.68
incremental      Speedup        1.00    3.47    4.22    5.10    4.98
updating         Efficiency     1.00    0.58    0.35    0.21    0.10
Task-level       Time (hours) 154.62   25.78   12.97    6.63    3.42
                 Speedup        1.00    6.00   11.92   23.33   45.21
                 Efficiency     1.00    1.00    0.99    0.97    0.94

Iterations = 10, evaluated neighbor solutions = 480, k = 94
CUDA results: 10Ovals dataset (size = 3,359)
p=100, q=27, k'=10, η=1.1, th=0.6, β=1.6, interestingness function = Purity
Run time (seconds): 1.33, 1.32, 1.34, 1.32, 1.33, 1.32 (avg: 1.327)
Iterations = 12, evaluated neighbor solutions = 5,100, k = 5
The CUDA version evaluates 5,100 solutions in 1.327 seconds, i.e., 15,200 solutions in 3.95 seconds.
Speedup = Time(CPU) / Time(GPU):
- 63x speedup compared to the sequential version
- 1.62x speedup compared to 48-thread OpenMP
For comparison (OpenMP, task-level): threads = 1 (sequential), 6, 12, 24, 48; time (sec) = 248.49, 41.83, 21.67, 11.44, 6.40; iterations = 14, evaluated neighbor solutions = 15,200, k = 5, fitness = 77,187.7
CUDA results: Earthquake dataset (size = 330,561)
p=50, q=12, k'=100, η=2, th=1.2, β=1.4, interestingness function = Variance High
Run time (seconds): 138.95, 146.56, 143.82, 139.10, 146.19, 147.03 (avg: 143.61)
Iterations = 158, evaluated neighbor solutions = 28,900, k = 92
The CUDA version evaluates 28,900 solutions in 143.61 seconds, i.e., 21,950 solutions in 109.07 seconds.
Speedup = Time(CPU) / Time(GPU):
- 6119x speedup compared to the sequential version
- 202x speedup compared to 48-thread OpenMP
For comparison (OpenMP, task-level): threads = 1 (sequential), 6, 12, 24, 48; time (hours) = 185.39, 31.95, 17.19, 9.76, 6.14; iterations = 216, evaluated neighbor solutions = 21,950, k = 115
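(Checking the arithmetic: 185.39 hours ≈ 667,404 seconds, and 667,404 / 109.07 ≈ 6119; likewise 6.14 hours ≈ 22,104 seconds, and 22,104 / 109.07 ≈ 202, matching the reported speedups.)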
Caching the representatives
The representatives are read frequently in the computation that assigns objects to clusters. The results presented earlier cached the representatives in shared memory for faster access. The following table compares the performance of CLEVER with and without caching the representatives on the Earthquake dataset; the cached representatives occupy 2 MB. The result shows that caching the representatives improves the runtime very little (0.09%).
Earthquake dataset (size = 330,561), p=50, q=12, k'=100, η=2, th=1.2, β=1.4, interestingness function = Variance High
Run time (seconds):
Cache:    138.95, 146.56, 143.82, 139.10, 146.19, 147.03 (avg: 143.61)
No cache: 144.63, 139.90, 144.27, 144.50, 144.71, 144.44 (avg: 143.74)
Iterations = 158, evaluated neighbor solutions = 28,900, k = 92
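For illustration, a common way to stage representatives through CUDA shared memory during the assignment kernel looks roughly like this (a hypothetical sketch, not the talk's implementation; DIM, REP_TILE, and the flat feature layout are assumptions):

#include <cfloat>

#define DIM 3        // features per object (assumption)
#define REP_TILE 64  // representatives staged per shared-memory tile

// Assign each object to its closest representative; representatives are
// streamed through shared memory one tile at a time so that all threads
// of a block reuse the same staged data.
__global__ void assign_members_kernel(const float* objects, int n,
                                      const float* reps, int k,
                                      int* cluster_of) {
    __shared__ float tile[REP_TILE * DIM];
    int o = blockIdx.x * blockDim.x + threadIdx.x;
    float best = FLT_MAX;
    int best_r = 0;
    for (int base = 0; base < k; base += REP_TILE) {
        int m = min(REP_TILE, k - base);
        // Cooperatively load this tile of representatives.
        for (int i = threadIdx.x; i < m * DIM; i += blockDim.x)
            tile[i] = reps[base * DIM + i];
        __syncthreads();
        if (o < n) {
            for (int r = 0; r < m; ++r) {
                float dist = 0.f;
                for (int ftr = 0; ftr < DIM; ++ftr) {
                    float diff = objects[o * DIM + ftr] - tile[r * DIM + ftr];
                    dist += diff * diff;
                }
                if (dist < best) { best = dist; best_r = base + r; }
            }
        }
        __syncthreads();  // wait before the tile is overwritten
    }
    if (o < n) cluster_of[o] = best_r;
}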
The OpenMP version uses an object-oriented (OOP) design inherited from its original implementation, whereas the redesigned CUDA version is closer to a procedural implementation. The CUDA hardware's higher memory bandwidth contributed a little to the speedup. Caching contributes little to the speedup (as analyzed above).
5. Summary
The CUDA and OpenMP results indicate good scalability of the parallel algorithm on multi-core processors: computations which used to take days can now be performed in minutes/hours.
OpenMP:
- easy to implement
- good speedup
- limited by the number of cores and the amount of RAM
CUDA GPU:
- extra attention needed for CUDA programming; a lower level of programming (registers, cache memory, ...)
- the GPU memory hierarchy is different from the CPU's
- only some data structures are supported; synchronization between threads in different blocks is not possible
- super speedups, some of which are still a subject of investigation
Future Work
- More work on the CUDA version.
- Conduct more experiments which explain what works well, what doesn't, and why.
- Analyze in more depth the impact of the capability to search many more solutions on solution quality.
- Implement a version of CLEVER which conducts multiple randomized hill climbing searches in parallel and employs dynamic load balancing: more resources are allocated to the "more promising" searches.
- Reuse code for speeding up other data mining algorithms which use randomized hill climbing.