
1 A Framework for Multi-objective Clustering and Its Application to Co-location Mining
Rachsuda Jiamthapthaksin, Christoph F. Eick and Ricardo Vilalta
Data Mining & Machine Learning Group, CS@UH, University of Houston, Texas, USA
ADMA09, Beijing, China, August 17, 2009

2 Talk Outline
1. What is unique about this work with respect to clustering?
2. Multi-objective Clustering (MOC): Objectives and an Architecture
3. Clustering with Plug-in Fitness Functions
4. Filling the Repository with Clusters
5. Creating Final Clusterings
6. Related Work
7. Co-location Mining Case Study
8. Conclusion and Future Work

3 1. What is unique about this work with respect to clustering?
- Clustering algorithms that support plug-in fitness functions are used.
- Clustering algorithms are run multiple times to create clusters.
- Clusters are stored in a repository that is updated on the fly; cluster generation is separated from creating the final clustering.
- The final clustering is created from the clusters in the repository based on user preferences.
- Our approach seeks alternative, overlapping clusters.

4 2. Multi-Objective Clustering (MOC)
The particular problem investigated in this work:
Input: a spatial dataset with attributes (longitude, latitude, +) and a set of objectives.
Task: find sets of clusters that are good with respect to two or more objectives.
Figure: Multi-Objective Clustering applied to the Texas dataset.

5 Survey: MOC Approach
- Clustering algorithms are run multiple times, maximizing different subsets of objectives that are captured in compound fitness functions.
- A repository is used to store promising candidates.
- Only clusters that satisfy two or more objectives are considered as candidates.
- After a sufficient number of clusters has been created, final clusterings are generated based on user preferences.

6 An Architecture for MOC
Components: a Goal-driven Fitness Function Generator, a Clustering Algorithm, a Storage Unit (cluster repository M) and a Cluster Summarization Unit, operating on a spatial dataset; Q' denotes the selected objective subset, X the clusters produced by a single run, and M' the summarized final clustering.
Steps in multi-run clustering:
S1: Generate a compound fitness function.
S2: Run a clustering algorithm.
S3: Update the cluster repository M.
S4: Summarize the discovered clusters into M'.
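A minimal Python sketch of the S1-S4 loop, assuming the fitness generator, the clustering algorithm and the repository update are supplied as callables and that at least two objectives are given; the function and parameter names are illustrative, not the authors' API.

```python
import random

def multi_run_moc(dataset, objectives, make_compound_fitness,
                  run_clustering, update_repository, runs=20, seed=0):
    """Sketch of multi-run clustering: the three callables stand in for the
    goal-driven fitness function generator, the clustering algorithm and the
    storage unit of the architecture above."""
    rng = random.Random(seed)
    repository = []                                      # M: clusters found so far
    for _ in range(runs):
        k = rng.randint(2, len(objectives))              # S1: choose a subset Q' of Q
        q_prime = rng.sample(objectives, k)
        fitness = make_compound_fitness(q_prime)         #     build a compound fitness
        new_clusters = run_clustering(dataset, fitness)  # S2: run the clustering algorithm
        repository = update_repository(repository, new_clusters)  # S3: update M
    return repository                                    # S4 summarizes M into M' on demand
```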

7 3. Clustering with Plug-in Fitness Functions
Motivation:
- Finding subgroups in geo-referenced datasets has many applications.
- However, in many applications the subgroups to be searched for do not share the characteristics considered by traditional clustering algorithms, such as cluster compactness and separation.
- Domain or task knowledge frequently imposes additional requirements concerning what constitutes a "good" subgroup.
- Consequently, it is desirable to develop clustering algorithms with plug-in fitness functions that allow domain experts to express the desirable characteristics of the subgroups they are looking for.

8 Current Suite of Spatial Clustering Algorithms
- Representative-based: SCEC, SRIDHCR, SPAM, CLEVER
- Grid-based: SCMRG
- Agglomerative: MOSAIC
- Density-based: SCDE, DCONTOUR (not really plug-in, but some fitness functions can be simulated)
Remark: All algorithms partition a dataset into clusters by maximizing a reward-based, plug-in fitness function.
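To make the remark concrete, here is a minimal sketch of what a reward-based, plug-in fitness function looks like in this setting; the interface and the 'arsenic' attribute name are assumptions for illustration, not the actual API of the listed algorithms.

```python
from typing import Callable, Iterable, Sequence

Cluster = Sequence[dict]            # a cluster = a sequence of objects (attribute dicts)

def clustering_fitness(clustering: Iterable[Cluster],
                       reward: Callable[[Cluster], float]) -> float:
    """q(X): the sum of the plug-in reward over all clusters of the clustering X."""
    return sum(reward(c) for c in clustering)

# Example of a plug-in reward a domain expert might supply: favour larger
# clusters whose mean 'arsenic' value is high ('arsenic' is an assumed field name).
def high_arsenic_reward(cluster: Cluster) -> float:
    if not cluster:
        return 0.0
    mean_as = sum(o["arsenic"] for o in cluster) / len(cluster)
    return mean_as * len(cluster) ** 0.5
```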

9 4. Filling the Repository with Clusters
- Plug-in reward functions Reward_q(x) are used to assess to what extent an objective q is satisfied by a cluster x.
- User-defined thresholds θ_q are used to determine whether an objective q is satisfied by a cluster x (Reward_q(x) > θ_q).
- Only clusters that satisfy 2 or more objectives are stored in the repository.
- Only non-dominated clusters are stored in the repository.
- Dominance relations only apply to pairs of clusters that have a certain degree of agreement (overlap) θ_sim.
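A small sketch of the admission test described above, assuming a cluster is represented by a dict of per-objective rewards; the objective names and numbers are illustrative.

```python
def satisfied_objectives(rewards: dict, thresholds: dict) -> set:
    """Objectives q with Reward_q(x) > theta_q for a cluster x given as a
    dict of per-objective rewards."""
    return {q for q, r in rewards.items() if r > thresholds.get(q, float("inf"))}

def is_candidate(rewards: dict, thresholds: dict) -> bool:
    """A cluster becomes a repository candidate only if it satisfies
    two or more objectives."""
    return len(satisfied_objectives(rewards, thresholds)) >= 2

# Hypothetical usage:
thresholds = {"As_Mo": 13.0, "As_V": 15.0, "As_B": 10.0}
print(is_candidate({"As_Mo": 20.1, "As_V": 3.2, "As_B": 11.5}, thresholds))   # True
```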

10 Dominance and Multi-Objective Clusters
- Dominance between clusters x and y with respect to multiple objectives Q (formal definition shown as an equation on the slide).
- Dominance constraint with respect to the repository (formal definition shown as an equation on the slide).
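The formal definitions are not reproduced in this transcript; the sketch below encodes the reading suggested by the surrounding text, namely Pareto dominance restricted to sufficiently overlapping clusters. The cluster representation and the Jaccard-style similarity are assumptions, not the paper's exact definitions.

```python
def similarity(objects_x: frozenset, objects_y: frozenset) -> float:
    """Degree of agreement between two clusters; Jaccard overlap of their
    object sets is assumed here."""
    if not objects_x and not objects_y:
        return 1.0
    return len(objects_x & objects_y) / len(objects_x | objects_y)

def dominates(x: dict, y: dict, objectives, theta_sim: float = 0.3) -> bool:
    """x and y are clusters of the form {'objects': frozenset, 'rewards': {q: value}}.
    x dominates y only if the two clusters overlap enough and x is at least as
    good on every objective and strictly better on at least one."""
    if similarity(x["objects"], y["objects"]) < theta_sim:
        return False                     # dominance only applies to similar clusters
    at_least_as_good = all(x["rewards"][q] >= y["rewards"][q] for q in objectives)
    strictly_better = any(x["rewards"][q] > y["rewards"][q] for q in objectives)
    return at_least_as_good and strictly_better
```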

11 Compound Fitness Functions
The goal-driven fitness function generator selects a subset Q' ⊆ Q of the objectives Q and creates a compound fitness function q_Q', relying on a penalty function approach [Baeck et al. 2000]:
CmpReward(x) = (Σ_{q ∈ Q'} Reward_q(x)) * Penalty(Q', x)
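The slide gives the multiplicative form but not the penalty function itself; in the sketch below the penalty is the fraction of objectives in Q' that the cluster satisfies, raised to a power gamma, which is only an illustrative stand-in for the cited penalty-function approach.

```python
def compound_reward(rewards: dict, q_prime: list, thresholds: dict,
                    gamma: float = 2.0) -> float:
    """CmpReward(x) = (sum of Reward_q(x) over q in Q') * Penalty(Q', x)."""
    total = sum(rewards[q] for q in q_prime)
    satisfied = sum(1 for q in q_prime if rewards[q] > thresholds[q])
    penalty = (satisfied / len(q_prime)) ** gamma        # assumed form of Penalty(Q', x)
    return total * penalty

# Hypothetical usage: the cluster meets the As_Mo threshold but not the As_B one.
print(compound_reward({"As_Mo": 20.0, "As_B": 4.0}, ["As_Mo", "As_B"],
                      {"As_Mo": 13.0, "As_B": 10.0}))    # 24.0 * (1/2)**2 = 6.0
```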

12 Updating the Cluster Repository
M := clusters in the repository
X := "new" clusters generated by a single run of the clustering algorithm
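One plausible realization of the repository update, reusing the dominates() helper and cluster representation sketched two slides earlier; it is a sketch of the idea (keep only candidates that satisfy at least two objectives and are non-dominated), not the paper's exact procedure.

```python
def update_repository(M: list, X: list, objectives, thresholds: dict,
                      theta_sim: float = 0.3) -> list:
    """Merge the clusters X of one run into the repository M."""
    M = list(M)
    for x in X:
        satisfied = {q for q in objectives if x["rewards"][q] > thresholds[q]}
        if len(satisfied) < 2:
            continue                                     # must satisfy >= 2 objectives
        if any(dominates(m, x, objectives, theta_sim) for m in M):
            continue                                     # dominated by a stored cluster
        # drop stored clusters that the newcomer dominates, then store it
        M = [m for m in M if not dominates(x, m, objectives, theta_sim)]
        M.append(x)
    return M
```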

13 5. Creating a Final Clustering
- Final clusterings are subsets of the clusters in the repository M.
- Inputs: the user provides her own objective function Reward_U, a reward threshold θ_U, and a cluster similarity threshold θ_rem that indicates how much cluster overlap she is willing to tolerate.
- Goal: find X ⊆ M that maximizes the total user reward Σ_{x ∈ X} Reward_U(x), subject to:
1. ∀x ∈ X ∀x' ∈ X (x ≠ x' → Similarity(x, x') < θ_rem)
2. ∀x ∈ X (Reward_U(x) > θ_U)
- Our paper introduces the MO-Dominance-guided Cluster Reduction algorithm (MO-DCR) to create the final clustering.

14 MO-Dominance-guided Cluster Reduction (MO-DCR) Algorithm
The algorithm loops over the following 2 steps until M is empty (see the sketch after this slide):
1. Include the dominant clusters D, which are the highest-reward clusters in M'.
2. Remove D and the clusters they dominate within the θ_rem-proximity from M.
Figure: dominance graphs over clusters A, B, C, D, E, F (dominant vs. dominated clusters), with similarities such as sim(A,B) = 0.8, 0.7, 0.6 and θ_rem = 0.5.
Remark: A → B means Reward_U(A) > Reward_U(B) and Similarity(A,B) > θ_rem.
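A minimal sketch of the two-step loop described above, reusing the similarity() helper from the dominance sketch; reward_u stands for the user's Reward_U, and the cluster representation is the same assumption as before.

```python
def mo_dcr(M: list, reward_u, theta_u: float, theta_rem: float) -> list:
    """Greedy dominance-guided reduction: keep the highest-reward remaining
    cluster, discard every cluster overlapping it by more than theta_rem,
    and repeat until nothing is left."""
    remaining = [x for x in M if reward_u(x) > theta_u]      # enforce Reward_U(x) > theta_U
    final = []
    while remaining:
        dominant = max(remaining, key=reward_u)              # highest-reward cluster in M'
        final.append(dominant)
        remaining = [x for x in remaining
                     if x is not dominant
                     and similarity(dominant["objects"], x["objects"]) < theta_rem]
    return final
```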

15 6. Related Work
- Multi-objective clustering based on evolutionary algorithms (MOEA): VIENNA [Handl and Knowles 2004], MOCLE [Faceli et al. 2007].
- In comparison, MOC relies on clustering algorithms with plug-in fitness functions and on multi-run clustering that explores different combinations of fitness objectives.
- Moreover, MOC relies on cluster repositories that store individual clusters rather than clusterings, and on summarization algorithms to create the final clustering.

16 7. Case Study: Co-location Mining
- Goal: find regional co-location patterns in Texas where high concentrations of Arsenic are co-located with other factors.
- Remark: each binary co-location is treated as a single objective.
- Dataset:
  - The TWDB has monitored water quality and collected data for 105,814 wells in Texas over the last 25 years.
  - We use a subset of the Arsenic_10_avg dataset: longitude and latitude, Arsenic (As), Molybdenum (Mo), Vanadium (V), Boron (B), Fluoride (F-), Chloride (Cl-), Sulfate (SO4^2-) and Total Dissolved Solids (TDS).

17 Objective Functions Used
Reward_B(x) = i(B,x)^η * |x|^β, i.e. the interestingness of pattern B in region x, scaled by the region size |x| (η and β are parameters).
Q = { q_{As↑,Mo↑}, q_{As↑,V↑}, q_{As↑,B↑}, q_{As↑,F-↑}, q_{As↑,Cl-↑}, q_{As↑,SO4^2-↑}, q_{As↑,TDS↑} }, with Q' ⊆ Q.

18 Steps of the Experiment
Pipeline: spatial dataset and fitness functions (Q) → MOC Steps 1-3 → regions (M) → MOC Step 4 (user queries) → regions M' (⊆ M) with associated co-location patterns.
Steps 1-3: run CLEVER with all pairs of the 7 different objective functions q_{As↑,Mo↑}, q_{As↑,V↑}, q_{As↑,B↑}, q_{As↑,F-↑}, q_{As↑,Cl-↑}, q_{As↑,SO4^2-↑}, q_{As↑,TDS↑}.
Step 4: query the clusters in the repository separately with each single-objective function, using the removal threshold θ_rem = 0.1 and the following user-defined reward thresholds (yielding 7 final clusterings): θ_{q{As↑,Mo↑}} = 13, θ_{q{As↑,V↑}} = 15, θ_{q{As↑,B↑}} = 10, θ_{q{As↑,F-↑}} = 25, θ_{q{As↑,Cl-↑}} = 7, θ_{q{As↑,SO4^2-↑}} = 6, θ_{q{As↑,TDS↑}} = 8.
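The same experimental parameters, restated as a plain configuration dictionary; the key names are only illustrative shorthand for the q_{As↑, ...} objectives, while the numbers are the ones listed on this slide.

```python
experiment = {
    "objectives": ["As_Mo", "As_V", "As_B", "As_F", "As_Cl", "As_SO4", "As_TDS"],
    "removal_threshold": 0.1,            # theta_rem used when querying the repository
    "reward_thresholds": {               # user-defined theta_q per single-objective query
        "As_Mo": 13, "As_V": 15, "As_B": 10, "As_F": 25,
        "As_Cl": 7, "As_SO4": 6, "As_TDS": 8,
    },
}
```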

19 Experimental Results
MOC is able to identify:
- Multi-objective clusters
- Alternative clusters, e.g. the Rank 1 regions of (a) and the Rank 2 regions of (b)
- Nested clusters, e.g. in (b) the Rank 3-5 regions are sub-regions of the Rank 1 region
- In particular, it can discriminate among companion elements such as Vanadium (Rank 3 region), or Chloride, Sulfate and Total Dissolved Solids (Rank 4 region).
Fig. 7.6: The top 5 regions and patterns with respect to two queries, query 1 = {As↑, Mo↑} and query 2 = {As↑, B↑}, shown in Figures (a) and (b), respectively.

20 8. Conclusion and Future Work
Building blocks for future multi-objective clustering systems were provided in this work; namely:
- A dominance relation for problems in which only a subset of the objectives can be satisfied was introduced.
- Clustering algorithms with plug-in fitness functions, and the capability to create compound fitness functions, are used extensively in our approach.
- Initially, a repository of potentially useful clusters is generated based on a large set of objectives. Individualized, specific clusterings are then generated based on user preferences.
- The approach is highly generic and incorporates specific domain needs in the form of single-objective fitness functions.
- The approach was evaluated in a case study and turned out to be more suitable than a single-objective clustering approach that was used for the same application in a previous paper [ACM-GIS 2008].

21 Challenges in Multi-objective Clustering (MOC)
1. Find clusters that are individually good with respect to multiple objectives in an automated fashion.
2. Provide search-engine-style capabilities to summarize the final clusterings obtained from multiple runs of clustering algorithms.

22 Traditional Clustering Algorithms & Fitness Functions
1. Traditional clustering algorithms consider only domain-independent and task-independent characteristics to form a solution.
2. Different domain tasks require different fitness functions.
Taxonomy (from the slide diagram): clustering algorithms are grouped by whether they have no fitness function (e.g. DBSCAN, hierarchical clustering), an implicit fitness function (e.g. CHAMELEON), a fixed fitness function (e.g. K-Means, PAM), or a plug-in fitness function (our work).

23 Code: MO-DCR Algorithm

24 Challenges: Cluster Summarization
Figure: original clusters vs. typical output vs. DCR output (X marks eliminated clusters).

25 Interestingness of a Pattern
The slide defines the interestingness i(B,o) of a pattern B (e.g. a pattern such as B = {C↑, D↑, E↓}, where ↑ / ↓ denote elevated / depressed values) for an object o, and the interestingness i(B,c) of a pattern B for a region c.
Remark: purity (i(B,o) > 0) measures the percentage of objects in region c that exhibit pattern B.
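The two formulas themselves appear only as images on the original slide and are not reproduced here. Purely as an illustration, the sketch below gives one plausible z-score-based formalization consistent with the purity remark; it is an assumption, not the authors' definition, and the "up"/"down" encoding of the arrows is likewise assumed.

```python
from statistics import mean, pstdev

def z_score(value: float, values: list) -> float:
    s = pstdev(values)
    return 0.0 if s == 0 else (value - mean(values)) / s

def i_object(pattern, obj: dict, population: list) -> float:
    """Assumed i(B, o): product of per-attribute z-score contributions; 'up'
    attributes must lie above the population mean and 'down' attributes below
    it, otherwise the object does not exhibit B and the value is 0."""
    prod = 1.0
    for attr, direction in pattern:                  # e.g. [("As", "up"), ("Mo", "up")]
        z = z_score(obj[attr], [p[attr] for p in population])
        z = z if direction == "up" else -z
        if z <= 0:
            return 0.0
        prod *= z
    return prod

def i_region(pattern, region: list, population: list) -> float:
    """Assumed i(B, c): average interestingness of the objects inside region c."""
    if not region:
        return 0.0
    return sum(i_object(pattern, o, population) for o in region) / len(region)
```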

26 Characteristics of the Top 5 Regions
Table 7.7 Top 5 Regions Ranked by Reward of the Query {As↑, Mo↑} (columns: Rank, Region Id, Size, Reward, Interestingness):
1981843,741.031.49
293162423.620.20
33016541.890.01
4220827.991.23
57412220.190.01
Table 7.8 Top 5 Regions Ranked by Reward of the Query {As↑, B↑} (columns: Rank, Region Id, Size, Reward, Interestingness):
1271471,828.21.03
2122179350.950.15
3251151.091.40
4138540.023.58
5178610.880.74

27 Representative-based Clustering
Figure: four representatives (1-4) and their induced clusters in the Attribute1/Attribute2 plane.
Objective: find a set of objects O_R such that the clustering X obtained by using the objects in O_R as representatives minimizes q(X).
Properties: cluster shapes are convex polygons.
Popular algorithms: K-means, K-medoids.
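To make the objective concrete, a minimal sketch of the assignment step that turns a representative set O_R into a clustering X; plain Euclidean distance on coordinate pairs is assumed, and q(X) would then be evaluated on the resulting clusters.

```python
from math import dist

def assign_to_representatives(points: list, representatives: list) -> list:
    """Assign each (x, y) point to its closest representative; the induced
    clusters are the Voronoi cells of the representatives, hence convex."""
    clusters = [[] for _ in representatives]
    for p in points:
        nearest = min(range(len(representatives)),
                      key=lambda i: dist(p, representatives[i]))
        clusters[nearest].append(p)
    return clusters
```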

28 CLEVER (ClustEring using representatiVEs and Randomized hill climbing)
- A representative-based (sometimes called prototype-based) clustering algorithm.
- Uses a variable number of clusters and larger neighborhood sizes to battle premature termination, and randomized hill climbing with adaptive sampling to reduce complexity.
- Searches for the optimal number of clusters.
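This is not the authors' CLEVER implementation, only a generic randomized-hill-climbing skeleton over representative sets in the spirit described above; it reuses assign_to_representatives() from the previous slide's sketch, and the neighbourhood moves and default parameters are illustrative.

```python
import random

def randomized_hill_climbing(points: list, fitness, k0: int = 10, p: int = 20,
                             iterations: int = 50, seed: int = 0):
    """The current solution is a set of representatives; neighbours add, delete
    or replace one representative; p sampled neighbours are evaluated per
    iteration and the search stops when none improves the fitness."""
    rng = random.Random(seed)
    current = rng.sample(points, min(k0, len(points)))
    best = fitness(assign_to_representatives(points, current))
    for _ in range(iterations):
        improved = False
        for _ in range(p):                           # sampled neighbourhood
            cand = list(current)
            move = rng.choice(["add", "delete", "replace"])
            if move == "add" or len(cand) <= 2:
                cand.append(rng.choice(points))
            elif move == "delete":
                cand.pop(rng.randrange(len(cand)))
            else:
                cand[rng.randrange(len(cand))] = rng.choice(points)
            score = fitness(assign_to_representatives(points, cand))
            if score > best:                         # keep the best improving neighbour
                current, best, improved = cand, score, True
        if not improved:
            break
    return current, best
```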

