
1 A Domain-Driven Framework for Clustering with Plug-in Fitness Functions and its Application to Spatial Data Mining. Christoph F. Eick, Department of Computer Science, University of Houston (UH Data Mining & Machine Learning Group, CS@UH). CACS, Lafayette (LA), May 1, 2009.

2 Talk Outline
1. Domain-driven Data Mining (D³M, DDDM)
2. A Framework for Clustering with Plug-in Fitness Functions
3. MOSAIC: a Clustering Algorithm that Supports Plug-in Fitness Functions
4. Popular Fitness Functions
5. Case Studies: Applications to Spatial Data Mining
   a. Co-location Mining
   b. Multi-objective Clustering
   c. Change Analysis in Spatial Data
6. Summary and Conclusion

3 Other Contributors to the Work Presented Today
Current PhD students: Oner-Ulvi Celepcikay, Chun-Shen Chen, Rachsuda Jiamthapthaksin, Vadeerat Rinsurongkawong
Former PhD student: Wei Ding (Assistant Professor, UMASS Boston)
Former Master's students: Rachana Parmar, Dan Jiang, Seungchan Lee
Domain experts: Jean-Philippe Nicot (Bureau of Economic Geology, UT Austin), Tomasz F. Stepinski (Lunar and Planetary Institute, Houston), Michael Twa (College of Optometry, University of Houston)

4 DDDM: What Is It About?
- Differences concerning the objectives of data mining have created a gap between academia and applications of data mining in business and science.
- Traditional data mining targets the production of generic, domain-independent algorithms and tools; as a result, data mining algorithms have little capability to adapt to external, domain-specific constraints and evaluation measures.
- To overcome this mismatch, current research has recognized the need to incorporate domain intelligence into data mining algorithms. Domain intelligence requires:
  - the involvement of domain knowledge and experts,
  - the consideration of domain constraints and domain-specific evaluation measures,
  - the discovery of in-depth patterns based on a deep domain model.
- On top of the data-driven framework, DDDM aims to develop novel methodologies and techniques for integrating domain knowledge as well as actionability measures into the KDD process, and to actively involve humans.

5 The Vision of DDDM
"DDDM… can assist in a paradigm shift from 'data-driven hidden pattern mining' to 'domain-driven actionable knowledge discovery', and provides support for KDD to be translated to the real business situations as widely expected." [CZ07]

6 IEEE TKDE Special Issue

7 2. Clustering with Plug-in Fitness Functions
Motivation:
- Finding subgroups in geo-referenced datasets has many applications.
- However, in many applications the subgroups to be searched for do not share the characteristics considered by traditional clustering algorithms, such as cluster compactness and separation.
- Domain knowledge frequently imposes additional requirements concerning what constitutes a "good" subgroup.
- Consequently, it is desirable to develop clustering algorithms that provide plug-in fitness functions, which allow domain experts to express the desirable characteristics of the subgroups they are looking for.
- Only very few clustering algorithms published in the literature provide plug-in fitness functions; consequently, existing clustering paradigms have to be modified and extended by our research to provide such capabilities.

8 Clustering with Plug-in Fitness Functions
A taxonomy of clustering algorithms:
- No fitness function: DBSCAN, hierarchical clustering
- Implicit fitness function: K-means
- Fixed fitness function: PAM, CHAMELEON
- Plug-in fitness function: MOSAIC

9 Current Suite of Spatial Clustering Algorithms
- Representative-based: SCEC [1], SPAM [3], CLEVER [4]
- Grid-based: SCMRG [1]
- Agglomerative: MOSAIC [2]
- Density-based: SCDE [4], DCONTOUR [8] (not strictly plug-in, but some fitness functions can be simulated)
Remark: all algorithms partition a dataset into clusters by maximizing a reward-based, plug-in fitness function.

10 Spatial Clustering Algorithms
- Datasets are assumed to have the structure (<spatial attributes>; <non-spatial attributes>), e.g. (longitude, latitude; <arsenic concentration>).
- Clusters are found in the subspace of the spatial attributes, called regions in the following.
- The non-spatial attributes are used by the fitness function, but neither in distance computations nor by the clustering algorithm itself.
- Clustering algorithms are assumed to maximize reward-based fitness functions of the form q(X) = Σ_{c∈X} interestingness(c) · size(c)^β, where the parameter β > 1 determines the premium put on cluster size (larger values of β yield fewer, larger clusters).
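In Python, this contract is a one-liner; a minimal sketch follows, in which the high_arsenic plug-in, its "arsenic" attribute, and the threshold 10.0 are hypothetical illustrations rather than part of the framework:

```python
def fitness(clustering, interestingness, beta=1.5):
    # q(X) = sum over clusters c of interestingness(c) * size(c)**beta;
    # 'interestingness' is the domain expert's plug-in, and beta > 1
    # puts a premium on cluster size.
    return sum(interestingness(c) * len(c) ** beta for c in clustering)

# Hypothetical plug-in: reward regions whose mean arsenic level is high.
def high_arsenic(cluster):
    values = [obj["arsenic"] for obj in cluster]  # attribute name assumed
    return max(0.0, sum(values) / len(values) - 10.0)  # threshold assumed

# Usage: fitness(list_of_clusters, high_arsenic, beta=1.5)
```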

11 3. MOSAIC: a Clustering Algorithm that Supports Plug-in Fitness Functions
MOSAIC [2] supports plug-in fitness functions and provides a generic framework that integrates representative-based clustering, agglomerative clustering, and proximity graphs; it approximates arbitrarily shaped clusters using unions of small convex polygons. (Fig. 6 illustrates MOSAIC's approach: (a) input, (b) output.)

12 3.1 Representative-based Clustering
Objective: find a set of objects O_R such that the clustering X obtained by using the objects in O_R as representatives minimizes q(X).
Properties:
- uses 1NN queries to assign objects to a cluster,
- cluster shapes are limited to convex polygons.
Popular algorithms: K-means, K-medoids, CLEVER, SPAM.
(Figure: a 2D attribute space partitioned by four representatives.)
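As a sketch of the 1NN assignment step, assuming objects and representatives are coordinate tuples:

```python
import math

def assign_to_representatives(objects, reps):
    # 1NN query: each object joins the cluster of its closest
    # representative; the induced cluster shapes are convex
    # (Voronoi) polygons in the spatial attribute space.
    clusters = [[] for _ in reps]
    for o in objects:
        j = min(range(len(reps)), key=lambda i: math.dist(o, reps[i]))
        clusters[j].append(o)
    return clusters
```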

13 3.2 MOSAIC and Agglomerative Clustering
Traditional agglomerative clustering algorithms: the decision which clusters to merge next is based solely on distances between clusters; in particular, the two clusters that are closest to each other with respect to a distance measure (single link, group average, ...) are merged. The use of some distance measures might lead to non-contiguous clusters. Example: if group average is used, clusters C3 and C4 would be merged next.

14 MOSAIC and Agglomerative Clustering
Advantages of MOSAIC over traditional agglomerative clustering:
- plug-in fitness function,
- conducts a wider search: it considers all neighboring clusters and merges the pair of clusters that enhances the fitness the most,
- clusters are always contiguous,
- the expensive algorithm is only run for 20 to 1000 iterations,
- highly generic algorithm.

15 3.3 Proximity Graphs
How can neighbouring clusters be identified for representative-based clustering algorithms? Proximity graphs provide various definitions of "neighbour":
- NNG = nearest-neighbour graph
- MST = minimum spanning tree
- RNG = relative neighbourhood graph
- GG = Gabriel graph
- DT = Delaunay triangulation (neighbours of a 1NN classifier)

16 Proximity Graphs: Delaunay
- The Delaunay triangulation is the dual of the Voronoi diagram.
- Three points are each other's neighbours if their circumscribed sphere contains no other points.
- Complete: captures all neighbouring clusters.
- Time-consuming to compute, and impractical to compute in high dimensions.

17 Proximity Graphs: Gabriel
- The Gabriel graph is a subgraph of the Delaunay triangulation (so some decision boundaries might be missed).
- Two points are neighbours only if their diametral sphere of influence (the sphere whose diameter is the segment between them) contains no other points.
- Can be computed more efficiently: O(k³).
- Approximate algorithms with lower complexity exist.
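The O(k³) construction follows directly from this definition; a small sketch, using the fact that a point r lies inside the diametral sphere of p and q iff d(p,r)² + d(q,r)² < d(p,q)²:

```python
import itertools

def gabriel_graph(points):
    # Naive O(k^3) Gabriel graph over a list of coordinate tuples:
    # p and q are neighbours iff the sphere whose diameter is the
    # segment pq contains no other point.
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    edges = []
    for p, q in itertools.combinations(range(len(points)), 2):
        dpq = d2(points[p], points[q])
        if all(d2(points[p], points[r]) + d2(points[q], points[r]) >= dpq
               for r in range(len(points)) if r not in (p, q)):
            edges.append((p, q))
    return edges
```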

18 MOSAIC's Input
Fig. 10: Gabriel graph for the clusters generated by a representative-based clustering algorithm.

19 3.4 MOSAIC Pseudo-Code
1. Run a representative-based clustering algorithm to create a large number of clusters.
2. Read the representatives of the obtained clusters.
3. Create a merge-candidate relation using proximity graphs.
4. WHILE there are merge candidates (Ci, Cj) left:
   - merge the pair of merge candidates (Ci, Cj) that enhances the fitness function q the most into a new cluster C′;
   - update the merge candidates: ∀C: Merge-Candidate(C′, C) ⇔ Merge-Candidate(Ci, C) ∨ Merge-Candidate(Cj, C).
5. RETURN the best clustering X found.
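A compact Python sketch of this loop, assuming clusters are lists of objects keyed by integer ids and q is the plug-in fitness; it illustrates the control flow only and is not the group's implementation:

```python
def mosaic(clusters, merge_candidates, q):
    # clusters: dict id -> list of objects (output of a representative-
    #   based algorithm); merge_candidates: set of frozenset({i, j})
    #   pairs taken from a proximity graph; q: plug-in fitness to maximize.
    best, best_score = dict(clusters), q(clusters.values())
    next_id = max(clusters) + 1
    while merge_candidates:
        def score(pair):
            # Fitness of the clustering obtained by merging this pair.
            trial = [c for k, c in clusters.items() if k not in pair]
            i, j = tuple(pair)
            trial.append(clusters[i] + clusters[j])
            return q(trial)
        pair = max(merge_candidates, key=score)   # greedy: best merge
        i, j = tuple(pair)
        clusters[next_id] = clusters.pop(i) + clusters.pop(j)
        # A neighbour of Ci or Cj becomes a neighbour of the new cluster.
        merge_candidates = {cand if not (cand & pair)
                            else frozenset((cand - pair) | {next_id})
                            for cand in merge_candidates if cand != pair}
        if q(clusters.values()) > best_score:
            best, best_score = dict(clusters), q(clusters.values())
        next_id += 1
    return best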

20 Complexity of MOSAIC
Let n be the number of objects in the dataset and k the number of clusters generated by the representative-based algorithm. The complexity of MOSAIC is O(k³ + k²·O(q(X))).
Remarks:
- The formula above assumes that the fitness is computed from scratch whenever a new clustering is obtained.
- Lower complexities can be obtained by incrementally reusing the results of previous fitness computations.
- Our current implementation assumes that only additive fitness functions are used.
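For additive fitness functions the reuse is straightforward: a merge changes only two terms of the sum q(X) = Σ reward(c), so its effect on q reduces to a local delta (a sketch):

```python
def merge_gain(reward, ci, cj):
    # For additive q(X) = sum(reward(c) for c in X), merging Ci and Cj
    # changes exactly two terms, so the change in q is:
    return reward(ci + cj) - reward(ci) - reward(cj)
```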

21 4. Interestingness Measures for Spatial Clustering with Plug-in Fitness Functions
- Clustering algorithms maximize fitness functions of the structure q(X) = Σ_{c∈X} i(c) · size(c)^β with β > 1.
- Various interestingness functions i have been introduced in our preliminary work:
  - for supervised clustering [1],
  - for maximizing the variance of a continuous variable [5],
  - for regional association rule scoping [9],
  - for co-location patterns involving continuous variables [4],
  - ...
- Some examples of fitness functions are presented in the case studies.

22 5. Case Studies
1. Co-location patterns involving arsenic pollution
2. Multi-objective clustering
3. Change analysis involving earthquake patterns

23 5.1 Co-location Patterns Involving Arsenic Pollution

24 Regional Co-location Mining
Goal: to discover regional co-location patterns involving continuous variables, in which the continuous variables take values from the wings of their statistical distribution.
Dataset: (longitude, latitude; <continuous variables>)

25 Summary of the Co-location Approach
- Pattern interestingness in a region is evaluated using products of (cut-off) z-scores; in general, products of z-scores measure correlation.
- Additionally, purity is considered; its influence is controlled by a parameter θ.
- Finally, the parameter β determines how much premium is put on the size of a region when computing region rewards.
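The exact measure is defined in [4]; the sketch below is only one plausible instantiation, assuming a fixed z-score cut-off and defining purity as the fraction of objects with a positive product, to make the three ingredients (z-score products, purity with exponent θ, per-object averaging) concrete:

```python
from statistics import mean, stdev

def interestingness(region, attrs, data, theta=1.0, cutoff=3.0):
    # region: list of object indices; attrs: pattern attributes;
    # data: dict attr -> list of values over the whole dataset.
    z = {}
    for a in attrs:
        mu, sd = mean(data[a]), stdev(data[a])      # global statistics
        z[a] = [(v - mu) / sd for v in data[a]]
    prods = []
    for o in region:
        p = 1.0
        for a in attrs:
            p *= min(z[a][o], cutoff)               # cut off extreme z-scores
        prods.append(p)
    positive = [p for p in prods if p > 0]
    if not positive:
        return 0.0
    purity = len(positive) / len(prods)             # purity definition assumed
    return (sum(positive) / len(positive)) * purity ** theta
```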

26 Domain-Driven Clustering for Co-location Mining
1. Define the problem.
2. Create/select a fitness function.
3. Select a clustering algorithm.
4. Select the parameters of the clustering algorithm, the parameters of the fitness function, and the constraints with respect to which patterns are considered.
5. Run the clustering algorithm to discover interesting regions and their associated patterns.
6. Analyze the results.
(A domain expert, here a hydrologist, is involved throughout this loop.)

27 Example: Two Sets of Results Using Medium/High Rewards for Purity
All experiments: (As↑ ∈ B or As↓ ∈ B) and |B| < 5. Experiment 2: β = 1.5, θ = 1.0; Experiment 4: β = 1.5, θ = 5.0.
Table 5. Top 5 regions ranked by reward (as per formula 8).
Exp. | Region | Size | Reward | Max.-valued pattern in the region | Purity | Avg. product for max.-valued pattern
2 | 1 | 181 | 61684.5323 | As↑ Mo↑ V↑ F⁻↑ | 0.49 | 52.1019
2 | 2 | 80 | 24040.6315 | As↑ B↑ Cl⁻↑ TDS↑ | 0.48 | 70.7322
2 | 3 | 467 | 1884.8856 | As↑ TDS↑ | 0.91 | 0.2047
2 | 4 | 23 | 701.7072 | As↑ Cl⁻↑ SO₄²⁻↑ TDS↑ | 0.78 | 8.1287
2 | 5 | 189 | 587.9790 | As↑ F⁻↑ | 0.78 | 0.2909
4 | 1 | 7 | 11669.7965 | As↑ B↑ Cl⁻↑ TDS↑ | 1.0 | 630.1097
4 | 2 | 117 | 10407.3250 | As↑ V↑ F⁻↑ | 0.91 | 12.8550
4 | 3 | 4 | 2203.2526 | As↑ V↑ SO₄²⁻↑ TDS↑ | 1.0 | 275.4066
4 | 4 | 2 | 1531.4887 | As↑ Mo↑ V↑ B↑ | 1.0 | 541.4630
4 | 5 | 530 | 1426.9140 | As↑ TDS↑ | 0.90 | 0.1939

28 Challenges of Regional Co-location Mining
- A "needle in a haystack" kind of problem: we search for both interesting places and interesting patterns.
- Our current interestingness measure is not anti-monotone: a superset of a co-location set might be more interesting.
- Observation: different fitness-function parameter settings lead to quite different results, many of which are valuable to domain experts; it is therefore desirable to combine the results of many runs.
- "Clustering of the future": run clustering algorithms multiple times with multiple fitness functions and summarize the results → multi-run/multi-objective clustering.

29 5.2 Multi-Run Clustering
- Find clusters that are good with respect to multiple objectives in an automated fashion; each objective is captured in a reward-based fitness function.
- To achieve this goal, we run clustering algorithms multiple times with respect to compound fitness functions that capture multiple objectives, and store non-dominated clusters in a cluster repository.
- Summarization tools are provided that create final clusterings with respect to a user's perspective.

30 An Architecture for Multi-Objective Clustering
Components: a goal-driven fitness function generator, a clustering algorithm, a storage unit (cluster list M), and a cluster summarization unit, all operating on a spatial dataset. Given: a set of objectives Q that need to be satisfied; moreover, Q′ ⊆ Q.
Steps in multi-run clustering:
S1: Generate a compound fitness function.
S2: Run a clustering algorithm.
S3: Update the cluster list M.
S4: Summarize the discovered clusters M′.
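Two of these ingredients can be sketched directly; the weighted sum below is just one plausible way to build a compound fitness function (the generator in the talk may use a different scheme), and the repository update assumes each cluster is scored by a tuple of per-objective rewards:

```python
def compound_fitness(objectives, weights):
    # Combine several reward-based objectives into one plug-in fitness
    # function via a weighted sum (one simple scheme among many).
    def q(clustering):
        return sum(w * f(clustering) for f, w in zip(objectives, weights))
    return q

def update_repository(repo, cluster, scores):
    # Keep only non-dominated clusters; scores(c) returns a tuple of
    # per-objective rewards for cluster c.
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and a != b
    s = scores(cluster)
    if any(dominates(scores(r), s) for r in repo):
        return repo                      # new cluster is dominated: discard
    repo = [r for r in repo if not dominates(s, scores(r))]
    repo.append(cluster)
    return repo
```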

31 Example: Multi-Objective RCLM
Finding co-location patterns with respect to arsenic and a single other chemical is a single objective; we are interested in finding co-location regions that satisfy several of those objectives, i.e., regions where high arsenic concentrations are co-located with high concentrations of many other chemicals.
Figure a: the top 5 regions ordered by reward for the user-defined query {As↑, Mo↑}:
Rank 1: As↑ Mo↑ V↑ B↑ F⁻↑ Cl⁻↑ SO₄²⁻↑ TDS↑
Rank 2: As↑ Mo↑ V↑ B↑ F⁻↑ Cl⁻↑ SO₄²⁻↑ TDS↑
Rank 3: As↑ Mo↑ V↑ F⁻↑ Cl⁻↑ SO₄²⁻↑ TDS↑
Rank 4: As↑ Mo↑ Cl⁻↑ SO₄²⁻↑ TDS↑
Rank 5: As↑ Mo↑ B↑ Cl⁻↑ SO₄²⁻↑ TDS↑

32 5.3 Change Analysis in Spatial Data
Question: how do interesting regions, in which deep earthquakes are in close proximity to shallow earthquakes, change over time? Cluster interestingness measure: variance of earthquake depth. Red: clusters in O_old; blue: clusters in O_new.

33 Novelty Regions in O_new
Novelty change predicate: Novelty(r) ⇔ |r − (r′₁ ∪ … ∪ r′ₖ)| > 0, with r ∈ X_new and X_old = {r′₁, …, r′ₖ}.
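Approximating regions as sets of spatial objects (the predicate itself is defined on spatial extents), the check reduces to a set difference; a sketch:

```python
def novelty(region, old_regions):
    # Novelty(r) <=> |r - (r'_1 ∪ ... ∪ r'_k)| > 0, with regions
    # approximated here as sets of objects rather than polygons.
    covered = set().union(*old_regions)
    return len(set(region) - covered) > 0
```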

34 Domain-Driven Change Analysis in Spatial Data
1. Determine two datasets O_old and O_new for which change patterns are to be extracted.
2. Cluster both datasets with respect to an interestingness perspective to obtain clusters for each dataset.
3. Determine the relevant change predicates and select their thresholds.
4. Instantiate the change predicates based on the results of steps 2 and 3.
5. Summarize the emergent patterns.
6. Analyze the emergent patterns.
(A domain expert, here a geologist, is involved throughout this loop.)

35 6. Conclusion
- A generic, domain-driven clustering framework has been introduced.
- It incorporates domain intelligence into domain-specific plug-in fitness functions that are maximized by clustering algorithms.
- The clustering algorithms are independent of the fitness function employed. Several clustering algorithms, including prototype-based, agglomerative, and grid-based ones, have been designed and implemented in our past research.
- We conducted several case studies that illustrate the capability of the proposed domain-driven spatial clustering framework to solve challenging problems in planetary science, geology, environmental science, and optometry.

36 UH-DMML References
1. C. F. Eick, B. Vaezian, D. Jiang, and J. Wang, Discovery of Interesting Regions in Spatial Datasets Using Supervised Clustering, in Proc. 10th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD), Berlin, Germany, September 2006.
2. C. Choo, R. Jiamthapthaksin, C.-S. Chen, O. Celepcikay, C. Giusti, and C. F. Eick, MOSAIC: A Proximity Graph Approach to Agglomerative Clustering, in Proc. 9th International Conference on Data Warehousing and Knowledge Discovery (DaWaK), Regensburg, Germany, September 2007.
3. W. Ding, R. Jiamthapthaksin, R. Parmar, D. Jiang, T. Stepinski, and C. F. Eick, Towards Region Discovery in Spatial Datasets, in Proc. Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), Osaka, Japan, May 2008.
4. C. F. Eick, R. Parmar, W. Ding, T. Stepinski, and J.-P. Nicot, Finding Regional Co-location Patterns for Sets of Continuous Variables in Spatial Datasets, in Proc. 16th ACM SIGSPATIAL International Conference on Advances in GIS (ACM-GIS), Irvine, California, November 2008.
5. C.-S. Chen, V. Rinsurongkawong, C. F. Eick, and M. D. Twa, Change Analysis in Spatial Data by Combining Contouring Algorithms with Supervised Density Functions, in Proc. Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), Bangkok, Thailand, April 2009.
6. A. Bagherjeiran, O. U. Celepcikay, R. Jiamthapthaksin, C.-S. Chen, V. Rinsurongkawong, S. Lee, J. Thomas, and C. F. Eick, Cougar**2: An Open Source Machine Learning and Data Mining Development Framework, in Proc. Open Source Data Mining Workshop (OSDM), Bangkok, Thailand, April 2009.
7. C. F. Eick, O. U. Celepcikay, and R. Jiamthapthaksin, A Unifying Domain-driven Framework for Clustering with Plug-in Fitness Functions and Region Discovery, submitted to IEEE TKDE.
8. R. Jiamthapthaksin, C. F. Eick, and R. Vilalta, A Framework for Multi-Objective Clustering and its Application to Co-Location Mining, submitted to the Fifth International Conference on Advanced Data Mining and Applications (ADMA), Beijing, China, August 2009.
9. W. Ding, C. F. Eick, X. Yuan, J. Wang, and J.-P. Nicot, A Framework for Regional Association Rule Mining and Scoping in Spatial Datasets, under review for publication in Geoinformatica.

37 Other References
1. [CZ07] L. Cao and C. Zhang, "The Evolution of KDD: Towards Domain-Driven Data Mining," International Journal of Pattern Recognition and Artificial Intelligence, vol. 21, no. 4, pp. 677-692, World Scientific Publishing Company, 2007.
2. O. Thonnard and M. Dacier, Actionable Knowledge Discovery for Threats Intelligence Support Using a Multi-Dimensional Data Mining Methodology, DDDM08.

38 Region Discovery Framework
Objective: develop and implement an integrated framework to automatically discover interesting regional patterns in spatial datasets. Region discovery is treated as a clustering problem.

39 Region Discovery Framework Continued
The clustering algorithms we currently investigate solve the following problem.
Given: a dataset O with a schema R; a distance function d defined on the instances of R; a fitness function q(X) that evaluates a clustering X = {c₁, …, cₖ} as follows: q(X) = Σ_{c∈X} reward(c) = Σ_{c∈X} interestingness(c) · size(c)^β, with β > 1.
Objective: find c₁, …, cₖ ⊆ O such that:
1. cᵢ ∩ cⱼ = ∅ if i ≠ j;
2. X = {c₁, …, cₖ} maximizes q(X);
3. all clusters cᵢ ∈ X are contiguous in the spatial subspace;
4. c₁ ∪ … ∪ cₖ ⊆ O;
5. c₁, …, cₖ are usually ranked based on the reward each cluster receives, and low-reward clusters are frequently not reported.

40 (Figure from [CZ07].)

41 The Arsenic Water Pollution Problem
- Arsenic pollution is a serious problem in the Texas water supply.
- It is hard to explain what causes arsenic pollution to occur.
- Several datasets were created using the Ground Water Database (GWDB) of the Texas Water Development Board (TWDB), which tests water wells regularly; one of them was used in the experimental evaluation in the paper:
  - all wells have non-null samples for arsenic,
  - multiple sample values are aggregated using avg/max functions,
  - other chemicals may have null values,
  - format: (longitude, latitude; <chemical concentrations>).

