Han, Kamber, Eick: Object Similarity & Clustering for COSC 6340


KDD Lectures
- March 4+9: Introduction to KDD
- March 11: Association Rule Mining
- March 23: Similarity Assessment
- March 25: Clustering and UHDM 2
- March 30: Data Warehouses and OLAP

Clustering and Similarity Assessment
©Jiawei Han and Micheline Kamber, with major additions and modifications by Ch. Eick
Organization for COSC 6340:
1. What is Clustering?
2. Object Similarity Assessment
3. K-means/medoid Clustering
4. Grid-based Clustering
5. Work at UH

What is Cluster Analysis?
- Cluster: a collection of data objects.
- Cluster analysis: grouping a set of data objects into clusters such that the objects are similar to one another within the same cluster and dissimilar to the objects in other clusters.
- The quality of a clustering result depends on both the similarity measure used and the clustering method employed.
- Clustering is unsupervised classification: no predefined classes and no training examples.

Motivation: Why Clustering?
Problem: identify (a small number of) groups of similar objects in a given (large) set of objects.
Goals:
- Find representatives for homogeneous groups -> Data Compression
- Find "natural" clusters and describe their properties -> "natural" Data Types
- Find suitable and useful groupings -> "useful" Data Classes
- Find unusual data objects -> Outlier Detection

Examples of Clustering Applications
- Plant/animal classification
- Book ordering
- Clothing sizes
- Fraud detection (finding outliers)

Requirements of Clustering in Data Mining
- Scalability
- Ability to deal with different types of attributes
- Discovery of clusters with arbitrary shape
- Minimal requirements for domain knowledge to determine input parameters
- Ability to deal with noise and outliers
- Insensitivity to the order of input records
- High dimensionality
- Incorporation of user-specified constraints
- Interpretability and usability

Data Structures for Clustering
- Data matrix: n objects x p attributes
- (Dis)similarity matrix: n x n

Quality Evaluation of Clusters
- Dissimilarity/similarity metric: similarity is expressed in terms of a normalized distance function d, which is typically metric; typically theta(o_i, o_j) = 1 - d(o_i, o_j).
- There is a separate "quality" function that measures the "goodness" of a cluster.
- The definitions of similarity functions are usually very different for interval-scaled, boolean, categorical, ordinal, and ratio-scaled variables.
- Weights should be associated with different variables based on applications and data semantics.
- It is hard to define "similar enough" or "good enough"; the answer is typically highly subjective.

Challenges in Obtaining Object Similarity Measures
Many types of variables:
- Interval-scaled variables
- Binary variables and nominal variables
- Ordinal variables
- Ratio-scaled variables
Objects are characterized by variables belonging to different types (mixtures of variables).

Case Study: Patient Similarity
The following relation is given (with tuples): Patient(ssn, weight, height, cancer-sev, eye-color, age)
Attribute domains:
- ssn: 9 digits
- weight: between 30 and 650; m_weight = 158, s_weight = 24.20
- height: between 0.30 and 2.20 meters; m_height = 1.52, s_height = 19.2
- cancer-sev: 4 = serious, 3 = quite serious, 2 = medium, 1 = minor
- eye-color: {brown, blue, green, grey}
- age: between 3 and 100; m_age = 45, s_age = 13.2
Task: define patient similarity.

Generating a Global Similarity Measure from Single-Variable Similarity Measures
Assumption: a database may contain up to six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval, and ratio.
1. Standardize each variable, associate a similarity measure theta_i with the standardized i-th variable, and determine the weight w_i of the i-th variable.
2. Create the following global (dis)similarity measure theta as a weighted average of the per-variable measures:
   theta(o1, o2) = ( sum_i w_i * theta_i(o1, o2) ) / ( sum_i w_i )

A Methodology to Obtain a Similarity Matrix
1. Understand the variables.
2. Remove (non-relevant and redundant) variables.
3. (Standardize and) normalize variables (typically using z-scores, or by transforming variable values into numbers in [0, 1]).
4. Associate a (dis)similarity measure d_f / theta_f with each variable.
5. Associate a weight (measuring its importance) with each variable.
6. Compute the (dis)similarity matrix.
7. Apply a similarity-based data mining technique (e.g. clustering, nearest neighbor, multi-dimensional scaling, ...).

Interval-scaled Variables
Standardize the data using z-scores:
- Calculate the mean absolute deviation
  s_f = (1/n)(|x_1f - m_f| + |x_2f - m_f| + ... + |x_nf - m_f|),
  where m_f = (1/n)(x_1f + x_2f + ... + x_nf).
- Calculate the standardized measurement (z-score)
  z_if = (x_if - m_f) / s_f
Using the mean absolute deviation is more robust than using the standard deviation.
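A minimal Python sketch (not part of the original slides) of this standardization; the function name and the sample values are illustrative:

import numpy as np

def standardize_mad(x):
    """Standardize one interval-scaled variable: z_if = (x_if - m_f) / s_f,
    where s_f is the mean absolute deviation (more outlier-robust than the
    standard deviation)."""
    x = np.asarray(x, dtype=float)
    m_f = x.mean()                    # mean of the f-th variable
    s_f = np.abs(x - m_f).mean()      # mean absolute deviation
    return (x - m_f) / s_f

# Illustrative weights; the last value is an outlier and inflates the
# standard deviation much more than it inflates s_f.
print(standardize_mad([55, 70, 82, 95, 310]))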

Normalization in [0, 1]
Problem: if non-normalized variables are used, the maximum distance between two values can be greater than 1.
Solution: normalize interval-scaled variables using
  z_if = (x_if - min_f) / ((max_f - min_f) * gamma)
where min_f denotes the minimum value and max_f the maximum value of the f-th attribute in the data set, and gamma is a constant that is chosen depending on the similarity measure (e.g. if Manhattan distance is used, gamma is chosen to be 1).

Other Normalizations
Goal: limit the maximum distance to 1.
- Start with a distance measure d_f(x, y).
- Determine the maximum distance dmax_f that can occur for two values of the f-th attribute (e.g. dmax_f = max_f - min_f).
- Define theta_f(x, y) = 1 - (d_f(x, y) / dmax_f).
Advantage: negative similarities cannot occur.
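A small Python sketch (illustrative, not from the slides) of min-max normalization with gamma = 1 and of the bounded similarity theta_f(x, y) = 1 - d_f(x, y)/dmax_f; the example values are made up:

def minmax_normalize(x, min_f, max_f):
    # z = (x - min_f) / (max_f - min_f), i.e. the gamma = 1 case
    return (x - min_f) / (max_f - min_f)

def bounded_similarity(d_xy, dmax_f):
    # theta_f(x, y) = 1 - d_f(x, y) / dmax_f keeps similarities in [0, 1]
    return 1.0 - d_xy / dmax_f

# Illustrative weights from the patient case study, domain [30, 650]
w1 = minmax_normalize(70, 30, 650)
w2 = minmax_normalize(120, 30, 650)
print(bounded_similarity(abs(w1 - w2), dmax_f=1.0))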

Similarity Between Objects
Distances are normally used to measure the similarity or dissimilarity between two data objects. Some popular ones include:
Minkowski distance:
  d(i, j) = (|x_i1 - x_j1|^q + |x_i2 - x_j2|^q + ... + |x_ip - x_jp|^q)^(1/q)
where i = (x_i1, x_i2, ..., x_ip) and j = (x_j1, x_j2, ..., x_jp) are two p-dimensional data objects, and q is a positive integer.
If q = 1, d is the Manhattan distance:
  d(i, j) = |x_i1 - x_j1| + |x_i2 - x_j2| + ... + |x_ip - x_jp|

Similarity Between Objects (Cont.)
If q = 2, d is the Euclidean distance:
  d(i, j) = sqrt(|x_i1 - x_j1|^2 + |x_i2 - x_j2|^2 + ... + |x_ip - x_jp|^2)
Properties:
- d(i, j) >= 0
- d(i, i) = 0
- d(i, j) = d(j, i)
- d(i, j) <= d(i, k) + d(k, j)
One can also use weighted distance, parametric Pearson product-moment correlation, or other dissimilarity measures.
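A short Python sketch (illustrative) of the Minkowski distance; q = 1 and q = 2 reproduce the Manhattan and Euclidean cases above:

import numpy as np

def minkowski(i, j, q=2):
    """d(i, j) = (sum_f |x_if - x_jf|^q)^(1/q) for two p-dimensional objects."""
    i, j = np.asarray(i, dtype=float), np.asarray(j, dtype=float)
    return float((np.abs(i - j) ** q).sum() ** (1.0 / q))

o1, o2 = [1.0, 0.5, 3.0], [2.0, 0.0, 1.0]
print(minkowski(o1, o2, q=1))   # Manhattan distance
print(minkowski(o1, o2, q=2))   # Euclidean distance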

Similarity with Respect to a Set of Binary Variables
A contingency table for binary data (object i vs. object j):

            object j
              1     0    sum
object i  1   a     b    a+b
          0   c     d    c+d
        sum  a+c   b+d    p

- Simple matching coefficient (considers agreements in 0's and 1's to be equivalent):
  d(i, j) = (b + c) / (a + b + c + d)
- Jaccard coefficient (ignores agreements in 0's, used for asymmetric binary variables):
  d(i, j) = (b + c) / (a + b + c)

Similarity Between Binary Variable Sets: Example
- gender is a symmetric attribute
- the remaining attributes are asymmetric binary
- let the values Y and P be set to 1, and the value N be set to 0
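A Python sketch of the asymmetric-binary (Jaccard) dissimilarity after the Y/P -> 1, N -> 0 encoding; the two example vectors are made-up patients, not the table from the original slide:

def asymmetric_binary_dissimilarity(x, y):
    """Jaccard-style dissimilarity d = (b + c) / (a + b + c): joint absences
    (0/0 agreements) are ignored."""
    a = sum(xi == 1 and yi == 1 for xi, yi in zip(x, y))
    b = sum(xi == 1 and yi == 0 for xi, yi in zip(x, y))
    c = sum(xi == 0 and yi == 1 for xi, yi in zip(x, y))
    return (b + c) / (a + b + c) if (a + b + c) else 0.0

patient_1 = [1, 0, 1, 0, 0, 0]   # Y/P encoded as 1, N as 0
patient_2 = [1, 0, 1, 0, 1, 0]
print(asymmetric_binary_dissimilarity(patient_1, patient_2))   # 0.333...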

Nominal Variables
A generalization of the binary variable in that it can take more than two states, e.g., red, yellow, blue, green.
- Method 1: simple matching
  d(i, j) = (p - m) / p, where m is the number of matches and p the total number of variables.
- Method 2: use a large number of binary variables, creating a new binary variable for each of the M nominal states.
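A one-function Python sketch of Method 1 (simple matching); the example values are illustrative:

def nominal_dissimilarity(x, y):
    """Simple matching: d(i, j) = (p - m) / p, with m matches out of p variables."""
    p = len(x)
    m = sum(xi == yi for xi, yi in zip(x, y))
    return (p - m) / p

print(nominal_dissimilarity(["red", "small", "round"],
                            ["red", "large", "round"]))   # 1/3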

Ordinal Variables
An ordinal variable can be discrete or continuous; order is important (e.g. UH grade, hotel rating).
Can be treated like interval-scaled variables:
- replace x_if by its rank r_if in {1, ..., M_f}
- map the range of each variable onto [0, 1] by replacing the f-th variable of the i-th object by
  z_if = (r_if - 1) / (M_f - 1)
- compute the dissimilarity using methods for interval-scaled variables
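A Python sketch of the rank-based mapping for ordinal variables; the hotel ratings are an invented example:

def ordinal_to_unit_interval(values, ordered_levels):
    """Replace each value by its rank r in {1, ..., M} and map it to
    z = (r - 1) / (M - 1), so the variable can be treated as interval-scaled."""
    rank = {level: r for r, level in enumerate(ordered_levels, start=1)}
    M = len(ordered_levels)
    return [(rank[v] - 1) / (M - 1) for v in values]

print(ordinal_to_unit_interval(["**", "****", "*"],
                               ordered_levels=["*", "**", "***", "****"]))
# [0.3333..., 1.0, 0.0]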

Ratio-Scaled Variables
Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^(Bt) or Ae^(-Bt).
Methods:
- treat them like interval-scaled variables (not a good choice! why?)
- apply a logarithmic transformation y_if = log(x_if)
- treat them as continuous ordinal data and treat their rank as interval-scaled

Case Study --- Normalization
Patient(ssn, weight, height, cancer-sev, eye-color, age)
Attribute relevance: ssn none; eye-color minor; all others major.
Attribute normalization:
- ssn: remove!
- weight: between 30 and 650; m_weight = 158, s_weight = 24.20; transform to z_weight = (x_weight - 158) / 24.20 (alternatively, z_weight = (x_weight - 30) / 620)
- height: normalize like weight!
- cancer-sev: 4 = serious, 3 = quite serious, 2 = medium, 1 = minor; transform 4 to 1, 3 to 2/3, 2 to 1/3, 1 to 0, and then normalize like weight!
- age: normalize like weight!

Case Study --- Weight Selection and Similarity Measure Selection
Patient(ssn, weight, height, cancer-sev, eye-color, age)
- For normalized weight, height, cancer-sev, and age values, use a Manhattan distance function; e.g.:
  theta_weight(w1, w2) = 1 - | ((w1 - 158)/24.20) - ((w2 - 158)/24.20) |
- For eye-color use:
  theta_eye-color(c1, c2) = if c1 = c2 then 1 else 0
- Weight assignment: 0.2 for eye-color; 1 for all others.
Final solution --- chosen similarity measure theta:
Let o1 = (s1, w1, h1, cs1, e1, a1) and o2 = (s2, w2, h2, cs2, e2, a2).
  theta(o1, o2) := ( theta_weight(w1, w2) + theta_height(h1, h2) + theta_cancer-sev(cs1, cs2) + theta_age(a1, a2) + 0.2 * theta_eye-color(e1, e2) ) / 4.2
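A Python sketch of the chosen patient similarity measure theta; it follows the slide's weights (1 for weight, height, cancer-sev, and age, 0.2 for eye-color, normalization by 4.2), while the helper names and the two example patients are my own:

def z_weight(x):  return (x - 158) / 24.20
def z_height(x):  return (x - 1.52) / 19.2        # slide's m/s values
def z_age(x):     return (x - 45) / 13.2
def z_cancer(s):  return {4: 1.0, 3: 2 / 3, 2: 1 / 3, 1: 0.0}[s]

def sim(z1, z2):  return 1 - abs(z1 - z2)          # Manhattan-style similarity
def sim_eye(c1, c2):  return 1.0 if c1 == c2 else 0.0

def patient_similarity(o1, o2):
    """theta(o1, o2) = (theta_weight + theta_height + theta_cancer-sev
    + theta_age + 0.2 * theta_eye-color) / 4.2; ssn is ignored."""
    _, w1, h1, cs1, e1, a1 = o1
    _, w2, h2, cs2, e2, a2 = o2
    return (sim(z_weight(w1), z_weight(w2))
            + sim(z_height(h1), z_height(h2))
            + sim(z_cancer(cs1), z_cancer(cs2))
            + sim(z_age(a1), z_age(a2))
            + 0.2 * sim_eye(e1, e2)) / 4.2

p1 = ("111-22-3333", 70, 1.75, 2, "brown", 40)
p2 = ("444-55-6666", 80, 1.60, 3, "blue", 55)
print(patient_similarity(p1, p2))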

Major Clustering Approaches
- Partitioning algorithms: construct various partitions and then evaluate them by some criterion.
- Hierarchical algorithms: create a hierarchical decomposition of the set of data (or objects) using some criterion.
- Density-based: based on connectivity and density functions.
- Grid-based: based on a multiple-level granularity structure.
- Model-based: a model is hypothesized for each of the clusters, and the idea is to find the best fit of the data to that model.

Partitioning Algorithms: Basic Concept
Partitioning method: construct a partition of a database D of n objects into a set of k clusters.
Given k, find a partition of k clusters that optimizes the chosen partitioning criterion:
- Global optimum: exhaustively enumerate all partitions.
- Heuristic methods: k-means and k-medoids algorithms.
  - k-means (MacQueen'67): each cluster is represented by the center of the cluster.
  - k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw'87): each cluster is represented by one of the objects in the cluster.

The K-Means Clustering Method
Given k, the k-means algorithm is implemented in four steps:
1. Partition the objects into k nonempty subsets.
2. Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e. mean point, of the cluster).
3. Assign each object to the cluster with the nearest seed point.
4. Go back to Step 2; stop when no more new assignments are made.
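A compact NumPy sketch of the four k-means steps (random initial seeds instead of an explicit initial partition); purely illustrative:

import numpy as np

def k_means(X, k, max_iter=100, seed=0):
    """Alternate between assigning objects to the nearest centroid and
    recomputing centroids, stopping when the centroids no longer move."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        new_centroids = np.array([X[assign == c].mean(axis=0)
                                  if np.any(assign == c) else centroids[c]
                                  for c in range(k)])
        if np.allclose(new_centroids, centroids):
            break                       # no change -> converged
        centroids = new_centroids
    return assign, centroids

X = [[1, 1], [1.5, 2], [0.5, 1.2], [8, 8], [9, 8.5], [8.5, 9]]
labels, centers = k_means(X, k=2)
print(labels, centers)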

The K-Means Clustering Method: Example
(Figure: objects are repeatedly reassigned to the nearest centroid and the centroids are recomputed until the assignment stabilizes.)

Comments on the K-Means Method
Strengths:
- Relatively efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally k, t << n.
- Often terminates at a local optimum. The global optimum may be found using techniques such as deterministic annealing and genetic algorithms.
Weaknesses:
- Applicable only when the mean is defined; what about categorical data?
- Need to specify k, the number of clusters, in advance.
- Unable to handle noisy data and outliers.
- Not suitable for discovering clusters with non-convex shapes.

PAM (Partitioning Around Medoids) (1987)
PAM (Kaufman and Rousseeuw, 1987), built into S-Plus. It uses real objects to represent the clusters:
1. Select k representative objects arbitrarily.
2. For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih.
3. For each pair of i and h, if TC_ih < 0, i is replaced by h; then assign each non-selected object to the most similar representative object.
4. Repeat steps 2-3 until there is no change.
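A Python sketch of evaluating one candidate swap in PAM: TC_ih is computed as the change in total distance to the nearest medoid when medoid i is replaced by non-selected object h; the distance matrix and indices are illustrative:

import numpy as np

def total_swapping_cost(D, medoids, i, h):
    """TC_ih = sum_j C_jih: cost after swapping medoid i for object h minus
    the cost before; a negative value means the swap improves the clustering."""
    n = len(D)
    cost_before = sum(min(D[j, m] for m in medoids) for j in range(n))
    swapped = [h if m == i else m for m in medoids]
    cost_after = sum(min(D[j, m] for m in swapped) for j in range(n))
    return cost_after - cost_before

points = np.random.default_rng(1).random((6, 2))
D = np.linalg.norm(points[:, None] - points[None, :], axis=2)
print(total_swapping_cost(D, medoids=[0, 3], i=0, h=2))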

Han, Kamber, Eick: Object Similarity & Clustering for COSC PAM Clustering: Total swapping cost TC ih =  j C jih j i h t t ih j h i t j t i hj

CLARANS ("Randomized" CLARA) (1994)
- CLARANS (A Clustering Algorithm based on RANdomized Search) (Ng and Han'94).
- CLARANS draws a sample of neighbors dynamically.
- The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids.
- If a local optimum is found, CLARANS starts with a new randomly selected node in search of a new local optimum.
- It is more efficient and scalable than both PAM and CLARA.
- Focusing techniques and spatial access structures may further improve its performance (Ester et al.'95).

Grid-Based Clustering Methods
Use a multi-resolution grid data structure. Several interesting methods:
- STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997)
- WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB'98): a multi-resolution clustering approach using wavelets
- CLIQUE: Agrawal, et al. (SIGMOD'98)

STING: A Statistical Information Grid Approach
Wang, Yang and Muntz (VLDB'97)
- The spatial area is divided into rectangular cells.
- There are several levels of cells corresponding to different levels of resolution.

STING: A Statistical Information Grid Approach (2)
- Each cell at a high level is partitioned into a number of smaller cells at the next lower level.
- Statistical information for each cell is calculated and stored beforehand and is used to answer queries.
- Parameters of higher-level cells can easily be calculated from the parameters of lower-level cells:
  - count, mean, s, min, max
  - type of distribution: normal, uniform, etc.
- Use a top-down approach to answer spatial data queries:
  - Start from a pre-selected layer, typically with a small number of cells.
  - For each cell in the current level, compute the confidence interval.

STING: A Statistical Information Grid Approach (3)
- Remove the irrelevant cells from further consideration.
- When finished examining the current layer, proceed to the next lower level.
- Repeat this process until the bottom layer is reached.
Advantages:
- Query-independent, easy to parallelize, incremental update.
- O(K), where K is the number of grid cells at the lowest level.
Disadvantages:
- All cluster boundaries are either horizontal or vertical; no diagonal boundary is detected.

CLIQUE (Clustering In QUEst)
Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
- Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space.
- CLIQUE can be considered both density-based and grid-based:
  - It partitions each dimension into the same number of equal-length intervals.
  - It partitions an m-dimensional data space into non-overlapping rectangular units.
  - A unit is dense if the fraction of total data points contained in the unit exceeds an input model parameter.
  - A cluster is a maximal set of connected dense units within a subspace.

CLIQUE: The Major Steps
1. Partition the data space and find the number of points that lie inside each cell of the partition.
2. Identify the subspaces that contain clusters using the Apriori principle.
3. Identify clusters:
   - Determine dense units in all subspaces of interest.
   - Determine connected dense units in all subspaces of interest.
4. Generate a minimal description for the clusters:
   - Determine maximal regions that cover a cluster of connected dense units for each cluster.
   - Determine a minimal cover for each cluster.
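An illustrative Python sketch of the first CLIQUE step: partition each dimension into xi equal-length intervals and keep the one-dimensional units whose fraction of points exceeds the density threshold tau (the parameter names are assumptions and the data is invented):

from collections import Counter

def dense_units_1d(data, xi=10, tau=0.1):
    """Return, per dimension, the set of interval indices that are dense,
    i.e. contain more than a fraction tau of all points."""
    n, p = len(data), len(data[0])
    dense = {}
    for d in range(p):
        vals = [row[d] for row in data]
        lo, hi = min(vals), max(vals)
        width = (hi - lo) / xi or 1.0
        counts = Counter(min(int((v - lo) / width), xi - 1) for v in vals)
        dense[d] = {cell for cell, cnt in counts.items() if cnt / n > tau}
    return dense

data = [(25, 3), (27, 2), (26, 3), (60, 9), (62, 10), (40, 5)]
print(dense_units_1d(data, xi=5, tau=0.2))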

(Example figure: dense units in the (age, salary (10,000)) and (age, vacation (week)) subspaces, with density threshold tau = 3, whose intersection yields a cluster in the (age, vacation, salary) subspace.)

Strengths and Weaknesses of CLIQUE
Strengths:
- It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces.
- It is insensitive to the order of records in the input and does not presume any canonical data distribution.
- It scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases.
Weakness:
- The accuracy of the clustering result may be degraded at the expense of the simplicity of the method.

Work at UH Related to Similarity Assessment and Clustering
- Creating environments for database clustering; problems related to multi-relational data mining [ER04]
- Distance function learning [EVR03]
- Supervised clustering [EZZ04]
- Using clustering to enhance classifiers [ICDM03], [ECAI04], [PKDD04] (not discussed)
- Using SQL queries for data summarization [KDD96], [RYU98] (not discussed)

CAL-FULL/UH Database Clustering and Similarity Assessment Environments
(Architecture diagram: a data extraction tool and the DBMS provide an object view; a similarity measure tool, drawing on a library of similarity measures, default choices and domain information, and type and weight information, supplies the similarity measure; a learning tool uses training data; a clustering tool with a library of clustering algorithms and a user interface produces a set of clusters.)

Prototypes of Similarity Assessment Tools
- Prototype 1 (CAL State Fullerton): supported the interactive definition of similarity measures; the knowledge representation format does not rely on modular units; provides a nearest-neighbor clustering algorithm for database clustering; functions were supported outside a DBMS.
- Prototype 2 (UH 2002): similarity measures are defined using a special language (not interactively); the tool supports modular units, and functions are provided using a Java/SQL Server 2000 framework; functions were partially moved inside a DBMS (although some are still inside Java); analysis results are stored in the database and are therefore available for further analysis.
- Prototype 3 (UH): learns distance functions for classification problems. Currently under investigation!

Objectives of Supervised Clustering
Maximize cluster purity while keeping the number of clusters low.

Research Goals: Supervised Clustering
- Develop representative-based supervised clustering algorithms.
- Show the benefits of supervised clustering in case studies that center on summary generation, distance function learning, and classification.

What is a Good Object Distance Function theta for Supervised Similarity Assessment?
Objective: learn good distance functions for classification tasks.
Our approach: apply a clustering algorithm with the distance function theta to be evaluated that returns a predetermined number of clusters k. The purer the obtained clusters are, the better the quality of theta.
Our goal is to learn the weights of an object distance function theta such that all the clusters are pure (or as pure as possible); for more details see the [ERV03] paper.

Idea: Coevolving Clusters and Similarity Functions
(Feedback loop: the similarity function drives clustering to produce clusters X; clustering evaluation measures the goodness q(X) of the similarity function; reinforcement learning then updates the similarity function.)
q(X) := percentage_of_minority_examples + penalty(k)
penalty(k) := if k <= c then 0 else sqrt((k - c) / n)
where
  k := number of clusters generated
  n := number of objects in the dataset
  c := number of classes in the dataset
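A direct Python transcription (illustrative) of q(X) and penalty(k), where each cluster is given as a list of class labels:

from collections import Counter
from math import sqrt

def q_of_clustering(clusters, c):
    """q(X) = percentage of minority examples + penalty(k), with
    penalty(k) = 0 if k <= c, else sqrt((k - c) / n)."""
    n = sum(len(cluster) for cluster in clusters)
    k = len(clusters)
    minority = sum(len(cluster) - max(Counter(cluster).values())
                   for cluster in clusters if cluster)
    penalty = 0.0 if k <= c else sqrt((k - c) / n)
    return minority / n + penalty

clusters = [["a", "a", "b"], ["b", "b"], ["a"]]   # class labels per cluster
print(q_of_clustering(clusters, c=2))             # 1/6 + sqrt(1/6)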

Idea: CR*-Approach
Let A be a clustering algorithm and Error(theta, O) = Error'(A(theta, O)) an error function that measures class purity in clusters and class coverage, and assigns a penalty for large numbers of clusters.
While not done do:
1. Cluster with respect to (theta, O), obtaining clusters C, and report Error'(A(theta, O)).
2. If Error'(A(theta, O)) is small enough, stop, reporting the error, C, and theta.
3. For each cluster, determine the majority class.
4. For each c in C, adjust the weights w_j locally (x := examples belonging to the majority class, o := non-majority-class examples).
(Figure: within each cluster, weights of modular units are increased or decreased so that examples of the majority class move closer to each other.)

Weight Adjustment within a Cluster
- Let w_i be the current weight of the i-th modular unit.
- Let sigma_i be the average absolute deviation, with respect to theta_i, of the examples that belong to the cluster.
- Let mu_i be the average absolute deviation, with respect to theta_i, of the examples of the cluster that belong to the majority class.
Learning: weights are then adjusted as follows with respect to a particular cluster:
  w_i' = w_i + (sigma_i - mu_i) * alpha, or better
  w_i' = w_i + w_i * min(max(-beta, (sigma_i - mu_i) * alpha), beta)
with alpha being the learning rate and beta the maximal adjustment per weight per cluster (e.g. if beta = 0.2, a weight can be increased/decreased by at most 20%).
Remark: if the cluster is 'pure' or does not contain 2 or more elements of a particular class, no weight adjustment takes place.
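A small Python sketch of the per-cluster weight update; the symbol names (sigma_i, mu_i, alpha, beta) follow the reconstruction above and are therefore assumptions, not necessarily the original notation:

def adjust_weights(w, sigma, mu, alpha=0.3, beta=0.2):
    """w_i' = w_i + w_i * clip((sigma_i - mu_i) * alpha, -beta, +beta):
    increase w_i when the majority class is tighter than the whole cluster."""
    new_w = []
    for w_i, s_i, m_i in zip(w, sigma, mu):
        delta = (s_i - m_i) * alpha
        delta = max(-beta, min(beta, delta))   # cap the relative change at beta
        new_w.append(w_i + w_i * delta)
    return new_w

print(adjust_weights([1.0, 1.0], sigma=[0.4, 0.2], mu=[0.1, 0.3]))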

Summary: Problems and Challenges for Clustering
Considerable progress has been made in scalable clustering methods:
- Partitioning: k-means, k-medoids, CLARANS, EM
- Hierarchical: BIRCH, CURE
- Density-based: DBSCAN, CLIQUE, OPTICS
- Grid-based: STING, WaveCluster
- Model-based: AutoClass, DENCLUE, Cobweb
Current clustering techniques do not address all of these requirements adequately.
Constraint-based clustering analysis: constraints exist in data space (e.g. bridges and highways) or in user queries.

Summary: Object Similarity & Clustering
- Cluster analysis groups objects based on their similarity and has wide applications.
- Appropriate similarity measures have to be chosen for the various types of variables and combined into a global similarity measure.
- Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods.
- Methods to measure, compute, and learn object similarity are quite important, not only for clustering, but also for nearest-neighbor approaches, information retrieval in general, and data visualization.

References (1)
- R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98.
- M. R. Anderberg. Cluster Analysis for Applications. Academic Press.
- M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99.
- P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
- M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.
- M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.
- D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2.
- D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. VLDB'98.
- S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases. SIGMOD'98.
- A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.

References (2)
- L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons.
- E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.
- G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley & Sons.
- P. Michaud. Clustering techniques. Future Generation Computer Systems, 13.
- R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.
- E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. Int. Conf. on Pattern Recognition.
- G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB'98.
- W. Wang, J. Yang, and R. Muntz. STING: A Statistical Information Grid approach to spatial data mining. VLDB'97.
- T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method for very large databases. SIGMOD'96.