MIS2502: Data Analytics Clustering and Segmentation

MIS2502: Data Analytics Clustering and Segmentation
David Schuff
David.Schuff@temple.edu
http://community.mis.temple.edu/dschuff

What is Cluster Analysis?
Grouping data so that elements in a group will be:
- Similar (or related) to one another
- Different (or unrelated) from elements in other groups
In other words, distance within clusters is minimized and distance between clusters is maximized.

Applications
Understanding data:
- Group related documents for browsing
- Create groups of similar customers
- Discover which stocks have similar price fluctuations
Summarizing data:
- Reduce the size of large data sets
- Data in similar groups can be combined into a single data point

Even more examples
- Marketing: discover distinct customer groups for targeted promotions
- Insurance: finding "good customers" (low claim costs, reliable premium payments)
- Healthcare: find patients with high-risk behaviors

What cluster analysis is NOT
- Manual ("supervised") classification: people simply place items into categories
- Simple segmentation: dividing students into groups by last name
The clusters must come from the data, not from external specifications. Creating the "buckets" beforehand is categorization, but not clustering.

(Partitional) Clustering
Three distinct groups of pitches emerge, but...
- ...some curveballs behave more like splitters.
- ...some splitters look more like fastballs.

Clusters can be ambiguous
How many clusters are there? The same six points could reasonably be grouped into two clusters or into four.
The difference is the threshold you set: how distinct must a cluster be to be its own cluster?
adapted from Tan, Steinbach, and Kumar. Introduction to Data Mining (2004)

K-means (partitional)
The K-means algorithm is one method for doing partitional clustering:
1. Choose K clusters
2. Select K points as initial centroids
3. Assign all points to clusters based on distance
4. Recompute the centroid of each cluster
5. Did the centroids change? If yes, go back to step 3; if no, we're DONE!
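The slides don't include code, but here is a minimal from-scratch sketch of that loop in R (the data set, the value of K, and the convergence tolerance are all made up for illustration, and there is no empty-cluster handling; in practice base R's kmeans() function does this for you):

```r
set.seed(42)
points <- cbind(x = c(rnorm(25, 0), rnorm(25, 5)),
                y = c(rnorm(25, 0), rnorm(25, 5)))
k <- 2                                     # Step 1: choose K clusters

# Step 2: select K points as the initial centroids
centroids <- points[sample(nrow(points), k), , drop = FALSE]

repeat {
  # Step 3: assign each point to the cluster with the nearest centroid
  dists <- sapply(1:k, function(i)
    (points[, "x"] - centroids[i, "x"])^2 + (points[, "y"] - centroids[i, "y"])^2)
  cluster <- max.col(-dists)               # column with the smallest squared distance

  # Step 4: recompute each centroid as the mean of its assigned points
  new_centroids <- t(sapply(1:k, function(i)
    colMeans(points[cluster == i, , drop = FALSE])))

  # Step 5: stop once the centroids no longer move
  if (all(abs(new_centroids - centroids) < 1e-9)) break
  centroids <- new_centroids
}

table(cluster)   # points per cluster
centroids        # final cluster centers
```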

K-Means Demonstration Here is the initial data set

K-Means Demonstration Choose K points as initial centroids

K-Means Demonstration Assign data points according to distance

K-Means Demonstration Recalculate the centroids

K-Means Demonstration And re-assign the points

K-Means Demonstration And keep doing that until you settle on a final set of clusters

Choosing the initial centroids
It matters:
- Choosing the right number
- Choosing the right initial location
Bad choices create bad groupings:
- They won't make sense within the context of the problem
- Unrelated data points will be included in the same group

Example of Poor Initialization
This may "work" mathematically, but the clusters don't make much sense.

Evaluating K-Means Clusters
On the previous slides we did it visually, but there is a mathematical test: the Sum-of-Squares Error (SSE)
- It is based on the distance from each point to its nearest cluster center: how close does each point get to the center?
This just means:
- Within a cluster, compute the distance from each point to the cluster center
- Square that distance (so sign isn't an issue)
- Add them all together
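As a small illustration (the helper function and the sample points below are ours, not from the slides), the same calculation in R:

```r
# Squared Euclidean distance from each point to the cluster center, summed
cluster_sse <- function(points, center) {
  sum(sweep(points, 2, center)^2)          # subtract the center, square, add up
}

pts <- rbind(c(1, 2), c(2, 1), c(3, 3))    # made-up points in one cluster
cluster_sse(pts, colMeans(pts))            # SSE around that cluster's centroid
```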

Example: Evaluating Clusters
Suppose Cluster 1's points lie at distances 1, 1.3, and 2 from its centroid, and Cluster 2's points lie at distances 3, 3.3, and 1.5 from its centroid.
SSE1 = 1^2 + 1.3^2 + 2^2 = 1 + 1.69 + 4 = 6.69
SSE2 = 3^2 + 3.3^2 + 1.5^2 = 9 + 10.89 + 2.25 = 22.14
Considerations:
- Lower individual cluster SSE = a better cluster
- Lower total SSE = a better set of clusters
- More clusters will reduce SSE
- Reducing SSE within a cluster increases cohesion (we want that)
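The same arithmetic can be checked in R (the distances are the ones from the example above):

```r
d1 <- c(1, 1.3, 2)     # Cluster 1: distances from each point to its centroid
d2 <- c(3, 3.3, 1.5)   # Cluster 2: distances from each point to its centroid

sse1 <- sum(d1^2)      # 1 + 1.69 + 4     = 6.69
sse2 <- sum(d2^2)      # 9 + 10.89 + 2.25 = 22.14
sse1 + sse2            # lower total SSE = a better set of clusters
```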

Pre-processing: Getting the right centroids
There's no single best way to choose initial centroids, but pre-processing the data helps:
- Normalize the data: reduces dispersion and the influence of outliers, and adjusts for differences in scale (income in dollars versus age in years)
- Remove outliers altogether: this also reduces dispersion that can skew the cluster centroids, and outliers don't represent the population anyway
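A minimal pre-processing sketch in R (the data frame and the 3-standard-deviation outlier rule are assumptions for illustration, not from the slides):

```r
customers <- data.frame(income = c(42000, 55000, 61000, 250000, 48000),
                        age    = c(23, 35, 41, 38, 29))

# Normalize: scale() centers each column and divides by its standard deviation,
# so income (dollars) and age (years) contribute comparably to the distances
scaled <- scale(customers)

# One simple outlier rule: drop rows more than 3 standard deviations from the
# mean on any column before clustering
keep   <- apply(abs(scaled), 1, max) <= 3
scaled <- scaled[keep, , drop = FALSE]
```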

Limitations of K-Means Clustering
K-Means gives unreliable results when:
- Clusters vary widely in size
- Clusters vary widely in density
- Clusters are not in rounded shapes
- The data set has a lot of outliers
In those cases the clusters may never make sense; the data may just not be well-suited for clustering!

Similarity between clusters (inter-cluster)
- Most common: distance between centroids
- Also can use SSE: look at the distance between cluster 1's points and the other clusters' centroids
- You'd want to maximize SSE between clusters: increasing SSE across clusters increases separation (we want that)
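In R, both measures can be read off a fitted model; a minimal sketch (the data set and the number of clusters are arbitrary choices for illustration):

```r
fit <- kmeans(scale(iris[, 1:4]), centers = 3, nstart = 25)

dist(fit$centers)   # most common measure: distance between the centroids
fit$betweenss       # between-cluster sum of squares; higher = more separation
```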

Figuring out if our clusters are good
"Good" means:
- Meaningful
- Useful
- Provides insight
The pitfalls:
- Poor clusters reveal incorrect associations
- Poor clusters reveal inconclusive associations
- There might be room for improvement and we can't tell
This is somewhat subjective and depends upon the expectations of the analyst.

The Keys to Successful Clustering
- We want high cohesion within clusters (minimize differences): low SSE within clusters, high correlation
- And high separation between clusters (maximize differences): high SSE between clusters, low correlation
- Choose the right number of clusters and the right initial centroids: there is no easy way to do this; it takes trial-and-error, knowledge of the problem, and looking at the output
In R, cohesion is measured by the within-cluster sum of squares error, and separation is measured by the between-cluster sum of squares error.
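A minimal trial-and-error sketch in R, using a built-in data set as a stand-in (the range of k values is an arbitrary choice):

```r
data <- scale(iris[, 1:4])   # any numeric, normalized data set works

for (k in 2:5) {
  # nstart = 25 tries 25 random sets of initial centroids and keeps the best
  fit <- kmeans(data, centers = k, nstart = 25)
  cat("k =", k,
      "cohesion (tot.withinss) =", round(fit$tot.withinss, 1),
      "separation (betweenss) =", round(fit$betweenss, 1), "\n")
}
```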