1
Cluster analysis. Presented by Dr. Chayada Bhadrakom
Agricultural and Resource Economics, Kasetsart University
2
Lecture / Tutorial outline
Cluster analysis
Example of cluster analysis
Work on SPSS
3
Introduction example: marketing research, a customer survey on brand awareness
4
Question: Is there a linear relation between brand awareness and yearly income? Hypothesis: the higher a person's income, the higher his/her brand awareness.
5
Question: Is there structure in the brand awareness dataset? Are there clusters for the combination of yearly income and brand awareness?
6
Cluster Analysis
It is a class of techniques used to classify cases into groups that are relatively homogeneous within themselves and heterogeneous with respect to each other, on the basis of a defined set of variables. These groups are called clusters.
8
Cluster Analysis and marketing research
Market segmentation, e.g. clustering consumers according to their attribute preferences
Understanding buyer behaviour: consumers with similar behaviours/characteristics are clustered
Identifying new product opportunities: clusters of similar brands/products can help identify competitors and market opportunities
Reducing data, e.g. in preference mapping
9
Steps to conduct a Cluster Analysis
Formulate the problem
Select a distance measure
Select a clustering algorithm
Determine the number of clusters
Validate the analysis
11
Problem Formulation
Perhaps the most important part of formulating the clustering problem is selecting the variables on which the clustering is based.
Basically, the set of variables selected should describe the similarity between objects in terms that are relevant to the marketing research problem.
The variables should be selected based on past research, theory, or a consideration of the hypotheses being tested.
In exploratory research, the researcher should exercise judgment and intuition.
12
Defining distance: the most common measure is the Euclidean distance,
$D_{ij} = \sqrt{\sum_{k=1}^{p} (x_{ki} - x_{kj})^{2}}$,
where $D_{ij}$ is the distance between cases i and j and $x_{ki}$ is the value of variable $X_k$ for case i. The Euclidean distance is the square root of the sum of the squared differences in values for each variable.
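As an illustration outside SPSS, here is a minimal Python sketch of this distance, assuming two cases stored as NumPy arrays; the values (yearly income in thousands and a brand-awareness score) are made up for the example.

```python
import numpy as np
from scipy.spatial.distance import euclidean

# Two cases described by the same clustering variables
# (illustrative values: yearly income in $1,000 and a brand-awareness score).
case_i = np.array([45.0, 7.0])
case_j = np.array([52.0, 4.0])

# Square root of the sum of squared differences across the variables.
d_ij = np.sqrt(np.sum((case_i - case_j) ** 2))

# SciPy's helper gives the same result.
assert np.isclose(d_ij, euclidean(case_i, case_j))
print(f"D_ij = {d_ij:.3f}")
```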
13
Choosing a clustering procedure
Clustering Procedures
  Hierarchical
    Agglomerative
      Linkage methods: single, complete, average
      Variance methods: Ward's method
      Centroid method
    Divisive
  Nonhierarchical
    Sequential threshold
    Parallel threshold
    Optimizing partitioning
14
Clustering procedures
Hierarchical procedures
Agglomerative (start from n clusters and merge down to 1 cluster)
Divisive (start from 1 cluster and split up to n clusters)
Non-hierarchical procedures
K-means clustering
15
Agglomerative clustering
16
Agglomerative clustering
Linkage methods:
  Single linkage (minimum distance)
  Complete linkage (maximum distance)
  Average linkage
Ward's method:
  Compute the sum of squared distances within clusters
  Aggregate the clusters with the minimum increase in the overall sum of squares
Centroid method:
  The distance between two clusters is defined as the distance between the centroids (cluster averages)
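For readers working outside SPSS, the following is a hedged SciPy sketch of these linkage options on illustrative random data; the choice of three clusters is arbitrary and only serves to show how each method partitions the cases.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))   # 20 illustrative cases, 2 standardised variables

# Each method defines the "distance between two clusters" differently.
for method in ["single", "complete", "average", "ward", "centroid"]:
    Z = linkage(X, method=method)                      # agglomeration schedule (19 merge steps)
    labels = fcluster(Z, t=3, criterion="maxclust")    # cut the tree into 3 clusters
    print(f"{method:9s} cluster sizes: {np.bincount(labels)[1:]}")
```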
17
Linkage Methods of Clustering
Single linkage (nearest neighbor): minimum distance between cluster 1 and cluster 2
Complete linkage (furthest neighbor): maximum distance between cluster 1 and cluster 2
Average linkage (between-groups linkage): average distance between cluster 1 and cluster 2
18
Other Agglomerative Clustering Methods
Ward's Procedure
Centroid Method
19
Example of hierarchical method: Single linkage
20
Example of hierarchical method: Complete linkage
21
K-means clustering
The number k of clusters is fixed
An initial set of k "seeds" (aggregation centres) is provided: either the first k elements or other chosen seeds
Given a certain threshold, all units are assigned to the nearest cluster seed
New seeds are computed
Go back to step 3 until no reclassification is necessary
Units can be reassigned in successive steps (optimising partitioning)
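The procedure above can be mirrored in a bare-bones Python sketch on illustrative random data; this is not the routine SPSS runs (in practice a library implementation such as scikit-learn's KMeans would normally be used), it only makes the steps concrete.

```python
import numpy as np

def k_means(X, k, max_iter=100):
    """Minimal k-means: seeds = first k cases, reassign until no case moves."""
    seeds = X[:k].copy()                  # initial seeds: the first k elements
    labels = np.full(len(X), -1)
    for _ in range(max_iter):
        # assign every unit to the nearest seed (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - seeds[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                         # no reclassification necessary: stop
        labels = new_labels
        # recompute each seed as the mean of the units assigned to it
        for c in range(k):
            if np.any(labels == c):
                seeds[c] = X[labels == c].mean(axis=0)
    return labels, seeds

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))              # illustrative data
labels, centres = k_means(X, k=3)
print("cluster sizes:", np.bincount(labels))
```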
22
Hierarchical vs non-hierarchical methods
Hierarchical clustering
  No decision about the number of clusters is needed in advance
  Problems when the data contain a high level of error
  Can be very slow
  Initial decisions are more influential (one-step only)
Non-hierarchical clustering
  Faster, more reliable
  Need to specify the number of clusters (arbitrary)
  Need to set the initial seeds (arbitrary)
23
Suggested approach
First perform a hierarchical method to define the number of clusters
Then use the k-means procedure to actually form the clusters
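A hedged Python sketch of this two-stage approach, using SciPy for the hierarchical stage and scikit-learn for k-means; the data are random and only illustrate the mechanics, and the elbow step is taken as the step just before the largest jump in the distance coefficient (the N - S rule used later in the tutorial).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 3))             # illustrative stand-in for the clustering variables

# Stage 1: hierarchical clustering (Ward) and its agglomeration schedule.
Z = linkage(X, method="ward")
jumps = np.diff(Z[:, 2])                  # increase in the distance coefficient at each step
S = int(np.argmax(jumps)) + 1             # step just before the biggest jump
k = len(X) - S                            # elbow rule: number of clusters = N - S

# Stage 2: k-means with the number of clusters suggested by stage 1.
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
print("suggested k:", k, "| cluster sizes:", np.bincount(km.labels_))
```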
24
Defining the number of clusters: elbow rule (1)
25
Elbow rule (2): the scree diagram
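If Excel is not at hand, a scree diagram can also be drawn in Python; this sketch plots the agglomeration (distance) coefficients from a SciPy linkage against the number of clusters remaining, on illustrative data.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2))              # illustrative data

Z = linkage(X, method="ward")
coefficients = Z[:, 2]                    # agglomeration coefficient at each merge step
clusters_left = np.arange(len(X) - 1, 0, -1)   # clusters remaining after each step

plt.plot(clusters_left, coefficients, marker="o")
plt.gca().invert_xaxis()                  # read from many clusters down to one
plt.xlabel("Number of clusters")
plt.ylabel("Agglomeration coefficient")
plt.title("Scree diagram: look for the elbow")
plt.show()
```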
26
Validating the analysis
Impact of initial seeds / order of cases (see the sketch below)
Impact of the selected method
Consider the relevance of the chosen set of variables
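As an illustration of the first check (outside the SPSS workflow), one can rerun k-means with different random seeds and compare the two partitions; the adjusted Rand index from scikit-learn is one convenient agreement measure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))             # illustrative data

# Different random seeds give different initial centres (n_init=1 so the seeds matter).
labels_a = KMeans(n_clusters=3, n_init=1, random_state=0).fit_predict(X)
labels_b = KMeans(n_clusters=3, n_init=1, random_state=99).fit_predict(X)

# 1.0 = identical partitions; values near 0 suggest an unstable solution.
print("agreement between runs (ARI):", round(adjusted_rand_score(labels_a, labels_b), 3))
```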
27
SPSS Example
29
Number of clusters: N - S = 10 - 6 = 4 (10 cases, elbow at step 6)
31
Open the dataset Cluster_small.sav
41
Open the dataset supermarkets_update.sav
42
The supermarkets.sav dataset
43
Cluster analysis: basic steps
Apply Ward's method
Check the agglomeration schedule
Decide the number of clusters
Apply the k-means method
44
Analyse / Classify
45
Select the component scores
46
Select Ward’s algorithm
47
Output: Agglomeration schedule
48
Number of clusters: identify the step where the "distance coefficient" makes the biggest jump
49
The scree diagram (Excel needed)
50
Number of clusters:
Number of cases: 150 (N)
Step of the 'elbow': 145 (S)
Number of clusters = N - S = 150 - 145 = 5
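The same rule can be checked with a few lines of Python; the coefficient values below are hypothetical, chosen so that the biggest jump comes after step 6 of a 10-case schedule (the 10 - 6 = 4 example shown earlier).

```python
import numpy as np

# Hypothetical agglomeration coefficients, one per step of a 10-case schedule.
coefficients = np.array([0.4, 0.7, 1.1, 1.5, 2.0, 2.6, 9.5, 10.5, 11.3])

N = len(coefficients) + 1                       # number of cases (steps = N - 1)
S = int(np.argmax(np.diff(coefficients))) + 1   # step just before the biggest jump
print("number of clusters = N - S =", N - S)    # here: 10 - 6 = 4
```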
51
Now repeat the analysis
Choose the k-means technique
Set 5 as the number of clusters
Save the cluster number for each case
Run the analysis
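The equivalent workflow outside SPSS, as a hedged sketch: the data frame and its column names are illustrative stand-ins for the component scores, and scikit-learn's KMeans plays the role of the SPSS k-means procedure, with the labels saved back to the dataset much like "Save cluster membership".

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Illustrative stand-in for the component scores used for clustering.
rng = np.random.default_rng(5)
df = pd.DataFrame(rng.normal(size=(150, 3)), columns=["score1", "score2", "score3"])

# Choose the k-means technique, set 5 as the number of clusters, run the analysis.
km = KMeans(n_clusters=5, n_init=10, random_state=0)

# Save the cluster number for each case as a new column.
df["cluster"] = km.fit_predict(df[["score1", "score2", "score3"]])

print(df["cluster"].value_counts().sort_index())
```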
52
K-means
53
K-means dialog box
54
Save cluster membership
55
Cluster membership
56
Interpret