Published by Leonard Sherr. Modified over 10 years ago.
Discrimination amongst k populations
We want to determine which of k populations an observation vector x comes from. For this purpose we need to partition p-dimensional space into k regions C_1, C_2, …, C_k.
We will make the decision: classify the case into population i if the observation vector falls in region C_i. Define the misclassification probabilities P(j|i) = P(classify the case into j when the case is from i), and the misclassification costs c(j|i) = the cost of classifying the case into j when the case is from i.
Let the prior probabilities be p(i) = P(the case is from population i). The expected cost of misclassification for a case from population i (assuming we know the case came from i) is ECM(i) = Σ_{j≠i} c(j|i) P(j|i).
Total Expected Cost of Misclassification: ECM = Σ_i p(i) Σ_{j≠i} c(j|i) P(j|i).
Optimal Classification Rule
The optimal classification rule finds the regions C_j that minimize the ECM. Writing the ECM as a sum of integrals over the regions C_j, the ECM is minimized if each observation x is assigned to the region C_j for which Σ_{i≠j} p(i) c(j|i) f_i(x) is smallest; when costs are equal, this is the region for which the omitted term p(j) f_j(x) is largest.
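As an illustration, a minimal sketch of this rule in Python; the priors, cost matrix, and density functions below are hypothetical examples, not part of the original slides:

```python
import numpy as np

def normal_pdf(mu, sigma=1.0):
    """Univariate normal density, used here only to build an example."""
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def ecm_classify(x, priors, costs, densities):
    """Assign x to the population j minimizing
    sum over i != j of p(i) * c(j|i) * f_i(x)."""
    k = len(priors)
    scores = [sum(priors[i] * costs[j][i] * densities[i](x)
                  for i in range(k) if i != j)
              for j in range(k)]
    return int(np.argmin(scores))

# Example: two univariate normal populations, equal priors and costs.
priors = [0.5, 0.5]
costs = [[0.0, 1.0], [1.0, 0.0]]   # costs[j][i] = c(j|i)
densities = [normal_pdf(0.0), normal_pdf(5.0)]
print(ecm_classify(0.1, priors, costs, densities))  # 0
print(ecm_classify(4.9, priors, costs, densities))  # 1
```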
Optimal Regions when misclassification costs are equal
When all misclassification costs are equal, classify x into the population j for which p(j) f_j(x) is the largest.
Optimal Regions when misclassification costs are equal and the distributions are p-variate Normal with common covariance matrix Σ. In the case of normality, f_i(x) = (2π)^{−p/2} |Σ|^{−1/2} exp(−½ (x − μ_i)′ Σ⁻¹ (x − μ_i)), and the rule "maximize p(i) f_i(x)" can be simplified by taking logarithms.
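A sketch of the standard reduction (equal costs, priors p_j, common covariance Σ): maximizing p_j f_j(x) is equivalent to maximizing its logarithm,

```latex
\ln\bigl(p_j f_j(\mathbf{x})\bigr)
  = \ln p_j - \tfrac{p}{2}\ln(2\pi) - \tfrac{1}{2}\ln|\Sigma|
    - \tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_j)'\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu}_j).
```

The terms not involving j, and the quadratic term −½ x′Σ⁻¹x (common to all populations), cancel in the comparison, leaving the linear discriminant score

```latex
d_j(\mathbf{x}) = \boldsymbol{\mu}_j'\Sigma^{-1}\mathbf{x}
  - \tfrac{1}{2}\,\boldsymbol{\mu}_j'\Sigma^{-1}\boldsymbol{\mu}_j + \ln p_j .
```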
Summarizing, we will classify the observation vector x into population j if the linear discriminant score d_j(x) = μ_j′ Σ⁻¹ x − ½ μ_j′ Σ⁻¹ μ_j + ln p(j) is the largest of d_1(x), …, d_k(x).
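A minimal sketch of this classification rule, assuming the mean vectors, common covariance matrix, and priors are known (the example data are illustrative):

```python
import numpy as np

def lda_classify(x, mus, Sigma, priors):
    """Classify x into the population with the largest linear
    discriminant score d_j(x) = mu_j' S^-1 x - 0.5 mu_j' S^-1 mu_j + ln p_j."""
    Sinv = np.linalg.inv(Sigma)
    scores = [mu @ Sinv @ x - 0.5 * (mu @ Sinv @ mu) + np.log(p)
              for mu, p in zip(mus, priors)]
    return int(np.argmax(scores))

# Example: two bivariate normal populations with common covariance.
mus = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
Sigma = np.eye(2)
priors = [0.5, 0.5]
print(lda_classify(np.array([0.5, 0.2]), mus, Sigma, priors))  # 0
print(lda_classify(np.array([3.8, 4.1]), mus, Sigma, priors))  # 1
```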
k-means Clustering
A non-hierarchical clustering scheme: we want to subdivide the data set into k groups.
The k-means algorithm
1. Initially subdivide the complete data set into k groups.
2. Compute the centroid (mean vector) of each group.
3. Sequentially go through the data, reassigning each case to the group with the closest centroid.
4. After reassigning a case to a new group, recalculate the centroids of the original group and of the new group to which it now belongs.
5. Continue until there are no new reassignments of cases.
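The steps above can be sketched as follows (a sequential-update variant; the initial partition and the example data are illustrative choices, not from the slides):

```python
import numpy as np

def k_means(X, k):
    """Follow the listed steps: start from an arbitrary partition,
    sweep through the cases moving each to the group with the nearest
    centroid, recompute the two affected centroids after each move,
    and stop when a full sweep makes no reassignment."""
    labels = np.arange(len(X)) % k          # step 1: initial subdivision
    centroids = np.array([X[labels == g].mean(axis=0) for g in range(k)])  # step 2
    changed = True
    while changed:                          # step 5: repeat until stable
        changed = False
        for i, x in enumerate(X):           # step 3: sequential reassignment
            g_old = labels[i]
            g_new = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))
            if g_new != g_old:
                labels[i] = g_new
                for g in (g_old, g_new):    # step 4: update both centroids
                    if np.any(labels == g):
                        centroids[g] = X[labels == g].mean(axis=0)
                changed = True
    return labels, centroids

# Example: two well-separated groups of points.
X = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [10, 10], [10.1, 10], [10, 10.1]])
labels, centroids = k_means(X, 2)
print(labels)  # [0 0 0 1 1 1]
```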
Example: n = 60 cases with two variables (x,y) measured
Graph: Scattergram of data
Graph: Initial Clustering
Graph: Final Clustering
Graph: True subpopulations
An Example: Cluster Analysis, Discriminant Analysis, MANOVA
A survey was given to 132 students (35 male, 97 female). They rated, on a Likert scale of 1 to 5, their agreement with each of 40 statements. All statements are related to the Meaning of Life.
Questions and Statements
Statements - continued
The Analysis
The first step in the analysis is to perform a cluster analysis to see if there are any subgroups of interest. Both hierarchical and partitioning (k-means) approaches were used for the cluster analysis.
The Analysis
From the previous figure, cutting across the dendrogram branches at a linkage distance of between 30 and 75 indicates that 2 or 3 clusters describe the data best. The k-means method was then used (with k = 2 and k = 3) to identify the members of these clusters. Using the k-means procedure, two- and three-cluster models similarly fit the data best (attempts to use higher values of k resulted in clusters with only one case).
One-way MANOVA was then utilized to test for significant differences between the clusters, and to identify the statements on which the differences between the two clusters were most significant.
A step-wise discriminant function analysis was done to predict cluster membership and to identify the minimal set of survey statements that separate the clusters for the 128 participants in the study.
Graph: Clusters labelled Religious vs. Non-religious and Optimistic vs. Pessimistic
Discrimination performance
1. 96% of the cluster 1 respondents were correctly classified,
2. 88% of cluster 2 respondents were correctly classified, and
3. 84% of cluster 3 respondents were classified correctly.
Techniques for studying correlation and covariance structure
Principal Components Analysis (PCA)
Factor Analysis
Principal Component Analysis
Definition: Let x have a p-variate Normal distribution with mean vector μ and covariance matrix Σ. The linear combination y = a′x is called the first principal component if a is chosen to maximize Var(a′x) = a′Σa subject to a′a = 1.
Consider maximizing a′Σa subject to a′a = 1. Using the Lagrange multiplier technique, form L(a, λ) = a′Σa − λ(a′a − 1).
Now ∂L/∂a = 2Σa − 2λa = 0, so Σa = λa; that is, a must be an eigenvector of Σ with eigenvalue λ, and the maximized variance is a′Σa = λ a′a = λ.
Summary: y = a_1′x is the first principal component if a_1 is the eigenvector (of length 1) of Σ associated with the largest eigenvalue λ_1 of Σ.
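Numerically, assuming Σ is known, the first principal component can be computed with an eigendecomposition; a sketch with an illustrative 2×2 covariance matrix:

```python
import numpy as np

def first_principal_component(Sigma):
    """Return the unit eigenvector of Sigma associated with the largest
    eigenvalue, and that eigenvalue (the maximized variance)."""
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
    return eigvecs[:, -1], eigvals[-1]

# Example: the eigenvalues of [[3, 1], [1, 3]] are 2 and 4,
# so the first principal component direction is (1, 1)/sqrt(2).
Sigma = np.array([[3.0, 1.0], [1.0, 3.0]])
a1, lam1 = first_principal_component(Sigma)
print(lam1)        # approx 4.0
print(np.abs(a1))  # approx [0.7071, 0.7071]
```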