1
K-means and Gaussian Mixture Model 王养浩 November 20, 2013
2
Outline K-means Gaussian Mixture Model Expectation Maximization
3
K-means Groups data points into a few cohesive "clusters" Unsupervised learning
4
K-means
6
Easy and fast Open questions: Why Euclidean distance? How to choose K (it must be given as input)? Does the algorithm converge?
7
Determination of K Rule of thumb: elbow method Cross-validation
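As a rough illustration of the elbow method (not from the slides; this sketch assumes scikit-learn and matplotlib are available and uses placeholder data), one can plot the within-cluster distortion against K and pick K where the curve bends:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

X = np.random.randn(300, 2)            # placeholder data set (hypothetical)
ks = range(1, 11)
inertias = []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)        # within-cluster sum of squared distances

plt.plot(list(ks), inertias, marker="o")
plt.xlabel("K")
plt.ylabel("distortion (inertia)")
plt.show()                              # choose K at the "elbow" of this curve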
8
K-means Convergence x^{(i)}: data points; \mu_{c(i)}: cluster centroids; K-means is coordinate descent
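In the standard notation this slide appears to use (a reconstruction, not taken verbatim from the deck), the distortion objective being minimized is

J(c, \mu) = \sum_{i=1}^{m} \left\| x^{(i)} - \mu_{c(i)} \right\|^{2}

where c(i) is the index of the cluster to which x^{(i)} is currently assigned and \mu_j is the centroid of cluster j.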
9
Coordinate Descent
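A minimal NumPy sketch of K-means viewed as coordinate descent on J (variable names and the random initialization are my own, not from the slides):

import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Alternately minimize J(c, mu) over assignments c and centroids mu."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)]   # initial centroids
    for _ in range(n_iters):
        # Coordinate 1: hold mu fixed, minimize J over c
        # -> assign each point to its nearest centroid.
        dist2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        c = dist2.argmin(axis=1)
        # Coordinate 2: hold c fixed, minimize J over mu
        # -> move each centroid to the mean of its assigned points.
        new_mu = np.array([X[c == k].mean(axis=0) if np.any(c == k) else mu[k]
                           for k in range(K)])
        if np.allclose(new_mu, mu):                      # no change: converged
            break
        mu = new_mu
    return c, mu

Each of the two steps can only decrease (or leave unchanged) J, so the objective converges, though possibly to a local minimum, as the later slides point out.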
10
K-means Convergence Non-circular clusters
11
K-means Convergence Local minimum – The optimization objective is non-convex
12
Gaussian Mixture Model A mixture of Gaussian distributions
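In standard notation (my reconstruction; the slide's formula did not survive the transcript), the mixture density is

p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k), \qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1

where \pi_k are the mixing weights and \mathcal{N}(x \mid \mu_k, \Sigma_k) is a Gaussian with mean \mu_k and covariance \Sigma_k.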
13
Gaussian Mixture Model Log likelihood Maximum likelihood – Expectation Maximization
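The log likelihood to be maximized (standard form for the mixture above) is

\ell(\pi, \mu, \Sigma) = \sum_{i=1}^{m} \log \sum_{k=1}^{K} \pi_k \, \mathcal{N}\!\left(x^{(i)} \mid \mu_k, \Sigma_k\right)

The sum inside the logarithm has no closed-form maximizer, which is why the slides turn to Expectation Maximization.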
14
Expectation Maximization
15
Jensen's inequality
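For reference, Jensen's inequality as used in the EM derivation:

f \text{ convex} \;\Rightarrow\; f(\mathbb{E}[X]) \le \mathbb{E}[f(X)], \qquad f \text{ concave (e.g. } \log) \;\Rightarrow\; f(\mathbb{E}[X]) \ge \mathbb{E}[f(X)]

with equality when X is constant (almost surely).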
16
Expectation Maximization
17
Construct lower bound
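Spelling out the lower-bound construction (the standard EM derivation, written in the same notation as above): for any distributions Q_i over the latent variables z^{(i)},

\ell(\theta) = \sum_{i} \log \sum_{z^{(i)}} Q_i\!\left(z^{(i)}\right) \frac{p\!\left(x^{(i)}, z^{(i)}; \theta\right)}{Q_i\!\left(z^{(i)}\right)} \;\ge\; \sum_{i} \sum_{z^{(i)}} Q_i\!\left(z^{(i)}\right) \log \frac{p\!\left(x^{(i)}, z^{(i)}; \theta\right)}{Q_i\!\left(z^{(i)}\right)}

by Jensen's inequality applied to the concave logarithm. The bound is tight when Q_i(z^{(i)}) = p(z^{(i)} \mid x^{(i)}; \theta).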
18
Expectation Maximization
19
Repeat until convergence
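Written out, the loop repeated until convergence is the standard EM iteration:

\text{E-step:}\quad Q_i\!\left(z^{(i)}\right) := p\!\left(z^{(i)} \mid x^{(i)}; \theta\right)

\text{M-step:}\quad \theta := \arg\max_{\theta} \sum_{i} \sum_{z^{(i)}} Q_i\!\left(z^{(i)}\right) \log \frac{p\!\left(x^{(i)}, z^{(i)}; \theta\right)}{Q_i\!\left(z^{(i)}\right)}

For the Gaussian mixture, the E-step computes the responsibilities p(z^{(i)} = k \mid x^{(i)}) and the M-step re-estimates \pi_k, \mu_k, \Sigma_k from those responsibilities.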
20
Generalized Expectation Maximization Difficulty in the M-step – it suffices to improve (rather than fully maximize) the lower bound at each iteration
21
Summary K-means – Coordinate descent Gaussian Mixture Model – Expectation Maximization Expectation Maximization – MLE for models with latent variables – Generalized EM
22
Thanks!