1
Lecture 14: K-means, conditional mixture models
2
K-means
VEM is a general bound-optimization algorithm in which we can choose the family of parameterized posteriors. If we use delta functions instead of the general responsibilities in MoG, we get the k-means algorithm: at each E-step we are forced to pick a single winning cluster instead of making soft assignments. K-means minimizes a cost function (the within-cluster sum of squared distances), but not the log-likelihood of the MoG model, so it won't give ML parameters. K-means is typically fast, but prone to local minima. A sketch of this hard-assignment EM follows below.
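As a concrete illustration, here is a minimal NumPy sketch of k-means written as winner-take-all EM. The function name, initialization scheme, and convergence test are illustrative choices, not part of the lecture.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """K-means as hard (winner-take-all) EM on a mixture of Gaussians."""
    rng = np.random.default_rng(seed)
    # Illustrative init: k distinct data points as the initial means.
    means = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Hard E-step: a delta-function posterior picks one winner per point.
        dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        z = dists.argmin(axis=1)
        # M-step: each mean becomes the centroid of its assigned points.
        new_means = np.array([X[z == j].mean(axis=0) if np.any(z == j)
                              else means[j] for j in range(k)])
        if np.allclose(new_means, means):
            break  # assignments (and hence means) have stopped changing
        means = new_means
    return means, z
```

Because the E-step commits to one cluster per point, the cost can only decrease, but the result depends on initialization, which is exactly the local-minima issue noted above.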
3
Conditional Mixture Models
Recall the generative and discriminative methods for classification/regression. We will now make these models more flexible by using mixture models. Generative: p(x|z,y) p(z|y) p(y). Discriminative: p(y|z,x) p(z|x). The gates p(z|x) are soft switches that are input-dependent: they switch between the different models. The p(y|z,x) are expert models, such as linear regression models, one for each value of z. For regression this gives soft piecewise-linear curve fitting (with uncertainty). Use EM to learn the parameters; a sketch follows.
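A minimal EM sketch for the discriminative case, assuming a softmax gate p(z|x) and Gaussian linear-regression experts (a standard mixture-of-experts parameterization; the slide does not fix these choices). The function name moe_em, the ridge constant, and the gradient-step gate update (the gate has no closed-form M-step) are all illustrative.

```python
import numpy as np

def moe_em(X, y, k=2, n_iter=50, lr=0.5, seed=0):
    """EM sketch for a mixture of linear-regression experts.

    Gate (soft switch):  p(z=j|x) = softmax(X @ V)[:, j]
    Experts:             p(y|z=j, x) = N(y | X @ W[:, j], sig2[j])
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = rng.normal(scale=0.1, size=(d, k))   # gating parameters
    W = rng.normal(scale=0.1, size=(d, k))   # expert regression weights
    sig2 = np.ones(k)                        # expert noise variances
    for _ in range(n_iter):
        # E-step: responsibilities r[n, j] = p(z_n = j | x_n, y_n).
        logits = X @ V
        gate = np.exp(logits - logits.max(axis=1, keepdims=True))
        gate /= gate.sum(axis=1, keepdims=True)
        resid = y[:, None] - X @ W
        lik = np.exp(-0.5 * resid ** 2 / sig2) / np.sqrt(2 * np.pi * sig2)
        r = gate * lik + 1e-16               # tiny floor avoids 0/0
        r /= r.sum(axis=1, keepdims=True)
        # M-step for the experts: weighted least squares per expert.
        for j in range(k):
            rj = r[:, j]
            A = X.T @ (rj[:, None] * X) + 1e-6 * np.eye(d)  # small ridge
            W[:, j] = np.linalg.solve(A, X.T @ (rj * y))
            sig2[j] = (rj * (y - X @ W[:, j]) ** 2).sum() / rj.sum()
        # Gate update: no closed form, so take a few gradient steps on the
        # expected complete-data log-likelihood (gradient = X^T (r - gate)).
        for _ in range(5):
            logits = X @ V
            gate = np.exp(logits - logits.max(axis=1, keepdims=True))
            gate /= gate.sum(axis=1, keepdims=True)
            V += lr * X.T @ (r - gate) / n
    return V, W, sig2
```

For 1-D curve fitting, stack a bias column onto the inputs first, e.g. X = np.column_stack([x, np.ones_like(x)]); the experts then give the soft piecewise-linear fit, and sig2 carries the per-expert uncertainty.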