Lecture 10: Expectation Maximization
A simple clustering problem
In naive Bayes the class labels are observed. What if they are hidden? Consider a mixture model with labels drawn from a Bernoulli distribution and data drawn from one of two class-conditional Gaussians. Observation: the summation over the hidden labels now appears inside the log, which is trouble for optimization (not so for naive Bayes!). Rewriting the derivative of the log-likelihood so that the summation moves outside the log yields fixed-point equations, sketched below. The resulting updates are simple, but do they converge, and do they improve L? (Note the similarity with IS.)
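To make the observation concrete, here is the standard form of this mixture log-likelihood and the fixed-point equations the slide alludes to. The notation (mixing weight pi, means mu_k, variances sigma_k^2, responsibilities gamma_n) is conventional, not taken from the lecture.

```latex
% Log-likelihood of N points under a two-class mixture:
% labels z_n ~ Bernoulli(pi), data x_n | z_n ~ Gaussian.
\log L(\theta) = \sum_{n=1}^{N} \log\Big[
    \pi \,\mathcal{N}(x_n \mid \mu_1, \sigma_1^2)
  + (1-\pi)\,\mathcal{N}(x_n \mid \mu_0, \sigma_0^2) \Big]

% Setting the derivative to zero and introducing responsibilities
\gamma_n = \frac{\pi\,\mathcal{N}(x_n \mid \mu_1, \sigma_1^2)}
                {\pi\,\mathcal{N}(x_n \mid \mu_1, \sigma_1^2)
                 + (1-\pi)\,\mathcal{N}(x_n \mid \mu_0, \sigma_0^2)}

% gives fixed-point updates in which the sum sits outside the log:
\pi = \tfrac{1}{N}\sum_n \gamma_n, \qquad
\mu_1 = \frac{\sum_n \gamma_n x_n}{\sum_n \gamma_n}, \qquad
\sigma_1^2 = \frac{\sum_n \gamma_n (x_n - \mu_1)^2}{\sum_n \gamma_n}.
```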
EM as Bound Optimization
Use Jensen's inequality to compute a lower bound on the log-likelihood. E-step: compute the bound Q. M-step: optimize the bound. Example: the color-blind man drawing colored balls; demo_EM(p,N). A code sketch of the two steps follows.
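The lecture's demo_EM(p,N) (the colored-balls example) is not reproduced here. As a stand-in, below is a minimal Python sketch of the same E-step/M-step loop for the two-component Gaussian mixture of the previous slide; the function name em_gmm2, the initialization, and all defaults are my own assumptions, not the lecture's code.

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Gaussian density, elementwise over the array x."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm2(x, n_iter=100, seed=0):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Initialize: equal mixing weight, two random data points as means.
    pi = 0.5
    mu = rng.choice(x, size=2, replace=False).astype(float)
    var = np.array([x.var(), x.var()])
    log_liks = []
    for _ in range(n_iter):
        # E-step: responsibilities gamma_n (posterior of the hidden label).
        # These define the Jensen lower bound Q at the current parameters.
        p1 = pi * norm_pdf(x, mu[1], var[1])
        p0 = (1.0 - pi) * norm_pdf(x, mu[0], var[0])
        gamma = p1 / (p0 + p1)
        # Record log L at the current parameters (sum inside the log).
        log_liks.append(np.log(p0 + p1).sum())
        # M-step: maximize the bound Q in closed form (fixed-point updates).
        pi = gamma.mean()
        mu[1] = (gamma * x).sum() / gamma.sum()
        mu[0] = ((1.0 - gamma) * x).sum() / (1.0 - gamma).sum()
        var[1] = (gamma * (x - mu[1]) ** 2).sum() / gamma.sum()
        var[0] = ((1.0 - gamma) * (x - mu[0]) ** 2).sum() / (1.0 - gamma).sum()
    return pi, mu, var, log_liks

# Usage on synthetic data from two Gaussians:
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
pi, mu, var, ll = em_gmm2(x)
# Empirical answer to the slide's question: log L never decreases.
assert all(a <= b + 1e-9 for a, b in zip(ll, ll[1:]))
```

The final assertion checks the earlier question empirically: each EM iteration improves (or at worst preserves) the log-likelihood, which is exactly the monotonicity guaranteed by the bound-optimization view.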