Slide 1
Information Bottleneck versus Maximum Likelihood
Felix Polyakov
Slide 2
A simple example
- A coin is known to be biased.
- The coin is tossed three times: two heads and one tail.
- Use ML to estimate P, the probability of throwing a head.

Model:
- p(head) = P
- p(tail) = 1 - P

Likelihood of the data for several candidate values of P:
- Try P = 0.2: L(O) = 0.2 * 0.2 * 0.8 = 0.032
- Try P = 0.4: L(O) = 0.4 * 0.4 * 0.6 = 0.096
- Try P = 0.6: L(O) = 0.6 * 0.6 * 0.4 = 0.144
- Try P = 0.8: L(O) = 0.8 * 0.8 * 0.2 = 0.128
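The maximum over all P, not just the four values tried above, is at P = 2/3 (where L(O) = P * P * (1 - P) peaks). As a sanity check, a short Python sketch (not part of the original slides) that scans a grid of P values:

```python
import numpy as np

# Observations: two heads and one tail (order does not matter for the likelihood).
heads, tails = 2, 1

def likelihood(p):
    """Likelihood of the observed data as a function of P = p(head)."""
    return p**heads * (1 - p)**tails

# Evaluate on a grid, including the four values tried on the slide.
grid = np.linspace(0.0, 1.0, 1001)
p_ml = grid[np.argmax(likelihood(grid))]

for p in (0.2, 0.4, 0.6, 0.8):
    print(f"P = {p:.1f}  ->  L(O) = {likelihood(p):.3f}")
print(f"ML estimate: P = {p_ml:.3f} (analytically 2/3)")
```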
Slide 3
A bit more complicated example: a mixture model
Three baskets B1, B2, B3 with white (O = 1), grey (O = 2), and black (O = 3) balls.
15 balls were drawn as follows:
1. Choose a basket i according to p(i) = b_i.
2. Draw ball j from basket i with probability p(j | i).
Use ML to estimate the model parameters given the observations: the sequence of the balls' colors.
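To make the generative story concrete, a minimal Python sketch of the drawing process (illustrative only; the basket priors and color probabilities below are made-up numbers, not taken from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: basket priors b_i and per-basket color
# probabilities p(j | i), with colors 1 = white, 2 = grey, 3 = black.
b = np.array([0.3, 0.5, 0.2])
p_color = np.array([[0.7, 0.2, 0.1],   # basket B1
                    [0.2, 0.6, 0.2],   # basket B2
                    [0.1, 0.3, 0.6]])  # basket B3

# Draw 15 balls: pick a basket, then a color; only the colors are observed.
baskets = rng.choice(3, size=15, p=b)
colors = np.array([rng.choice(3, p=p_color[i]) for i in baskets]) + 1
print("observed colors:", colors)   # the data from which the parameters are estimated by ML
```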
Slide 4
- Likelihood of the observations
- Log-likelihood of the observations
- Maximal likelihood of the observations
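The formulas on this slide were lost in extraction. For the mixture model above, with observed colors y_1, ..., y_15 and parameters theta = {b_i, p(j | i)}, the three quantities presumably take the standard mixture form (a reconstruction, not copied from the slide):

```latex
% Likelihood of the observations (hidden basket summed out)
L(\theta) \;=\; \prod_{t=1}^{15} p(y_t;\theta)
          \;=\; \prod_{t=1}^{15} \sum_{i=1}^{3} b_i\, p(y_t \mid i)

% Log-likelihood of the observations
\log L(\theta) \;=\; \sum_{t=1}^{15} \log \sum_{i=1}^{3} b_i\, p(y_t \mid i)

% Maximal likelihood: the parameter setting that maximizes the log-likelihood
\theta^{*} \;=\; \arg\max_{\theta}\, \log L(\theta)
```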
Slide 5
Likelihood of the observed data
- x: hidden random variables [e.g. basket]
- y: observed random variables [e.g. color]
- θ: model parameters [e.g. they define p(y | x)]
- θ_0: current estimate of the model parameters
Slide 7
Expectation-maximization algorithm (I)
1. Expectation
   - Compute p(x | y, θ_0)
   - Get Q(θ, θ_0), the expectation of the complete-data log-likelihood log p(x, y; θ) under p(x | y, θ_0)
2. Maximization
   - Set the new estimate to the θ that maximizes Q(θ, θ_0)
The EM algorithm converges to a local maximum of the likelihood.
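Below is a minimal, self-contained Python sketch of these two steps for the basket/color mixture (an illustration of the standard EM updates, not code from the presentation; the observed color sequence and the random initialization are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed colors (1 = white, 2 = grey, 3 = black); a made-up sequence of 15 draws.
y = np.array([1, 2, 2, 3, 1, 2, 3, 3, 2, 1, 2, 2, 3, 1, 2]) - 1

# Initial guess theta_0: basket priors b and emission probabilities A[i, j] = p(color j | basket i).
b = rng.dirichlet(np.ones(3))
A = rng.dirichlet(np.ones(3), size=3)

for _ in range(100):
    # E-step: posterior over the hidden basket for each draw, p(x | y, theta_0).
    post = b[None, :] * A[:, y].T          # shape (T, baskets), unnormalized
    post /= post.sum(axis=1, keepdims=True)

    # M-step: re-estimate theta by maximizing the expected complete-data
    # log-likelihood Q(theta, theta_0).
    b = post.mean(axis=0)
    for j in range(3):
        A[:, j] = post[y == j].sum(axis=0)
    A /= A.sum(axis=1, keepdims=True)

# Log-likelihood under the final parameters (non-decreasing across iterations).
loglik = np.log((b[None, :] * A[:, y].T).sum(axis=1)).sum()
print("b =", np.round(b, 3))
print("p(j|i) =\n", np.round(A, 3))
print("log-likelihood =", round(loglik, 3))
```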
Slide 8
Log-likelihood is non-decreasing, examples
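Stated as a formula, the guarantee illustrated by the example plots on this slide is the standard one:

```latex
\log p(y;\theta_{n+1}) \;\ge\; \log p(y;\theta_{n}) \qquad \text{for every EM iteration } n
```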
Slide 9
EM: another approach
Goal: maximize the log-likelihood of the observations.
Tool: Jensen's inequality for a concave function.
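Jensen's inequality for a concave function f states that f applied to an average is at least the average of f. Applied with f = log (which is concave), it yields the lower bound on the log-likelihood that EM works with. A sketch of the standard derivation (a reconstruction, since the slide's formulas were lost):

```latex
% Jensen's inequality for a concave f and any distribution q over x:
f\Big(\sum_x q(x)\, z(x)\Big) \;\ge\; \sum_x q(x)\, f\big(z(x)\big)

% Applied with f = log to the observed-data log-likelihood:
\log p(y;\theta) \;=\; \log \sum_x q(x)\,\frac{p(x,y;\theta)}{q(x)}
                 \;\ge\; \sum_x q(x)\,\log \frac{p(x,y;\theta)}{q(x)}
```

Equality holds when q(x) = p(x | y; θ), which is why the E-step uses the posterior over the hidden variables.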
Slide 12
Expectation-maximization algorithm (II)
1. Expectation
2. Maximization
Formulations (I) and (II) are equivalent.
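The formulas for the two steps of formulation (II) were lost in extraction. A common way to state this second formulation, consistent with the Jensen bound above (a plausible reconstruction, not the slide's exact notation), is coordinate ascent on the lower bound F(q, θ):

```latex
% Lower bound (free energy) obtained from Jensen's inequality:
F(q,\theta) \;=\; \sum_x q(x)\,\log\frac{p(x,y;\theta)}{q(x)} \;\le\; \log p(y;\theta)

% E-step: maximize F over q with theta fixed
q_{n+1}(x) \;=\; \arg\max_q F(q,\theta_n) \;=\; p(x \mid y;\theta_n)

% M-step: maximize F over theta with q fixed
\theta_{n+1} \;=\; \arg\max_{\theta} F(q_{n+1},\theta)
```

With q fixed at the posterior, maximizing F over θ is the same as maximizing Q(θ, θ_n) up to a term that does not depend on θ, which is why formulations (I) and (II) give the same updates.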
Slide 13
Scheme of the approach