
1 Introduction to Expectation Maximization. Assembled and extended by Longin Jan Latecki, Temple University, latecki@temple.edu, based on slides by Andrew Blake, Microsoft Research; Bill Freeman, MIT, ICCV 2003; and Andrew W. Moore, Carnegie Mellon University.

2 Learning and vision: Generative Methods. Machine learning is an important direction in computer vision. Our goal for this class:
– Give overviews of useful techniques.
– Show how these methods can be used in vision.
– Provide references and pointers.

3 What is the goal of vision? If you are asking, “Are there any faces in this image?”, then you would probably want to use discriminative methods.

4 What is the goal of vision? If you are asking, “Are there any faces in this image?”, then you would probably want to use discriminative methods. If you are asking, “Find a 3-d model that describes the runner”, then you would use generative methods.

5 Modeling. We want to look at high-dimensional visual data and fit models to it, forming summaries that let us understand what we see.

6 The simplest data to model: a set of 1-d samples

7 Fit this distribution with a Gaussian

8 How do we find the parameters of the best-fitting Gaussian? With data points x_1, …, x_N, mean μ, and std. dev. σ, Bayes' rule gives p(μ, σ | x_1, …, x_N) = p(x_1, …, x_N | μ, σ) p(μ, σ) / p(x_1, …, x_N), i.e. posterior probability = likelihood function × prior probability / evidence.

9 How do we find the parameters of the best-fitting Gaussian? Maximum likelihood parameter estimation: pick the mean μ and std. dev. σ that maximise the likelihood of the data points, (μ*, σ*) = argmax_{μ,σ} p(x_1, …, x_N | μ, σ).

10 Derivation of MLE for Gaussians: write down the observation density, form the log likelihood, and maximise it (the steps are reconstructed below).
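
The equations on this slide are not in the transcript; the following is a reconstruction of the three named steps for 1-d data x_1, …, x_N, assuming the standard Gaussian form:

```latex
% Observation density
\[ p(x_n \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x_n-\mu)^2}{2\sigma^2}\right) \]
% Log likelihood
\[ L(\mu,\sigma) = \sum_{n=1}^{N}\log p(x_n \mid \mu,\sigma)
   = -N\log\sigma - \frac{1}{2\sigma^2}\sum_{n=1}^{N}(x_n-\mu)^2 + \mathrm{const} \]
% Maximisation: set the partial derivatives to zero
\[ \frac{\partial L}{\partial\mu}=0 \;\Rightarrow\; \hat\mu = \frac{1}{N}\sum_{n=1}^{N} x_n, \qquad
   \frac{\partial L}{\partial\sigma}=0 \;\Rightarrow\; \hat\sigma^2 = \frac{1}{N}\sum_{n=1}^{N}(x_n-\hat\mu)^2 \]
```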

11 Basic Maximum Likelihood Estimate (MLE) of a Gaussian distribution: mean, variance, covariance matrix.

12 Basic Maximum Likelihood Estimate (MLE) of a Gaussian distribution: mean and variance; for vector-valued data, we have the covariance matrix (see the formulas below).
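
The formulas themselves are not in the transcript; for vector-valued data x_1, …, x_N the standard ML estimates are:

```latex
\[ \hat{\boldsymbol\mu} = \frac{1}{N}\sum_{n=1}^{N}\mathbf{x}_n, \qquad
   \hat{\boldsymbol\Sigma} = \frac{1}{N}\sum_{n=1}^{N}(\mathbf{x}_n-\hat{\boldsymbol\mu})(\mathbf{x}_n-\hat{\boldsymbol\mu})^{\top} \]
```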

13 Model fitting example 2: Fit a line to observed data (scatter plot of x, y data).

14 Maximum likelihood estimation for the slope of a single line: write the data likelihood for point n, take the maximum likelihood estimate, and setting the derivative to zero gives the regression formula (reconstructed below).
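
A reconstruction of the missing equations, assuming the model on the slide is a line through the origin, y_n = a x_n plus Gaussian noise of known std. dev. σ:

```latex
% Data likelihood for point n
\[ p(y_n \mid x_n, a, \sigma) \propto \exp\!\left(-\frac{(y_n - a\,x_n)^2}{2\sigma^2}\right) \]
% Maximum likelihood estimate
\[ \hat a = \arg\max_a \prod_n p(y_n \mid x_n, a, \sigma)
          = \arg\min_a \sum_n (y_n - a\,x_n)^2 \]
% Setting the derivative to zero gives the regression formula
\[ \hat a = \frac{\sum_n x_n y_n}{\sum_n x_n^2} \]
```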

15 Model fitting example 3: Fitting two lines to observed data (scatter plot of x, y data).

16 MLE for fitting a line pair (a form of mixture distribution).

17 Fitting two lines: on the one hand… If we knew which points went with which lines, we'd be back at the single line-fitting problem, twice (one fit for Line 1, one for Line 2).

18 Fitting two lines, on the other hand… We could figure out the probability that any point came from either line if we just knew the equations of the two lines.

19 Expectation Maximization (EM): a solution to chicken-and-egg problems

20 EM example:

21

22

23

24 Converged!

25 MLE with hidden/latent variables: Expectation Maximisation. General problem: data y, parameters θ, hidden variables z. For MLE we want to maximise the log likelihood log p(y | θ) = log Σ_z p(y, z | θ), but the sum over z inside the log gives a complicated expression for the ML solution.

26 The EM algorithm. We don't know the values of the labels z_i, but let's use their expected values under the posterior with the current parameter values θ_old. That gives us the "expectation step" ("E-step"). Now let's maximise this Q function, an expected log-likelihood, over the parameter values, giving the "maximization step" ("M-step"). Each iteration increases the total log-likelihood log p(y | θ).
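
In symbols, a standard statement of the two steps in the slide's notation (y for the data, z for the hidden variables, θ for the parameters):

```latex
% E-step: expected complete-data log-likelihood under the current posterior
\[ Q(\theta \mid \theta^{\mathrm{old}}) = \sum_{z} p(z \mid y, \theta^{\mathrm{old}})\,\log p(y, z \mid \theta) \]
% M-step: maximise Q over the parameters
\[ \theta^{\mathrm{new}} = \arg\max_{\theta}\, Q(\theta \mid \theta^{\mathrm{old}}) \]
```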

27 Expectation Maximisation applied to fitting the two lines. Hidden variables associate each data point with one of the two lines; we need the probabilities of association under the current line parameters, and maximising the expected log-likelihood then gives the updated slopes.

28 EM fitting to two lines: alternate the "E-step" (compute each point's probability of association with each line) and the "M-step" (the regression becomes a weighted regression for each line), and repeat until convergence; a sketch is given below.
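
A minimal sketch of this alternation, assuming lines through the origin (y = a_j x), equal mixing weights of 1/2, and a known, fixed noise level σ; the function and variable names are illustrative, not from the slides:

```python
# EM for fitting two lines through the origin, y_n = a_j * x_n + noise,
# noise ~ N(0, sigma^2), with equal mixing weights and fixed sigma.
import numpy as np

def em_two_lines(x, y, a1=0.0, a2=1.0, sigma=1.0, n_iters=50):
    for _ in range(n_iters):
        # E-step: responsibility of each line for each point,
        # proportional to the Gaussian likelihood of the residual.
        r1 = np.exp(-(y - a1 * x) ** 2 / (2 * sigma ** 2))
        r2 = np.exp(-(y - a2 * x) ** 2 / (2 * sigma ** 2))
        w1 = r1 / (r1 + r2)
        w2 = 1.0 - w1
        # M-step: weighted least-squares slope for each line
        # (the single-line regression formula, weighted by responsibilities).
        a1 = np.sum(w1 * x * y) / np.sum(w1 * x ** 2)
        a2 = np.sum(w2 * x * y) / np.sum(w2 * x ** 2)
    return a1, a2

# Example usage on synthetic data:
# x = np.random.uniform(-1, 1, 200)
# labels = np.random.rand(200) < 0.5
# y = np.where(labels, 2.0 * x, -1.0 * x) + 0.1 * np.random.randn(200)
# print(em_two_lines(x, y, sigma=0.1))
```

In a fuller version one would also estimate intercepts, σ, and the mixing weights, as in the general GMM procedure later in the deck.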

29 Experiments: EM fitting to two lines. The figure shows the line weights for line 1 and line 2 at iterations 1, 2, 3 (from a tutorial by Yair Weiss, http://www.cs.huji.ac.il/~yweiss/tutorials.html).

30 Applications of EM in computer vision:
– Image segmentation
– Motion estimation combined with perceptual grouping
– Polygonal approximation of edges

31 Next… back to Density Estimation What if we want to do density estimation with multimodal or clumpy data?

32 The GMM assumption. There are k components; the i'th component is called ω_i. Component ω_i has an associated mean vector μ_i.

33 The GMM assumption. There are k components; the i'th component is called ω_i and has an associated mean vector μ_i. Each component generates data from a Gaussian with mean μ_i and covariance matrix σ²I. Assume that each datapoint is generated according to the following recipe:

34 The GMM assumption. There are k components; the i'th component is called ω_i and has an associated mean vector μ_i. Each component generates data from a Gaussian with mean μ_i and covariance matrix σ²I. Assume that each datapoint is generated according to the following recipe: 1. Pick a component at random; choose component i with probability P(ω_i).

35 The GMM assumption. There are k components; the i'th component is called ω_i and has an associated mean vector μ_i. Each component generates data from a Gaussian with mean μ_i and covariance matrix σ²I. Assume that each datapoint is generated according to the following recipe: 1. Pick a component at random; choose component i with probability P(ω_i). 2. Draw the datapoint x ~ N(μ_i, σ²I).

36 The General GMM assumption. There are k components; the i'th component is called ω_i and has an associated mean vector μ_i. Each component generates data from a Gaussian with mean μ_i and covariance matrix Σ_i. Assume that each datapoint is generated according to the following recipe: 1. Pick a component at random; choose component i with probability P(ω_i). 2. Draw the datapoint x ~ N(μ_i, Σ_i). A sampling sketch follows below.
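
A minimal sketch of this generative recipe, assuming NumPy is available; the function and argument names are illustrative, not from the slides:

```python
# Sample from a general GMM: k components with mixing probabilities
# P(omega_i), means mu_i, and covariance matrices Sigma_i.
import numpy as np

def sample_gmm(weights, means, covs, n_samples, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    samples = []
    for _ in range(n_samples):
        # 1. Pick a component at random with probability P(omega_i).
        i = rng.choice(len(weights), p=weights)
        # 2. Draw the datapoint from N(mu_i, Sigma_i).
        samples.append(rng.multivariate_normal(means[i], covs[i]))
    return np.array(samples)
```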

37 Unsupervised Learning: not as hard as it looks Sometimes easy Sometimes impossible and sometimes in between IN CASE YOU’RE WONDERING WHAT THESE DIAGRAMS ARE, THEY SHOW 2-d UNLABELED DATA (X VECTORS) DISTRIBUTED IN 2-d SPACE. THE TOP ONE HAS THREE VERY CLEAR GAUSSIAN CENTERS

38 Computing likelihoods in the unsupervised case. We have x_1, x_2, …, x_N. We know P(w_1), P(w_2), …, P(w_k), and we know σ. P(x | w_i, μ_1, …, μ_k) = the probability that an observation from class w_i would have value x given class means μ_1, …, μ_k. Can we write an expression for that?

39 Likelihoods in the unsupervised case. We have x_1, x_2, …, x_n. We have P(w_1), …, P(w_k). We have σ. We can define, for any x, P(x | w_i, μ_1, μ_2, …, μ_k). Can we define P(x | μ_1, μ_2, …, μ_k)? Can we define P(x_1, x_2, …, x_n | μ_1, μ_2, …, μ_k)? [YES, IF WE ASSUME THE x_i'S WERE DRAWN INDEPENDENTLY] The expressions are written out below.
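
The two expressions the slide asks for can be written out as follows (assuming the points are drawn independently):

```latex
\[ P(x \mid \mu_1,\ldots,\mu_k) = \sum_{i=1}^{k} P(x \mid w_i, \mu_1,\ldots,\mu_k)\,P(w_i) \]
\[ P(x_1,\ldots,x_n \mid \mu_1,\ldots,\mu_k)
   = \prod_{m=1}^{n} \sum_{i=1}^{k} P(x_m \mid w_i, \mu_1,\ldots,\mu_k)\,P(w_i) \]
```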

40 Unsupervised Learning: Mediumly Good News We now have a procedure s.t. if you give me a guess at μ 1, μ 2.. μ k, I can tell you the prob of the unlabeled data given those μ‘s. Suppose x ‘s are 1-dimensional. There are two classes; w 1 and w 2 P(w 1 ) = 1/3 P(w 2 ) = 2/3 σ = 1. There are 25 unlabeled datapoints x 1 = 0.608 x 2 = -1.590 x 3 = 0.235 x 4 = 3.949 : x 25 = -0.712 (From Duda and Hart)

41 Graph of log P(x_1, x_2, …, x_25 | μ_1, μ_2) against μ_1 and μ_2. Max likelihood = (μ_1 = -2.13, μ_2 = 1.668). There is a local maximum, very close to the global one, at (μ_1 = 2.085, μ_2 = -1.257)*. * corresponds to switching w_1 with w_2. Duda & Hart's example.

42 We can graph the prob. dist. function of data given our μ 1 and μ 2 estimates. We can also graph the true function from which the data was randomly generated. They are close. Good. The 2 nd solution tries to put the “2/3” hump where the “1/3” hump should go, and vice versa. In this example unsupervised is almost as good as supervised. If the x 1.. x 25 are given the class which was used to learn them, then the results are (μ 1 =-2.176, μ 2 =1.684). Unsupervised got (μ 1 =-2.13, μ 2 =1.668).

43 Finding the max likelihood μ_1, μ_2, …, μ_k. We can compute P(data | μ_1, μ_2, …, μ_k). How do we find the μ_i's which give max likelihood? The normal max likelihood trick: set ∂ log Prob(…)/∂μ_i = 0 and solve for the μ_i's. Here you get non-linear, non-analytically-solvable equations. Use gradient descent: slow but doable. Or use a much faster, cuter, and recently very popular method…

44 Expectation Maximization

45 The E.M. Algorithm (DETOUR). We'll get back to unsupervised learning soon, but now we'll look at an even simpler case with hidden information. The EM algorithm: can do trivial things, such as the contents of the next few slides; is an excellent way of doing our unsupervised learning problem, as we'll see; and has many, many other uses, including inference of Hidden Markov Models.

46 Silly Example. Let events be "grades in a class": w_1 = Gets an A, P(A) = ½; w_2 = Gets a B, P(B) = μ; w_3 = Gets a C, P(C) = 2μ; w_4 = Gets a D, P(D) = ½ - 3μ (note 0 ≤ μ ≤ 1/6). Assume we want to estimate μ from data. In a given class there were a A's, b B's, c C's, d D's. What's the maximum likelihood estimate of μ given a, b, c, d?

47 Silly Example. Let events be "grades in a class": w_1 = Gets an A, P(A) = ½; w_2 = Gets a B, P(B) = μ; w_3 = Gets a C, P(C) = 2μ; w_4 = Gets a D, P(D) = ½ - 3μ (note 0 ≤ μ ≤ 1/6). Assume we want to estimate μ from data. In a given class there were a A's, b B's, c C's, d D's. What's the maximum likelihood estimate of μ given a, b, c, d?

48 Trivial Statistics. P(A) = ½, P(B) = μ, P(C) = 2μ, P(D) = ½ - 3μ. P(a, b, c, d | μ) = K (½)^a (μ)^b (2μ)^c (½ - 3μ)^d. log P(a, b, c, d | μ) = log K + a log ½ + b log μ + c log 2μ + d log(½ - 3μ). Observed counts: A = 14, B = 6, C = 9, D = 10. Boring, but true! (The maximisation is written out below.)
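
A reconstruction of the maximisation step, using the tabulated counts a = 14, b = 6, c = 9, d = 10:

```latex
\[ \frac{\partial}{\partial\mu}\log P(a,b,c,d \mid \mu)
   = \frac{b}{\mu} + \frac{c}{\mu} - \frac{3d}{\tfrac12 - 3\mu} = 0
   \;\Rightarrow\;
   \hat\mu = \frac{b+c}{6\,(b+c+d)} = \frac{6+9}{6\,(6+9+10)} = \frac{1}{10} \]
```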

49 Same Problem with Hidden Information Someone tells us that Number of High grades (A’s + B’s) = h Number of C’s = c Number of D’s = d What is the max. like estimate of μ now? REMEMBER P(A) = ½ P(B) = μ P(C) = 2μ P(D) = ½-3μ

50 Same Problem with Hidden Information. Someone tells us that the number of high grades (A's + B's) = h, the number of C's = c, and the number of D's = d. What is the max. like estimate of μ now? We can answer this question circularly. EXPECTATION: if we knew the value of μ we could compute the expected values of a and b, since the ratio a : b should be the same as the ratio ½ : μ. MAXIMIZATION: if we knew the expected values of a and b we could compute the maximum likelihood value of μ. REMEMBER: P(A) = ½, P(B) = μ, P(C) = 2μ, P(D) = ½ - 3μ.

51 E.M. for our Trivial Problem. We begin with a guess for μ and iterate between EXPECTATION and MAXIMIZATION to improve our estimates of μ and of a and b. Define μ(t) = the estimate of μ on the t'th iteration and b(t) = the estimate of b on the t'th iteration. In the E-step we compute the expected b given μ(t); in the M-step we compute the maximum likelihood μ given b(t) (the update formulas are written out below). Continue iterating until converged. Good news: convergence to a local optimum is assured. Bad news: I said "local" optimum. REMEMBER: P(A) = ½, P(B) = μ, P(C) = 2μ, P(D) = ½ - 3μ.
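
The update formulas, reconstructed from the problem statement (h = a + b is the number of high grades):

```latex
% E-step: expected b given mu(t), splitting h in the ratio 1/2 : mu(t)
\[ b(t) = \frac{\mu(t)}{\tfrac12 + \mu(t)}\, h \]
% M-step: maximum-likelihood mu given the expected counts
\[ \mu(t+1) = \frac{b(t) + c}{6\,\bigl(b(t) + c + d\bigr)} \]
```

With h = 20, c = 10, d = 10 these updates reproduce the convergence table on the next slide.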

52 E.M. Convergence. The convergence proof is based on the fact that Prob(data | μ) must increase or remain the same between iterations [NOT OBVIOUS], but it can never exceed 1 [OBVIOUS], so it must therefore converge [OBVIOUS]. In our example, suppose we had h = 20, c = 10, d = 10, μ(0) = 0:

t   μ(t)     b(t)
0   0        0
1   0.0833   2.857
2   0.0937   3.158
3   0.0947   3.185
4   0.0948   3.187
5   0.0948   3.187
6   0.0948   3.187

Convergence is generally linear: the error decreases by a constant factor each time step.

53 Back to Unsupervised Learning of GMMs Remember: We have unlabeled data x 1 x 2 … x R We know there are k classes We know P(w 1 ) P(w 2 ) P(w 3 ) … P(w k ) We don’t know μ 1 μ 2.. μ k We can write P( data | μ 1 …. μ k )

54 E.M. for GMMs. Maximising the likelihood leads to nonlinear equations in the μ_j's… I feel an EM experience coming on!! If, for each x_i, we knew the probability P(w_j | x_i, μ_1 … μ_k) that x_i was in class w_j, then we could easily compute each μ_j. If we knew each μ_j, then we could easily compute P(w_j | x_i, μ_1 … μ_k) for each w_j and x_i. See http://www.cs.cmu.edu/~awm/doc/gmm-algebra.pdf

55 E.M. for GMMs. Iterate. On the t'th iteration let our estimates be λ_t = { μ_1(t), μ_2(t), …, μ_c(t) }. E-step: compute the "expected" classes of all datapoints for each class (just evaluate a Gaussian at x_k). M-step: compute the max. like μ given our data's class membership distributions. The formulas are written out below.
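
Written out, a reconstruction in the slide's notation (in this version the priors P(w_j) and σ are known and fixed, and only the means are re-estimated):

```latex
% E-step: responsibilities
\[ P(w_j \mid x_k, \lambda_t)
   = \frac{p(x_k \mid w_j, \mu_j(t))\,P(w_j)}
          {\sum_{i=1}^{c} p(x_k \mid w_i, \mu_i(t))\,P(w_i)} \]
% M-step: responsibility-weighted means
\[ \mu_j(t+1) = \frac{\sum_{k} P(w_j \mid x_k, \lambda_t)\, x_k}
                     {\sum_{k} P(w_j \mid x_k, \lambda_t)} \]
```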

56 E.M. Convergence This algorithm is REALLY USED. And in high dimensional state spaces, too. E.G. Vector Quantization for Speech Data As with all EM procedures, convergence to a local optimum guaranteed.

57 E.M. for General GMMs. Iterate. On the t'th iteration let our estimates be λ_t = { μ_1(t), …, μ_c(t), Σ_1(t), …, Σ_c(t), p_1(t), …, p_c(t) }, where p_i(t) is shorthand for the estimate of P(ω_i) on the t'th iteration and R = #records. E-step: compute the "expected" classes of all datapoints for each class (just evaluate a Gaussian at x_k). M-step: compute the max. like μ, Σ, and p given our data's class membership distributions. A sketch follows below.
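
A minimal sketch of the full procedure, assuming NumPy and SciPy are available; the initialisation, the small regularisation term, and all names are illustrative choices, not from the slides:

```python
# EM for a general GMM: means, full covariances, and mixing weights
# are all re-estimated, following the E-step/M-step structure above.
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    R, d = X.shape                      # R = #records, d = dimension
    means = X[rng.choice(R, k, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])
    weights = np.full(k, 1.0 / k)       # p_i(t), estimates of P(omega_i)
    for _ in range(n_iters):
        # E-step: responsibilities P(omega_i | x_k, lambda_t).
        resp = np.column_stack([
            weights[i] * multivariate_normal.pdf(X, means[i], covs[i])
            for i in range(k)
        ])
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and covariances.
        Nk = resp.sum(axis=0)
        weights = Nk / R
        means = (resp.T @ X) / Nk[:, None]
        for i in range(k):
            diff = X - means[i]
            covs[i] = (resp[:, i, None] * diff).T @ diff / Nk[i] + 1e-6 * np.eye(d)
    return weights, means, covs
```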

58 Gaussian Mixture Example: Start

59 After first iteration

60 After 2nd iteration

61 After 3rd iteration

62 After 4th iteration

63 After 5th iteration

64 After 6th iteration

65 After 20th iteration

66 Some Bio Assay data

67 GMM clustering of the assay data

68 Resulting Density Estimator

69 In EM the model (the number of parameters, which is the number of lines in our case) is fixed. What happens if we start with a wrong model? One line: converged!

70 Three lines:

71 Why not 8 lines?

72 Which model is better? We cannot just compute an approximation error (e.g. LSE), since it decreases with the number of lines. If we have too many lines, we just fit the noise.

73 Can we find an optimal model? The human visual system does it. (Figure: bad fit vs. good fit.)

74 Polygonal Approximation of Laser Range Data Based on Perceptual Grouping and EM. Longin Jan Latecki and Rolf Lakaemper, CIS Dept., Temple University, Philadelphia.

75 Grouping Edge Points to Contour Parts. Longin Jan Latecki and Rolf Lakaemper, CIS Dept., Temple University, Philadelphia.

