
1 ECE 8443 – Pattern Recognition
LECTURE 06: MAXIMUM LIKELIHOOD AND BAYESIAN ESTIMATION
Objectives: Bias in ML Estimates; Bayesian Estimation; Example
Resources: D.H.S.: Chapter 3 (Part 2); Wiki: Maximum Likelihood; M.Y.: Maximum Likelihood Tutorial; J.O.S.: Bayesian Parameter Estimation; J.H.: Euro Coin

2 Gaussian Case: Unknown Mean (Review)
Consider the case where only the mean, $\theta = \mu$, is unknown. The log-likelihood of a single sample is

$\ln p(x_k|\mu) = -\frac{1}{2}\ln\left[(2\pi)^d|\Sigma|\right] - \frac{1}{2}(x_k-\mu)^T\Sigma^{-1}(x_k-\mu),$

which implies:

$\nabla_{\mu}\ln p(x_k|\mu) = \Sigma^{-1}(x_k-\mu),$

because only the quadratic term depends on $\mu$.
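The gradient expression above can be sanity-checked numerically. The following is a minimal sketch (not part of the original slides; the variable names and values are illustrative) comparing $\Sigma^{-1}(x_k-\mu)$ against a finite-difference gradient of the Gaussian log-density:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d = 3
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)          # a valid (positive definite) covariance
x = rng.multivariate_normal(mu, Sigma)

# Analytic gradient of ln p(x | mu) with respect to mu
grad_analytic = np.linalg.solve(Sigma, x - mu)

# Finite-difference gradient for comparison
eps = 1e-6
grad_fd = np.zeros(d)
for i in range(d):
    e = np.zeros(d); e[i] = eps
    grad_fd[i] = (multivariate_normal.logpdf(x, mu + e, Sigma)
                  - multivariate_normal.logpdf(x, mu - e, Sigma)) / (2 * eps)

print(np.allclose(grad_analytic, grad_fd, atol=1e-4))   # True
```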

3 Gaussian Case: Unknown Mean (Review)
Substituting into the expression for the total likelihood, setting the gradient to zero, and rearranging terms:

$\sum_{k=1}^{n}\Sigma^{-1}(x_k-\hat{\mu}) = 0.$

Significance: the ML estimate of the mean is simply the sample mean,

$\hat{\mu} = \frac{1}{n}\sum_{k=1}^{n} x_k.$
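As a quick numerical illustration (a sketch with assumed data, not from the slides): at $\hat{\mu} = \bar{x}$ the summed score vanishes, and the log-likelihood is at least as large as at nearby values of $\mu$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=500)

mu_hat = x.mean()
print(np.isclose(np.sum(x - mu_hat), 0.0))           # score is zero at the sample mean

loglik = lambda m: norm.logpdf(x, loc=m, scale=1.5).sum()
print(loglik(mu_hat) >= loglik(mu_hat + 0.1))        # True
print(loglik(mu_hat) >= loglik(mu_hat - 0.1))        # True
```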

4 Gaussian Case: Unknown Mean and Variance (Review)
Let $\theta = [\mu, \sigma^2]^T$. The log-likelihood of a SINGLE point is:

$\ln p(x_k|\theta) = -\frac{1}{2}\ln(2\pi\theta_2) - \frac{(x_k-\theta_1)^2}{2\theta_2}.$

Differentiating, summing over the $n$ samples, and setting the result to zero, the full likelihood leads to:

$\sum_{k=1}^{n}\frac{x_k-\hat{\theta}_1}{\hat{\theta}_2} = 0, \qquad -\sum_{k=1}^{n}\frac{1}{\hat{\theta}_2} + \sum_{k=1}^{n}\frac{(x_k-\hat{\theta}_1)^2}{\hat{\theta}_2^{\,2}} = 0.$

5 Gaussian Case: Unknown Mean and Variance (Review)
This leads to these equations:

$\hat{\mu} = \frac{1}{n}\sum_{k=1}^{n} x_k, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{k=1}^{n}(x_k-\hat{\mu})^2.$

In the multivariate case:

$\hat{\mu} = \frac{1}{n}\sum_{k=1}^{n} x_k, \qquad \hat{\Sigma} = \frac{1}{n}\sum_{k=1}^{n}(x_k-\hat{\mu})(x_k-\hat{\mu})^T.$

The true covariance is the expected value of the matrix $(x-\mu)(x-\mu)^T$, so this is a familiar result.
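A minimal numerical sketch of these estimators (the example data are assumed, not from the slides): the ML covariance uses the $1/n$ normalization, which corresponds to np.cov(..., bias=True).

```python
import numpy as np

rng = np.random.default_rng(2)
true_mu = np.array([1.0, -2.0])
true_Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
X = rng.multivariate_normal(true_mu, true_Sigma, size=1000)        # shape (n, d)

n = X.shape[0]
mu_hat = X.mean(axis=0)
diff = X - mu_hat
Sigma_hat = diff.T @ diff / n                                      # ML estimate (1/n)

print(mu_hat)
print(np.allclose(Sigma_hat, np.cov(X, rowvar=False, bias=True)))  # True
```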

6 Convergence of the Mean (Review)
Does the maximum likelihood estimate of the variance converge to the true value of the variance? Let's start with a few simple results we will need later. The expected value of the ML estimate of the mean:

$E[\hat{\mu}] = E\left[\frac{1}{n}\sum_{i=1}^{n} x_i\right] = \frac{1}{n}\sum_{i=1}^{n} E[x_i] = \mu,$

so the ML estimate of the mean is unbiased.

7 Variance of the ML Estimate of the Mean (Review)
The variance of the estimate is

$\mathrm{var}(\hat{\mu}) = E[\hat{\mu}^2] - (E[\hat{\mu}])^2,$

which implies:

$\mathrm{var}(\hat{\mu}) = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} E[x_i x_j] - \mu^2.$

The expected value of $x_i x_j$ will be $\mu^2$ for $j \neq i$ since the two random variables are independent. The expected value of $x_i^2$ will be $\mu^2 + \sigma^2$. Hence, in the summation above, we have $n^2-n$ terms with expected value $\mu^2$ and $n$ terms with expected value $\mu^2+\sigma^2$. Thus,

$\mathrm{var}(\hat{\mu}) = \frac{1}{n^2}\left[(n^2-n)\mu^2 + n(\mu^2+\sigma^2)\right] - \mu^2 = \frac{\sigma^2}{n}.$

We see that the variance of the estimate goes to zero as $n$ goes to infinity, and our estimate converges to the true value (the error goes to zero).
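A small Monte Carlo sketch (illustrative numbers, not from the slides) checking that the variance of the sample mean is close to $\sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, trials = 5.0, 2.0, 50, 100_000

# Empirical variance of the sample mean over many repeated experiments
means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
print(means.var())        # approximately sigma**2 / n
print(sigma**2 / n)       # 0.08
```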

8 Variance Relationships
Note that this implies:

$E[\hat{\mu}^2] = \mathrm{var}(\hat{\mu}) + (E[\hat{\mu}])^2 = \frac{\sigma^2}{n} + \mu^2.$

Now we can combine these results. Recall our expression for the ML estimate of the variance:

$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\hat{\mu})^2.$

We will need one more result:

$E[x_i^2] = \mathrm{var}(x_i) + (E[x_i])^2 = \sigma^2 + \mu^2.$

9 Covariance Expansion
Expand the covariance and simplify:

$E[(x_i-\hat{\mu})^2] = E[x_i^2] - 2E[x_i\hat{\mu}] + E[\hat{\mu}^2].$

One more intermediate term to derive:

$E[x_i\hat{\mu}] = E\left[x_i \cdot \frac{1}{n}\sum_{j=1}^{n} x_j\right] = \frac{1}{n}\left[(\mu^2+\sigma^2) + (n-1)\mu^2\right] = \mu^2 + \frac{\sigma^2}{n}.$

10 Biased Variance Estimate
Substitute our previously derived expression for the second term:

$E[\hat{\sigma}^2] = \frac{1}{n}\sum_{i=1}^{n} E[(x_i-\hat{\mu})^2] = (\mu^2+\sigma^2) - 2\left(\mu^2+\frac{\sigma^2}{n}\right) + \left(\mu^2+\frac{\sigma^2}{n}\right) = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n}\sigma^2.$

11 Expectation Simplification
Therefore, the ML estimate is biased:

$E[\hat{\sigma}^2] = \frac{n-1}{n}\sigma^2 \neq \sigma^2.$

However, the estimate is asymptotically unbiased, and the ML estimate converges to the true variance (its mean-squared error goes to zero). An unbiased estimator is:

$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i-\hat{\mu})^2.$

These are related by:

$\hat{\sigma}^2 = \frac{n-1}{n}\,s^2.$

See Burl, AJWills and AWM for excellent examples and explanations of the details of this derivation.
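A small simulation sketch (illustrative, not from the slides) showing the bias: averaging the ML estimate over many trials lands near $\frac{n-1}{n}\sigma^2$, while the $\frac{1}{n-1}$ estimator lands near $\sigma^2$. In NumPy these correspond to ddof=0 and ddof=1.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, n, trials = 4.0, 10, 100_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
ml_var       = samples.var(axis=1, ddof=0)   # 1/n normalization (ML estimate)
unbiased_var = samples.var(axis=1, ddof=1)   # 1/(n-1) normalization

print(ml_var.mean())         # ~ (n-1)/n * sigma2 = 3.6
print(unbiased_var.mean())   # ~ sigma2 = 4.0
```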

12 Introduction to Bayesian Parameter Estimation
In Chapter 2, we learned how to design an optimal classifier if we knew the prior probabilities, $P(\omega_i)$, and the class-conditional densities, $p(x|\omega_i)$.
Bayes: treat the parameters as random variables having some known prior distribution. Observation of samples converts this prior to a posterior.
Bayesian learning: sharpen the a posteriori density, causing it to peak near the true value.
Supervised vs. unsupervised: do we know the class assignments of the training data?
Bayesian estimation and ML estimation produce very similar results in many cases.
Reduces statistical inference (prior knowledge or beliefs about the world) to probabilities.

13 Class-Conditional Densities
Posterior probabilities, $P(\omega_i|x)$, are central to Bayesian classification. Bayes formula allows us to compute $P(\omega_i|x)$ from the priors, $P(\omega_i)$, and the likelihood, $p(x|\omega_i)$.
But what if the priors and class-conditional densities are unknown? The answer is that we can compute the posterior, $P(\omega_i|x)$, using all of the information at our disposal (e.g., the training data).
For a training set, $D$, Bayes formula becomes:

$P(\omega_i|x, D) = \frac{p(x|\omega_i, D)\,P(\omega_i|D)}{\sum_{j} p(x|\omega_j, D)\,P(\omega_j|D)}.$

We assume the priors are known: $P(\omega_i|D) = P(\omega_i)$. Also, assume functional independence: the samples in $D_i$ have no influence on $p(x|\omega_j, D_j)$ for $i \neq j$. This gives:

$P(\omega_i|x, D) = \frac{p(x|\omega_i, D_i)\,P(\omega_i)}{\sum_{j} p(x|\omega_j, D_j)\,P(\omega_j)}.$

14 The Parameter Distribution
Assume the parametric form of the density is known: $p(x|\theta)$. Any information we have about $\theta$ prior to collecting samples is contained in a known prior density $p(\theta)$. Observation of the samples converts this to a posterior, $p(\theta|D)$, which we hope is sharply peaked around the true value of $\theta$.
Our goal is to estimate the density using the parameter vector $\theta$:

$p(x|D) = \int p(x|\theta)\,p(\theta|D)\,d\theta.$

We can write the joint distribution of the samples as a product:

$p(D|\theta) = \prod_{k=1}^{n} p(x_k|\theta),$

because the samples are drawn independently.
The first equation links the class-conditional density to the posterior, $p(\theta|D)$. But numerical solutions are typically required!
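Since these integrals rarely have closed forms, a grid-based numerical evaluation is a common fallback. The following illustration (an assumed Gaussian likelihood and prior; the names and numbers are not from the slides) approximates $p(\theta|D)$ and then $p(x|D)$ on a grid of $\theta$ values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
sigma, mu0, sigma0 = 1.0, 0.0, 2.0            # assumed known variance and prior
D = rng.normal(1.5, sigma, size=20)           # observed samples

theta = np.linspace(-5, 5, 2001)              # grid over the unknown mean
dtheta = theta[1] - theta[0]

# Unnormalized posterior: prod_k p(x_k | theta) * p(theta), computed in log space
log_post = norm.logpdf(D[:, None], loc=theta, scale=sigma).sum(axis=0) \
           + norm.logpdf(theta, loc=mu0, scale=sigma0)
post = np.exp(log_post - log_post.max())
post /= post.sum() * dtheta                   # normalize so the posterior integrates to 1

# Predictive density p(x|D) = integral of p(x|theta) p(theta|D) dtheta
x = 2.0
p_x_given_D = (norm.pdf(x, loc=theta, scale=sigma) * post).sum() * dtheta
print(p_x_given_D)
```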

15 Univariate Gaussian Case
Case: only the mean, $\mu$, is unknown, with $p(x|\mu) \sim N(\mu, \sigma^2)$.
Known prior density: $p(\mu) \sim N(\mu_0, \sigma_0^2)$.
Rationale: once a value of $\mu$ is known, the density for $x$ is completely known.
Using Bayes formula:

$p(\mu|D) = \frac{p(D|\mu)\,p(\mu)}{\int p(D|\mu)\,p(\mu)\,d\mu} = \alpha \prod_{k=1}^{n} p(x_k|\mu)\,p(\mu),$

where $\alpha$ is a normalization factor that depends on the data, $D$.

16 Univariate Gaussian Case
Applying our Gaussian assumptions:

$p(\mu|D) = \alpha \prod_{k=1}^{n}\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left[-\frac{1}{2}\left(\frac{x_k-\mu}{\sigma}\right)^2\right]\cdot\frac{1}{\sqrt{2\pi}\,\sigma_0}\exp\!\left[-\frac{1}{2}\left(\frac{\mu-\mu_0}{\sigma_0}\right)^2\right].$

17 Univariate Gaussian Case (Cont.)
Now we need to work this into a simpler form. Collecting everything that does not depend on $\mu$ into the normalization factor:

$p(\mu|D) = \alpha'\exp\!\left[-\frac{1}{2}\left(\sum_{k=1}^{n}\left(\frac{\mu-x_k}{\sigma}\right)^2 + \left(\frac{\mu-\mu_0}{\sigma_0}\right)^2\right)\right]
= \alpha''\exp\!\left[-\frac{1}{2}\left(\left(\frac{n}{\sigma^2}+\frac{1}{\sigma_0^2}\right)\mu^2 - 2\left(\frac{1}{\sigma^2}\sum_{k=1}^{n} x_k + \frac{\mu_0}{\sigma_0^2}\right)\mu\right)\right].$

18 Univariate Gaussian Case (Cont.)
$p(\mu|D)$ is an exponential of a quadratic function of $\mu$, which makes it a normal distribution. Because this is true for any $n$, it is referred to as a reproducing density, and $p(\mu)$ is referred to as a conjugate prior.
Write $p(\mu|D) \sim N(\mu_n, \sigma_n^2)$:

$p(\mu|D) = \frac{1}{\sqrt{2\pi}\,\sigma_n}\exp\!\left[-\frac{1}{2}\left(\frac{\mu-\mu_n}{\sigma_n}\right)^2\right].$

Expand the quadratic term:

$p(\mu|D) = \frac{1}{\sqrt{2\pi}\,\sigma_n}\exp\!\left[-\frac{1}{2}\left(\frac{\mu^2}{\sigma_n^2} - \frac{2\mu_n\mu}{\sigma_n^2} + \frac{\mu_n^2}{\sigma_n^2}\right)\right].$

Equate the coefficients of our two functions.

19 Univariate Gaussian Case (Cont.)
Rearrange terms so that the dependencies on $\mu$ are clear, and associate the terms related to $\mu^2$ and $\mu$:

$\frac{1}{\sigma_n^2} = \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}, \qquad \frac{\mu_n}{\sigma_n^2} = \frac{1}{\sigma^2}\sum_{k=1}^{n} x_k + \frac{\mu_0}{\sigma_0^2} = \frac{n}{\sigma^2}\hat{\mu}_n + \frac{\mu_0}{\sigma_0^2},$

where $\hat{\mu}_n = \frac{1}{n}\sum_{k=1}^{n} x_k$ is the sample mean. There is actually a third equation involving the terms not related to $\mu$, but we can ignore it since it is not a function of $\mu$ and is a complicated equation to solve.
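As a quick symbolic check (a sketch, not part of the slides), SymPy confirms that solving these two coefficient equations yields the closed-form expressions given on the next slide:

```python
import sympy as sp

mu0, sigma, sigma0, n, xbar = sp.symbols('mu_0 sigma sigma_0 n xbar', positive=True)

# Coefficient matching: 1/sigma_n^2   = n/sigma^2 + 1/sigma_0^2
#                       mu_n/sigma_n^2 = n*xbar/sigma^2 + mu_0/sigma_0^2
sigma_n2 = 1 / (n / sigma**2 + 1 / sigma0**2)
mu_n = (n * xbar / sigma**2 + mu0 / sigma0**2) * sigma_n2

print(sp.simplify(sigma_n2 - sigma0**2 * sigma**2 / (n * sigma0**2 + sigma**2)))                 # 0
print(sp.simplify(mu_n - (n * sigma0**2 * xbar + sigma**2 * mu0) / (n * sigma0**2 + sigma**2)))  # 0
```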

20 Univariate Gaussian Case (Cont.)
Two equations and two unknowns. Solve for $\mu_n$ and $\sigma_n^2$. First, solve for $\sigma_n^2$:

$\sigma_n^2 = \frac{\sigma_0^2\,\sigma^2}{n\sigma_0^2 + \sigma^2}.$

Next, solve for $\mu_n$:

$\mu_n = \left(\frac{n\sigma_0^2}{n\sigma_0^2 + \sigma^2}\right)\hat{\mu}_n + \left(\frac{\sigma^2}{n\sigma_0^2 + \sigma^2}\right)\mu_0.$

Summarizing: the posterior is $p(\mu|D) \sim N(\mu_n, \sigma_n^2)$ with $\mu_n$ a weighted combination of the sample mean and the prior mean.
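A minimal sketch of these update equations (the function name and test values are illustrative, not from the slides):

```python
import numpy as np

def gaussian_posterior(x, mu0, sigma0_sq, sigma_sq):
    """Posterior N(mu_n, sigma_n^2) over the mean, given known variance sigma_sq
    and an N(mu0, sigma0_sq) prior."""
    n = len(x)
    xbar = np.mean(x)
    mu_n = (n * sigma0_sq * xbar + sigma_sq * mu0) / (n * sigma0_sq + sigma_sq)
    sigma_n_sq = (sigma0_sq * sigma_sq) / (n * sigma0_sq + sigma_sq)
    return mu_n, sigma_n_sq

rng = np.random.default_rng(6)
x = rng.normal(3.0, 1.0, size=25)              # data with true mean 3, sigma^2 = 1
print(gaussian_posterior(x, mu0=0.0, sigma0_sq=4.0, sigma_sq=1.0))
# mu_n is pulled close to the sample mean; sigma_n^2 is roughly sigma^2/n for large n
```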

21 Bayesian Learning
$\mu_n$ represents our best guess after $n$ samples, and $\sigma_n^2$ represents our uncertainty about this guess.
$\sigma_n^2$ approaches $\sigma^2/n$ for large $n$: each additional observation decreases our uncertainty.
The posterior, $p(\mu|D)$, becomes more sharply peaked as $n$ grows large. This is known as Bayesian learning.
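A short sketch (illustrative values only) showing $\sigma_n^2$ shrinking toward $\sigma^2/n$ as $n$ grows, i.e., the posterior sharpening the slide calls Bayesian learning:

```python
sigma_sq, sigma0_sq = 1.0, 4.0
for n in (1, 10, 100, 1000):
    sigma_n_sq = sigma0_sq * sigma_sq / (n * sigma0_sq + sigma_sq)
    print(n, sigma_n_sq, sigma_sq / n)   # sigma_n^2 approaches sigma^2/n
```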

22 "The Euro Coin"
Getting ahead a bit, let's see how we can put these ideas to work on a simple example due to David MacKay, and explained by Jon Hamaker.
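MacKay's Euro-coin question asks whether an observed run of heads is evidence that the coin is biased. As a forward-looking sketch (the counts below are the commonly quoted figures from press reports of the spinning experiment, an assumption here rather than anything stated in these slides), a Beta-Binomial posterior over the heads probability can be computed directly:

```python
from scipy.stats import beta

# Commonly quoted figures for the Belgian one-euro spinning experiment
# (an assumption, not stated in the slides): 140 heads in 250 spins.
heads, spins = 140, 250

# Uniform Beta(1, 1) prior over the heads probability p
posterior = beta(1 + heads, 1 + (spins - heads))

print(posterior.mean())            # posterior mean of p, about 0.56
print(posterior.interval(0.95))    # 95% credible interval, roughly (0.50, 0.62)
```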

23 Summary
Review of maximum likelihood parameter estimation in the Gaussian case, with an emphasis on convergence and bias of the estimates.
Introduction of Bayesian parameter estimation.
The role of the class-conditional distribution in a Bayesian estimate.
Estimation of the posterior and probability density function assuming the only unknown parameter is the mean, and the conditional density of the "features" given the mean, $p(x|\mu)$, can be modeled as a Gaussian distribution.

