Bayesian Estimation (BE) Bayesian Parameter Estimation: Gaussian Case

Presentation transcript:

Chapter 3 (part 2): Maximum-Likelihood and Bayesian Parameter Estimation

Topics: Bayesian Estimation (BE); Bayesian Parameter Estimation: Gaussian Case; Bayesian Parameter Estimation: General Estimation; Problems of Dimensionality; Computational Complexity; Component Analysis and Discriminants; Hidden Markov Models.

All materials used in this course were taken from the textbook "Pattern Classification" by Duda et al., John Wiley & Sons, 2001, with the permission of the authors and the publisher.

Bayesian Estimation (Bayesian learning applied to pattern classification problems). In MLE, θ was assumed to be a fixed but unknown quantity; in BE, θ is a random variable. The computation of the posterior probabilities P(ωi | x) lies at the heart of Bayesian classification. Goal: compute P(ωi | x, D). Given the sample D, Bayes formula can be written

P(ωi | x, D) = p(x | ωi, D) P(ωi | D) / Σj p(x | ωj, D) P(ωj | D)

To demonstrate the preceding equation, use the following facts: the prior probabilities are known in advance, so P(ωi | D) = P(ωi); and the training samples in Di provide information only about the parameters of class ωi, so p(x | ωi, D) = p(x | ωi, Di). The problem then separates into c independent problems, one per class, of estimating p(x | ωi, Di).

Bayesian Parameter Estimation: Gaussian Case. Goal: estimate μ using the a-posteriori density p(μ | D). The univariate case, p(μ | D): μ is the only unknown parameter; the likelihood is p(x | μ) ~ N(μ, σ²) with σ² known, and the prior is p(μ) ~ N(μ0, σ0²) (μ0 and σ0 are known!).

Reproducing density (conjugate prior): the posterior p(μ | D) is again normal, p(μ | D) ~ N(μn, σn²). Identifying (1) and (2) yields:

μn = (n σ0² / (n σ0² + σ²)) μ̂n + (σ² / (n σ0² + σ²)) μ0
σn² = σ0² σ² / (n σ0² + σ²)

where μ̂n = (1/n) Σk xk is the sample mean. The prior knowledge and the empirical data are combined; as n grows, μn approaches μ̂n and σn² goes to zero.
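A minimal sketch of the update above in Python, assuming a known likelihood variance σ² and a normal prior N(μ0, σ0²); the function name and the example data are purely illustrative:

```python
import numpy as np

def gaussian_mean_posterior(x, mu0, sigma0_sq, sigma_sq):
    """Posterior p(mu | D) = N(mu_n, sigma_n^2) for a Gaussian likelihood
    with known variance sigma_sq and prior N(mu0, sigma0_sq)."""
    n = len(x)
    xbar = np.mean(x)                        # sample mean (ML estimate)
    denom = n * sigma0_sq + sigma_sq
    mu_n = (n * sigma0_sq / denom) * xbar + (sigma_sq / denom) * mu0
    sigma_n_sq = (sigma0_sq * sigma_sq) / denom
    return mu_n, sigma_n_sq

# Example: prior N(0, 1), known sigma^2 = 4, data drawn from N(2, 4)
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=2.0, size=50)
mu_n, sigma_n_sq = gaussian_mean_posterior(data, mu0=0.0, sigma0_sq=1.0, sigma_sq=4.0)
print(mu_n, sigma_n_sq)   # mu_n approaches the sample mean, sigma_n_sq shrinks as n grows
```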


The univariate case, p(x | D): with p(μ | D) computed, p(x | D) remains to be computed:

p(x | D) = ∫ p(x | μ) p(μ | D) dμ ~ N(μn, σ² + σn²)

This provides the desired class-conditional density p(x | ωj, Dj). Therefore, using p(x | ωj, Dj) together with the priors P(ωj) and Bayes formula, we obtain the Bayesian classification rule: choose the class ωi that maximizes p(x | ωi, Di) P(ωi).
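A small illustrative sketch of the resulting classification rule, assuming the per-class posteriors (μn, σn²) were obtained as in the previous sketch; the class priors and the numbers here are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def predictive_density(x, mu_n, sigma_n_sq, sigma_sq):
    """p(x | D) = N(mu_n, sigma^2 + sigma_n^2): the class-conditional
    density with the unknown mean integrated out."""
    return norm.pdf(x, loc=mu_n, scale=np.sqrt(sigma_sq + sigma_n_sq))

def bayes_classify(x, posteriors, priors, sigma_sq):
    """Pick the class maximizing p(x | w_j, D_j) * P(w_j)."""
    scores = [predictive_density(x, mu_n, s_n_sq, sigma_sq) * prior
              for (mu_n, s_n_sq), prior in zip(posteriors, priors)]
    return int(np.argmax(scores))

# Hypothetical two-class example: (mu_n, sigma_n^2) per class from the previous sketch
posteriors = [(-1.0, 0.05), (1.5, 0.08)]
print(bayes_classify(0.9, posteriors, priors=[0.5, 0.5], sigma_sq=1.0))
```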

Bayesian Parameter Estimation: General Theory. The Bayesian approach has been applied above to compute p(x | D). It can be applied to any situation in which the unknown density can be parameterized. The basic assumptions are: the form of p(x | θ) is assumed known, but the value of θ is not known exactly; our knowledge about θ is assumed to be contained in a known prior density p(θ); the rest of our knowledge about θ is contained in a set D of n samples x1, x2, …, xn drawn independently according to p(x).

The basic problem is: "compute the posterior density p(θ | D)", then "derive p(x | D)". Using Bayes formula, we have:

p(θ | D) = p(D | θ) p(θ) / ∫ p(D | θ) p(θ) dθ

and, by the independence assumption:

p(D | θ) = Πk=1..n p(xk | θ)

Once p(θ | D) is available, p(x | D) = ∫ p(x | θ) p(θ | D) dθ.
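As a sketch of this general recipe (prior times product of likelihoods, then normalize), one can evaluate the posterior numerically on a grid of candidate θ values; the prior, likelihood, and data below are made up purely for illustration:

```python
import numpy as np

def posterior_on_grid(theta_grid, prior, likelihood, data):
    """p(theta | D) proportional to prior(theta) * prod_k p(x_k | theta),
    normalized numerically over a grid of candidate theta values."""
    log_post = np.log(prior(theta_grid))
    for x in data:
        log_post += np.log(likelihood(x, theta_grid))
    post = np.exp(log_post - log_post.max())       # subtract max to avoid underflow
    return post / np.trapz(post, theta_grid)        # normalize to integrate to 1

# Illustration: unknown mean of a Gaussian with sigma = 1, N(0, 1) prior
theta = np.linspace(-5, 5, 1001)
prior = lambda t: np.exp(-0.5 * t**2)               # unnormalized N(0, 1) prior
lik = lambda x, t: np.exp(-0.5 * (x - t)**2)        # unnormalized N(t, 1) likelihood
data = [1.2, 0.8, 1.5, 1.1]
p = posterior_on_grid(theta, prior, lik, data)
print(theta[np.argmax(p)])                          # posterior mode near the sample mean
```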

Problems of Dimensionality. Problems involving 50 or 100 features (binary valued) are common, and classification accuracy depends upon the dimensionality and the amount of training data. Consider the case of two classes that are multivariate normal with the same covariance Σ: with equal priors, the Bayes error is

P(e) = (1/√(2π)) ∫ from r/2 to ∞ of e^(-u²/2) du,  where r² = (μ1 - μ2)ᵗ Σ⁻¹ (μ1 - μ2)

is the squared Mahalanobis distance between the class means.

If the features are independent, then Σ = diag(σ1², …, σd²) and

r² = Σi=1..d ((μi1 - μi2) / σi)²

The most useful features are the ones for which the difference between the means is large relative to the standard deviation. It has frequently been observed in practice that, beyond a certain point, the inclusion of additional features leads to worse rather than better performance: this indicates that we have the wrong model!
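A small sketch of these two formulas, assuming equal priors and independent features; the means and standard deviations below are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def mahalanobis_r_independent(mu1, mu2, sigma):
    """r^2 = sum_i ((mu_i1 - mu_i2) / sigma_i)^2 when the features are independent."""
    return np.sqrt(np.sum(((np.asarray(mu1) - np.asarray(mu2)) / np.asarray(sigma)) ** 2))

def bayes_error(r):
    """P(e) = (1/sqrt(2*pi)) * integral_{r/2}^inf exp(-u^2/2) du = Phi(-r/2)
    for two equally likely Gaussian classes with common covariance."""
    return norm.cdf(-r / 2.0)

mu1, mu2, sigma = [0.0, 0.0], [1.0, 2.0], [1.0, 1.0]
r = mahalanobis_r_independent(mu1, mu2, sigma)
print(r, bayes_error(r))   # the error decreases as the class separation r grows
```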


Computational Complexity. Our design methodology is also affected by computational difficulty. "Big oh" notation: f(x) = O(h(x)), read "big oh of h(x)", if there exist constants c and x0 such that |f(x)| ≤ c |h(x)| for all x > x0. (An upper bound: f(x) grows no worse than h(x) for sufficiently large x!) For example, f(x) = 2 + 3x + 4x² with g(x) = x² gives f(x) = O(x²).

"Big oh" is not unique: f(x) = O(x²), f(x) = O(x³), and f(x) = O(x⁴) are all true for the example above. "Big theta" notation: f(x) = Θ(h(x)) if the bound holds both above and below, i.e., there exist constants c1, c2, x0 > 0 such that c1 h(x) ≤ f(x) ≤ c2 h(x) for all x > x0. For the example, f(x) = Θ(x²) but f(x) ≠ Θ(x³).

Complexity of the ML Estimation. Consider Gaussian densities in d dimensions and a classifier trained with n samples for each of c classes. For each category we have to compute the discriminant function: the sample mean costs O(d·n), the sample covariance O(d²·n), and since usually n > d the total per class is dominated by O(d²·n). The total for c classes is O(c·d²·n) ≅ O(d²·n). The cost increases rapidly when d and n are large!
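A brief sketch of the per-class ML estimates whose cost is discussed above; the O(d²·n) term comes from the sample covariance, as the comments note (the data here are random placeholders):

```python
import numpy as np

def ml_gaussian_estimate(X):
    """ML estimates for one class: sample mean (O(d*n) operations) and sample
    covariance (O(d^2 * n) operations); the covariance dominates for large d."""
    n, d = X.shape
    mu_hat = X.mean(axis=0)
    diff = X - mu_hat
    sigma_hat = diff.T @ diff / n        # d x d matrix, O(d^2 * n) multiplications
    return mu_hat, sigma_hat

X = np.random.default_rng(1).normal(size=(500, 10))   # n = 500 samples, d = 10 features
mu_hat, sigma_hat = ml_gaussian_estimate(X)
print(mu_hat.shape, sigma_hat.shape)
```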

Component Analysis and Discriminants. Goal: combine features in order to reduce the dimension of the feature space. Linear combinations are simple to compute and tractable: project high-dimensional data onto a lower-dimensional space. Two classical approaches for finding an "optimal" linear transformation: PCA (Principal Component Analysis), the projection that best represents the data in a least-squares sense (letters Q and O) (oral presentation), and MDA (Multiple Discriminant Analysis), the projection that best separates the data in a least-squares sense (oral presentation).
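A minimal PCA sketch, using the usual eigendecomposition of the sample covariance; this is not the text's full derivation, only the least-squares projection it describes, and the data are random placeholders:

```python
import numpy as np

def pca_project(X, k):
    """Project the data onto the k principal components: the directions
    that best represent the data in a least-squares sense."""
    mu = X.mean(axis=0)
    Xc = X - mu                                      # center the data
    cov = Xc.T @ Xc / len(X)                         # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :k]                      # top-k eigenvectors as columns
    return Xc @ W                                    # n x k reduced representation

X = np.random.default_rng(2).normal(size=(200, 50))  # 200 samples, 50 features
Z = pca_project(X, k=2)
print(Z.shape)   # (200, 2)
```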

Hidden Markov Models: Markov Chains. Goal: make a sequence of decisions about processes that unfold in time, where the state at time t is influenced by the state at time t-1. Applications: speech recognition, gesture recognition, parts-of-speech tagging, DNA sequencing, and any temporal process without long-term memory. Let ω^T = {ω(1), ω(2), ω(3), …, ω(T)} denote a sequence of states; for example, we might have ω^6 = {ω1, ω4, ω2, ω2, ω1, ω4}. The system can revisit a state at different steps, and not every state need be visited.

First-order Markov models: the production of any sequence is described by the transition probabilities P(ωj(t + 1) | ωi(t)) = aij.


P(T | ) = a14 . a42 . a22 . a21 . a14 . P((1) = i)  = (aij, T) P(T | ) = a14 . a42 . a22 . a21 . a14 . P((1) = i) Example: speech recognition “production of spoken words” Production of the word: “pattern” represented by phonemes /p/ /a/ /tt/ /er/ /n/ // ( // = silent state) Transitions from /p/ to /a/, /a/ to /tt/, /tt/ to er/, /er/ to /n/ and /n/ to a silent state Dr. Djamel Bouchaffra CSE 616 Applied Pattern Recognition, ch3 (part 2): ML & Bayesian Parameter Estimation