Digital Image Processing
Lecture 25: Object Recognition
Prof. Charlene Tsai

Slide 2: Review
- Matching: specified by the mean vector of each class
- Optimum statistical classifiers: a probabilistic approach; the Bayes classifier for Gaussian pattern classes is specified by the mean vector and covariance matrix of each class
- Neural networks

Slide 3: Foundation
The probability that a pattern x comes from class ω_i is p(ω_i|x). Let L_kj denote the loss incurred if x actually came from ω_k but is assigned to ω_j. The average loss (risk) incurred in assigning x to ω_j is

    r_j(x) = Σ_{k=1..W} L_kj p(ω_k|x)

Using basic probability theory, p(A|B)p(B) = p(B|A)p(A), we get

    r_j(x) = (1/p(x)) Σ_{k=1..W} L_kj p(x|ω_k) P(ω_k)

Slide 4 (cont'd)
Because 1/p(x) is positive and common to all r_j(x), it can be dropped without affecting the comparison among the r_j(x):

    r_j(x) = Σ_{k=1..W} L_kj p(x|ω_k) P(ω_k)    (Eqn 1)

The classifier that assigns x to the class with the smallest average loss is the Bayes classifier.
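As an illustration, Eqn 1 can be evaluated directly once the loss matrix, densities, and priors are in hand. The sketch below is a minimal Python/NumPy rendering, assuming hypothetical callables for the class-conditional densities; it is not code from the lecture.

    import numpy as np

    def average_risk(x, losses, densities, priors):
        """Return the vector of risks r_j(x) for all classes j.
        losses[k, j] -- loss for assigning x to class j when it came from class k
        densities[k] -- callable returning p(x | w_k)
        priors[k]    -- prior probability P(w_k)
        """
        weighted = np.array([p(x) for p in densities]) * priors  # p(x|w_k) P(w_k)
        return losses.T @ weighted  # r_j(x) = sum_k L_kj p(x|w_k) P(w_k)

    # The Bayes classifier picks the class with the smallest risk:
    # j_star = int(np.argmin(average_risk(x, losses, densities, priors)))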

Slide 5: The Loss Function (L_ij)
Zero loss for a correct decision, and the same nonzero value (say 1) for any incorrect decision:

    L_ij = 1 - δ_ij,  where δ_ij = 1 if i = j and 0 otherwise    (Eqn 2)

Slide 6: Bayes Classifier
Substituting Eqn 2 into Eqn 1 yields

    r_j(x) = Σ_{k=1..W} (1 - δ_kj) p(x|ω_k) P(ω_k) = p(x) - p(x|ω_j) P(ω_j)

p(x) is common to all classes, so it is dropped. The classifier assigns x to class ω_i if

    p(x|ω_i) P(ω_i) > p(x|ω_j) P(ω_j)  for all j ≠ i

Slide 7: Decision Function
Using the Bayes classifier with a 0-1 loss function, the decision function for class ω_j is

    d_j(x) = p(x|ω_j) P(ω_j),  j = 1, ..., W

Now the questions are:
- How do we get P(ω_j)?
- How do we estimate p(x|ω_j)?
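For the 0-1 loss, the decision rule reduces to an argmax over the d_j(x). A hedged sketch, using the same hypothetical `densities` and `priors` as above:

    import numpy as np

    def bayes_decide(x, densities, priors):
        # d_j(x) = p(x | w_j) P(w_j); assign x to the class with the largest d_j
        d = np.array([p(x) for p in densities]) * priors
        return int(np.argmax(d))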

Slide 8: Using Gaussian Distribution
The most prevalent (assumed) form for p(x|ω_j) is the Gaussian probability density function. Consider a 1-D problem with two pattern classes (W = 2):

    p(x|ω_j) = (1 / (√(2π) σ_j)) exp(-(x - m_j)² / (2σ_j²)),  j = 1, 2

where m_j is the mean and σ_j² is the variance of class ω_j.
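For concreteness, the 1-D density can be coded directly; m and sigma are assumed to be estimated from training samples:

    import numpy as np

    def gaussian_1d(x, m, sigma):
        # p(x | w_j) for a 1-D Gaussian class with mean m and std. dev. sigma
        return np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

    # With W = 2, assign x to w_1 wherever
    # gaussian_1d(x, m1, s1) * P1 > gaussian_1d(x, m2, s2) * P2.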

Slide 9: Example
[Figure: two 1-D Gaussian densities p(x|ω_1) and p(x|ω_2) and their crossing point.]
Where is the decision boundary if P(ω_1) = P(ω_2)? It is the point x_0 where the two densities intersect; for equal variances, x_0 is the midpoint between the two means.

Slide 10: N-D Gaussian
For the jth pattern class,

    p(x|ω_j) = (1 / ((2π)^{n/2} |C_j|^{1/2})) exp(-(1/2)(x - m_j)^T C_j^{-1} (x - m_j))

where m_j = E_j{x} is the mean vector and C_j = E_j{(x - m_j)(x - m_j)^T} is the covariance matrix. (Remember these from Principal Component Analysis?)
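A quick way to evaluate this density numerically is scipy.stats.multivariate_normal; the mean vector and covariance matrix below are illustrative placeholders, not values from the lecture:

    import numpy as np
    from scipy.stats import multivariate_normal

    m_j = np.array([0.0, 0.0])                # placeholder mean vector
    C_j = np.array([[1.0, 0.2],
                    [0.2, 1.0]])              # placeholder covariance matrix
    p_x = multivariate_normal(mean=m_j, cov=C_j).pdf(np.array([0.5, -0.3]))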

Slide 11 (cont'd)
Working with the logarithm of the decision function:

    d_j(x) = ln[p(x|ω_j) P(ω_j)] = ln P(ω_j) - (1/2) ln|C_j| - (1/2)(x - m_j)^T C_j^{-1}(x - m_j)

(the (n/2) ln 2π term is common to all classes and dropped). If all covariance matrices are equal (C_j = C, a common covariance), the remaining terms that do not depend on j can also be dropped, and the decision function becomes linear:

    d_j(x) = ln P(ω_j) + x^T C^{-1} m_j - (1/2) m_j^T C^{-1} m_j
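A sketch of the resulting linear discriminant under the common-covariance assumption; all inputs are placeholders estimated from training data:

    import numpy as np

    def linear_discriminant(x, m_j, C_inv, prior_j):
        # d_j(x) = ln P(w_j) + x^T C^{-1} m_j - 0.5 m_j^T C^{-1} m_j
        return np.log(prior_j) + x @ C_inv @ m_j - 0.5 * (m_j @ C_inv @ m_j)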

Slide 12: For C = I
If C = I (the identity matrix) and P(ω_j) = 1/W for all classes, we get

    d_j(x) = x^T m_j - (1/2) m_j^T m_j

which is the minimum distance classifier. Gaussian pattern classes satisfying these conditions are spherical clouds of identical shape in N-D space.
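Because ranking the d_j is then equivalent to picking the nearest class mean, a minimum distance classifier needs only the means. A minimal sketch:

    import numpy as np

    def min_distance_classify(x, means):
        # means: one row per class; d_j(x) = x^T m_j - 0.5 m_j^T m_j
        d = means @ x - 0.5 * np.sum(means ** 2, axis=1)
        return int(np.argmax(d))  # same winner as argmin_j ||x - m_j||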

Slide 13: Example in Gonzalez (p. 709)
[Figure: two pattern classes of 3-D samples and the decision boundary between them.]

Slide 14 (cont'd)
Assuming P(ω_1) = P(ω_2) = 1/2 and dropping ln P(ω_j), which is common to all classes, we get

    d_j(x) = x^T C^{-1} m_j - (1/2) m_j^T C^{-1} m_j

The decision surface is d_1(x) - d_2(x) = 0.
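The whole two-class pipeline can be sketched as below. The sample patterns are hypothetical stand-ins (not necessarily the ones in Gonzalez's figure); they only illustrate estimating m_1, m_2, and a pooled C from data and forming the linear decision surface w^T x + b = 0.

    import numpy as np

    # Hypothetical 3-D training patterns for the two classes
    samples_1 = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 1], [1, 1, 0]], float)
    samples_2 = np.array([[0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 1, 1]], float)

    m1, m2 = samples_1.mean(axis=0), samples_2.mean(axis=0)

    # Pooled covariance estimate over both classes
    centered = np.vstack([samples_1 - m1, samples_2 - m2])
    C = centered.T @ centered / len(centered)
    C_inv = np.linalg.inv(C)

    # d_1(x) - d_2(x) = w @ x + b = 0 is the decision surface
    w = C_inv @ (m1 - m2)
    b = -0.5 * (m1 @ C_inv @ m1 - m2 @ C_inv @ m2)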

Slide 15: Neural Network
Neural networks simulate brain activity: the elemental computing elements are treated as neurons. This line of research dates back to the early 1940s. The perceptron learns a linear decision function that separates two training sets.

Slide 16: Perceptron for 2 Pattern Classes
[Figure: schematic of a perceptron, with inputs x_1, ..., x_n, weights w_1, ..., w_{n+1}, a summing node, and a thresholding element; d(x) = Σ_{i=1..n} w_i x_i + w_{n+1}.]

Slide 17 (cont'd)
The coefficients w_i are the weights, analogous to synapses in the human neural system. When d(x) > 0, the output is +1 and the pattern x belongs to class ω_1; the reverse is true when d(x) < 0. This is as far as we go. This concept has been adopted in many real systems where the underlying distributions are unknown.
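A minimal sketch of the classic perceptron training rule; the learning rate, label convention, and epoch cap are conventional choices, not values from the lecture:

    import numpy as np

    def train_perceptron(X, y, lr=1.0, max_epochs=1000):
        """X: patterns, one per row; y: labels in {+1, -1}."""
        Xa = np.hstack([X, np.ones((len(X), 1))])  # augment with a bias input
        w = np.zeros(Xa.shape[1])
        for _ in range(max_epochs):
            errors = 0
            for xi, yi in zip(Xa, y):
                if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                    w += lr * yi * xi    # move the boundary toward xi
                    errors += 1
            if errors == 0:              # converged: all training patterns correct
                break
        return w                         # d(x) = w[:-1] @ x + w[-1]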