Pattern Classification. All materials in these slides were taken from Pattern Classification (2nd ed.) by R. O. Duda, P. E. Hart, and D. G. Stork, John Wiley & Sons, 2000, with the permission of the authors and the publisher.

Chapter 4 (Part 1): Non-Parametric Classification (Sections 4.1-4.3)
Introduction
Density Estimation
Parzen Windows

Introduction
All of the classical parametric densities are unimodal (have a single local maximum), whereas many practical problems involve multimodal densities. Nonparametric procedures can be used with arbitrary distributions and without the assumption that the forms of the underlying densities are known.
There are two types of nonparametric methods:
Estimating the class-conditional densities P(x | ω_j)
Bypassing density estimation and going directly to a-posteriori probability estimation

Density Estimation
Basic idea: the probability that a vector x will fall in a region R is

P = ∫_R p(x′) dx′   (1)

P is a smoothed (or averaged) version of the density function p(x). If we have a sample of size n, the probability that exactly k of the n points fall in R is given by the binomial law:

P_k = C(n, k) P^k (1 − P)^(n−k)   (2)

and the expected value for k is:

E(k) = nP   (3)

ML estimation of P = θ: the maximum of the binomial likelihood is reached for

θ̂ = k/n

Therefore, the ratio k/n is a good estimate for the probability P and hence for the density function p. If p(x) is continuous and the region R is so small that p does not vary significantly within it, we can write:

∫_R p(x′) dx′ ≅ p(x) V   (4)

where x is a point within R and V is the volume enclosed by R.

Combining equations (1), (3), and (4) yields:

p(x) ≅ (k/n) / V
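A minimal Python sketch of this k/(nV) estimate (not part of the original slides; the standard-normal sample, the region half-width, and all names are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=10_000)  # n draws from N(0,1)

def density_estimate(x, samples, half_width=0.1):
    """Estimate p(x) as (k/n)/V: count the fraction of samples falling in
    R = [x - half_width, x + half_width] and divide by the volume of R."""
    n = len(samples)
    k = np.sum(np.abs(samples - x) <= half_width)  # samples falling in R
    V = 2 * half_width                             # length of R in 1-D
    return (k / n) / V

print(density_estimate(0.0, samples))  # close to the N(0,1) peak, ~0.399
```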

Density Estimation (cont.)
Justification of equation (4): we assume that p(x) is continuous and that the region R is so small that p does not vary significantly within R. Since p(x′) is then approximately constant over R, it can be taken outside the integral.

Here V(R) is:
an area in the Euclidean space R^2
a volume in the Euclidean space R^3
a hypervolume in the Euclidean space R^n
Since p(x) ≅ p(x′) ≅ constant within R:

∫_R p(x′) dx′ ≅ p(x) ∫_R dx′ = p(x) V(R)

Condition for convergence
The fraction k/(nV) is a space-averaged value of p(x). Convergence to the true p(x) is obtained only if V approaches zero.
If V is fixed and we use more and more samples, we only converge to this space-averaged value, not to p(x) itself.
If instead V → 0 with a fixed number of samples, then in most cases R will enclose no samples and the estimate is p(x) ≅ 0: an uninteresting case! And if one or more samples happen to coincide at x, the estimate diverges to infinity.

The volume V needs to approach 0 anyway if we want to use this estimation. Practically, V cannot be allowed to become too small, since the number of samples is always limited. One will have to accept a certain amount of variance in the ratio k/n.
Theoretically, if an unlimited number of samples is available, we can circumvent this difficulty. To estimate the density at x, we form a sequence of regions R_1, R_2, … containing x: the first region is used with one sample, the second with two samples, and so on. Let V_n be the volume of R_n, k_n the number of samples falling in R_n, and p_n(x) the n-th estimate for p(x):

p_n(x) = (k_n / n) / V_n   (7)

Three necessary conditions should apply if we want p_n(x) to converge to p(x):

lim V_n = 0,  lim k_n = ∞,  lim k_n / n = 0  (as n → ∞)

There are two different ways of obtaining sequences of regions that satisfy these conditions:
(a) Shrink an initial region by specifying the volume V_n as some function of n, such as V_n = 1/√n. This is called "the Parzen-window estimation method."
(b) Specify k_n as some function of n, such as k_n = √n; the volume V_n is grown until it encloses k_n neighbors of x. This is called "the k_n-nearest-neighbor estimation method."
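A quick numerical sketch (added for illustration; the values of n are arbitrary) showing that both schedules satisfy the three convergence conditions:

```python
import numpy as np

for n in [1, 4, 16, 100, 10_000]:
    V_n = 1 / np.sqrt(n)   # Parzen-window schedule: shrink the volume directly
    k_n = int(np.sqrt(n))  # k_n-NN schedule: grow V_n until it holds k_n samples
    print(f"n={n:6d}  V_n={V_n:.4f}  k_n={k_n:4d}  k_n/n={k_n / n:.4f}")
# V_n -> 0, k_n -> infinity, and k_n/n -> 0 as n grows, as required.
```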

[Figure: the two region sequences, shrinking V_n = 1/√n (Parzen window) versus growing V_n until it encloses k_n = √n samples (k_n-nearest neighbor)]

Parzen Windows
The Parzen-window approach to density estimation assumes that the region R_n is a d-dimensional hypercube with side length h_n, so that its volume is

V_n = h_n^d

To count the samples falling in R_n, define the window function

φ(u) = 1 if |u_j| ≤ 1/2 for all j = 1, …, d, and 0 otherwise

φ((x − x_i)/h_n) is then equal to unity if x_i falls within the hypercube of volume V_n centered at x, and equal to zero otherwise.

The number of samples in this hypercube is:

k_n = Σ_{i=1}^{n} φ((x − x_i)/h_n)

By substituting k_n in equation (7), we obtain the following estimate:

p_n(x) = (1/n) Σ_{i=1}^{n} (1/V_n) φ((x − x_i)/h_n)

p_n(x) estimates p(x) as an average of functions of x and the samples x_i (i = 1, …, n). These window functions φ can be quite general!
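A short sketch of this hypercube-window estimate (illustrative code, not from the slides; the 2-D normal sample and the value h = 0.5 are assumptions):

```python
import numpy as np

def hypercube_window(u):
    """phi(u) = 1 if every coordinate satisfies |u_j| <= 1/2, else 0."""
    return np.all(np.abs(u) <= 0.5, axis=-1).astype(float)

def parzen_estimate(x, samples, h):
    """p_n(x) = (1/n) sum_i (1/V_n) phi((x - x_i)/h_n), hypercube window."""
    n, d = samples.shape
    V = h ** d                        # hypercube volume V_n = h_n^d
    u = (x - samples) / h             # scaled offsets (x - x_i)/h_n
    k = hypercube_window(u).sum()     # number of samples inside the cube
    return k / (n * V)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))                 # 2-D standard normal sample
print(parzen_estimate(np.zeros(2), X, h=0.5))  # near 1/(2*pi) ~ 0.159
```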

Illustration
The behavior of the Parzen-window method in the case where p(x) ~ N(0,1).
Let φ(u) = (1/√(2π)) exp(−u²/2) and h_n = h_1/√n (n > 1), where h_1 is a known parameter. Thus

p_n(x) = (1/n) Σ_{i=1}^{n} (1/h_n) φ((x − x_i)/h_n)

is an average of normal densities centered at the samples x_i.

Numerical results:
For n = 1 and h_1 = 1, p_1(x) is simply a single Gaussian centered about the first sample.
For n = 10 and h_1 = 0.1, the contributions of the individual samples are clearly observable!
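The following sketch (added for illustration; the random seed and the evaluation point x = 0 are arbitrary) mirrors this setup with a Gaussian window and h_n = h_1/√n:

```python
import numpy as np

def gaussian_window(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def parzen_gaussian(x, samples, h1):
    n = len(samples)
    h_n = h1 / np.sqrt(n)  # window width shrinks as n grows
    return np.mean(gaussian_window((x - samples) / h_n) / h_n)

rng = np.random.default_rng(2)
for n, h1 in [(1, 1.0), (10, 1.0), (10, 0.1)]:
    samples = rng.normal(size=n)  # draws from the true N(0,1)
    print(f"n={n:3d}, h1={h1}: p_n(0) ~ {parzen_gaussian(0.0, samples, h1):.3f}")
# With n = 10 and small h1 the estimate is spiky: each sample's
# contribution is clearly visible, as noted above.
```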

[Figures: Parzen-window estimates of a N(0,1) density for several values of n and h_1]

Analogous results are also obtained in two dimensions, as illustrated:

[Figures: two-dimensional Parzen-window estimates for several values of n and h_1]

Case where p(x) = λ_1 U(a,b) + λ_2 T(c,d) is an unknown density: a mixture of a uniform and a triangle density.
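A sketch of this experiment (the interval endpoints, equal mixing weights, and h_1 below are assumed for illustration; the slides give no numeric values):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
# Mixture: with prob. 0.5, uniform on [0, 1]; else triangular on [2, 4], peak at 3.
pick = rng.random(n) < 0.5
samples = np.where(pick, rng.uniform(0, 1, n), rng.triangular(2, 3, 4, n))

def parzen_gaussian(x, samples, h1):
    h_n = h1 / np.sqrt(len(samples))
    u = (x - samples) / h_n
    return np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h_n))

for x in [0.5, 3.0, 5.0]:
    print(f"p_n({x}) ~ {parzen_gaussian(x, samples, h1=1.0):.3f}")
# True mixture values: 0.5 at x = 0.5, 0.5 at the triangle peak x = 3, 0 at x = 5.
```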

[Figures: Parzen-window estimates of the uniform-triangle mixture for several values of n and h_1]

Classification example
In classifiers based on Parzen-window estimation:
We estimate the densities for each category and classify a test point by the label corresponding to the maximum posterior.
The decision region for a Parzen-window classifier depends upon the choice of window function, as illustrated in the figures below.
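A minimal two-class Parzen-window classifier sketch (the data, window width, and priors are illustrative assumptions, not the book's example); each class density is estimated with a Gaussian window, and the test point gets the label maximizing P(ω_j) p_n(x | ω_j):

```python
import numpy as np

def gaussian_window(u):
    # Multivariate standard normal window; u has shape (n, d).
    return np.exp(-0.5 * np.sum(u**2, axis=-1)) / (2 * np.pi) ** (u.shape[-1] / 2)

def parzen_density(x, samples, h):
    n, d = samples.shape
    return np.mean(gaussian_window((x - samples) / h)) / h**d

def classify(x, class_samples, priors, h=0.5):
    # Pick the class with the largest prior-weighted density estimate.
    scores = [p * parzen_density(x, S, h) for S, p in zip(class_samples, priors)]
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
class0 = rng.normal(loc=[-1, 0], size=(200, 2))
class1 = rng.normal(loc=[+1, 0], size=(200, 2))
print(classify(np.array([-0.8, 0.1]), [class0, class1], priors=[0.5, 0.5]))  # -> 0
```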

[Figures: Parzen-window decision regions for a small and a large window width]