1 Pattern Classification All materials in these slides were taken from Pattern Classification (2nd ed) by R. O. Duda, P. E. Hart and D. G. Stork, John Wiley & Sons, with the permission of the authors and the publisher

2 Chapter 4: Nonparametric Techniques (Sections 1-6)
Introduction
Density Estimation
Parzen Windows
kn-Nearest-Neighbor Estimation
The Nearest-Neighbor Rule
Metrics and Nearest-Neighbor Classification

3 1. Introduction
All classical parametric densities are unimodal (have a single local maximum), whereas many practical problems involve multimodal densities.
Nonparametric procedures can be used with arbitrary distributions, without assuming that the forms of the underlying densities are known.
There are two types of nonparametric methods:
Estimate the density functions P(x | ωj) without assuming a model: Parzen windows
Bypass the density functions and directly estimate the posteriors P(ωj | x): k-nearest neighbor (kNN)
Pattern Classification, Ch4
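The Parzen-window idea above can be sketched in a few lines. This is a minimal 1-D illustration, not the book's code: it uses a Gaussian kernel of assumed width h and a small hand-made two-mode sample set to show how the estimate follows a multimodal density that a unimodal parametric fit would miss.

```python
import math

def parzen_density(x, samples, h):
    """Parzen-window estimate of p(x) in 1-D with a Gaussian kernel:
    p_n(x) = (1/n) * sum_i (1/h) * phi((x - x_i) / h)."""
    n = len(samples)
    total = 0.0
    for xi in samples:
        u = (x - xi) / h
        total += math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return total / (n * h)

# Samples clustered around two modes (-2 and +2); a single-peak
# parametric model could not represent this density well.
samples = [-2.1, -1.9, -2.0, 2.0, 1.9, 2.1]
print(parzen_density(0.0, samples, h=0.5))  # low density between the modes
print(parzen_density(2.0, samples, h=0.5))  # high density at a mode
```

The window width h controls the smoothing: too small gives a spiky estimate, too large washes the modes out.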

4 [Figure: Parzen-window and kNN density estimates]
Pattern Classification, Ch4

5 The Nearest-Neighbor Rule
Let Dn = {x1, x2, …, xn} be a set of n labeled prototypes.
Let x' ∈ Dn be the prototype closest to a test point x; the nearest-neighbor rule for classifying x is to assign it the label associated with x'.
The nearest-neighbor rule leads to an error rate greater than the minimum possible, the Bayes rate.
However, if the number of prototypes is unlimited, the error rate of the nearest-neighbor classifier is never worse than twice the Bayes rate (this can be proved).
As n → ∞, it is always possible to find x' sufficiently close to x so that P(ωi | x') ≈ P(ωi | x).
If P(ωm | x) ≈ 1, the nearest-neighbor selection is almost always the same as the Bayes selection.
Pattern Classification, Ch4
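The nearest-neighbor rule itself is a one-liner once a metric is fixed. A minimal sketch, assuming the Euclidean metric and a small hypothetical prototype set D of (point, label) pairs:

```python
import math

def nearest_neighbor_label(x, prototypes):
    """Nearest-neighbor rule: assign x the label of the closest prototype.

    prototypes: list of (point, label) pairs; points are tuples.
    Uses the Euclidean metric (math.dist)."""
    closest_point, label = min(prototypes, key=lambda pl: math.dist(x, pl[0]))
    return label

# Hypothetical labeled prototypes Dn
D = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((3.0, 3.0), "B")]
print(nearest_neighbor_label((0.1, 0.0), D))  # "A": nearest prototype is class A
```

Note that the rule depends entirely on the chosen metric, which is the point of the later section on metrics and nearest-neighbor classification.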

6 The k-nearest-neighbor rule
Goal: Classify x by assigning it the label most frequently represented among its k nearest samples, i.e. by a voting scheme.
k is usually chosen odd so that there are no voting ties.
Pattern Classification, Ch4
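The voting scheme above can be sketched directly. This is an illustrative implementation under the same assumptions as before (Euclidean metric, hypothetical sample set D), with the majority vote taken over the k closest samples:

```python
import math
from collections import Counter

def knn_label(x, samples, k):
    """k-nearest-neighbor rule: majority vote among the k closest samples.

    samples: list of (point, label) pairs; for two classes choose k odd
    to avoid voting ties."""
    ranked = sorted(samples, key=lambda pl: math.dist(x, pl[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 1-D labeled samples
D = [((0.0,), "A"), ((0.5,), "A"), ((1.0,), "B"), ((4.0,), "B"), ((5.0,), "B")]
print(knn_label((0.6,), D, k=3))  # neighbors vote A, B, A -> "A"
```

With k = 1 this reduces to the nearest-neighbor rule of the previous slide; larger k smooths the decision at the cost of blurring class boundaries.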

