
1 Digital Image Processing Lecture 25: Object Recognition Prof. Charlene Tsai

2 Review
- Matching
  - Specified by the mean vector of each class
- Optimum statistical classifiers
  - Probabilistic approach
  - Bayes classifier for Gaussian pattern classes
  - Specified by the mean vector and covariance matrix of each class
- Neural network

3 Foundation
The probability that a pattern x comes from class \omega_k is p(\omega_k \mid x).
L_{kj} is the loss incurred if x actually came from class \omega_k but was assigned to class \omega_j.
The average loss (risk) incurred in assigning x to class \omega_j is
    r_j(x) = \sum_{k=1}^{W} L_{kj} \, p(\omega_k \mid x)
Using basic probability theory, p(A \mid B) p(B) = p(B \mid A) p(A), we get
    r_j(x) = \frac{1}{p(x)} \sum_{k=1}^{W} L_{kj} \, p(x \mid \omega_k) P(\omega_k)

4 (cont'd)
Because 1/p(x) is positive and common to all r_j(x), it can be dropped without affecting the comparison among the r_j(x):
    r_j(x) = \sum_{k=1}^{W} L_{kj} \, p(x \mid \omega_k) P(\omega_k)    (Eqn 1)
The classifier that assigns x to the class with the smallest average loss is the Bayes classifier.

5 The Loss Function (L_{ij})
Zero loss for a correct decision, and the same nonzero value (say 1) for any incorrect decision:
    L_{ij} = 1 - \delta_{ij}, where \delta_{ij} = 1 if i = j and \delta_{ij} = 0 otherwise    (Eqn 2)

6 Bayes Classifier
Substituting Eqn 2 into Eqn 1 yields
    r_j(x) = \sum_{k=1}^{W} (1 - \delta_{kj}) \, p(x \mid \omega_k) P(\omega_k) = p(x) - p(x \mid \omega_j) P(\omega_j)
p(x) is common to all classes, so it is dropped. The classifier assigns x to class \omega_i if
    p(x \mid \omega_i) P(\omega_i) > p(x \mid \omega_j) P(\omega_j)  for all j \neq i

7 Decision Function
Using the Bayes classifier with a 0-1 loss function, the decision function for class \omega_j is
    d_j(x) = p(x \mid \omega_j) P(\omega_j),  j = 1, 2, \dots, W
Now the questions are:
- How do we get p(x \mid \omega_j)?
- How do we estimate P(\omega_j)?
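Given callable class-conditional densities and priors, this rule is a one-line argmax over the d_j(x). A minimal Python sketch; the two densities and the priors below are hypothetical stand-ins for illustration, not values from the lecture:

```python
import numpy as np

def bayes_decide(x, densities, priors):
    # d_j(x) = p(x|w_j) P(w_j); assign x to the class with the largest score.
    scores = [p(x) * P for p, P in zip(densities, priors)]
    return int(np.argmax(scores))

# Example with two hypothetical uniform densities (each integrates to 1):
d1 = lambda x: 0.2 if 0 <= x <= 5 else 0.0
d2 = lambda x: 0.5 if 4 <= x <= 6 else 0.0
print(bayes_decide(4.5, [d1, d2], [0.5, 0.5]))  # -> 1: class w_2 wins at x = 4.5
```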

8 Using Gaussian Distribution
The most prevalent (assumed) form for p(x \mid \omega_j) is the Gaussian probability density function. Consider a 1-D problem with 2 pattern classes (W = 2):
    p(x \mid \omega_j) = \frac{1}{\sqrt{2\pi} \sigma_j} \exp\!\left( -\frac{(x - m_j)^2}{2 \sigma_j^2} \right)
with mean m_j and variance \sigma_j^2.
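As a sketch of how the two Gaussian decision functions trade off, the snippet below locates the point where d_1(x) = d_2(x) numerically; the means, variances, and priors are assumed for illustration:

```python
import numpy as np

def gaussian_1d(x, m, sigma):
    # 1-D Gaussian density with mean m and standard deviation sigma.
    return np.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

# Hypothetical two-class setup: find where d_1(x) = d_2(x) on a grid.
m1, s1, P1 = 0.0, 1.0, 0.5
m2, s2, P2 = 3.0, 1.0, 0.5
xs = np.linspace(-5, 8, 10001)
diff = gaussian_1d(xs, m1, s1) * P1 - gaussian_1d(xs, m2, s2) * P2
boundary = xs[np.argmin(np.abs(diff))]
print(boundary)  # ~1.5 = (m1 + m2) / 2 for equal variances and equal priors
```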

9 Example
Where is the decision boundary in each of the three cases shown? (Figure with the three cases not reproduced.)

10 N-D Gaussian
For the jth pattern class,
    p(x \mid \omega_j) = \frac{1}{(2\pi)^{n/2} |C_j|^{1/2}} \exp\!\left( -\tfrac{1}{2} (x - m_j)^T C_j^{-1} (x - m_j) \right)
where m_j = E_j\{x\} is the mean vector and C_j = E_j\{(x - m_j)(x - m_j)^T\} is the covariance matrix.
Remember this from Principal Component Analysis?

11 (cont'd)
Working with the logarithm of the decision function:
    d_j(x) = \ln[ p(x \mid \omega_j) P(\omega_j) ] = \ln P(\omega_j) - \tfrac{n}{2} \ln 2\pi - \tfrac{1}{2} \ln |C_j| - \tfrac{1}{2} (x - m_j)^T C_j^{-1} (x - m_j)
If all covariance matrices are equal (a common covariance C_j = C for all j), the terms common to every class drop out, leaving
    d_j(x) = \ln P(\omega_j) + x^T C^{-1} m_j - \tfrac{1}{2} m_j^T C^{-1} m_j
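A direct transcription of the log decision function into Python; the quadratic form is evaluated with a linear solve rather than an explicit matrix inverse. This is a sketch, not code from the lecture:

```python
import numpy as np

def log_decision(x, m, C, prior):
    # d_j(x) = ln P(w_j) - 0.5 ln|C_j| - 0.5 (x - m_j)^T C_j^{-1} (x - m_j);
    # the (n/2) ln 2*pi term is common to all classes and dropped.
    diff = x - m
    return (np.log(prior)
            - 0.5 * np.log(np.linalg.det(C))
            - 0.5 * diff @ np.linalg.solve(C, diff))
```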

12 For C = I
If C = I (the identity matrix) and P(\omega_j) = 1/W, we get
    d_j(x) = x^T m_j - \tfrac{1}{2} m_j^T m_j
which is the minimum distance classifier. Gaussian pattern classes satisfying these conditions are spherical clouds of identical shape in N-D space.
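A minimal sketch of the minimum distance classifier; the two class means are hypothetical:

```python
import numpy as np

def min_distance_classify(x, means):
    # d_j(x) = x^T m_j - 0.5 m_j^T m_j; taking the argmax is equivalent to
    # picking the class mean nearest to x in Euclidean distance.
    scores = [x @ m - 0.5 * (m @ m) for m in means]
    return int(np.argmax(scores))

# Hypothetical means for illustration:
means = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
print(min_distance_classify(np.array([1.0, 1.0]), means))  # -> 0 (nearer mean)
```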

13 Example in Gonzalez (pg. 709)
(Figure showing the decision boundary not reproduced.)

14 (cont'd)
Assuming equal priors P(\omega_1) = P(\omega_2), and dropping \ln P(\omega_j), which is common to all classes, we get
    d_j(x) = x^T C^{-1} m_j - \tfrac{1}{2} m_j^T C^{-1} m_j
The decision surface separating the two classes is
    d_1(x) - d_2(x) = 0
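Because d_j(x) is linear in x when the covariance is shared, the surface d_1(x) - d_2(x) = 0 is a hyperplane w^T x + b = 0. A sketch that computes its coefficients; the means and covariance here are made up, not the numbers from the Gonzalez example:

```python
import numpy as np

# Hypothetical means and common covariance:
m1 = np.array([3.0, 1.0])
m2 = np.array([1.0, 3.0])
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# d_1(x) - d_2(x) = 0 reduces to w^T x + b = 0 with
# w = C^{-1}(m1 - m2) and b = -0.5 (m1^T C^{-1} m1 - m2^T C^{-1} m2).
w = np.linalg.solve(C, m1 - m2)
b = -0.5 * (m1 @ np.linalg.solve(C, m1) - m2 @ np.linalg.solve(C, m2))
print(w, b)  # hyperplane coefficients of the decision surface
```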

15 Neural Network
Simulates brain activity, treating the elemental computing elements as neurons. This line of research dates back to the early 1940s. The perceptron learns a linear decision function that separates 2 training sets.

16 Perceptron for 2 Pattern Classes
The perceptron computes a linear decision function of the pattern components,
    d(x) = \sum_{i=1}^{n} w_i x_i + w_{n+1}
(Accompanying diagram not reproduced.)

17 (cont'd)
The coefficients w_i are the weights, which are analogous to synapses in the human neural system. When d(x) > 0, the output is +1 and pattern x belongs to class \omega_1; the reverse is true when d(x) < 0. This is as far as we go. This concept has been adopted in many real systems where the underlying distributions are unknown.
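As a sketch of how such a perceptron can be trained, a fixed-increment update rule on augmented patterns; the two training sets and the learning rate below are assumptions for illustration:

```python
import numpy as np

def train_perceptron(X1, X2, lr=1.0, epochs=100):
    # Fixed-increment perceptron rule for two linearly separable pattern sets.
    # Each pattern is augmented with a trailing 1 so w also carries the bias w_{n+1}.
    X = np.vstack([np.hstack([X1, np.ones((len(X1), 1))]),
                   np.hstack([X2, np.ones((len(X2), 1))])])
    y = np.hstack([np.ones(len(X1)), -np.ones(len(X2))])  # +1 for w_1, -1 for w_2
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi    # nudge w toward the correct side
                errors += 1
        if errors == 0:              # converged: every pattern classified correctly
            break
    return w

# Two hypothetical linearly separable sets:
X1 = np.array([[2.0, 2.0], [3.0, 2.5]])
X2 = np.array([[0.0, 0.0], [0.5, -1.0]])
w = train_perceptron(X1, X2)
print(w)  # d(x) = w[0]*x1 + w[1]*x2 + w[2] is > 0 on X1 and < 0 on X2
```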

