
Slide 1
Pattern Recognition: Statistical and Neural
Lonnie C. Ludeman
Lecture 8, Sept 23, 2005
Nanjing University of Science & Technology

Slide 2
May be Optimum

Slide 3
Review 2: Classifier Performance Measures
1. A posteriori probability (maximize)
2. Probability of error (minimize)
3. Bayes average cost (minimize)
4. Probability of detection (maximize, with fixed probability of false alarm: the Neyman-Pearson rule)
5. Losses (minimize the maximum)

Slide 4
Review 3: MAP, MPE, and Bayes Classification Rule

Compare the likelihood ratio \( l(\mathbf{x}) = p(\mathbf{x} \mid C_1) / p(\mathbf{x} \mid C_2) \) with a threshold N: decide C1 if \( l(\mathbf{x}) > N \), decide C2 if \( l(\mathbf{x}) < N \). The threshold depends on the criterion:

\[ N_{\mathrm{MAP}} = \frac{P(C_2)}{P(C_1)}, \qquad N_{\mathrm{MPE}} = \frac{P(C_2)}{P(C_1)}, \qquad N_{\mathrm{BAYES}} = \frac{(C_{22} - C_{12})\, P(C_2)}{(C_{11} - C_{21})\, P(C_1)} \]
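As a minimal illustration of this rule, the sketch below (Python; the function name and the one-dimensional example densities are my assumptions, not from the lecture) evaluates the likelihood ratio and compares it with whichever threshold the chosen criterion gives:

```python
from scipy.stats import norm

def likelihood_ratio_classify(x, p1, p2, threshold):
    """Decide C1 if l(x) = p1(x) / p2(x) > threshold, otherwise C2."""
    return "C1" if p1(x) / p2(x) > threshold else "C2"

# Equal priors make the MAP/MPE threshold P(C2)/P(C1) = 1.
p1 = norm(loc=0.0, scale=1.0).pdf   # assumed p(x | C1)
p2 = norm(loc=1.0, scale=1.0).pdf   # assumed p(x | C2)
print(likelihood_ratio_classify(0.3, p1, p2, threshold=1.0))  # -> C1
```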

Slide 5
Topics for Lecture 8
1. Two-dimensional problem
2. Solution in likelihood space
3. Solution in pattern space
4. Solution in feature space
5. Calculation of probability of error
6. Transformational Theorem

Slide 6
Example: 2 classes and 2 observations

C1: \( \mathbf{x} = [x_1, x_2]^T \sim p(x_1, x_2 \mid C_1) \), prior \( P(C_1) \)
C2: \( \mathbf{x} = [x_1, x_2]^T \sim p(x_1, x_2 \mid C_2) \), prior \( P(C_2) \)

Given: under C1, \( \mathbf{x} \sim N(M_1, K_1) \); under C2, \( \mathbf{x} \sim N(M_2, K_2) \), with

\[ M_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad M_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad K_1 = K_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad P(C_1) = P(C_2) = 1/2. \]

Find the optimum (MPE) decision rule.
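For concreteness, a short sketch (NumPy; sample sizes and variable names are my assumptions) that draws data matching the two class-conditional densities above, e.g. for checking a decision rule empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

M1, M2 = np.zeros(2), np.ones(2)   # M1 = [0, 0]^T, M2 = [1, 1]^T
K = np.eye(2)                      # K1 = K2 = 2x2 identity covariance

x_c1 = rng.multivariate_normal(M1, K, size=500)   # draws of x under C1
x_c2 = rng.multivariate_normal(M2, K, size=500)   # draws of x under C2
```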

Slides 7-8
(Equations not captured in the transcript.) These slides form the likelihood ratio for the example. Since \( P(C_1) = P(C_2) = 1/2 \), the MPE threshold is \( N = P(C_2)/P(C_1) = 1 \), and the rule is

\[ l(\mathbf{x}) = \frac{p(\mathbf{x} \mid C_1)}{p(\mathbf{x} \mid C_2)} = \frac{\exp\{-\tfrac{1}{2}(x_1^2 + x_2^2)\}}{\exp\{-\tfrac{1}{2}[(x_1 - 1)^2 + (x_2 - 1)^2]\}} \;\gtrless\; 1 \]

(decide C1 if greater, C2 if less).

Slide 9
Solution in different spaces

Taking the ln of both sides gives an equivalent rule: decide C1 if \( -(x_1 + x_2 - 1) > 0 \), decide C2 if \( -(x_1 + x_2 - 1) < 0 \).

In observation space, rearranging gives: decide C2 if \( x_1 + x_2 > 1 \), decide C1 if \( x_1 + x_2 < 1 \).

In feature space, with \( y = g(x_1, x_2) = x_1 + x_2 \): decide C2 if \( y > 1 \), decide C1 if \( y < 1 \).
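A minimal sketch of the resulting classifier in feature space (the helper name `decide` is hypothetical):

```python
def decide(x1, x2):
    """MPE rule for the example: decide C2 if y = x1 + x2 > 1, else C1."""
    y = x1 + x2               # the feature (sufficient statistic)
    return "C2" if y > 1 else "C1"

print(decide(0.2, 0.3))   # y = 0.5 < 1 -> C1
print(decide(1.0, 0.8))   # y = 1.8 > 1 -> C2
```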

Slide 10
In observation space the decision boundary is the line \( x_1 + x_2 = 1 \): decide C2 above the line, C1 below it. In feature space, where \( y = x_1 + x_2 \) (a sufficient statistic for this problem), the boundary is the point \( y = 1 \): decide C2 for \( y > 1 \), C1 for \( y < 1 \).

Slide 11
Calculation of P(error | C1) for the two-dimensional example in y space

\[ P(\text{error} \mid C_1) = P(\text{decide } C_2 \mid C_1) = \int_{R_2} p(y \mid C_1)\, dy \]

Under C1, \( x_1 \) and \( x_2 \) are independent Gaussian random variables, each N(0, 1), so \( y = x_1 + x_2 \) is distributed N(0, 2). Thus

\[ P(\text{error} \mid C_1) = \int_{1}^{\infty} \frac{1}{2\sqrt{\pi}} \exp(-y^2/4)\, dy \]

Slide 12
Calculation of P(error | C2) for the two-dimensional example in y space

\[ P(\text{error} \mid C_2) = P(\text{decide } C_1 \mid C_2) = \int_{R_1} p(y \mid C_2)\, dy \]

Under C2, \( x_1 \) and \( x_2 \) are independent Gaussian random variables, each N(1, 1), so \( y = x_1 + x_2 \) is distributed N(2, 2). Thus

\[ P(\text{error} \mid C_2) = \int_{-\infty}^{1} \frac{1}{2\sqrt{\pi}} \exp\{-(y-2)^2/4\}\, dy \]

Slide 13
Probability of error for the example

\[ P(\text{error}) = P(\text{error} \mid C_1)\, P(C_1) + P(\text{error} \mid C_2)\, P(C_2) \]
\[ = P(C_1) \int_{1}^{\infty} \frac{1}{2\sqrt{\pi}}\, e^{-y^2/4}\, dy + P(C_2) \int_{-\infty}^{1} \frac{1}{2\sqrt{\pi}}\, e^{-(y-2)^2/4}\, dy \]
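Standardizing y shows that both conditional error integrals equal \( Q(1/\sqrt{2}) \approx 0.2398 \); a quick SciPy check (a sketch, not part of the lecture) confirms the value:

```python
from math import sqrt
from scipy.stats import norm

# Boundary at y = 1; y ~ N(0, 2) under C1 and y ~ N(2, 2) under C2.
p_err_c1 = norm.sf(1, loc=0, scale=sqrt(2))    # P(y > 1 | C1)
p_err_c2 = norm.cdf(1, loc=2, scale=sqrt(2))   # P(y < 1 | C2)
p_err = 0.5 * p_err_c1 + 0.5 * p_err_c2
print(p_err_c1, p_err_c2, p_err)   # each ~ 0.2398
```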

Slide 14
Transformational Theorem

Given: X is a random variable with known probability density function \( p_X(x) \), and \( y = g(x) \) is a real-valued function with no flat spots. Define the random variable \( Y = g(X) \). Then the probability density function of Y is

\[ p_Y(y) = \sum_{x_i} \frac{p_X(x_i)}{\left| \dfrac{dg(x)}{dx} \right|_{x = x_i}} \]

where the \( x_i \) are all real roots of \( y = g(x) \) (and \( p_Y(y) = 0 \) where there are no real roots).

Slide 15
Example: Transformational Theorem

Given: \( X \sim N(0, 1) \). Define the function \( y = x^2 \) and the random variable \( Y = X^2 \). Find the probability density function \( p_Y(y) \).

Slide 16
Solution: For \( y > 0 \) there are two real roots of \( y = x^2 \), namely \( x_1 = -\sqrt{y} \) and \( x_2 = +\sqrt{y} \). For \( y < 0 \) there are no real roots of \( y = x^2 \), so \( p_Y(y) = 0 \) for those values of y.

Slide 17
Apply the fundamental theorem (zero if there are no real roots, a sum over the roots otherwise). With \( g(x) = x^2 \), \( dg(x)/dx = 2x \), so for \( y > 0 \):

\[ p_Y(y) = \frac{p_X(x_1)}{|2 x_1|} + \frac{p_X(x_2)}{|2 x_2|} = \frac{p_X(-\sqrt{y})}{2\sqrt{y}} + \frac{p_X(\sqrt{y})}{2\sqrt{y}} = \frac{1}{2\sqrt{y}} \cdot \frac{1}{\sqrt{2\pi}}\, e^{-y/2} + \frac{1}{2\sqrt{y}} \cdot \frac{1}{\sqrt{2\pi}}\, e^{-y/2} \]

Slide 18
Final answer:

\[ p_Y(y) = \frac{e^{-y/2}}{\sqrt{2\pi y}}\, u(y) \]

where \( u(y) \) is the unit step function.
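This is the chi-squared density with one degree of freedom. As a sanity check (a simulation sketch, not from the lecture), one can compare an empirical interval probability for \( Y = X^2 \) against the same probability integrated from the derived density:

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
y = rng.standard_normal(1_000_000) ** 2      # Y = X^2 with X ~ N(0, 1)

# Probability of an arbitrary interval, empirically and from p_Y(y).
a, b = 0.5, 1.5
empirical = np.mean((y > a) & (y < b))
analytic, _ = quad(lambda t: np.exp(-t / 2) / np.sqrt(2 * np.pi * t), a, b)

print(empirical, analytic)   # both ~ 0.259
```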

Slide 19
Summary for Lecture 8
1. Two-dimensional problem
2. Solution in likelihood space
3. Solution in pattern space
4. Solution in feature space
5. Calculation of probability of error
6. Transformational Theorem

Slide 20
End of Lecture 8

