Pattern Classification. All materials in these slides were taken from Pattern Classification (2nd ed.) by R. O. Duda, P. E. Hart and D. G. Stork, John Wiley & Sons, 2000, with the permission of the authors and the publisher.
Chapter 3: Maximum-Likelihood & Bayesian Parameter Estimation (Part 1). Topics: Introduction; Maximum-Likelihood Estimation; Example of a Specific Case; Gaussian Case: unknown μ and σ; Bias; Appendix: ML Problem Statement.
Introduction: data availability in a Bayesian framework. To design an optimal classifier we need the priors P(ωᵢ) and the class-conditional densities P(x | ωᵢ). Unfortunately, we rarely have this complete information! Instead we design the classifier from a training sample: the priors are easy to estimate, but the samples are often too small to estimate the class-conditional densities (large dimension of the feature space!).
A priori information about the problem: assume normality of P(x | ωᵢ), i.e. P(x | ωᵢ) ~ N(μᵢ, Σᵢ), so each class-conditional density is characterized by two parameters. Estimation techniques: Maximum-Likelihood (ML) and Bayesian estimation. The results are nearly identical, but the approaches are different.
ML vs Bayesian methods. ML estimation: the parameters are fixed but unknown; the best parameters are obtained by maximizing the probability of obtaining the samples observed, θ̂ = argmax_θ P(D | θ). Bayesian methods view the parameters as random variables having some known prior distribution and compute the posterior distribution over θ. In either approach, the classification rule is based on P(ωᵢ | x).
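To make the two viewpoints concrete, here is a minimal sketch (not from the slides) contrasting them on a Bernoulli parameter: the ML estimate is the single value that maximizes P(D | θ), while the Bayesian approach keeps a whole posterior over θ (a Beta distribution under an assumed Beta prior). All data and prior values are invented for illustration.

```python
import numpy as np

# Hypothetical coin-flip data: 1 = heads, 0 = tails.
D = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])
n, k = len(D), D.sum()

# Maximum likelihood: theta is fixed but unknown; pick the value that
# maximizes P(D | theta). Grid search over candidate values of theta.
thetas = np.linspace(0.01, 0.99, 99)
likelihood = thetas**k * (1 - thetas)**(n - k)   # P(D | theta), i.i.d. Bernoulli
theta_ml = thetas[np.argmax(likelihood)]         # argmax_theta P(D | theta)

# Bayesian: theta is a random variable with a known prior, here Beta(a, b).
# The posterior P(theta | D) is Beta(a + k, b + n - k) by conjugacy.
a, b = 2.0, 2.0
posterior_mean = (a + k) / (a + b + n)

print(f"ML estimate:         {theta_ml:.3f}")        # ~ k/n = 0.7
print(f"Bayesian post. mean: {posterior_mean:.3f}")  # pulled toward the prior
```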
Maximum-Likelihood Estimation: good convergence properties as the sample size increases, and simpler than alternative techniques. General principle: assume we have c classes and P(x | ωⱼ) ~ N(μⱼ, Σⱼ); write P(x | ωⱼ) ≡ P(x | ωⱼ, θⱼ), where the parameter vector θⱼ consists of the components of μⱼ and Σⱼ.
Details of ML estimation: use the training samples to estimate θ = (θ₁, θ₂, …, θ_c), where θᵢ is associated with category ωᵢ (i = 1, 2, …, c). Suppose that D contains n samples, {x₁, x₂, …, xₙ}. The ML estimate of θ is, by definition, the value θ̂ that maximizes P(D | θ): "it is the value of θ that best agrees with the actually observed training sample."
Optimal estimation: let θ = (θ₁, θ₂, …, θ_p)ᵗ and let ∇_θ denote the gradient operator with respect to θ. l(θ) = ln P(D | θ) is the log-likelihood function. New problem statement: determine the θ that maximizes the log-likelihood, θ̂ = argmax_θ l(θ).
Necessary conditions for an optimum: ∇_θ l = 0. This condition is not sufficient (it may identify a local optimum, a saddle point, …); check the second derivative.
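A small numerical sketch of these conditions (my own, not from the slides), assuming a univariate Gaussian model with known σ and unknown mean: at the ML estimate the derivative of the log-likelihood is (numerically) zero and the second derivative is negative, confirming a maximum rather than a minimum or saddle.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=500)   # synthetic samples
sigma = 2.0                                    # assumed known

def log_likelihood(mu):
    # l(mu) = sum_k ln p(x_k | mu) for a N(mu, sigma^2) model
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu)**2 / (2 * sigma**2))

mu_hat = x.mean()                              # candidate ML estimate
h = 1e-4
grad = (log_likelihood(mu_hat + h) - log_likelihood(mu_hat - h)) / (2 * h)
curv = (log_likelihood(mu_hat + h) - 2 * log_likelihood(mu_hat)
        + log_likelihood(mu_hat - h)) / h**2

print(f"gradient at mu_hat: {grad:.6f}   (should be ~0)")
print(f"2nd derivative:     {curv:.2f}   (should be negative, here ~ -n/sigma^2)")
```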
Specific case: unknown μ. P(xᵢ | μ) ~ N(μ, Σ) (samples drawn from a multivariate normal population), so θ = μ. The ML estimate for μ must satisfy ∇_μ l = ∑ₖ Σ⁻¹(xₖ − μ̂) = 0.
Specific case: unknown μ (cont'd). Multiplying by Σ and rearranging gives μ̂ = (1/n) ∑ₖ₌₁ⁿ xₖ: just the arithmetic average of the training samples! Conclusion: if P(xₖ | ωⱼ) (j = 1, 2, …, c) is d-dimensional Gaussian, then estimate θ = (θ₁, θ₂, …, θ_c)ᵗ and use these estimates to perform optimal classification.
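A quick check of this result with synthetic data (a sketch, not from the text): the ML estimate of the mean of a multivariate Gaussian is simply the sample mean of the training set.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
X = rng.multivariate_normal(mu_true, Sigma, size=1000)   # n training samples

mu_hat = X.mean(axis=0)   # mu_hat = (1/n) * sum_k x_k, the ML estimate
print(mu_hat)             # close to [1.0, -2.0]
```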
ML estimation (unknown μ and σ). Gaussian case with both parameters unknown: θ = (θ₁, θ₂) = (μ, σ²). Setting ∇_θ l = 0 yields the pair of conditions (1) ∑ₖ (xₖ − θ̂₁)/θ̂₂ = 0 and (2) −∑ₖ 1/θ̂₂ + ∑ₖ (xₖ − θ̂₁)²/θ̂₂² = 0.
Results: combining (1) and (2) gives μ̂ = (1/n) ∑ₖ xₖ and σ̂² = (1/n) ∑ₖ (xₖ − μ̂)².
Bias: the ML estimate for σ² is biased, E[(1/n) ∑ₖ (xₖ − x̄)²] = ((n − 1)/n) σ² ≠ σ². An elementary unbiased estimator for Σ is the sample covariance matrix C = (1/(n − 1)) ∑ₖ (xₖ − μ̂)(xₖ − μ̂)ᵗ.
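To see the bias concretely, here is a small simulation sketch (my own assumed setup, not from the slides): averaged over many samples of size n, the 1/n estimator systematically underestimates σ², while the 1/(n − 1) estimator does not.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2_true, n, trials = 4.0, 5, 100_000

# Draw many small samples of size n and estimate sigma^2 from each one.
x = rng.normal(0.0, np.sqrt(sigma2_true), size=(trials, n))
ss = np.sum((x - x.mean(axis=1, keepdims=True))**2, axis=1)

biased   = ss / n         # ML estimate: divide by n
unbiased = ss / (n - 1)   # sample variance: divide by n - 1

print(f"E[ML estimate]       ~ {biased.mean():.3f}   (theory: (n-1)/n * 4 = 3.2)")
print(f"E[unbiased estimate] ~ {unbiased.mean():.3f}   (theory: 4.0)")
```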
ML problem statement: let D = {x₁, x₂, …, xₙ} with |D| = n, and assume the samples are drawn independently, so P(x₁, …, xₙ | θ) = ∏ₖ₌₁ⁿ P(xₖ | θ). Goal: determine θ̂, the value of θ that makes this sample the most representative.
[Figure: the training samples x₁, …, xₙ (|D| = n) grouped into class sample sets D₁, …, Dₖ, …, D_c, each set drawn from its own class-conditional density P(x | ωₖ) ~ N(μₖ, Σₖ).]
Problem statement: θ = (θ₁, θ₂, …, θ_c). Find θ̂ such that θ̂ = argmax_θ P(D | θ) = argmax_θ ∏ₖ₌₁ⁿ P(xₖ | θ).
Bayesian Decision Theory, Chapter 2 (Sections 2.3-2.5): Minimum-Error-Rate Classification; Classifiers, Discriminant Functions, Decision Surfaces; The Normal Density.
Minimum-Error-Rate Classification: actions are decisions on classes. If we take action αᵢ and the true state of nature is ωⱼ, then the decision is correct iff i = j (otherwise it is in error). We seek a decision rule that minimizes the probability of error (the error rate).
Zero-one loss function: λ(αᵢ | ωⱼ) = 0 if i = j and 1 if i ≠ j, for i, j = 1, …, c. The conditional risk is then R(αᵢ | x) = ∑ⱼ λ(αᵢ | ωⱼ) P(ωⱼ | x) = 1 − P(ωᵢ | x): "the risk corresponding to this loss function is the average probability of error."
Minimum error rate: since R(αᵢ | x) = 1 − P(ωᵢ | x), to minimize the risk we maximize P(ωᵢ | x). For minimum error rate: decide ωᵢ if P(ωᵢ | x) > P(ωⱼ | x) for all j ≠ i.
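A minimal sketch of this decision rule (the priors and class-conditional densities below are hypothetical, not the book's example): compute quantities proportional to the posteriors via Bayes' rule and pick the class with the largest one.

```python
import numpy as np

# Hypothetical two-class, one-dimensional problem.
priors = np.array([0.6, 0.4])                     # P(w1), P(w2)
means, sigmas = np.array([0.0, 2.0]), np.array([1.0, 1.0])

def likelihoods(x):
    # P(x | w_i) for univariate Gaussian class-conditionals
    return np.exp(-0.5 * ((x - means) / sigmas)**2) / (np.sqrt(2 * np.pi) * sigmas)

def decide(x):
    post = likelihoods(x) * priors                # proportional to P(w_i | x)
    return np.argmax(post) + 1                    # decide w_i with max posterior

for x in [-1.0, 1.0, 1.2, 3.0]:
    print(f"x = {x:+.1f} -> decide w{decide(x)}")
```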
Decision boundary: with the zero-one loss function, decide ω₁ if the likelihood ratio P(x | ω₁)/P(x | ω₂) exceeds the threshold P(ω₂)/P(ω₁); otherwise decide ω₂.
Classifiers, Discriminant Functions and Decision Surfaces. The multi-category case: use a set of discriminant functions gᵢ(x), i = 1, …, c. The classifier assigns a feature vector x to class ωᵢ if gᵢ(x) > gⱼ(x) for all j ≠ i.
Max discriminant: let gᵢ(x) = −R(αᵢ | x) (the maximum discriminant corresponds to minimum risk!). For minimum error rate, use gᵢ(x) = P(ωᵢ | x) (the maximum discriminant corresponds to the maximum posterior!). Since gᵢ(x) ∝ P(x | ωᵢ) P(ωᵢ), we can equivalently use gᵢ(x) = ln P(x | ωᵢ) + ln P(ωᵢ) (ln: natural logarithm).
Decision regions: divide the feature space into c decision regions: if gᵢ(x) > gⱼ(x) for all j ≠ i, then x is in Rᵢ (Rᵢ means: assign x to ωᵢ). Two-category case: the classifier is a "dichotomizer" iff it has two discriminant functions g₁ and g₂. Let g(x) ≡ g₁(x) − g₂(x); decide ω₁ if g(x) > 0, otherwise decide ω₂.
Computing g(x)
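Below is a sketch of computing g(x) for a two-category problem, using g(x) = ln P(x | ω₁) + ln P(ω₁) − ln P(x | ω₂) − ln P(ω₂) and deciding ω₁ where g(x) > 0. The univariate Gaussian class-conditionals and priors are my own illustrative choices, not the book's figure.

```python
import numpy as np

# Hypothetical univariate Gaussian class-conditional parameters and priors.
mu1, s1, P1 = -1.0, 1.0, 0.5
mu2, s2, P2 =  1.5, 0.8, 0.5

def log_gauss(x, mu, s):
    # ln of the univariate normal density N(mu, s^2) at x
    return -0.5 * np.log(2 * np.pi * s**2) - (x - mu)**2 / (2 * s**2)

def g(x):
    # g(x) = g1(x) - g2(x) = ln P(x|w1) + ln P(w1) - ln P(x|w2) - ln P(w2)
    return (log_gauss(x, mu1, s1) + np.log(P1)) - (log_gauss(x, mu2, s2) + np.log(P2))

for x in [-2.0, 0.0, 0.5, 2.0]:
    print(f"x = {x:+.1f}  g(x) = {g(x):+7.3f}  decide {'w1' if g(x) > 0 else 'w2'}")
```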
Univariate normal density: a continuous density that is analytically tractable. Many processes are asymptotically Gaussian: handwritten characters, speech sounds, an ideal or prototype corrupted by a random process (central limit theorem). P(x) = (1/(√(2π) σ)) exp[−½ ((x − μ)/σ)²], where μ is the mean (or expected value) of x and σ² is the expected squared deviation, or variance.
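A direct transcription of this density as code (a sketch; the evaluation point and parameters are arbitrary):

```python
import numpy as np

def univariate_normal(x, mu, sigma):
    # p(x) = 1/(sqrt(2*pi)*sigma) * exp(-0.5 * ((x - mu)/sigma)**2)
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (np.sqrt(2 * np.pi) * sigma)

print(univariate_normal(0.0, mu=0.0, sigma=1.0))   # 0.3989..., the peak of N(0, 1)
```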
Multivariate Normal Density: in d dimensions, P(x) = (1/((2π)^(d/2) |Σ|^(1/2))) exp[−½ (x − μ)ᵗ Σ⁻¹ (x − μ)], where x = (x₁, x₂, …, x_d)ᵗ (t stands for the transpose), μ = (μ₁, μ₂, …, μ_d)ᵗ is the mean vector, Σ is the d×d covariance matrix, and |Σ| and Σ⁻¹ are its determinant and inverse, respectively.
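And the d-dimensional version as a sketch, again with arbitrary example parameters:

```python
import numpy as np

def multivariate_normal(x, mu, Sigma):
    # p(x) = exp(-0.5 (x-mu)^T Sigma^{-1} (x-mu)) / ((2*pi)^(d/2) * |Sigma|^(1/2))
    d = len(mu)
    diff = x - mu
    norm_const = (2 * np.pi)**(d / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / norm_const

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
print(multivariate_normal(np.array([0.2, -0.3]), mu, Sigma))
```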
Bayesian Decision Theory III, Chapter 2 (Sections 2.6 and 2.9): Discriminant Functions for the Normal Density; Bayes Decision Theory – Discrete Features.
Discriminant Functions for the Normal Density: recall that minimum error-rate classification is achieved by the discriminant function gᵢ(x) = ln P(x | ωᵢ) + ln P(ωᵢ). For the multivariate normal density this becomes gᵢ(x) = −½ (x − μᵢ)ᵗ Σᵢ⁻¹ (x − μᵢ) − (d/2) ln 2π − ½ ln |Σᵢ| + ln P(ωᵢ).
Special case: independent variables with constant variance, Σᵢ = σ²I (I: identity matrix). The discriminant reduces to a linear discriminant function, gᵢ(x) = wᵢᵗ x + wᵢ₀, where wᵢ = μᵢ/σ² and wᵢ₀ = −(1/(2σ²)) μᵢᵗ μᵢ + ln P(ωᵢ); wᵢ₀ is the threshold (bias) for the i-th category.
Linear machine: a classifier that uses linear discriminant functions is called a "linear machine". The decision surfaces for a linear machine are pieces of hyperplanes defined by gᵢ(x) = gⱼ(x).
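A sketch of such a linear machine for the Σᵢ = σ²I case, using the linear form above (the class means, priors, and test points are invented for illustration): each class gets gᵢ(x) = wᵢᵗx + wᵢ₀, and x is assigned to the class with the largest gᵢ(x).

```python
import numpy as np

# Hypothetical 3-class problem with Sigma_i = sigma^2 * I.
sigma2 = 1.0
mus = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])   # class means mu_i
priors = np.array([0.5, 0.25, 0.25])                   # P(w_i)

W = mus / sigma2                                            # w_i = mu_i / sigma^2
w0 = -np.sum(mus**2, axis=1) / (2 * sigma2) + np.log(priors)  # thresholds w_i0

def classify(x):
    g = W @ x + w0               # g_i(x) = w_i^T x + w_i0, linear in x
    return np.argmax(g) + 1      # assign x to the class with the largest g_i

for x in [np.array([0.5, 0.5]), np.array([2.5, 0.2]), np.array([1.0, 2.5])]:
    print(x, "-> class", classify(x))
```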
In this case the hyperplane separating the regions Rᵢ and Rⱼ is always orthogonal to the line linking the means!
Case Σᵢ = Σ (the covariance matrices of all classes are identical but otherwise arbitrary!): the hyperplane separating Rᵢ and Rⱼ is generally not orthogonal to the line between the means.
Case Σᵢ = arbitrary: the covariance matrices are different for each category. The decision surfaces are hyperquadrics: hyperplanes, pairs of hyperplanes, hyperspheres, hyperellipsoids, hyperparaboloids, or hyperhyperboloids.
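A sketch of this general case, using the Gaussian log-density plus log-prior form of the discriminant, gᵢ(x) = −½(x − μᵢ)ᵗΣᵢ⁻¹(x − μᵢ) − ½ ln|Σᵢ| + ln P(ωᵢ) (terms common to all classes are dropped); the means, covariances, and priors below are invented for illustration.

```python
import numpy as np

# Two hypothetical classes with different covariance matrices.
mus    = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
Sigmas = [np.array([[1.0, 0.0], [0.0, 1.0]]),
          np.array([[2.0, 0.8], [0.8, 1.5]])]
priors = [0.5, 0.5]

def g(x, i):
    # Quadratic discriminant for class i (constant terms common to all classes dropped).
    diff = x - mus[i]
    Sinv = np.linalg.inv(Sigmas[i])
    return (-0.5 * diff @ Sinv @ diff
            - 0.5 * np.log(np.linalg.det(Sigmas[i]))
            + np.log(priors[i]))

def classify(x):
    scores = [g(x, i) for i in range(len(mus))]
    return int(np.argmax(scores)) + 1   # the resulting decision surfaces are hyperquadrics

print(classify(np.array([0.5, 0.5])), classify(np.array([2.2, 1.8])))
```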
Bayes Decision Theory – Discrete Features: the components of x are binary or integer valued, and x can take only one of m discrete values v₁, v₂, …, v_m. Case of independent binary features in a two-category problem: let x = [x₁, x₂, …, x_d]ᵗ, where each xᵢ is either 0 or 1, with probabilities pᵢ = P(xᵢ = 1 | ω₁) and qᵢ = P(xᵢ = 1 | ω₂).
The discriminant function in this case is g(x) = ∑ᵢ₌₁ᵈ wᵢ xᵢ + w₀, where wᵢ = ln[pᵢ(1 − qᵢ)/(qᵢ(1 − pᵢ))] (i = 1, …, d) and w₀ = ∑ᵢ₌₁ᵈ ln[(1 − pᵢ)/(1 − qᵢ)] + ln[P(ω₁)/P(ω₂)]; decide ω₁ if g(x) > 0 and ω₂ otherwise.
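A sketch of this discriminant for independent binary features; the per-feature probabilities pᵢ, qᵢ, the priors, and the test vectors below are made-up values for illustration.

```python
import numpy as np

# Hypothetical per-feature probabilities P(x_i = 1 | w1) and P(x_i = 1 | w2).
p = np.array([0.8, 0.7, 0.4])   # p_i = P(x_i = 1 | w1)
q = np.array([0.3, 0.4, 0.6])   # q_i = P(x_i = 1 | w2)
P1, P2 = 0.5, 0.5               # priors

w  = np.log(p * (1 - q) / (q * (1 - p)))               # weights w_i
w0 = np.sum(np.log((1 - p) / (1 - q))) + np.log(P1 / P2)

def g(x):
    # g(x) = sum_i w_i x_i + w0; decide w1 if g(x) > 0, else w2
    return w @ x + w0

for x in [np.array([1, 1, 0]), np.array([0, 0, 1])]:
    print(x, f"g = {g(x):+.3f}", "-> w1" if g(x) > 0 else "-> w2")
```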