Linear Discriminant Functions: Discriminant Functions, Least Squares Method, Fisher's Linear Discriminant, Probabilistic Generative Models
Linear Discriminant Functions
A discriminant function is a linear combination of the components of x:
g(x) = w^T x + w_0
where w is the weight vector and w_0 is the bias or threshold weight.
For the two-class problem we can use the following decision rule: decide c1 if g(x) > 0 and c2 if g(x) < 0.
For the general case we will have one discriminant function for each class.
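A minimal sketch of this two-class decision rule in Python; the particular weight vector and bias values are made-up numbers for illustration, not taken from the slides.

```python
import numpy as np

# Illustrative parameters of g(x) = w^T x + w_0 (assumed values).
w = np.array([1.0, -2.0])   # weight vector
w0 = 0.5                    # bias (threshold weight)

def g(x):
    """Linear discriminant g(x) = w^T x + w_0."""
    return w @ x + w0

def decide(x):
    """Decide c1 if g(x) > 0, otherwise c2 (g(x) = 0 is the decision boundary)."""
    return "c1" if g(x) > 0 else "c2"

print(decide(np.array([3.0, 1.0])))   # g = 3 - 2 + 0.5 = 1.5 > 0  -> c1
print(decide(np.array([0.0, 1.0])))   # g = -2 + 0.5 = -1.5 < 0    -> c2
```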
Figure 5.1
The Normal Vector w
The hyperplane H divides the feature space into two regions: region R1 for class c1 and region R2 for class c2.
For two points x1 and x2 on the decision boundary:
w^T x1 + w_0 = w^T x2 + w_0, which means w^T (x1 - x2) = 0.
Thus w is normal to any vector lying in the hyperplane.
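A small numerical check of this fact, reusing the same illustrative w and w_0 as in the sketch above: construct two points on the boundary and verify that w is orthogonal to their difference.

```python
import numpy as np

# Illustrative parameters (assumed values, as before).
w = np.array([1.0, -2.0])
w0 = 0.5

def on_boundary(first_coord):
    """Pick the first coordinate freely and solve w^T x + w_0 = 0 for the second."""
    second = -(w[0] * first_coord + w0) / w[1]
    return np.array([first_coord, second])

x1, x2 = on_boundary(0.0), on_boundary(3.0)
print(np.isclose(w @ x1 + w0, 0.0), np.isclose(w @ x2 + w0, 0.0))  # True True: both lie on H
print(np.isclose(w @ (x1 - x2), 0.0))                              # True: w is normal to x1 - x2
```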
Geometry for Linear Models
The Problem with Multiple Classes
How do we use a linear discriminant when we have more than two classes? There are two approaches:
1. Learn one discriminant function for each class.
2. Learn a discriminant function for each pair of classes.
If c is the number of classes, the first case gives c functions and the second gives c(c-1)/2 functions. In both cases we are left with ambiguous regions.
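A small illustration of this bookkeeping (the class labels below are placeholders): the two strategies require c and c(c-1)/2 discriminant functions respectively.

```python
from itertools import combinations

# Placeholder class labels, used only to count the required discriminants.
classes = ["c1", "c2", "c3", "c4"]
c = len(classes)

one_per_class = classes                        # approach 1: one discriminant per class
one_per_pair = list(combinations(classes, 2))  # approach 2: one discriminant per pair

print(len(one_per_class))                   # c = 4
print(len(one_per_pair), c * (c - 1) // 2)  # both 6 = c(c-1)/2
```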
Figure 5.3
Linear Machines
To avoid the problem of ambiguous regions we can use linear machines: we define c linear discriminant functions and, for a given x, choose the one with the highest value:
g_k(x) = w_k^T x + w_{k0},  k = 1, ..., c
In this case the decision regions are convex and thus are limited in flexibility and accuracy.
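A minimal sketch of a linear machine: evaluate g_k(x) for every class and return the index of the largest value. The weight matrix and biases are made-up values for illustration.

```python
import numpy as np

# One weight vector per row (shape c x d) and one bias per class (assumed values).
W = np.array([[ 1.0, -1.0],
              [ 0.5,  2.0],
              [-1.0,  0.0]])
w0 = np.array([0.0, -1.0, 0.5])

def classify(x):
    scores = W @ x + w0            # g_k(x) for k = 1, ..., c
    return int(np.argmax(scores))  # index of the winning class

print(classify(np.array([2.0, 1.0])))  # scores: [1.0, 2.0, -1.5] -> class 1
```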
Figure 5.4
Generalized Linear Discriminant Functions
A linear discriminant function g(x) can be written as:
g(x) = w_0 + Σ_i w_i x_i,  i = 1, ..., d (d is the number of features).
We can add additional terms to obtain a quadratic discriminant function:
g(x) = w_0 + Σ_i w_i x_i + Σ_i Σ_j w_ij x_i x_j
The quadratic discriminant function introduces d(d+1)/2 additional coefficients corresponding to products of pairs of attributes. The decision surfaces are thus more complicated (hyperquadric surfaces).
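A sketch of the corresponding feature expansion: the helper name quadratic_features is hypothetical, and it simply builds the vector (1, x_i, x_i x_j) so the quadratic discriminant can be evaluated as a linear function of these features.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_features(x):
    """Return (1, linear terms, all products x_i * x_j with i <= j)."""
    d = len(x)
    products = [x[i] * x[j] for i, j in combinations_with_replacement(range(d), 2)]
    return np.concatenate(([1.0], x, products))

x = np.array([2.0, 3.0])
print(quadratic_features(x))  # [1, 2, 3, 4, 6, 9]: bias, linear terms, d(d+1)/2 product terms
```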
Generalized Linear Discriminant Functions
We could add even more terms w_ijk x_i x_j x_k and obtain the class of polynomial discriminant functions. The generalized form is
g(x) = Σ_i w_i y_i(x) = w^T y,
where the summation runs over all functions y_i(x). The y_i(x) are called the phi (φ) functions. The discriminant is now linear in the y_i(x). These functions map a d-dimensional x-space into a d'-dimensional y-space.
Example: g(x) = w_1 + w_2 x + w_3 x^2, with y = (1, x, x^2)^T.
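A short sketch of the slide's 1-D example: the quadratic g(x) becomes a linear function w^T y of the mapped vector y = (1, x, x^2)^T. The weight values are assumptions for illustration.

```python
import numpy as np

w = np.array([1.0, -3.0, 2.0])   # illustrative (w1, w2, w3)

def phi(x):
    """Map a scalar x to y = (1, x, x^2)^T."""
    return np.array([1.0, x, x ** 2])

x = 2.5
print(w @ phi(x))                      # evaluated as a linear function in y-space
print(w[0] + w[1] * x + w[2] * x**2)   # same value computed directly in x-space
```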
Figure 5.5
Mapping to Another Space
Mapping from x to y: if x follows a certain probability distribution, the corresponding distribution in the new space will be degenerate.
Even with simple functions for y, the decision surfaces in x can be quite complicated.
With a larger space we have more degrees of freedom (parameters to specify), and thus we need larger samples.
Figure 5.6
Linear Discriminant Functions: Discriminant Functions, Least Squares Method, Fisher's Linear Discriminant, Probabilistic Generative Models
Least Squares
And how do we compute y(x)? How do we find the values of w_0, w_1, w_2, ..., w_d?
We can simply find the w that minimizes an error function E(w):
E(w) = ½ Σ_n (g(x_n, w) - t_n)^2
Problems: the least-squares solution lacks robustness (it is sensitive to outliers), and it implicitly assumes the target vector is Gaussian-distributed.
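A minimal sketch of this fit on synthetic two-class data, assuming targets coded as +1 and -1; the data, seed, and coding are illustrative assumptions. The bias w_0 is absorbed by prepending a constant feature.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, size=(20, 2)),    # synthetic class c2 points
               rng.normal(+1.0, 0.5, size=(20, 2))])   # synthetic class c1 points
t = np.concatenate([-np.ones(20), np.ones(20)])        # assumed +/-1 target coding

Xb = np.hstack([np.ones((X.shape[0], 1)), X])          # prepend 1 so w[0] plays the role of w_0
w, *_ = np.linalg.lstsq(Xb, t, rcond=None)             # minimizes the sum of squared errors

predictions = np.sign(Xb @ w)                          # decide by the sign of g(x)
print(w, (predictions == t).mean())                    # fitted weights and training accuracy
```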
Least Squares: least squares vs. logistic regression (comparison figure)
Least Squares: least squares and logistic regression (comparison figure, continued)
Carl Friedrich Gauss (German, 1777–1855)
Carl F. Gauss is known as the scientist who developed the idea of the least squares method. He came up with the idea at the early age of eighteen. He is considered one of the greatest mathematicians of all time. He made major discoveries in geometry, number theory, magnetism, and astronomy, among other fields.
Anecdote: as a schoolboy he quickly solved a problem posed by his teacher, summing all the integers from 1 to 100.
Linear Discriminant Functions: Discriminant Functions, Least Squares Method, Fisher's Linear Discriminant, Probabilistic Generative Models
Fisher's Linear Discriminant
The idea is to project the data onto a single dimension. We choose a projection that maximizes the separation between the class means and minimizes the variance within each class.
Find the w that maximizes the criterion J(w):
J(w) = (m_2 - m_1)^2 / (s_1^2 + s_2^2)
J(w) = (w^T S_B w) / (w^T S_W w)
where S_B is the between-class covariance matrix and S_W is the within-class covariance matrix.
Fisher's Linear Discriminant
S_B = (m_2 - m_1)(m_2 - m_1)^T
S_W = Σ_{x in C1} (x - m_1)(x - m_1)^T + Σ_{x in C2} (x - m_2)(x - m_2)^T
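A small sketch of how these quantities give the Fisher direction: the w that maximizes J(w) is proportional to S_W^{-1}(m_2 - m_1). The synthetic data, class means, and seed below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.normal([0.0, 0.0], 0.5, size=(30, 2))   # synthetic class 1 samples
X2 = rng.normal([2.0, 1.0], 0.5, size=(30, 2))   # synthetic class 2 samples

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)   # within-class scatter

w = np.linalg.solve(S_W, m2 - m1)   # direction maximizing J(w), up to scale
w /= np.linalg.norm(w)

# Project the data onto the single dimension defined by w; the projected means separate.
print((X1 @ w).mean(), (X2 @ w).mean())
```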
Fisher's Linear Discriminant (figure comparing a "Wrong" projection with the "Right" Fisher projection)
Linear Discriminant Functions: Discriminant Functions, Least Squares Method, Fisher's Linear Discriminant, Probabilistic Generative Models
Probabilistic Generative Models
We first compute g(x) = w_1 x_1 + w_2 x_2 + ... + w_d x_d + w_0, but what we really want is the posterior probability P(Ck | x).
To get conditional probabilities we pass g(x) through a logistic (sigmoid) function:
L(g(x)) = 1 / (1 + exp(-g(x)))
Then L(g(x)) = P(Ck | x) when the two classes can be modeled as Gaussian distributions with equal covariance matrices.
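A minimal sketch of turning the discriminant value into a posterior via the logistic function; the weight vector and bias are the same illustrative, assumed values used earlier.

```python
import numpy as np

# Illustrative parameters of g(x) = w^T x + w_0 (assumed values).
w = np.array([1.0, -2.0])
w0 = 0.5

def posterior_c1(x):
    """P(C1 | x) = 1 / (1 + exp(-g(x))), valid under the shared-covariance Gaussian model."""
    g = w @ x + w0
    return 1.0 / (1.0 + np.exp(-g))

x = np.array([3.0, 1.0])
print(posterior_c1(x))        # approx. 0.82 for g(x) = 1.5
print(1 - posterior_c1(x))    # P(C2 | x) in the two-class case
```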