1
CHAPTER 10: Linear Discrimination
Eick/Alpaydin: Topic13
2
Likelihood- vs. Discriminant-based Classification
Likelihood-based: Assume a model for p(x|Ci) and use Bayes' rule to calculate P(Ci|x); gi(x) = log P(Ci|x)
Discriminant-based: Assume a model for gi(x|Φi); no density estimation
Estimating the boundaries is enough; there is no need to accurately estimate the densities inside the boundaries
3
Linear Discriminant
Linear discriminant: gi(x|wi, wi0) = wi^T x + wi0 = Σj wij xj + wi0
Advantages:
Simple: O(d) space/computation
Knowledge extraction: weighted sum of attributes; positive/negative weights, magnitudes (credit scoring)
Optimal when the p(x|Ci) are Gaussian with a shared covariance matrix; useful when classes are (almost) linearly separable
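A minimal NumPy sketch of such a linear discriminant; the weight vectors and offsets below are made-up illustrative values, not taken from the slides:

```python
import numpy as np

# Linear discriminant g_i(x) = w_i^T x + w_i0 for two classes.
# The weights and offsets are made-up values for illustration only.
W = np.array([[ 2.0, -1.0],    # w_1
              [-0.5,  1.5]])   # w_2
w0 = np.array([0.3, -0.2])     # w_10, w_20

def classify(x):
    """Choose the class i with the largest discriminant value g_i(x)."""
    g = W @ x + w0             # g_i(x) = w_i^T x + w_i0 for all classes at once
    return int(np.argmax(g))

print(classify(np.array([1.0, 2.0])))  # -> 1 with these weights
```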
4
Generalized Linear Model
Quadratic discriminant: gi(x) = x^T Wi x + wi^T x + wi0
Higher-order (product) terms: e.g., z1 = x1, z2 = x2, z3 = x1², z4 = x2², z5 = x1·x2
Map from x to z using nonlinear basis functions and use a linear discriminant in z-space
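A small sketch of this idea, assuming quadratic basis functions and made-up weights in the z-space:

```python
import numpy as np

# Map x = (x1, x2) to z-space with quadratic basis functions,
# then apply an ordinary linear discriminant in z-space.
def phi(x):
    x1, x2 = x
    # z = (x1, x2, x1^2, x2^2, x1*x2) -- one possible choice of basis functions
    return np.array([x1, x2, x1 * x1, x2 * x2, x1 * x2])

# Illustrative (made-up) weights for g(z) = v^T z + v0 in the 5-dimensional z-space.
v = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
v0 = -1.0

def g(x):
    return v @ phi(x) + v0   # linear in z, quadratic in the original x

print(g(np.array([0.5, 0.5])) > 0)   # inside/outside the circle x1^2 + x2^2 = 1
```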
5
Two Classes: g(x) = g1(x) − g2(x) = (w1 − w2)^T x + (w10 − w20) = w^T x + w0; choose C1 if g(x) > 0 and C2 otherwise
6
Geometry: w is the normal vector of the hyperplane g(x) = w^T x + w0 = 0, |w0|/||w|| is its distance to the origin, and g(x)/||w|| gives the signed distance of x to the hyperplane
7
Support Vector Machines
One Possible Solution
8
Support Vector Machines
Another possible solution
9
Support Vector Machines
Other possible solutions
10
Support Vector Machines
Which one is better? B1 or B2? How do you define better?
11
Support Vector Machines
Find the hyperplane that maximizes the margin => B1 is better than B2
12
Support Vector Machines
Examples are (x1, …, xn, y) with y ∈ {−1, 1}
The decision boundary is w·x + b = 0; the two margin hyperplanes are w·x + b = +1 and w·x + b = −1, and the margin between them is 2/||w||
13
Support Vector Machines
We want to maximize the margin 2/||w||, which is equivalent to minimizing L(w) = ||w||²/2,
but subject to the following N constraints: yi(w·xi + b) ≥ 1, i = 1, …, N
This is a constrained convex quadratic optimization problem that can be solved in polynomial time
Numerical approaches to solve it (e.g., quadratic programming) exist
The function to be optimized has only a single minimum, so there is no local-minimum problem
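As an illustration, a sketch that lets an off-the-shelf QP-based solver handle this optimization; scikit-learn and the toy data set are assumptions of this sketch, not part of the slides (a very large C approximates the hard-margin problem):

```python
import numpy as np
from sklearn.svm import SVC   # scikit-learn is assumed; the slides do not prescribe a library

# Toy, linearly separable data set: examples (x1, x2) with labels y in {-1, +1}.
X = np.array([[1.0, 1.0], [2.0, 2.5], [1.5, 2.0],    # class +1
              [4.0, 4.5], [5.0, 5.0], [4.5, 6.0]])   # class -1
y = np.array([+1, +1, +1, -1, -1, -1])

# A very large C approximates the hard-margin problem min ||w||^2 / 2
# s.t. y_i (w . x_i + b) >= 1; the QP is solved internally by the library.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b, "margin =", 2.0 / np.linalg.norm(w))
```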
14
Support Vector Machines
What if the problem is not linearly separable?
15
Linear SVM for Non-linearly Separable Problems
What if the problem is not linearly separable? Introduce slack variables ξi ≥ 0; the slack variable ξi measures the prediction error of example i and allows constraint violation to a certain degree
Need to minimize: L(w) = ||w||²/2 + C·Σi ξi, where the first term is the inverse of the size of the margin between the hyperplanes and C is the trade-off parameter
Subject to (i = 1, …, N): yi(w·xi + b) ≥ 1 − ξi and ξi ≥ 0
C is chosen using a validation set, trying to keep the margin wide while keeping the training error low
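A sketch of choosing C on a held-out validation set; scikit-learn and the synthetic data are assumptions of this illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC   # scikit-learn is an assumption, not part of the slides

# Synthetic data: a noisy linear boundary so the classes overlap slightly.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=200) > 0, 1, -1)

# Hold out a validation set and pick the C with the lowest validation error,
# trading margin width (small C) against training error (large C).
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
best_C, best_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0, 100.0]:
    clf = SVC(kernel="linear", C=C).fit(X_tr, y_tr)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_C, best_acc = C, acc
print("chosen C:", best_C, "validation accuracy:", best_acc)
```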
16
Nonlinear Support Vector Machines
What if the decision boundary is not linear (i.e., a non-linear function)?
Alternative 1: Use a technique that employs non-linear decision boundaries
17
Nonlinear Support Vector Machines
Alternative 2: Transform into a higher-dimensional attribute space and find linear decision boundaries in this space
Transform the data into the higher-dimensional space
Find the best hyperplane using the methods introduced earlier
18
Nonlinear Support Vector Machines
Choose a non-linear basis function φ to transform the data into a different, usually higher-dimensional, attribute space
Minimize ||w||²/2, but subject to the following N constraints: yi(w·φ(xi) + b) ≥ 1, i = 1, …, N
Find a good hyperplane in the transformed space
19
Example: Polynomial Kernel Function
Φ(x1, x2) = (x1², x2², √2·x1·x2, √2·x1, √2·x2, 1)
K(u, v) = Φ(u)·Φ(v) = (u·v + 1)²
A support vector machine with a polynomial kernel function classifies a new example z as follows:
sign(Σi λi·yi·Φ(xi)·Φ(z) + b) = sign(Σi λi·yi·(xi·z + 1)² + b)
Remark: λi and b are determined using the methods for linear SVMs that were discussed earlier
Kernel function trick: perform the computations in the original space, although we solve an optimization problem in the transformed space; this is more efficient. More details in Topic14.
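A short numeric check of this kernel trick, using the map Φ written out above (NumPy assumed):

```python
import numpy as np

# Check numerically that the explicit map Phi and the kernel K(u, v) = (u.v + 1)^2
# give the same inner product, illustrating the kernel trick for 2-dimensional inputs.
def Phi(x):
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

def K(u, v):
    return (np.dot(u, v) + 1.0) ** 2

u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(np.dot(Phi(u), Phi(v)))   # 4.0, computed in the 6-dimensional transformed space
print(K(u, v))                  # 4.0, same value computed in the original space
```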
20
Summary Support Vector Machines
Support vector machines learn hyperplanes that separate two classes while maximizing the margin between them (the empty space between the instances of the two classes).
Support vector machines introduce slack variables in case the classes are not linearly separable, trying to maximize the margin while keeping the training error low.
The most popular versions of SVMs use non-linear kernel functions to map the attribute space into a higher-dimensional space, which facilitates finding "good" linear decision boundaries in the modified space.
Support vector machines find "margin optimal" hyperplanes by solving a convex quadratic optimization problem. However, this optimization process is quite slow, and support vector machines tend to fail if the number of examples goes beyond 500/2000/5000…
In general, support vector machines achieve quite high accuracies compared to other techniques.
21
Useful Support Vector Machine Links
Lecture notes are much more helpful for understanding the basic ideas.
Tools often used in publications: libsvm, spider
Tutorial slides:
Surveys:
More general material:
Remarks: Thanks to Chaofan Sun for providing these links!
22
Optimal Separating Hyperplane
Alpaydin transparencies on Support Vector Machines—not used in lecture! (Cortes and Vapnik, 1995; Vapnik, 1995) Lecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.1)
23
Margin: distance from the discriminant to the closest instances on either side
Distance of x^t to the hyperplane is |w^T x^t + w0| / ||w||
We require r^t·(w^T x^t + w0) / ||w|| ≥ ρ for all t
For a unique solution, fix ρ·||w|| = 1; then, to maximize the margin, we minimize ||w||
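A worked computation of this distance for made-up values of w and w0 (NumPy assumed):

```python
import numpy as np

# Distance of a point x to the hyperplane w^T x + w0 = 0 is |w^T x + w0| / ||w||.
# w and w0 are made-up values for illustration.
w = np.array([3.0, 4.0])
w0 = -5.0

def distance(x):
    return abs(w @ x + w0) / np.linalg.norm(w)

x = np.array([2.0, 3.0])
print(distance(x))                 # |6 + 12 - 5| / 5 = 2.6
# With the normalization rho * ||w|| = 1, the margin on either side is 1 / ||w||.
print(1.0 / np.linalg.norm(w))     # 0.2
```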
24
Primal problem: minimize ||w||²/2 subject to r^t·(w^T x^t + w0) ≥ +1 for all t
Primal Lagrangian: Lp = ||w||²/2 − Σt α^t·[r^t·(w^T x^t + w0) − 1], with Lagrange multipliers α^t ≥ 0
25
Setting the derivatives of Lp with respect to w and w0 to zero gives w = Σt α^t·r^t·x^t and Σt α^t·r^t = 0
Substituting back yields the dual: maximize Ld = Σt α^t − (1/2)·Σt Σs α^t·α^s·r^t·r^s·(x^t)^T x^s subject to Σt α^t·r^t = 0 and α^t ≥ 0
26
Most αt are 0 and only a small number have αt >0; they are the support vectors
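A sketch of inspecting the support vectors of a fitted model; scikit-learn and the toy data are assumptions of this illustration:

```python
import numpy as np
from sklearn.svm import SVC   # scikit-learn is assumed for illustration

# Small separable toy set; after fitting, only the instances with alpha^t > 0
# (the support vectors) are kept by the model.
X = np.array([[1.0, 1.0], [2.0, 2.0], [1.0, 2.5],
              [4.0, 4.0], [5.0, 5.5], [4.5, 3.5]])
r = np.array([+1, +1, +1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, r)
print("indices of support vectors:", clf.support_)
print("support vectors:\n", clf.support_vectors_)
# dual_coef_ holds alpha^t * r^t for the support vectors; all other alpha^t are 0.
print("alpha^t * r^t:", clf.dual_coef_)
```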
27
Soft Margin Hyperplane
Not linearly separable: r^t·(w^T x^t + w0) ≥ 1 − ξ^t, with slack variables ξ^t ≥ 0
Soft error: Σt ξ^t
New primal is Lp = ||w||²/2 + C·Σt ξ^t − Σt α^t·[r^t·(w^T x^t + w0) − 1 + ξ^t] − Σt μ^t·ξ^t
28
Kernel Machines Preprocess input x by basis functions
z = φ(x), g(z) = w^T z, g(x) = w^T φ(x)
The SVM solution: w = Σt α^t·r^t·φ(x^t), so g(x) = Σt α^t·r^t·φ(x^t)^T φ(x) = Σt α^t·r^t·K(x^t, x)
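A sketch of evaluating g(x) in this kernel form; the support vectors, α^t, r^t and w0 below are made-up values rather than the result of an actual optimization:

```python
import numpy as np

# Discriminant in kernel form: g(x) = sum_t alpha^t r^t K(x^t, x) + w0.
# All coefficients below are made-up; in practice they come from the dual optimization.
support_x = np.array([[1.0, 1.0], [3.0, 3.0]])
alpha = np.array([0.5, 0.5])
r = np.array([+1, -1])
w0 = 0.0

def K(a, b):
    return (a @ b + 1.0) ** 2     # polynomial kernel of degree 2

def g(x):
    return sum(a_t * r_t * K(x_t, x)
               for a_t, r_t, x_t in zip(alpha, r, support_x)) + w0

x = np.array([1.5, 1.0])
print(g(x), np.sign(g(x)))        # -30.0 -1.0 with these made-up coefficients
```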
29
Kernel Functions
Polynomials of degree q: K(x^t, x) = (x^T x^t + 1)^q
Radial-basis functions: K(x^t, x) = exp(−||x − x^t||² / (2s²))
Sigmoidal functions: K(x^t, x) = tanh(2·x^T x^t + 1) (Cherkassky and Mulier, 1998)
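The three kernels written out as plain functions, matching the formulas reconstructed above (q and s are user-chosen hyperparameters):

```python
import numpy as np

# The three kernel families as plain functions; q (degree) and s (spread)
# are hyperparameters chosen by the user.
def poly_kernel(xt, x, q=2):
    return (np.dot(x, xt) + 1.0) ** q

def rbf_kernel(xt, x, s=1.0):
    return np.exp(-np.linalg.norm(x - xt) ** 2 / (2.0 * s ** 2))

def sigmoid_kernel(xt, x):
    return np.tanh(2.0 * np.dot(x, xt) + 1.0)

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(poly_kernel(u, v), rbf_kernel(u, v), sigmoid_kernel(u, v))
```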
30
SVM for Regression Use a linear model (possibly kernelized)
f(x) = w^T x + w0
Use the ε-insensitive error function: eε(r^t, f(x^t)) = 0 if |r^t − f(x^t)| < ε, and |r^t − f(x^t)| − ε otherwise
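A small sketch of this ε-insensitive error (the value of ε is an arbitrary choice for illustration):

```python
# Epsilon-insensitive error: deviations smaller than epsilon cost nothing,
# larger deviations are penalized linearly beyond the epsilon tube.
def eps_insensitive_error(r, fx, eps=0.1):
    diff = abs(r - fx)
    return 0.0 if diff < eps else diff - eps

print(eps_insensitive_error(1.00, 1.05))   # 0.0 (inside the tube)
print(eps_insensitive_error(1.00, 1.50))   # 0.4 (0.5 deviation minus epsilon)
```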