
1 SVMs Finalized

2 Where we are
- Last time: Support vector machines in grungy detail; the SVM objective function and QP
- Today: Last details on SVMs; putting it all together
- Next time: Bayesian statistical learning (Bishop, Ch. 1.2, 2)

3 Reminders...
- What if the data isn't linearly separable?
- Project into a higher-dimensional space (we'll get there)
- Allow some "slop" in the system: allow margins to be violated "a little"

4 Reminders...
- The ξ_i are "slack variables": they allow margins to be violated a little
- Still want to minimize margin violations, so add them to the QP instance:
- Minimize: (1/2)||w||² + C Σ_i ξ_i
- Subject to: y_i(w · x_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0, for all i

5 Reminders...
- The SVM objective can be written in dual form:
- Maximize: Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j)
- Subject to: 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0
- Can replace the "standard" inner product x_i · x_j with a generalized "kernel" inner product K(x_i, x_j)

6 Why are kernel fns cool?
- The cool trick is that many useful projections Φ can be written as kernel functions in closed form
- I.e., you can work with K(·,·) rather than with the explicit projection Φ(·)
- If you know K(x_i, x_j) for every (i, j) pair, then you can construct the maximum-margin hyperplane between the projected data without ever explicitly doing the projection!
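
As a quick sanity check (not from the slides), here is a minimal NumPy sketch showing the trick for the degree-2 homogeneous polynomial kernel: (x · z)² equals the inner product under the explicit quadratic feature map Φ(x) = (x₁², √2·x₁x₂, x₂²), so the kernel value can stand in for the projection.

```python
import numpy as np

def phi(x):
    # Explicit quadratic feature map for 2-D input:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def poly2_kernel(x, z):
    # Homogeneous degree-2 polynomial kernel
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])

print(np.dot(phi(x), phi(z)))  # 121.0 -- inner product after projecting
print(poly2_kernel(x, z))      # 121.0 -- same value, no projection needed
```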

7–10 Example kernels
- Homogeneous degree-k polynomial: K(x_i, x_j) = (x_i · x_j)^k
- Inhomogeneous degree-k polynomial: K(x_i, x_j) = (x_i · x_j + 1)^k
- Gaussian radial basis function: K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²))
- Sigmoidal (neural network): K(x_i, x_j) = tanh(κ (x_i · x_j) + θ)
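
A minimal sketch of these four kernels in NumPy (the parameter names k, sigma, kappa, and theta, and their defaults, are assumptions; in practice you would tune them):

```python
import numpy as np

def poly_homogeneous(x, z, k=3):
    # Homogeneous degree-k polynomial kernel
    return np.dot(x, z) ** k

def poly_inhomogeneous(x, z, k=3):
    # Inhomogeneous degree-k polynomial kernel (includes lower-order terms)
    return (np.dot(x, z) + 1.0) ** k

def gaussian_rbf(x, z, sigma=1.0):
    # Gaussian radial basis function kernel
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def sigmoidal(x, z, kappa=1.0, theta=0.0):
    # Sigmoidal ("neural network") kernel; note it is not a valid
    # (positive semidefinite) kernel for all kappa/theta choices
    return np.tanh(kappa * np.dot(x, z) + theta)
```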

11 Side note on kernels
- What precisely do kernel functions mean?
- Metric functions take two points and return a (generalized) distance between them
- What is the equivalent interpretation for kernels?
- Hint: think about what the kernel function replaces in the max-margin QP formulation

12 Side note on kernels
- Kernel functions are generalized inner products
- Essentially, they give you the cosine of the angle between vectors
- Recall the law of cosines: x · z = ||x|| ||z|| cos(θ)

13–15 Side note on kernels
- Replace the traditional dot product with the "generalized inner product" and get: K(x_i, x_j) = ||Φ(x_i)|| ||Φ(x_j)|| cos(θ), i.e. cos(θ) = K(x_i, x_j) / sqrt(K(x_i, x_i) K(x_j, x_j))
- The kernel (essentially) represents: the angle between vectors in the projected, high-dimensional space
- Alternatively: a nonlinear distribution of angles in the low-dimensional space
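
A small sketch of this interpretation (assumes NumPy; reuses the Gaussian RBF kernel from above): the angle between two projected points is recovered from kernel values alone, without ever computing the projection.

```python
import numpy as np

def rbf(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def kernel_cos_angle(x, z, kernel):
    # cos(theta) between phi(x) and phi(z) in the projected space:
    # <phi(x), phi(z)> / (||phi(x)|| * ||phi(z)||), all via kernel calls
    return kernel(x, z) / np.sqrt(kernel(x, x) * kernel(z, z))

x = np.array([1.0, 0.0])
z = np.array([0.0, 1.0])
print(kernel_cos_angle(x, z, rbf))  # angle in projected space, no projection
```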

16–17 Example of kernel nonlinearity (figure slides)

18 Using the classifier
- Solution of the QP gives back a set of α_i values
- Data points for which α_i > 0 are called "support vectors"
- Turns out that we can write w as: w = Σ_i α_i y_i x_i

19 Using the classifier
- And our classification rule for query point x was: f(x) = sign(w · x + b)
- So: f(x) = sign(Σ_i α_i y_i K(x_i, x) + b)
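
A minimal sketch of this decision rule in Python (the function and argument names are hypothetical; assumes NumPy, and that the alphas, labels, support vectors, and bias b came out of a trained SVM):

```python
import numpy as np

def svm_predict(x, support_xs, ys, alphas, b, kernel):
    # f(x) = sign( sum_i alpha_i * y_i * K(x_i, x) + b )
    # Only the support vectors (alpha_i > 0) need to be kept around.
    s = sum(a * y * kernel(sx, x)
            for a, y, sx in zip(alphas, ys, support_xs))
    return np.sign(s + b)
```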

20 Using the classifier (figure: support vectors highlighted; SVM images from lecture notes by S. Dreiseitl)

21–27 Putting it all together
- Start with the original (low-dimensional) data matrix
- The kernel function, applied to every pair of points, produces the kernel matrix
- Kernel matrix + original labels form a Quadratic Program instance:
  Maximize: Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
  Subject to: 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0
- A QP solver subroutine returns the support vector weights α_i
- The weights define a hyperplane in the projected (high-dimensional) space
- That hyperplane gives the final classifier: a nonlinear classifier in the original (low-dimensional) space
- A code sketch of the whole pipeline follows below
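
A minimal end-to-end sketch of this pipeline (assumptions: NumPy and the cvxopt QP solver are installed, X is an n×d data matrix, y is an n-vector of ±1 labels, and kernel is any of the kernel functions sketched above):

```python
import numpy as np
from cvxopt import matrix, solvers

def train_svm(X, y, kernel, C=1.0):
    n = len(y)
    # 1. Kernel matrix: K[i, j] = kernel(x_i, x_j)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])

    # 2. QP instance. cvxopt minimizes (1/2) a^T P a + q^T a,
    #    so we negate the dual objective we want to maximize.
    P = matrix(np.outer(y, y) * K)
    q = matrix(-np.ones(n))
    G = matrix(np.vstack([-np.eye(n), np.eye(n)]))        # -a_i <= 0, a_i <= C
    h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
    A = matrix(y.astype(float).reshape(1, -1))            # sum_i a_i y_i = 0
    b_eq = matrix(0.0)

    # 3. QP solver subroutine -> support vector weights alpha_i
    solvers.options['show_progress'] = False
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b_eq)['x'])

    # 4. Bias from any margin support vector (0 < alpha_k < C)
    k = np.argmax((alpha > 1e-6) & (alpha < C - 1e-6))
    b = y[k] - np.sum(alpha * y * K[:, k])
    return alpha, b
```

The returned alpha and b plug straight into the decision rule sketched after slide 19; training points with α_i ≈ 0 can be dropped, as the next slide notes.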

28 Final notes on SVMs
- Note that only the x_i for which α_i > 0 actually contribute to the final classifier
- This is why they are called support vectors
- All the rest of the training data can be discarded (see the sketch below)
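
For instance, a toy illustration of discarding non-support vectors (all values here are made up, and the 1e-6 tolerance is an assumption):

```python
import numpy as np

# Toy QP output: most alphas are (numerically) zero.
alpha = np.array([0.0, 0.7, 0.0, 0.3, 1e-12])
X = np.random.randn(5, 2)
y = np.array([1, 1, -1, -1, 1])

sv = alpha > 1e-6                      # support vector mask
X_sv, y_sv, alpha_sv = X[sv], y[sv], alpha[sv]
print(f"kept {sv.sum()} of {len(alpha)} training points")
```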

29 Final notes on SVMs
- Complexity of training (& ability to generalize) depends only on the amount of training data
- Not on the dimension of the projected (hyperplane) space
- Good classification performance: in practice, SVMs are among the strongest classifiers we have
- Closely related to neural nets, boosting, etc.

