
1 SVMs Concluded; R1

2 Administrivia
HW1 returned today, and there was much rejoicing... μ=15.8, σ=8.8
Remember: the class is curved. You don’t have to run faster than the bear...
Straw poll ongoing... 1 email vote so far...

3 Where we are
Last time: support vector machines in grungy detail; the SVM objective function and QP
Today: last details on SVMs; putting it all together; R1
Next time: Bayesian statistical learning; DH&S, ch. 2.{1-5}, 3.{1-4}

4 What I was thinking...
Big mistake on HW2. What I was recalling (hash table lookup failure):
Let f(x) be a function possessing at least one local extremum, x_m
Let g() be a monotonically increasing transform
Then x_m is an extremum of g(f(x)) as well
Proof (for a local minimum): f(x′) > f(x_m) for every x′ in some small region around x_m
Monotonicity: f(x′) > f(x_m) ⇒ g(f(x′)) > g(f(x_m)), so x_m is still a local minimum of g(f(x))
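
A quick numerical sanity check of this fact; a minimal numpy sketch where the particular f and the transform g = exp are illustrative choices, not from the slides:

```python
import numpy as np

# f(x) = -(x - 1)^2 has a single (local and global) maximum at x = 1;
# g = exp is a strictly increasing monotonic transform.
xs = np.linspace(-3.0, 3.0, 601)
f = -(xs - 1.0) ** 2
g_of_f = np.exp(f)

# The location of the maximum is unchanged by the transform.
assert np.argmax(f) == np.argmax(g_of_f)
print(xs[np.argmax(g_of_f)])  # -> 1.0
```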

5 Reminders...
The SVM dual objective, written as:
Maximize: W(α) = Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j ⟨x_i, x_j⟩
Subject to: α_i ≥ 0 for all i, and Σ_i α_i y_i = 0
Can replace the “standard” inner product ⟨x_i, x_j⟩ with a generalized “kernel” inner product K(x_i, x_j)
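
The same objective spelled out in code, for concreteness; a minimal numpy sketch with my own naming and assumed shapes (alpha, y: (n,); K: (n, n)):

```python
import numpy as np

def dual_objective(alpha, y, K):
    """W(alpha) = sum_i alpha_i - (1/2) sum_{i,j} alpha_i alpha_j y_i y_j K[i, j]."""
    ay = alpha * y                        # elementwise alpha_i * y_i
    return alpha.sum() - 0.5 * (ay @ K @ ay)

def dual_feasible(alpha, y, tol=1e-8):
    """Constraints: alpha_i >= 0 for all i, and sum_i alpha_i y_i = 0."""
    return bool(np.all(alpha >= -tol) and abs(alpha @ y) < tol)
```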

6 Putting it all together: the original (low-dimensional) data [figure]

7 Putting it all together: the kernel function maps the original data matrix to a kernel matrix
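
A sketch of that step, using the RBF kernel as an illustrative choice (the slide does not commit to a particular kernel):

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2) for the rows x_i of X."""
    # All pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clip tiny negatives from rounding
```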

8 Putting it all together: the kernel matrix plus the original labels give a Quadratic Program instance
Maximize: W(α) = Σ_i α_i − (1/2) Σ_{i,j} α_i α_j y_i y_j K(x_i, x_j)
Subject to: α_i ≥ 0, Σ_i α_i y_i = 0

9 Putting it all together: a QP solver subroutine takes the Quadratic Program instance and returns the support vector weights α_i
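
One way to hand this instance to an off-the-shelf solver; a sketch assuming the cvxopt package (any QP solver would do), which minimizes (1/2) xᵀPx + qᵀx subject to Gx ≤ h and Ax = b, so the maximization is negated:

```python
import numpy as np
from cvxopt import matrix, solvers

def solve_svm_dual(K, y):
    """Support vector weights from: max_a sum(a) - (1/2) aT (yyT * K) a,
    subject to a_i >= 0 and sum_i a_i y_i = 0."""
    n = len(y)
    y = y.astype(float)
    P = matrix(np.outer(y, y) * K)  # P_ij = y_i y_j K_ij
    q = matrix(-np.ones(n))         # maximize sum(a)  ->  minimize -sum(a)
    G = matrix(-np.eye(n))          # -a_i <= 0, i.e. a_i >= 0
    h = matrix(np.zeros(n))
    A = matrix(y.reshape(1, n))     # equality constraint sum_i a_i y_i = 0
    b = matrix(0.0)
    sol = solvers.qp(P, q, G, h, A, b)
    return np.ravel(sol["x"])       # alpha_i, one weight per training point
```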

10 Putting it all together: the support vector weights define a hyperplane in the (high-dimensional) feature space

11 Putting it all together: the support vector weights give the final classifier f(x) = sign(Σ_i α_i y_i K(x_i, x) + b)
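
The classifier as code, continuing the sketches above; the bias b is recovered from any support vector (a standard step the slides leave implicit), and the kernel argument is assumed to be any function of two points:

```python
import numpy as np

def bias(X, y, alpha, kernel, i):
    """Recover b from a support vector x_i (alpha_i > 0), using y_i = sum_j alpha_j y_j K(x_j, x_i) + b."""
    k = np.array([kernel(x_j, X[i]) for x_j in X])
    return y[i] - (alpha * y) @ k

def classify(x_new, X, y, alpha, kernel, b):
    """f(x) = sign(sum_i alpha_i y_i K(x_i, x) + b)."""
    k = np.array([kernel(x_i, x_new) for x_i in X])
    return np.sign((alpha * y) @ k + b)
```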

12 Putting it all together: the final classifier is a nonlinear classifier in the original (low-dimensional) input space

13 Final notes on SVMs
Note that only the training points x_i with α_i > 0 actually contribute to the final classifier
This is why they are called support vectors
All the rest of the training data can be discarded
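
That pruning step, continuing the sketches above (X, y, alpha as before); in practice solvers return tiny positive values rather than exact zeros, so a tolerance is used (my choice, not the slides’):

```python
import numpy as np

tol = 1e-6
sv = alpha > tol                  # mask of support vectors
X_sv, y_sv, alpha_sv = X[sv], y[sv], alpha[sv]
print(f"kept {sv.sum()} of {len(alpha)} training points")
```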

14 Final notes on SVMs
Complexity of training (and ability to generalize) depends only on the amount of training data
Not on the dimension of the hyperplane (feature) space
Good classification performance: in practice, SVMs are among the strongest classifiers we have
Closely related to neural nets, boosting, etc.

15 Reading 1

