LECTURE 20: SUPPORT VECTOR MACHINES PT. 1
April 11, 2016
SDS 293 Machine Learning
Announcements
Final project deliverables
FP2: Dataset & Initial Model
- What I need: evidence that you have data, and that you've started thinking about how to model it
- Due APRIL 15th by 4:59pm
FP3: Refined Model
- What I need: I will give you feedback on FP2; you'll either want to revise your model or defend your original
- Due APRIL 22nd by 4:59pm
FP4: Presentation
- What I need: your poster, and you, ready to talk about your project
- Posters due during class on April 27th
- Reception APRIL 27th, 6pm to 8pm
FP5: Write-up (2 pages)
- Due MAY 6th by 4:59pm
Outline
- Maximal margin classifier
- Support vector classification
  - 2 classes, linear boundaries
  - 2 classes, nonlinear boundaries
- Multiple classes
- Comparison to other methods
- Lab
Activity: name game
Maximal margin classifier
Claim: if a separating hyperplane exists, there are infinitely many of them (why?)
Big idea: pick the one that gives you the widest margin (why?)
Bigger margin = more confident
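A minimal R sketch of this idea (not from the slides), assuming the e1071 package: on simulated, linearly separable data, svm() with a very large cost tolerates essentially no margin violations and so behaves like the maximal margin classifier. The data and the cost value are made up for illustration.

```r
# Toy illustration of the maximal margin classifier via e1071::svm()
library(e1071)

set.seed(1)
x <- matrix(rnorm(40), ncol = 2)
y <- c(rep(-1, 10), rep(1, 10))
x[y == 1, ] <- x[y == 1, ] + 3            # shift one class so the classes separate

dat <- data.frame(x = x, y = as.factor(y))
mmc <- svm(y ~ ., data = dat, kernel = "linear",
           cost = 1e5, scale = FALSE)      # huge cost ~ hard (maximal) margin
plot(mmc, dat)                             # the widest-margin separating line
summary(mmc)
```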
Discussion
Question: what's wrong with this approach?
Answer: sometimes the margin is tiny (and therefore prone to overfitting), and other times there is no hyperplane that perfectly divides the data at all
Support vector classifier
Big idea: we might prefer to sacrifice a few observations if doing so lets us perform better on the rest
We can allow points to cross the margin, or even be completely misclassified
Support vector classifier (math)
Goal: maximize the margin M over $\beta_0, \beta_1, \ldots, \beta_p$ and the slack variables $\epsilon_1, \ldots, \epsilon_n$
Subject to the following constraints:
- $\sum_{j=1}^{p} \beta_j^2 = 1$
- $y_i(\beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}) \ge M(1 - \epsilon_i)$ (the left-hand side is the distance from the i-th obs. to the hyperplane)
- $\epsilon_i \ge 0$ ("slack variables": no one gets rewarded for being extra far from the margin)
- $\sum_{i=1}^{n} \epsilon_i \le C$ (we only have so much "slack" to allocate across all the observations)
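A hedged sketch (not from the slides) of how this trade-off is exposed in e1071: the cost argument to svm() tunes how many margin violations are tolerated, and is inversely related to the slack budget C in the last constraint above. It continues from the toy dat data frame built in the earlier sketch.

```r
# Large cost: few violations allowed (narrow margin); small cost: lots of slack (wide margin)
library(e1071)

fit_narrow <- svm(y ~ ., data = dat, kernel = "linear", cost = 10,  scale = FALSE)
fit_wide   <- svm(y ~ ., data = dat, kernel = "linear", cost = 0.1, scale = FALSE)

summary(fit_narrow)   # fewer support vectors
summary(fit_wide)     # more support vectors

# Cross-validate over cost to choose how much slack to allow
tuned <- tune(svm, y ~ ., data = dat, kernel = "linear",
              ranges = list(cost = c(0.001, 0.01, 0.1, 1, 5, 10, 100)))
summary(tuned)
```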
Support vectors
The decision rule is based only on the support vectors => the SVC is robust to strange behavior far from the hyperplane!
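Continuing the hedged e1071 sketch (fit_wide comes from the block above, not the lecture), the fitted object exposes exactly which observations are the support vectors:

```r
fit_wide$index      # row indices of the support vectors in dat
nrow(fit_wide$SV)   # how many support vectors there are
fit_wide$coefs      # their nonzero coefficients; every other observation contributes
                    # nothing to the decision rule, so moving it doesn't change the fit
```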
A more general formulation
Goal: maximize the margin M
Subject to the following constraints:
- $\sum_{j=1}^{p} \beta_j^2 = 1$
- $y_i(\beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}) \ge M(1 - \epsilon_i)$ (distance from the i-th obs. to the hyperplane)
- $\epsilon_i \ge 0$, $\sum_{i=1}^{n} \epsilon_i \le C$
Fun fact: the solution to this maximization can be found using only the inner products of the observations
Dot product = measure of similarity
The dot product of 2 (same-length) vectors:
- Geometric: $\langle a, b \rangle = \|a\| \, \|b\| \cos\theta$, where $\theta$ is the angle between $a$ and $b$
- Algebraic: $\langle a, b \rangle = \sum_{j=1}^{p} a_j b_j$
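A tiny R check (the vectors are made up here) that the geometric and algebraic definitions agree:

```r
a <- c(1, 0)                                   # unit vector along the x-axis
b <- c(cos(pi / 3), sin(pi / 3))               # unit vector at a 60-degree angle to a

sum(a * b)                                     # algebraic: sum_j a_j * b_j      -> 0.5
sqrt(sum(a^2)) * sqrt(sum(b^2)) * cos(pi / 3)  # geometric: |a| |b| cos(theta)   -> 0.5
```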
A more general formulation
We can rewrite the linear support vector classifier as:
$f(x) = \beta_0 + \sum_{i=1}^{n} \alpha_i \langle x, x_i \rangle$, where the coefficients $\alpha_i$ are only nonzero at the support vectors
The dot product is just one way to measure similarity
In general, we call any such similarity measure a kernel*
*which is why SVMs and related methods are often referred to as "kernel methods"
Other kernels We’ve seen a linear kernel (i.e. the classification boundary is linear in the features)
Other kernels We could also have a polynomial kernel
Other kernels Or even a radial kernel
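A hedged e1071 sketch of fitting the three kernels named on the last few slides, reusing the toy dat data frame from the earlier sketch; the degree, gamma, and cost values are illustrative, not from the lecture.

```r
library(e1071)

svm_linear <- svm(y ~ ., data = dat, kernel = "linear",     cost = 1)
svm_poly   <- svm(y ~ ., data = dat, kernel = "polynomial", cost = 1, degree = 3)
svm_radial <- svm(y ~ ., data = dat, kernel = "radial",     cost = 1, gamma = 1)

plot(svm_radial, dat)   # nonlinear decision boundary in the original feature space
```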
Application: Heart dataset
Goal: predict whether an individual has heart disease on the basis of 13 variables: Age, Sex, Chol, etc.
297 subjects, randomly split into 207 training and 90 test observations
Problem: no good way to plot in 13 dimensions
Solution: ROC curve
ROC curve: LDA vs. SVM, linear kernel
ROC curve: SVM, linear vs. radial kernel
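A sketch of how ROC curves like these can be produced with e1071 and ROCR. The Heart data frame, its AHD response, the na.omit() cleanup, and the 207/90 split mirror the slide's description but are assumptions about the course files; the gamma and cost values are illustrative.

```r
library(e1071)
library(ROCR)

# Assumes Heart has already been read in, e.g. Heart <- read.csv("Heart.csv")
Heart <- na.omit(Heart)                 # 297 complete cases
Heart$AHD <- as.factor(Heart$AHD)       # response: heart disease yes/no

rocplot <- function(pred, truth, ...) {
  predob <- prediction(pred, truth)
  plot(performance(predob, "tpr", "fpr"), ...)
}

set.seed(1)
train <- sample(nrow(Heart), 207)       # 207 training, 90 test observations

svm_lin <- svm(AHD ~ ., data = Heart[train, ], kernel = "linear",
               cost = 1, decision.values = TRUE)
svm_rad <- svm(AHD ~ ., data = Heart[train, ], kernel = "radial",
               gamma = 0.01, cost = 1, decision.values = TRUE)

dv_lin <- attributes(predict(svm_lin, Heart[-train, ],
                             decision.values = TRUE))$decision.values
dv_rad <- attributes(predict(svm_rad, Heart[-train, ],
                             decision.values = TRUE))$decision.values

# If a curve comes out below the diagonal, negate its decision values
# (the sign depends on the factor level ordering)
rocplot(dv_lin, Heart$AHD[-train], main = "Test set ROC")
rocplot(dv_rad, Heart$AHD[-train], add = TRUE, col = "red")
```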
Lab: SVMs
To do today's lab in R: e1071, ROCR
To do today's lab in python:
Instructions and code: http://www.science.smith.edu/~jcrouser/SDS293/labs/lab15/
Full version can be found beginning on p. 359 of ISLR
Coming up
Multiclass support vector machines