Lecture 08: Soft-margin SVM
CS480/680: Intro to ML
Yao-Liang Yu, 10/11/18
Outline: Formulation, Dual, Optimization, Extension
Hard-margin SVM
Primal (hard constraints) and its dual.
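For reference, the hard-margin primal and its dual in standard notation (labels y_i in {-1, +1}, features x_i in R^d); this is the usual textbook form rather than necessarily the slide's exact symbols:

```latex
% Hard-margin SVM: primal and dual (standard form)
\begin{align*}
\text{Primal:}\quad & \min_{w,\,b}\ \tfrac{1}{2}\|w\|_2^2
  \quad\text{s.t.}\quad y_i(w^\top x_i + b) \ge 1,\ \ i=1,\dots,n\\[4pt]
\text{Dual:}\quad & \max_{\alpha \ge 0}\ \sum_{i=1}^n \alpha_i
  - \tfrac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_j y_i y_j\, x_i^\top x_j
  \quad\text{s.t.}\quad \sum_{i=1}^n \alpha_i y_i = 0
\end{align*}
```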
What if the data is inseparable?
Soft-margin (Cortes & Vapnik’95)
Start from the primal with hard constraints: minimizing ||w|| is the same as maximizing the margin (||w|| is proportional to 1/margin). Relax to soft constraints: a hyper-parameter C weights the slack term, which upper-bounds the training error, and the constraints involve the raw prediction w^T x + b (no sign taken).
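The soft-margin primal in standard notation (one slack ξ_i per training point, hyper-parameter C > 0; the slide's own symbols may differ):

```latex
% Soft-margin SVM primal (standard form)
\begin{align*}
\min_{w,\,b,\,\xi}\ & \tfrac{1}{2}\|w\|_2^2 + C\sum_{i=1}^n \xi_i\\
\text{s.t.}\ & y_i(w^\top x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0,\quad i=1,\dots,n
\end{align*}
```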
Zero-one loss
Find a prediction rule f so that, on an unseen random X, the prediction sign(f(X)) has a small chance of differing from the true label Y.
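In symbols, one standard way to write this goal:

```latex
% Zero-one loss on a labeled point (x, y) and the induced risk
\ell_{01}\big(y\,f(x)\big) = \mathbf{1}\big[y\,f(x) \le 0\big],
\qquad
\mathrm{risk}(f) = \mathbb{E}\,\mathbf{1}\big[Y f(X) \le 0\big]
 = \Pr\big(\operatorname{sign} f(X) \ne Y\big)
 \ \ \text{(counting } f(X)=0 \text{ as an error).}
```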
The hinge loss
An upper bound on the zero-one loss. [Figure: the zero-one, hinge, squared hinge, logistic, and exponential losses plotted against the margin y f(x); a correctly classified point with margin below 1 still suffers hinge loss.]
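The losses in question, written as functions of the margin t = y f(x) (standard definitions):

```latex
% Margin losses as functions of t = y f(x)
\begin{align*}
\ell_{01}(t) &= \mathbf{1}[t \le 0], &
\ell_{\text{hinge}}(t) &= \max(0,\,1-t), &
\ell_{\text{sq-hinge}}(t) &= \max(0,\,1-t)^2,\\
\ell_{\text{logistic}}(t) &= \log\big(1+e^{-t}\big), &
\ell_{\text{exp}}(t) &= e^{-t}.
\end{align*}
% The hinge loss is positive for 0 < t < 1: a correctly classified point
% inside the margin still suffers loss.
```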
Classification-calibration
We want to minimize the zero-one loss, but end up minimizing some other (surrogate) loss.
Theorem (Bartlett, Jordan, McAuliffe’06). A convex margin loss ℓ is classification-calibrated iff ℓ is differentiable at 0 and ℓ’(0) < 0.
Classification calibration: the minimizer of the surrogate risk has the same sign as P(Y=1|X=x) − 1/2, i.e., it agrees with the Bayes rule.
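As a quick check against the theorem (a worked example, not from the slide): each convex surrogate above is differentiable at 0 with a negative derivative, hence classification-calibrated; the zero-one loss itself is not convex, so the theorem does not apply to it.

```latex
% Derivative at 0 of each convex surrogate, as a function of t = y f(x)
\ell'_{\text{hinge}}(0) = -1, \quad
\ell'_{\text{sq-hinge}}(0) = -2, \quad
\ell'_{\text{logistic}}(0) = -\tfrac{1}{2}, \quad
\ell'_{\text{exp}}(0) = -1
\quad\Longrightarrow\quad \text{all four are classification-calibrated.}
```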
Outline: Formulation, Dual, Optimization, Extension
Important optimization trick
A pointwise maximum in the objective can be replaced by an auxiliary variable and constraints; the minimization then runs jointly over x and t.
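In display form (a generic restatement of the trick):

```latex
% Replacing a pointwise max by an auxiliary variable t
\min_{x}\ \max\{f(x),\,g(x)\}
\;=\;
\min_{x,\,t}\ t
\quad\text{s.t.}\quad t \ge f(x),\ \ t \ge g(x),
```

where the problem on the right is a joint minimization over x and t.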
Slack for a “wrong” prediction
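Applying the trick to the hinge loss yields the slack variables; at the optimum each ξ_i equals the hinge loss of point i, so its value records how “wrong” the prediction is (a standard reading, stated here for completeness):

```latex
% Slack = hinge loss at the optimum
\xi_i = \max\big(0,\ 1 - y_i(w^\top x_i + b)\big):
\qquad
\xi_i = 0 \ \text{(correct, outside the margin)},\quad
0 < \xi_i \le 1 \ \text{(correct, inside the margin)},\quad
\xi_i > 1 \ \text{(misclassified)}.
```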
Lagrangian
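The Lagrangian of the soft-margin primal, with multipliers α_i ≥ 0 for the margin constraints and β_i ≥ 0 for ξ_i ≥ 0 (standard notation; the slide's own symbols may differ):

```latex
% Lagrangian of the soft-margin primal
L(w, b, \xi, \alpha, \beta)
= \tfrac{1}{2}\|w\|_2^2 + C\sum_{i=1}^n \xi_i
+ \sum_{i=1}^n \alpha_i\big(1 - \xi_i - y_i(w^\top x_i + b)\big)
- \sum_{i=1}^n \beta_i \xi_i,
\qquad \alpha_i \ge 0,\ \beta_i \ge 0.
```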
Dual problem: only dot products are needed!
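Minimizing the Lagrangian over (w, b, ξ) and maximizing over (α, β) gives the dual, in which the data enter only through the dot products x_i^T x_j:

```latex
% Soft-margin SVM dual
\max_{\alpha}\ \sum_{i=1}^n \alpha_i
- \tfrac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_j y_i y_j\, x_i^\top x_j
\quad\text{s.t.}\quad 0 \le \alpha_i \le C,\ \ \sum_{i=1}^n \alpha_i y_i = 0.
```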
The effect of C
The primal is a problem over (w, b) ∈ ℝ^d × ℝ (plus the n slacks); the dual is a problem over α ∈ ℝ^n. What happens as C → 0? What happens as C → ∞ (the hard-margin limit)?
Karush-Kuhn-Tucker conditions
Primal feasibility: constraints on w, b and ξ
Dual feasibility: constraints on α and β
Complementary slackness
Stationarity
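Written out for the soft-margin problem in the notation used above:

```latex
% KKT conditions for the soft-margin SVM
\begin{align*}
\text{primal feasibility:}\ & y_i(w^\top x_i + b) \ge 1 - \xi_i,\quad \xi_i \ge 0\\
\text{dual feasibility:}\ & \alpha_i \ge 0,\quad \beta_i \ge 0\\
\text{complementary slackness:}\ & \alpha_i\big(1 - \xi_i - y_i(w^\top x_i + b)\big) = 0,\quad \beta_i\,\xi_i = 0\\
\text{stationarity:}\ & w = \sum_{i=1}^n \alpha_i y_i x_i,\quad \sum_{i=1}^n \alpha_i y_i = 0,\quad \alpha_i + \beta_i = C
\end{align*}
```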
Parsing the equations
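One standard reading of these conditions: stationarity says w is a weighted combination of the training points, and α_i + β_i = C couples the two multipliers; combined with complementary slackness this forces ξ_i = 0 whenever α_i < C.

```latex
% Stationarity + complementary slackness, parsed
w = \sum_{i:\ \alpha_i > 0} \alpha_i y_i x_i,
\qquad
\alpha_i + \beta_i = C
\ \Longrightarrow\
\big(\alpha_i < C \Rightarrow \beta_i > 0 \Rightarrow \xi_i = 0\big).
```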
Support Vectors
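Combining the KKT conditions case by case (a standard consequence, not a verbatim copy of the slide): the support vectors are exactly the points with α_i > 0, and only they enter w.

```latex
% Three regimes for a training point, from the KKT conditions
\begin{align*}
\alpha_i = 0 &: \ y_i(w^\top x_i + b) \ge 1 && \text{(outside the margin; not a support vector)}\\
0 < \alpha_i < C &: \ y_i(w^\top x_i + b) = 1 && \text{(exactly on the margin)}\\
\alpha_i = C &: \ y_i(w^\top x_i + b) = 1 - \xi_i \le 1 && \text{(inside the margin, or misclassified if } \xi_i > 1)
\end{align*}
```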
Recover b
Take any i such that 0 < α_i < C. Then x_i is on the margin hyperplane: y_i(w^T x_i + b) = 1, so b = y_i − w^T x_i.
How to recover ξ? What if there is no such i?
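A minimal numpy sketch of the recovery, assuming a dual solution alpha is already available (the names recover_primal, X, y, alpha and the tolerance tol are illustrative, not from the slides). If no index with 0 < α_i < C exists, b is only determined up to an interval, and one would pick any value consistent with all constraints.

```python
import numpy as np

def recover_primal(X, y, alpha, C, tol=1e-8):
    """Recover (w, b, xi) from a dual solution alpha of the soft-margin SVM.
    X: (n, d) features, y: (n,) labels in {-1, +1}, 0 <= alpha_i <= C."""
    w = (alpha * y) @ X                          # stationarity: w = sum_i alpha_i y_i x_i
    free = (alpha > tol) & (alpha < C - tol)     # support vectors with 0 < alpha_i < C
    if free.any():
        # each free SV lies on the margin: y_i (w@x_i + b) = 1  =>  b = y_i - w@x_i
        b = float(np.mean(y[free] - X[free] @ w))
    else:
        b = 0.0                                  # degenerate case; see the question above
    xi = np.maximum(0.0, 1.0 - y * (X @ w + b))  # slack = hinge loss at the solution
    return w, b, xi
```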
More examples
Outline: Formulation, Dual, Optimization, Extension
Gradient Descent
Step size (learning rate): constant if the objective L is smooth, diminishing otherwise.
Each step uses the (generalized) gradient and costs O(nd)!
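A minimal numpy sketch of (sub)gradient descent on the primal soft-margin objective ½‖w‖² + C Σ_i max(0, 1 − y_i(w·x_i + b)); the function name and hyper-parameter defaults are illustrative, not the lecture's exact settings.

```python
import numpy as np

def svm_subgradient_descent(X, y, C=1.0, lr=0.01, epochs=200):
    """Subgradient descent on 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w@x_i + b)).
    X: (n, d) array, y: (n,) array with labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for t in range(1, epochs + 1):
        margins = y * (X @ w + b)
        viol = margins < 1                                   # points that suffer hinge loss
        grad_w = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        step = lr / np.sqrt(t)                               # diminishing step: hinge is non-smooth
        w -= step * grad_w
        b -= step * grad_b
    return w, b
```

Each iteration touches every training point, which is where the O(nd) cost per step comes from.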
Stochastic Gradient Descent (SGD)
The full gradient is an average over n samples; a single random sample suffices, reducing the per-step cost to O(d).
Use a diminishing step size, e.g., 1/sqrt(t) or 1/t.
Refinements: averaging, momentum, variance reduction, etc.
In practice: sample without replacement, i.e., cycle through the data, permuting in each pass.
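A Pegasos-style sketch of SGD for the linear SVM (one random sample per step, 1/t step size). It uses the (λ/2)‖w‖² plus average-hinge parameterization and drops the bias term, so it illustrates the idea rather than reproducing the lecture's exact algorithm; the names svm_sgd, lam, epochs are illustrative.

```python
import numpy as np

def svm_sgd(X, y, lam=0.01, epochs=5, rng=None):
    """Pegasos-style SGD on (lam/2)*||w||^2 + (1/n)*sum_i max(0, 1 - y_i * w@x_i).
    One random sample per step => O(d) per update. Labels y_i in {-1, +1}."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):            # permute in each pass, sample w/o replacement
            t += 1
            eta = 1.0 / (lam * t)               # diminishing 1/t step size
            grad = lam * w
            if y[i] * (X[i] @ w) < 1:           # this sample suffers hinge loss
                grad = grad - y[i] * X[i]
            w -= eta * grad
    return w
```

Iterate averaging, momentum, or variance reduction, as listed above, are drop-in refinements of this loop.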
The derivative
What about the zero-one loss? All the other losses are differentiable (and the hinge loss has a subgradient at its kink). What about the perceptron?
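One way to make the questions concrete (the perceptron loss max(0, −t) is my notation, not necessarily the slide's): the hinge loss has a subgradient everywhere, the zero-one loss has zero derivative almost everywhere (so gradient methods get no signal), and the perceptron loss behaves like a hinge with the kink moved to 0.

```latex
% Subdifferentials in t = y f(x): hinge loss and perceptron loss
\partial_t \max(0,\,1-t) =
\begin{cases} \{-1\} & t < 1,\\ [-1,\,0] & t = 1,\\ \{0\} & t > 1, \end{cases}
\qquad
\partial_t \max(0,\,-t) =
\begin{cases} \{-1\} & t < 0,\\ [-1,\,0] & t = 0,\\ \{0\} & t > 0. \end{cases}
```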
Solving the dual
A gradient step on the dual costs O(n²), and a constant step size η_t = η can be chosen.
Faster algorithms exist: e.g., choose a pair α_p and α_q and derive a closed-form update (the idea behind SMO).
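A sketch of projected gradient ascent on the dual, with the bias term dropped so the only constraints are the box 0 ≤ α_i ≤ C (which makes the projection a simple clip); a constant step size works because the dual objective is smooth. The function name and defaults are illustrative.

```python
import numpy as np

def svm_dual_projected_gradient(X, y, C=1.0, eta=None, iters=1000):
    """Projected gradient ascent on the SVM dual (bias omitted for simplicity):
    maximize sum(alpha) - 0.5 * alpha^T Q alpha with Q_ij = y_i y_j x_i.x_j,
    subject to 0 <= alpha_i <= C."""
    n = X.shape[0]
    Yx = y[:, None] * X
    Q = Yx @ Yx.T                               # n x n matrix of y_i y_j x_i.x_j
    if eta is None:
        eta = 1.0 / np.linalg.norm(Q, 2)        # constant step; dual gradient is Lipschitz
    alpha = np.zeros(n)
    for _ in range(iters):
        grad = 1.0 - Q @ alpha                  # O(n^2) per iteration
        alpha = np.clip(alpha + eta * grad, 0.0, C)
    w = (alpha * y) @ X                         # recover w = sum_i alpha_i y_i x_i
    return alpha, w
```

Forming Q α is the O(n²) step; pairwise closed-form updates avoid touching all of α at once.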
A little history on optimization
Gradient descent first mentioned in (Cauchy, 1847).
First rigorous convergence proof in (Curry, 1944).
SGD proposed and analyzed by (Robbins & Monro, 1951).
Herbert Robbins (1915 – 2001)
Outline: Formulation, Dual, Optimization, Extension
Multiclass (Crammer & Singer’01)
Separate the prediction for the correct class from the predictions for the wrong classes by a “safety margin”.
The soft-margin version is similar. Many other variants exist, and the calibration theory is more involved.
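The Crammer & Singer formulation in a common form (one weight vector w_k per class; soft-margin version with slacks shown; the slide's exact notation may differ):

```latex
% Multiclass SVM (Crammer & Singer), soft-margin form
\begin{align*}
\min_{\{w_k\},\,\xi}\ & \tfrac{1}{2}\sum_{k} \|w_k\|_2^2 + C\sum_{i=1}^n \xi_i\\
\text{s.t.}\ & w_{y_i}^\top x_i - w_k^\top x_i \ge 1 - \xi_i
\quad \text{for all } k \ne y_i,\qquad \xi_i \ge 0,
\end{align*}
```

so the score of the correct class beats every wrong class by a safety margin of 1 (minus slack); prediction is argmax_k w_k^T x.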
Regression (Drucker et al.’97)
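Support vector regression in its standard ε-insensitive form (the slide's exact notation may differ):

```latex
% epsilon-insensitive support vector regression
\begin{align*}
\min_{w,\,b,\,\xi,\,\xi^*}\ & \tfrac{1}{2}\|w\|_2^2 + C\sum_{i=1}^n (\xi_i + \xi_i^*)\\
\text{s.t.}\ & y_i - (w^\top x_i + b) \le \epsilon + \xi_i,\\
& (w^\top x_i + b) - y_i \le \epsilon + \xi_i^*,\\
& \xi_i \ge 0,\ \ \xi_i^* \ge 0.
\end{align*}
```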
Large-scale training (You, Demmel, et al.’17)
Randomly partition the training data evenly into p nodes.
Train an SVM independently on each node.
Compute a center on each node.
For a test sample: find the nearest center (node / SVM) and predict using the corresponding node / SVM.
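A minimal sketch of the partition-and-route scheme described above; the per-node trainer (scikit-learn's LinearSVC) and the use of each partition's mean as its “center” are assumptions for illustration, not details from the slide.

```python
import numpy as np
from sklearn.svm import LinearSVC  # any per-node SVM trainer would do

def train_partitioned(X, y, p, rng=None):
    """Split the data into p random parts, train one linear SVM per part,
    and remember each part's center (here: the mean) for routing."""
    rng = np.random.default_rng() if rng is None else rng
    parts = np.array_split(rng.permutation(len(X)), p)
    models, centers = [], []
    for idx in parts:
        models.append(LinearSVC(C=1.0).fit(X[idx], y[idx]))
        centers.append(X[idx].mean(axis=0))
    return models, np.stack(centers)

def predict_partitioned(models, centers, X_test):
    """Route each test point to the node with the nearest center and use
    that node's SVM for the prediction."""
    d2 = ((X_test[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return np.array([models[k].predict(x[None, :])[0]
                     for k, x in zip(nearest, X_test)])
```

Routing by nearest center keeps prediction cheap: only one of the p models is evaluated per test point.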
Questions?