SUPPORT VECTOR MACHINE
2009/3/24
Support Vector Machine
A supervised learning method
Also known as the maximum-margin classifier
Goal: find the maximum-margin separating hyperplane
SVM – hard margin

Separating hyperplane: <w, x> - θ = 0

Maximize the margin:

    max   2 / ∥w∥
    w, θ
    s.t.  yn(<w, xn> - θ) ≧ 1,  n = 1, …, N

Equivalently, as a minimization with a quadratic objective:

    argmin  (1/2) <w, w>
     w, θ
    s.t.  yn(<w, xn> - θ) ≧ 1,  n = 1, …, N
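The hard-margin constraints are easy to check numerically. A minimal NumPy sketch, using a hypothetical toy set chosen symmetric in (x)1 so that the max-margin hyperplane is obviously (x)1 = 0, i.e. w = (1, 0), θ = 0:

```python
import numpy as np

# Hypothetical toy set, symmetric in (x)1, so the max-margin
# hyperplane is (x)1 = 0: w = (1, 0), theta = 0.
X = np.array([[1.0, 0.5], [2.0, 1.0], [-1.0, 0.5], [-2.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.array([1.0, 0.0])
theta = 0.0

# Hard-margin constraints: y_n (<w, x_n> - theta) >= 1 for every n
margins = y * (X @ w - theta)
print(margins)                 # [1. 2. 1. 2.] -- equality for points on the margin
print(2 / np.linalg.norm(w))   # geometric margin width 2/||w|| = 2.0
```

The two points at distance 1 from the hyperplane satisfy the constraint with equality; they are the support vectors.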
Quadratic programming

Generic QP problem, solved by a routine such as V* = quadprog(A, b, R, q):

    argmin  (1/2) Σi Σj aij vi vj + Σi bi vi
      v
    s.t.  Σi rki vi ≧ qk  for every constraint row k

Adapt the SVM problem for quadratic programming: let V = [ θ, w1, w2, …, wD ],
find A, b, R, q, and put them into the QP solver.

Objective:    (1/2) <w, w> = (1/2) Σd=1..D wd²
Constraints:  (-yn) θ + Σd=1..D yn (xn)d wd ≧ 1,  n = 1, …, N
Adaptation

With V = [ θ, w1, w2, …, wD ] (indexed v0, v1, v2, …, vD):

A, size (1+D)×(1+D):  a00 = 0,  a0j = 0,  ai0 = 0;  for i, j ≠ 0: aij = 1 if i = j, else 0
b, size (1+D)×1:      b0 = 0;  bi = 0 for i ≠ 0
R, size N×(1+D):      rn0 = -yn;  rnd = yn (xn)d for d > 0
q, size N×1:          qn = 1
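A sketch of this construction on a small hypothetical toy set (symmetric, so the known optimum is w = (1, 0), θ = 0). quadprog is MATLAB's QP routine; SciPy's SLSQP solver is used here as a stand-in, and any QP solver would do:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy set (symmetric, so the known optimum is w = (1, 0), theta = 0).
X = np.array([[1.0, 0.5], [2.0, 1.0], [-1.0, 0.5], [-2.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
N, D = X.shape

# QP data with v = [theta, w_1, ..., w_D]
A = np.zeros((1 + D, 1 + D))
A[1:, 1:] = np.eye(D)            # a_ij = 1 if i == j (i, j != 0), else 0
b = np.zeros(1 + D)              # no linear term in the objective

R = np.zeros((N, 1 + D))
R[:, 0] = -y                     # r_n0 = -y_n   (coefficient of theta)
R[:, 1:] = y[:, None] * X        # r_nd = y_n (x_n)_d for d > 0
q = np.ones(N)                   # q_n = 1

# SLSQP as a stand-in for quadprog(A, b, R, q)
res = minimize(lambda v: 0.5 * v @ A @ v + b @ v,
               np.zeros(1 + D),
               constraints={"type": "ineq", "fun": lambda v: R @ v - q})
theta, w = res.x[0], res.x[1:]
print(w, theta)                  # w ~ (1, 0), theta ~ 0 for this symmetric set
```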
SVM – soft margin

Allow some training errors, with a tradeoff parameter c:

Large c : narrower margin, errors penalized heavily
Small c : wider margin, errors tolerated

    argmin  (1/2) <w, w> + c Σn ξn
    w, θ, ξ
    s.t.  yn(<w, xn> - θ) ≧ 1 - ξn
          ξn ≧ 0
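For a fixed hyperplane the optimal slack is the hinge loss, ξn = max(0, 1 - yn(<w, xn> - θ)), which makes the role of c easy to see numerically. A minimal sketch on hypothetical data containing one deliberately mislabeled point:

```python
import numpy as np

# Hypothetical toy set with one deliberately mislabeled point (index 2).
X = np.array([[1.0, 0.5], [2.0, 1.0], [-0.5, 0.0], [-1.0, 0.5], [-2.0, 1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0])

w, theta = np.array([1.0, 0.0]), 0.0

# Optimal slack for a fixed hyperplane is the hinge loss.
xi = np.maximum(0.0, 1 - y * (X @ w - theta))
print(xi)                        # only the mislabeled point needs slack: 1.5

for c in (0.1, 10.0):
    # Large c makes the same violation far more expensive.
    print(c, 0.5 * (w @ w) + c * xi.sum())
```

With c = 0.1 the objective is 0.65; with c = 10 the same violation costs 15.5, so a large-c solution would rather shrink the margin than tolerate the error.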
Adaptation

With V = [ θ, w1, w2, …, wD, ξ1, ξ2, …, ξN ]:

A : (1+D+N)×(1+D+N)
b : (1+D+N)×1
R : (2N)×(1+D+N)
q : (2N)×1
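A sketch of the soft-margin matrix construction on hypothetical toy data; the final check confirms that a candidate v (a fixed w and θ padded with hinge slacks) satisfies every row of R v ≧ q:

```python
import numpy as np

# Hypothetical toy set with one mislabeled point; v = [theta, w_1..w_D, xi_1..xi_N].
X = np.array([[1.0, 0.5], [2.0, 1.0], [-0.5, 0.0], [-1.0, 0.5], [-2.0, 1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0])
N, D = X.shape
c = 10.0

A = np.zeros((1 + D + N, 1 + D + N))
A[1:1 + D, 1:1 + D] = np.eye(D)      # quadratic term only on w
b = np.zeros(1 + D + N)
b[1 + D:] = c                        # linear term: c * xi_n

R = np.zeros((2 * N, 1 + D + N))
R[:N, 0] = -y                        # margin rows:   -y_n theta ...
R[:N, 1:1 + D] = y[:, None] * X      #                + y_n (x_n)_d w_d ...
R[:N, 1 + D:] = np.eye(N)            #                + xi_n            >= 1
R[N:, 1 + D:] = np.eye(N)            # positivity rows: xi_n            >= 0
q = np.concatenate([np.ones(N), np.zeros(N)])

print(A.shape, R.shape, q.shape)     # (1+D+N)x(1+D+N), 2N x (1+D+N), 2N

# Feasibility check: w = (1, 0), theta = 0, slacks set to the hinge loss.
w0 = np.array([1.0, 0.0])
v = np.concatenate([[0.0], w0, np.maximum(0.0, 1 - y * (X @ w0))])
print((R @ v >= q - 1e-12).all())    # True
```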
Primal form and Dual form

Primal form:

    argmin  (1/2) <w, w> + c Σn ξn
    w, θ, ξ
    s.t.  yn(<w, xn> - θ) ≧ 1 - ξn,  ξn ≧ 0

Variables: 1+D+N    Constraints: 2N

Dual form:

    argmin  (1/2) Σn Σm αn yn αm ym <xn, xm> - Σn αn
      α
    s.t.  0 ≦ αn ≦ C
          Σn yn αn = 0

Variables: N    Constraints: 2N+1
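The primal-dual relationship can be checked by hand on a tiny example: at the optimum, the dual minimum equals the negative of the primal optimum. The α values below are hand-derived from the KKT conditions for a hypothetical symmetric toy set, not produced by a solver:

```python
import numpy as np

# Hypothetical symmetric toy set; alpha hand-derived from the KKT
# conditions (the two margin points get alpha = 0.5, the rest 0).
X = np.array([[1.0, 0.5], [2.0, 1.0], [-1.0, 0.5], [-2.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.array([0.5, 0.0, 0.5, 0.0])

K = X @ X.T                                   # Gram matrix <x_n, x_m>
dual = 0.5 * (alpha * y) @ K @ (alpha * y) - alpha.sum()

w = (alpha * y) @ X                           # w* = sum_n alpha_n y_n x_n
primal = 0.5 * (w @ w)

print(np.sum(y * alpha))                      # equality constraint holds: 0.0
print(w, dual, primal)                        # w = (1, 0); dual = -0.5 = -primal
```

Note the dual touches the data only through the inner products <xn, xm>, which is what makes the kernel trick (next slides) possible.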
Dual form SVM

Find the optimal α*, then use α* to solve for w* and θ*.

αn = 0      : xn is correctly classified (or sits on the margin); not a support vector
0 < αn < C  : xn sits exactly on the margin — a free support vector (free SV)
αn = C      : xn violates the margin, or sits on it — a bounded support vector

Support vectors are the points with αn > 0.
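Recovering w* and θ* from α*: w* = Σn αn yn xn, and any free support vector (0 < αn < C) lies exactly on the margin, so yn(<w*, xn> - θ*) = 1 gives θ* = <w*, xn> - yn. A sketch with hand-derived α values for a hypothetical toy set:

```python
import numpy as np

# Hypothetical toy set with hand-derived dual solution alpha*.
X = np.array([[1.0, 0.5], [2.0, 1.0], [-1.0, 0.5], [-2.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.array([0.5, 0.0, 0.5, 0.0])
C = 10.0

w = (alpha * y) @ X                           # w* = sum_n alpha_n y_n x_n

# Any free SV (0 < alpha_n < C) sits exactly on the margin:
# y_n (<w, x_n> - theta) = 1  =>  theta = <w, x_n> - y_n
n = np.flatnonzero((alpha > 0) & (alpha < C))[0]
theta = X[n] @ w - y[n]

print(w, theta)                               # (1, 0) and 0.0
print(y * (X @ w - theta))                    # support vectors sit exactly at 1
```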
Nonlinear SVM

Nonlinear mapping X → Φ(X), e.g. for x = {(x)1, (x)2} ∈ R²:

    Φ(x) = {1, (x)1, (x)2, (x)1², (x)2², (x)1(x)2} ∈ R⁶

The dual touches the data only through inner products, so use the kernel trick:

    argmin  (1/2) Σn Σm αn yn αm ym <Φ(xn), Φ(xm)> - Σn αn
      α
    s.t.  0 ≦ αn ≦ C
          Σn yn αn = 0

Replace <Φ(xn), Φ(xm)> with the kernel K(xn, xm) = (1 + <xn, xm>)²,
computed without ever forming Φ explicitly.
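The identity behind this kernel is easy to verify: (1 + <x, z>)² = <Φ(x), Φ(z)> once Φ carries √2 scalings on the cross terms (an addition relative to the slide's unscaled map, which spans the same feature space):

```python
import numpy as np

# Explicit degree-2 feature map; the sqrt(2) scalings (added here, not on the
# slide) make <Phi(x), Phi(z)> equal (1 + <x, z>)^2 exactly.
def phi(x):
    return np.array([1.0,
                     np.sqrt(2) * x[0], np.sqrt(2) * x[1],
                     x[0] ** 2, x[1] ** 2,
                     np.sqrt(2) * x[0] * x[1]])

def kernel(x, z):
    return (1 + x @ z) ** 2

rng = np.random.default_rng(0)
for _ in range(3):
    x, z = rng.normal(size=2), rng.normal(size=2)
    # Same number, computed in R^2 instead of R^6.
    print(np.isclose(kernel(x, z), phi(x) @ phi(z)))   # True each time
```

The kernel evaluates a 6-dimensional inner product with a 2-dimensional dot product and one squaring; for higher-degree kernels the savings grow combinatorially.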