Slide 1: Direct Convex Relaxations of Sparse SVM
Antoni B. Chan, Nuno Vasconcelos, and Gert R. G. Lanckriet
The 24th International Conference on Machine Learning (ICML 2007)
Presented by Shuiwang Ji
Slide 2: Outline
Introduction; Quadratically Constrained Quadratic Programming (QCQP) formulation; Semidefinite Programming (SDP) formulation; Experiments.
Slide 3: Sparsity of SVM
For input features x_1, …, x_d, the SVM solution is sparse with respect to data points (only the support vectors matter) but not sparse with respect to features.
Slide 4: Motivations & Related Work
Features may be noisy or redundant, and sparsity enhances interpretability. Related work: Sparse PCA (Zou et al.; d'Aspremont et al.) and Sparse Eigen Methods by D.C. Programming (ICML 2007).
Slide 5: An Example
Slide 6: Vector Norm
The 0-norm $\|x\|_0$ is the number of nonzero entries in $x$.
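For reference, the three norms involved are (standard definitions, not recovered from the slide image):

    $\|x\|_0 = \#\{i : x_i \ne 0\}, \qquad \|x\|_1 = \sum_i |x_i|, \qquad \|x\|_2 = \Big(\sum_i x_i^2\Big)^{1/2}.$

The 1-norm is the convex envelope of the 0-norm on the box $\{x : \|x\|_\infty \le 1\}$, which is why it serves as the standard convex surrogate for sparsity.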
Slide 7: C-SVM Primal and Dual (2-norm)
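The slide's equations were lost in extraction; the standard C-SVM primal and dual (2-norm regularizer) that it presumably displayed are:

    $\min_{w,b,\xi}\ \tfrac{1}{2}\|w\|_2^2 + C\sum_{i=1}^{n}\xi_i \quad \text{s.t.}\quad y_i(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,$

    $\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i - \tfrac{1}{2}\sum_{i,j}\alpha_i\alpha_j y_i y_j\, x_i^\top x_j \quad \text{s.t.}\quad 0 \le \alpha_i \le C,\ \ \sum_{i=1}^{n}\alpha_i y_i = 0.$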
Slide 8: LP-SVM Primal and Dual (1-norm)
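Likewise, the standard LP-SVM (1-norm) primal is

    $\min_{w,b,\xi}\ \|w\|_1 + C\sum_{i=1}^{n}\xi_i \quad \text{s.t.}\quad y_i(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0.$

Splitting $w = w^+ - w^-$ with $w^\pm \ge 0$ turns this into a linear program, and its dual is likewise an LP.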
Slide 9: Convex QCQP Relaxation
Slide 10: Interpretations of QCQP-SSVM
Problems 6 and 7 are equivalent. QCQP-SSVM is a combination of C-SVM and LP-SVM: the 1-norm encourages sparsity, and the 2-norm encourages a large margin (see the sketch below).
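Since the slides omit the formulas, here is a minimal CVXPY sketch of this combination, assuming a hinge-loss SVM with both a 2-norm and a 1-norm penalty; the weight lam and the toy data are hypothetical, and this illustrates the interpretation above rather than reproducing the paper's exact QCQP-SSVM (Problems 6 and 7):

    import numpy as np
    import cvxpy as cp

    # Toy data: n points in d dimensions, labels in {-1, +1}.
    rng = np.random.default_rng(0)
    n, d = 40, 10
    X = rng.standard_normal((n, d))
    y = np.where(X[:, 0] + 0.1 * rng.standard_normal(n) > 0, 1.0, -1.0)

    C, lam = 1.0, 0.5                  # hypothetical trade-off parameters
    w = cp.Variable(d)
    b = cp.Variable()
    xi = cp.Variable(n, nonneg=True)   # hinge-loss slacks

    # 2-norm term -> large margin; 1-norm term -> sparse w (feature selection).
    obj = cp.Minimize(0.5 * cp.sum_squares(w) + lam * cp.norm1(w) + C * cp.sum(xi))
    cons = [cp.multiply(y, X @ w + b) >= 1 - xi]
    cp.Problem(obj, cons).solve()

    print("nonzero weights:", int(np.sum(np.abs(w.value) > 1e-4)))

Increasing lam drives more coordinates of w exactly to zero, which is the feature-selection behavior the slide attributes to the 1-norm term.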
Slide 11: QCQP-SSVM Dual
Slide 12: QCQP-SSVM
QCQP-SSVM automatically learns an adaptive soft-threshold on the original SVM hyperplane.
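For reference, the standard (fixed) soft-threshold operator is

    $\operatorname{soft}_t(w_j) = \operatorname{sign}(w_j)\,\max(|w_j| - t,\ 0);$

the slide's point is that QCQP-SSVM learns the threshold adaptively from the data rather than fixing $t$ in advance.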
Slide 13: SDP Relaxation
Slide 14: SDP-SSVM Dual
The optimal weighting matrix increases the influence of the relevant features while suppressing the less relevant ones. SDP-SSVM learns a weighting on the inner product such that the hyperplane in the feature space is sparse.
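Schematically, a weighting matrix $W \succeq 0$ replaces the inner product by $\langle x, z\rangle_W = x^\top W z$, so the dual kernel becomes $k(x_i, x_j) = x_i^\top W x_j$; a $W$ that concentrates its mass on a few coordinates yields an effectively sparse hyperplane. This only restates the idea named on the slide; the exact SDP-SSVM constraints on $W$ are not shown here.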
Slide 15: Results on Synthetic Data
Slide 16: Results on 15 UCI Data Sets