1
Classification via Mathematical Programming Based Support Vector Machines
Glenn M. Fung
November 26, 2002
Computer Sciences Dept., University of Wisconsin - Madison
2
Outline of Talk
(Standard) support vector machines (SVM): classify by halfspaces
Proximal support vector machines (PSVM): classify by proximity to planes; numerical experiments
Incremental PSVM classifiers: a synthetic dataset of 1 billion points in 10-dimensional input space classified in less than 2 hours and 26 minutes
Knowledge-based linear SVMs: incorporating knowledge sets into a classifier
3
Support Vector Machines Maximizing the Margin between Bounding Planes
[Figure: the separating plane and the two parallel bounding planes x'w = γ + 1 and x'w = γ - 1; the support vectors of the A+ and A- classes lie on the bounding planes, and the margin is the distance between them.]
4
Standard Support Vector Machine Algebra of 2-Category Linearly Separable Case
Given m points in n-dimensional space, represented by an m-by-n matrix A.
Membership of each point in class +1 or -1 is specified by an m-by-m diagonal matrix D with +1 and -1 entries.
Separate the classes by two bounding planes, x'w = γ + 1 and x'w = γ - 1, so that points with D_ii = +1 satisfy A_i w ≥ γ + 1 and points with D_ii = -1 satisfy A_i w ≤ γ - 1.
More succinctly: D(Aw - eγ) ≥ e, where e is a vector of ones.
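As a quick illustration, here is a minimal MATLAB sketch (hypothetical toy data of my own, not from the talk) that builds D from a label vector and checks the condition D(Aw - eγ) ≥ e for a candidate plane:

% Toy illustration (hypothetical data): encode labels in D and test the
% bounding-plane condition D*(A*w - e*gamma) >= e for a candidate (w, gamma).
A = [1 2; 2 3; 6 5; 7 7];      % 4 points in 2-dimensional space
d = [1; 1; -1; -1];            % class labels (+1 / -1)
D = diag(d);                   % m-by-m diagonal label matrix
e = ones(size(A,1), 1);
w = [-1; -1]; gamma = -8;      % a candidate plane x'*w = gamma
separated = all(D*(A*w - e*gamma) >= e)   % returns 1: both bounding planes hold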
5
Standard Support Vector Machine Formulation
Solve the quadratic program, for some ν > 0:
min_{w,γ,y}  ν e'y + (1/2) w'w
s.t.  D(Aw - eγ) + y ≥ e,  y ≥ 0,   (QP)
where D_ii = +1 or -1 denotes class membership. The margin 2/||w|| between the bounding planes x'w = γ ± 1 is maximized by minimizing (1/2) w'w.
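A minimal MATLAB sketch of solving this QP directly with quadprog (Optimization Toolbox); the toy A and d from the sketch above are reused, and the variable ordering z = [w; gamma; y] is my own assumption, not the talk's:

% Hypothetical sketch: standard SVM QP via quadprog, with z = [w; gamma; y].
nu = 1; [m, n] = size(A); e = ones(m,1); D = diag(d);
Hq = blkdiag(eye(n), 0, zeros(m));      % quadratic term: (1/2)*w'*w
f  = [zeros(n+1,1); nu*e];              % linear term: nu*e'*y
Ai = [-D*A, D*e, -eye(m)];  bi = -e;    % encodes D*(A*w - e*gamma) + y >= e
lb = [-inf(n+1,1); zeros(m,1)];         % y >= 0; w and gamma are free
z  = quadprog(Hq, f, Ai, bi, [], [], lb, []);
w = z(1:n); gamma = z(n+1);             % separating plane: x'*w = gamma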
6
Proximal Support Vector Machines (KDD 2002): Fitting the Data Using Two Parallel Bounding Planes
7
PSVM Formulation
We have, from the QP SVM formulation (with the inequality replaced by an equality, a 2-norm error on y, and γ added to the regularization term):
min_{w,γ,y}  (ν/2) ||y||² + (1/2) (w'w + γ²)
s.t.  D(Aw - eγ) + y = e.   (QP)
Solving for y in terms of w and γ gives the unconstrained problem:
min_{w,γ}  (ν/2) ||e - D(Aw - eγ)||² + (1/2) (w'w + γ²)
This simple but critical modification changes the nature of the optimization problem tremendously!
8
Advantages of New Formulation
The objective function remains strongly convex.
An explicit exact solution can be written in terms of the problem data.
The PSVM classifier is obtained by solving a single system of linear equations in the usually small dimensional input space.
Exact leave-one-out correctness can be obtained in terms of the problem data.
9
Linear PSVM
We want to solve:
min_{w,γ}  (ν/2) ||e - D(Aw - eγ)||² + (1/2) ||[w; γ]||²
Setting the gradient equal to zero gives a nonsingular system of linear equations; the solution of the system gives the desired PSVM classifier.
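For completeness, a short worked derivation (my own algebra, using the notation H = [A  -e] that appears in the MATLAB code later in the deck):

\[
\text{With } H = [A \;\; -e] \text{ and } z = \begin{bmatrix} w \\ \gamma \end{bmatrix}, \qquad
f(z) = \tfrac{\nu}{2}\,\lVert e - DHz \rVert^2 + \tfrac{1}{2}\,\lVert z \rVert^2 .
\]
\[
\nabla f(z) = -\nu H'D\,(e - DHz) + z = 0
\;\;\Longrightarrow\;\;
\Bigl(\tfrac{I}{\nu} + H'H\Bigr) z = H'De
\quad\text{(using } D^2 = I\text{)} .
\]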
10
Linear PSVM Solution
[w; γ] = (I/ν + H'H)^{-1} H'De,  where H = [A  -e].
The linear system to solve depends on H'H, which is of size (n+1) x (n+1); n+1 is usually much smaller than m.
11
Linear Proximal SVM Algorithm
Input A, D, ν.
Define H = [A  -e].
Calculate v = H'De.
Solve (I/ν + H'H) [w; γ] = v.
Classifier: sign(x'w - γ).
12
Nonlinear PSVM Formulation
Linear PSVM (linear separating surface x'w = γ):
min_{w,γ}  (ν/2) ||e - D(Aw - eγ)||² + (1/2) ||[w; γ]||²   (QP)
By QP "duality", w = A'Du. Maximizing the margin in the "dual space" gives:
min_{u,γ}  (ν/2) ||e - D(AA'Du - eγ)||² + (1/2) ||[u; γ]||²
Replace AA' by a nonlinear kernel K(A, A'):
min_{u,γ}  (ν/2) ||e - D(K(A, A')Du - eγ)||² + (1/2) ||[u; γ]||²
13
The Nonlinear Classifier
The nonlinear classifier is sign(K(x', A')Du - γ), where K is a nonlinear kernel, e.g. the Gaussian (radial basis) kernel:
(K(A, B))_ij = exp(-μ ||A_i' - B_·j||²)  for A in R^{m x n} and B in R^{n x k}.
The ij-entry of K(A, A') represents the "similarity" of data points A_i and A_j.
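A minimal MATLAB sketch of forming such a kernel matrix; the helper name gaussian_kernel and the width parameter mu are illustrative assumptions (both arguments hold points as rows, so K(A, A') above corresponds to gaussian_kernel(A, A, mu)):

% Hypothetical helper: Gaussian kernel matrix between the rows of A and B,
% K(i,j) = exp(-mu * ||A(i,:) - B(j,:)||^2).
function K = gaussian_kernel(A, B, mu)
  sqd = sum(A.^2, 2) + sum(B.^2, 2)' - 2*A*B';   % pairwise squared distances
  K = exp(-mu * sqd);                            % (implicit expansion, R2016b+)
end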
14
Nonlinear PSVM
Similar to the linear case, but defining H slightly differently as H = [K(A, A')  -e], setting the gradient equal to zero we obtain:
[u; γ] = (I/ν + H'H)^{-1} H'De.
Here the linear system to solve is of size (m+1) x (m+1). However, reduced kernel techniques (RSVM) can be used to reduce the dimensionality.
15
Nonlinear Proximal SVM Algorithm
Input A, D, ν.
Define K = K(A, A') and H = [K  -e].
Calculate v = H'De.
Solve (I/ν + H'H) [u; γ] = v.
Classifier: sign(K(x', A')Du - γ).
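A minimal MATLAB sketch of these steps, under the same assumptions as before (gaussian_kernel is the hypothetical helper above; nu and mu are user-chosen parameters; d is the +1/-1 label vector):

% Hypothetical nonlinear PSVM sketch, mirroring the linear psvm code that follows.
K = gaussian_kernel(A, A, mu);          % m-by-m kernel matrix K(A,A')
m = size(A, 1); e = ones(m, 1); H = [K -e];
v = H'*d;                               % v = H'*D*e, since D*e = d
r = (speye(m+1)/nu + H'*H) \ v;         % solve (I/nu + H'*H) r = v
u = r(1:m); gamma = r(m+1);
x = A(1, :);                            % classify a point (here: first training row)
label = sign(gaussian_kernel(x, A, mu) * (d .* u) - gamma)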
16
Linear & Nonlinear PSVM MATLAB Code
function [w, gamma] = psvm(A, d, nu)
% PSVM: linear and nonlinear classification
% INPUT: A, d = diag(D), nu. OUTPUT: w, gamma
% [w, gamma] = psvm(A, d, nu);
[m, n] = size(A); e = ones(m, 1); H = [A -e];
v = (d'*H)';                       % v = H'*D*e
r = (speye(n+1)/nu + H'*H) \ v;    % solve (I/nu + H'*H) r = v
w = r(1:n); gamma = r(n+1);        % getting w, gamma from r
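A hypothetical usage example (any m-by-n data matrix A with labels d in {-1, +1}^m works; the toy data from the earlier sketch is reused here):

% Train on the toy A, d and classify a new point.
nu = 1;
[w, gamma] = psvm(A, d, nu);
x = [1.5 2.5];                 % a new point (row vector)
label = sign(x*w - gamma)      % +1 or -1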
17
Linear PSVM Comparisons with Other SVMs: Much Faster, Comparable Correctness
Data Set (m x n)          | PSVM: test % / time (sec.) | SSVM: test % / time (sec.) | SVM: test % / time (sec.)
WPBC (60 mo.) 110 x 32    | 68.5 / 0.02                | -- / 0.17                  | 62.7 / 3.85
Ionosphere 351 x 34       | 87.3 / --                  | 88.7 / 1.23                | 88.0 / 2.19
Cleveland Heart 297 x 13  | 85.9 / 0.01                | 86.2 / 0.70                | 86.5 / 1.44
Pima Indians 768 x 8      | 77.5 / --                  | 77.6 / 0.78                | 76.4 / 37.00
BUPA Liver 345 x 6        | 69.4 / --                  | 70.0 / --                  | 69.5 / 6.65
Galaxy Dim 4192 x 14      | 93.5 / 0.34                | 95.0 / 5.21                | 94.1 / 28.33
(Ten-fold test set correctness; -- marks values not given.)
18
Linear PSVM vs. LSVM: 2-Million Dataset, Over 30 Times Faster
Dataset      | Method | Train correctness % | Test correctness % | Time (sec.)
NDC "Easy"   | LSVM   | 90.86               | 91.23              | 658.5
NDC "Easy"   | PSVM   | 90.80               | 91.13              | 20.8
NDC "Hard"   | LSVM   | 69.80               | 69.44              | 655.6
NDC "Hard"   | PSVM   | 69.84               | 69.52              | 20.6
19
Nonlinear PSVM: Spiral Dataset 94 Red Dots & 94 White Dots
20
Nonlinear PSVM Comparisons
Data Set (m x n)        | PSVM: test % / time (sec.) | SSVM: test % / time (sec.) | LSVM: test % / time (sec.)
Ionosphere 351 x 34     | 95.2 / 4.60                | 95.8 / 25.25               | -- / 14.58
BUPA Liver 345 x 6      | 73.6 / 4.34                | 73.7 / 20.65               | -- / 30.75
Tic-Tac-Toe 958 x 9     | 98.4 / 74.95               | -- / 395.30                | 94.7 / 350.64
Mushroom * 8124 x 22    | 88.0 / 35.50               | 88.8 / 307.66              | 87.8 / 503.74
(Ten-fold test set correctness; -- marks values not given.)
* A rectangular kernel of size 8124 x 215 was used.
21
Conclusion
PSVM is an extremely simple procedure for generating linear and nonlinear classifiers.
For a linear classifier, the PSVM classifier is obtained by solving a single system of linear equations in the usually small dimensional input space.
Test set correctness is comparable to standard SVMs.
Much faster than standard SVMs: typically an order of magnitude less time.
22
Incremental PSVM Classification (Second SIAM Data Mining Conference)
Suppose we have two "blocks" of data, A1 and A2, with corresponding label matrices D1 and D2, and define Hi = [Ai  -e]. The linear system to solve depends only on the compressed blocks
H1'H1 + H2'H2  and  H1'D1e + H2'D2e,
which are of size (n+1) x (n+1) and (n+1) x 1, independent of the number of data points.
23
Linear Incremental Proximal SVM Algorithm
Initialization: set the in-memory accumulators for H'H and H'De to zero.
For each block i: read Ai and Di from disk; compute Hi'Hi and Hi'Di e; update the accumulators in memory; discard Ai and Di, keeping only the accumulators.
If blocks remain, repeat; otherwise solve (I/ν + Σi Hi'Hi) [w; γ] = Σi Hi'Di e and output the classifier (see the sketch below).
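A minimal MATLAB sketch of this loop (the cell arrays Ablocks and dblocks, standing in for blocks read from disk, are hypothetical):

% Hypothetical incremental PSVM: accumulate compressed blocks, then solve once.
% Ablocks{i} is the i-th data block (rows are points); dblocks{i} its +/-1 labels.
nu = 1; n = size(Ablocks{1}, 2);
M = zeros(n+1); v = zeros(n+1, 1);     % in-memory accumulators
for i = 1:numel(Ablocks)
    Ai = Ablocks{i}; di = dblocks{i};  % in practice: read block i from disk
    Hi = [Ai, -ones(size(Ai,1), 1)];
    M = M + Hi'*Hi;                    % accumulate Hi'*Hi
    v = v + Hi'*di;                    % accumulate Hi'*Di*e (= Hi'*di)
end                                    % each block can be discarded after its pass
r = (eye(n+1)/nu + M) \ v;
w = r(1:n); gamma = r(n+1);            % classifier: sign(x*w - gamma)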
24
Linear Incremental Proximal SVM Adding – Retiring Data
Capable of modifying an existing linear classifier by both adding and retiring data.
Retiring old data is as simple as adding new data. Example: financial data, where old data becomes obsolete.
There is also the option of keeping old data and merging it with the new data. Example: medical data, where old data does not become obsolete.
A sketch of an add-and-retire update follows.
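A minimal MATLAB sketch of such an update, continuing the hypothetical accumulators M and v from the sketch above (Anew/dnew is the incoming block, Aold/dold the block being retired):

% Hypothetical update: add a new block and retire an old one.
Hnew = [Anew, -ones(size(Anew,1), 1)];
Hold = [Aold, -ones(size(Aold,1), 1)];
M = M + Hnew'*Hnew - Hold'*Hold;   % add the new block, subtract the retired one
v = v + Hnew'*dnew - Hold'*dold;
r = (eye(n+1)/nu + M) \ v;         % re-solve the small (n+1)-dimensional system
w = r(1:n); gamma = r(n+1);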
25
Numerical experiments One-Billion Two-Class Dataset
Synthetic dataset consisting of 1 billion points in 10-dimensional input space, generated by the NDC (Normally Distributed Clustered) dataset generator.
The dataset was divided into 500 blocks of 2 million points each.
The solution was obtained in less than 2 hours and 26 minutes; about 30% of the time was spent reading data from disk.
Testing set correctness: 90.79%.
26
Numerical Experiments Simulation of Two-month 60-Million Dataset
Synthetic dataset consisting of 60 million points (1 million per day) in 10-dimensional input space, generated using NDC.
At the beginning we only have the data corresponding to the first month.
Every day: the oldest block of data (1 million points) is retired, a new block (1 million points) is added, and a new linear classifier is calculated.
Only an 11-by-11 matrix is kept in memory at the end of each day; all other data is purged.
27
Numerical experiments Separator changing through time
28
Numerical experiments Normals to the separating hyperplanes
Corresponding to 5-day intervals
29
Conclusion
The proposed algorithm is an extremely simple procedure for generating linear classifiers in an incremental fashion for huge datasets.
The linear classifier is obtained by solving a single system of linear equations in the small dimensional input space.
The algorithm can retire old data and add new data in a very simple manner.
Only a matrix the size of the input space is kept in memory at any time.
30
Support Vector Machines Linear Programming Formulation
Use the 1-norm instead of the 2-norm:
min_{w,γ,y}  ν ||y||_1 + ||w||_1
s.t.  D(Aw - eγ) + y ≥ e,  y ≥ 0.
This is equivalent to the following linear program (s bounds |w| componentwise):
min_{w,γ,y,s}  ν e'y + e's
s.t.  D(Aw - eγ) + y ≥ e,  -s ≤ w ≤ s,  y ≥ 0.
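A minimal MATLAB sketch of solving this LP with linprog (Optimization Toolbox), for any data matrix A (m-by-n) and label vector d in {-1, +1}^m; the variable ordering z = [w; gamma; y; s] is my own assumption:

% Hypothetical sketch: the 1-norm SVM as a linear program via linprog.
[m, n] = size(A); e = ones(m, 1); D = diag(d); nu = 1;
f  = [zeros(n+1,1); nu*e; ones(n,1)];                    % nu*e'*y + e'*s
Ai = [ -D*A,     D*e,        -eye(m),    zeros(m,n);     % D(Aw - e*gamma) + y >= e
        eye(n),  zeros(n,1),  zeros(n,m), -eye(n);       %  w <= s
       -eye(n),  zeros(n,1),  zeros(n,m), -eye(n) ];     % -w <= s
bi = [-e; zeros(2*n, 1)];
lb = [-inf(n+1,1); zeros(m+n,1)];                        % y >= 0, s >= 0
z  = linprog(f, Ai, bi, [], [], lb, []);
w = z(1:n); gamma = z(n+1);                              % classifier: sign(x*w - gamma)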
31
Conventional Data-Based SVM
[Figure: the conventional data-based SVM separating plane x'w = γ with bounding planes x'w = γ + 1 and x'w = γ - 1, separating the A+ points from the A- points.]
32
Knowledge-Based SVM via Polyhedral Knowledge Sets (NIPS 2002)
[Figure: the same separating plane x'w = γ, now together with polyhedral knowledge sets {x | Bx ≤ b} on the A+ side and {x | Cx ≤ c} on the A- side that the classifier must also respect.]
33
Incoporating Knowledge Sets Into an SVM Classifier
Suppose that the knowledge set {x | Bx ≤ b} belongs to the class A+. Hence it must lie in the halfspace {x | x'w ≥ γ + 1}. We therefore have the implication:
Bx ≤ b  ==>  x'w ≥ γ + 1.
We will show that this implication is equivalent to a set of constraints that can be imposed on the classification problem.
34
Knowledge Set Equivalence Theorem
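The theorem statement itself did not survive extraction; the following is my reconstruction of the standard equivalence (assuming the knowledge set {x | Bx ≤ b} is nonempty), consistent with the implication on the previous slide:

\[
\{x \mid Bx \le b\} \subseteq \{x \mid x'w \ge \gamma + 1\}
\;\Longleftrightarrow\;
\exists\, u \ge 0:\;\; B'u + w = 0,\;\; b'u + \gamma + 1 \le 0 .
\]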
35
Proof of Equivalence Theorem (Via Nonhomogeneous Farkas or LP Duality)
Proof sketch: by LP duality, min{ x'w : Bx ≤ b } = max{ -b'u : B'u + w = 0, u ≥ 0 }; requiring this minimum to be at least γ + 1 is therefore the same as requiring some u ≥ 0 with B'u + w = 0 and b'u + γ + 1 ≤ 0.
36
Knowledge-Based SVM Classification
37
Knowledge-Based SVM Classification
Adding one set of constraints for each knowledge set to the 1-norm SVM LP gives the knowledge-based LP sketched below.
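The LP itself was not captured in the text; the following is my reconstruction of the hard-constraint version for a single knowledge set {x | Bx ≤ b} in class A+ (the published formulation also allows slack on the knowledge constraints, omitted here):

\[
\begin{aligned}
\min_{w,\gamma,y,s,u}\;\; & \nu\, e'y + e's \\
\text{s.t.}\;\; & D(Aw - e\gamma) + y \ge e, \quad y \ge 0, \quad -s \le w \le s, \\
& B'u + w = 0, \quad b'u + \gamma + 1 \le 0, \quad u \ge 0 ,
\end{aligned}
\]

with an analogous block of constraints (C'v - w = 0, c'v - γ + 1 ≤ 0, v ≥ 0) for each knowledge set {x | Cx ≤ c} belonging to class A-.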
38
Parametrized Knowledge-Based LP
39
Numerical Testing The Promoter Recognition Dataset
Promoter: a short DNA sequence that precedes a gene sequence.
A promoter consists of 57 consecutive DNA nucleotides belonging to {A, G, C, T}.
It is important to distinguish promoters from nonpromoters: this distinction identifies the starting locations of genes in long uncharacterized DNA sequences.
40
The Promoter Recognition Dataset Numerical Representation
A simple "1 of N" mapping scheme converts each nominal attribute into a real-valued representation (e.g. A -> 1000, G -> 0100, C -> 0010, T -> 0001). This is not the most economical representation, but it is commonly used.
41
The Promoter Recognition Dataset Numerical Representation
The feature space is mapped from the 57-dimensional nominal space to a real-valued 57 x 4 = 228-dimensional space: 57 nominal values become 57 x 4 = 228 binary values.
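A minimal MATLAB sketch of this mapping (the helper name and the A, G, C, T ordering are my assumptions):

% Hypothetical 1-of-N encoding of a DNA sequence over {A, G, C, T}.
function x = encode_promoter(seq)          % seq: char array of nucleotides
  alphabet = 'AGCT';
  x = zeros(1, 4*length(seq));
  for i = 1:length(seq)
      j = find(alphabet == upper(seq(i))); % which of the 4 letters this is
      x(4*(i-1) + j) = 1;                  % one binary indicator per letter
  end
end
% For a length-57 sequence, x is a 1-by-228 binary row vector.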
42
Promoter Recognition Dataset Prior Knowledge Rules
Prior knowledge consists of 64 rules; sample rules are shown on the next slide.
43
Promoter Recognition Dataset Sample Rules
Each rule refers to the position of a nucleotide with respect to a meaningful reference point in the sequence, from a fixed starting position to a fixed ending position, and requires specific nucleotides to appear at specific positions.
44
The Promoter Recognition Dataset Comparative Algorithms
KBANN: knowledge-based artificial neural network [Shavlik et al.]
BP: standard back-propagation for neural networks [Rumelhart et al.]
O'Neill's method: an empirical method suggested by the biologist O'Neill [O'Neill]
NN: nearest neighbor with k = 3 [Cost et al.]
ID3: Quinlan's decision-tree builder [Quinlan]
SVM1: standard 1-norm SVM [Bradley et al.]
45
The Promoter Recognition Dataset Comparative Test Results
46
Wisconsin Breast Cancer Prognosis Dataset Description of the data
110 instances: 41 patients whose cancer had recurred and 69 patients whose cancer had not recurred.
32 numerical features.
The domain theory: two simple rules used by doctors.
47
Wisconsin Breast Cancer Prognosis Dataset Numerical Testing Results
The doctors' rules are applicable to only 32 of the 110 patients, and only 22 of those 32 are classified correctly by the rules (20% correctness over all 110 patients).
The KSVM linear classifier is applicable to all patients, with a correctness of 66.4%.
This correctness is comparable to the best available results using conventional SVMs.
KSVM can produce classifiers based on knowledge alone, without using any data.
48
Conclusion
Prior knowledge is easily incorporated into classifiers through polyhedral knowledge sets.
The resulting problem is a simple LP.
Knowledge sets can be used with or without conventional labeled data.
In either case, KSVM is better than most knowledge-based classifiers.
49
Breast Cancer Treatment Response (Joint with ExonHit, a French biotech company)
35 patients treated by a drug cocktail: 9 partial responders and 26 nonresponders.
25 gene expression measurements made on each patient.
A 1-norm SVM classifier selected 12 of the 25 genes; 6 genes were then selected combinatorially out of those 12.
Separating plane obtained: a linear combination of the 6 selected genes set equal to zero.
Leave-one-out error: 1 out of 35 (97.1% correctness).
50
Other papers:
A Fast and Global Two Point Low Storage Optimization Technique for Tracing Rays in 2D and 3D Isotropic Media (Journal of Applied Geophysics)
Semi-Supervised Support Vector Machines for Unlabeled Data Classification (Optimization Methods and Software)
Select a small subset of an unlabeled dataset to be labeled by an oracle or expert; use the new labeled data and the remaining unlabeled data to train an SVM classifier.
51
Other papers: Multicategory Proximal SVM Classifiers
A fast multicategory algorithm based on PSVM; a Newton refinement step is proposed.
Data Selection for SVM Classifiers (KDD 2000): reduce the number of support vectors of a linear SVM.
Minimal Kernel Classifiers (JMLR): use a concave minimization formulation to reduce SVM model complexity; useful for online testing, where testing time is an issue.
52
Other papers:
A Feature Selection Newton Method for SVM Classification: the LP SVM is solved using a Newton method; very sparse solutions are obtained.
Finite Newton Method for Lagrangian SVM Classifiers (Neurocomputing): very fast performance, especially when n > m.