An Introduction to Support Vector Machines (Martin Law)

Similar presentations
Introduction to Support Vector Machines (SVM)

Generative Models Thus far we have essentially considered techniques that perform classification indirectly by modeling the training data, optimizing.
Lecture 9 Support Vector Machines
ECG Signal processing (2)
Image classification Given the bag-of-features representations of images from different classes, how do we learn a model for distinguishing them?
SVM - Support Vector Machines A new classification method for both linear and nonlinear data It uses a nonlinear mapping to transform the original training.
An Introduction of Support Vector Machine
Classification / Regression Support Vector Machines
Support Vector Machines and Kernels Adapted from slides by Tim Oates Cognition, Robotics, and Learning (CORAL) Lab University of Maryland Baltimore County.
Support Vector Machines
1 Lecture 5 Support Vector Machines Large-margin linear classifier Non-separable case The Kernel trick.
CSCE822 Data Mining and Warehousing
Machine learning continued Image source:
CSCI 347 / CS 4206: Data Mining Module 07: Implementations Topic 03: Linear Models.
Support Vector Machine
Support Vector Machines (and Kernel Methods in general)
Support Vector Machines (SVMs) Chapter 5 (Duda et al.)
Support Vector Classification (Linearly Separable Case, Primal) The hyperplanethat solves the minimization problem: realizes the maximal margin hyperplane.
Support Vector Machines Kernel Machines
Classification Problem 2-Category Linearly Separable Case A- A+ Malignant Benign.
Support Vector Machines
Sparse Kernels Methods Steve Gunn.
CS 4700: Foundations of Artificial Intelligence
Lecture outline Support vector machines. Support Vector Machines Find a linear hyperplane (decision boundary) that will separate the data.
SVM Support Vectors Machines
Lecture 10: Support Vector Machines
Greg GrudicIntro AI1 Support Vector Machine (SVM) Classification Greg Grudic.
Review Rong Jin. Comparison of Different Classification Models  The goal of all classifiers Predicating class label y for an input x Estimate p(y|x)
Ch. Eick: Support Vector Machines: The Main Ideas Reading Material Support Vector Machines: 1.Textbook 2. First 3 columns of Smola/Schönkopf article on.
Support Vector Machine & Image Classification Applications
Machine Learning Seminar: Support Vector Regression Presented by: Heng Ji 10/08/03.
Support Vector Machines Mei-Chen Yeh 04/20/2010. The Classification Problem Label instances, usually represented by feature vectors, into one of the predefined.
1 SUPPORT VECTOR MACHINES İsmail GÜNEŞ. 2 What is SVM? A new generation learning system. A new generation learning system. Based on recent advances in.
Machine Learning Using Support Vector Machines (Paper Review) Presented to: Prof. Dr. Mohamed Batouche Prepared By: Asma B. Al-Saleh Amani A. Al-Ajlan.
Kernel Methods A B M Shawkat Ali 1 2 Data Mining ¤ DM or KDD (Knowledge Discovery in Databases) Extracting previously unknown, valid, and actionable.
Support Vector Machines Reading: Ben-Hur and Weston, “A User’s Guide to Support Vector Machines” (linked from class web page)
Classifiers Given a feature representation for images, how do we learn a model for distinguishing features from different classes? Zebra Non-zebra Decision.
An Introduction to Support Vector Machines (M. Law)
1 Chapter 6. Classification and Prediction Overview Classification algorithms and methods Decision tree induction Bayesian classification Lazy learning.
CISC667, F05, Lec22, Liao1 CISC 667 Intro to Bioinformatics (Fall 2005) Support Vector Machines I.
Sparse Kernel Methods 1 Sparse Kernel Methods for Classification and Regression October 17, 2007 Kyungchul Park SKKU.
An Introduction to Support Vector Machine (SVM)
Support Vector Machines in Marketing Georgi Nalbantov MICC, Maastricht University.
SVM – Support Vector Machines Presented By: Bella Specktor.
Support vector machine LING 572 Fei Xia Week 8: 2/23/2010 TexPoint fonts used in EMF. Read the TexPoint manual before you delete this box.: A 1.
1 New Horizon in Machine Learning — Support Vector Machine for non-Parametric Learning Zhao Lu, Ph.D. Associate Professor Department of Electrical Engineering,
CpSc 810: Machine Learning Support Vector Machine.
Support Vector Machines. Notation Assume a binary classification problem. –Instances are represented by vector x   n. –Training examples: x = (x 1,
Text Classification using Support Vector Machine Debapriyo Majumdar Information Retrieval – Spring 2015 Indian Statistical Institute Kolkata.
© Eric CMU, Machine Learning Support Vector Machines Eric Xing Lecture 4, August 12, 2010 Reading:
SVMs in a Nutshell.
An Introduction of Support Vector Machine In part from of Jinwei Gu.
Non-separable SVM's, and non-linear classification using kernels Jakob Verbeek December 16, 2011 Course website:
PREDICT 422: Practical Machine Learning
A Simple Introduction to Support Vector Machines
An Introduction to Support Vector Machines
Support Vector Machines Introduction to Data Mining, 2nd Edition by
Support Vector Machines
Statistical Learning Dong Liu Dept. EEIS, USTC.
CS 2750: Machine Learning Support Vector Machines
Introduction to Support Vector Machines
Support Vector Machines and Kernels
CSE 802. Prepared by Martin Law
Support Vector Machines 2
Presentation transcript:

An Introduction to Support Vector Machines Martin Law

Outline
- History of support vector machines (SVM)
- Two classes, linearly separable: what is a good decision boundary?
- Two classes, not linearly separable
- How to make SVM non-linear: the kernel trick
- Demo of SVM
- Epsilon support vector regression (ε-SVR)
- Conclusion

History of SVM
SVM is a classifier derived from statistical learning theory by Vapnik and Chervonenkis. It was first introduced in COLT-92. SVM became famous when, using pixel maps as input, it achieved accuracy comparable to sophisticated neural networks with elaborate features on a handwriting recognition task. Currently, SVM is closely related to kernel methods, large-margin classifiers, reproducing kernel Hilbert spaces, and Gaussian processes.

Two-Class Problem: Linearly Separable Case
Many decision boundaries can separate these two classes. Which one should we choose? (Figure: Class 1 and Class 2 points with several candidate separating boundaries.)

Examples of Bad Decision Boundaries
(Figure: two plots of Class 1 and Class 2 points, each with a poorly placed decision boundary.)

Good Decision Boundary: the Margin Should Be Large
The decision boundary should be as far away from the data of both classes as possible; that is, we should maximize the margin m. (Figure: Class 1 and Class 2 separated by a boundary with margin m.)

The Optimization Problem
Let {x_1, ..., x_n} be our data set and let y_i ∈ {1, -1} be the class label of x_i. The decision boundary should classify all points correctly: y_i(w^T x_i + b) ≥ 1 for all i. This gives a constrained optimization problem:

minimize (1/2)||w||^2
subject to y_i(w^T x_i + b) ≥ 1, i = 1, ..., n
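The slide's formulas were images that did not survive transcription; as a hedged illustration of the primal problem above, here is a minimal numerical sketch that hands the constrained objective to scipy's general-purpose SLSQP solver. The library choice and the 2-D toy data are my assumptions, not part of the original slides.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linearly separable 2-D toy data (not from the slides)
X = np.array([[1.0, 1.0], [2.0, 2.5], [0.5, 2.0],    # class +1
              [3.0, 0.5], [4.0, 1.0], [3.5, -0.5]])  # class -1
y = np.array([1, 1, 1, -1, -1, -1])

# Variables z = (w1, w2, b); the objective is (1/2)||w||^2
def objective(z):
    w = z[:2]
    return 0.5 * w @ w

# One constraint per point: y_i (w^T x_i + b) - 1 >= 0
cons = [{'type': 'ineq',
         'fun': lambda z, i=i: y[i] * (z[:2] @ X[i] + z[2]) - 1.0}
        for i in range(len(y))]

res = minimize(objective, x0=np.zeros(3), constraints=cons)
w, b = res.x[:2], res.x[2]
print("w =", w, ", b =", b, ", margin =", 2.0 / np.linalg.norm(w))
```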

The Optimization Problem
We can transform the problem into its dual:

maximize W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i^T x_j)
subject to α_i ≥ 0 and Σ_i α_i y_i = 0

This is a quadratic programming (QP) problem, so the global maximum over the α_i can always be found. w can be recovered as w = Σ_i α_i y_i x_i.

Characteristics of the Solution
Many of the α_i are zero, so w is a linear combination of a small number of data points: a sparse representation. The x_i with non-zero α_i are called support vectors (SV), and the decision boundary is determined only by the SVs. Let t_j (j = 1, ..., s) be the indices of the s support vectors. We can write w = Σ_j α_{t_j} y_{t_j} x_{t_j}. For testing with new data z, compute f(z) = Σ_j α_{t_j} y_{t_j} (x_{t_j}^T z) + b and classify z as class 1 if the sum is positive, and class 2 otherwise.
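A hedged sketch of how this sparsity looks in practice, assuming scikit-learn is available (its SVC exposes the support vector indices, the products y_i α_i, and the intercept; the toy data is hypothetical):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical separable 2-D toy data
X = np.array([[1.0, 1.0], [2.0, 2.5], [0.5, 2.0],
              [3.0, 0.5], [4.0, 1.0], [3.5, -0.5]])
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel='linear', C=1e6).fit(X, y)   # large C approximates the hard margin

print("support vector indices:", clf.support_)  # only a few of the points
print("y_i * alpha_i:", clf.dual_coef_)         # sklearn stores y_i * alpha_i
w = clf.dual_coef_ @ clf.support_vectors_       # w = sum_j alpha_j y_j x_j
print("w =", w.ravel(), ", b =", clf.intercept_[0])

z = np.array([[1.0, 2.0]])
print("f(z) =", clf.decision_function(z))       # positive -> class 1
```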

A Geometrical Interpretation
(Figure: Class 1 and Class 2 with the maximum-margin boundary; only the support vectors carry non-zero weights, here α_1 = 0.8, α_6 = 1.4, and α_8 = 0.6, while α_2 = α_3 = α_4 = α_5 = α_7 = α_9 = α_10 = 0.)

Some Notes
There are theoretical upper bounds on the error of an SVM on unseen data: the larger the margin, the smaller the bound, and the smaller the number of SVs, the smaller the bound. Note that in both training and testing, the data are referenced only through inner products x^T y. This is important for generalizing to the non-linear case.

What If the Data Are Not Linearly Separable?
We allow an "error" ξ_i in classification. (Figure: Class 1 and Class 2 points that cannot be separated without error.)

Soft Margin Hyperplane
Define ξ_i = 0 if there is no error for x_i; the ξ_i are just "slack variables" in optimization theory. We want to minimize (1/2)||w||^2 + C Σ_i ξ_i, where C is a tradeoff parameter between error and margin. The optimization problem becomes:

minimize (1/2)||w||^2 + C Σ_i ξ_i
subject to y_i(w^T x_i + b) ≥ 1 - ξ_i and ξ_i ≥ 0, i = 1, ..., n
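To make the role of C concrete, here is a small hedged sketch, again assuming scikit-learn and synthetic overlapping data (both my assumptions): a smaller C tolerates more slack and typically leaves more support vectors.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic overlapping 2-D classes (hypothetical data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(1.5, 1.0, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)

for C in [0.01, 1.0, 100.0]:
    clf = SVC(kernel='linear', C=C).fit(X, y)
    # n_support_ counts support vectors per class
    print(f"C = {C:6}: support vectors per class = {clf.n_support_}")
```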

The Optimization Problem
The dual of this problem is:

maximize W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i^T x_j)
subject to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0

w is again recovered as w = Σ_i α_i y_i x_i. The only difference from the linearly separable case is the upper bound C on the α_i. Once again, a QP solver can be used to find the α_i.

Extension to a Non-Linear Decision Boundary
Key idea: transform x_i to a higher-dimensional space to "make life easier". The input space is the space the x_i live in; the feature space is the space of the φ(x_i) after the transformation. Why transform? A linear operation in the feature space is equivalent to a non-linear operation in the input space, and the classification task can be "easier" with a proper transformation (example: XOR).

Extension to a Non-Linear Decision Boundary
Possible problems with the transformation: a high computational burden, and it is hard to get a good estimate. SVM solves these two issues simultaneously: the kernel trick gives efficient computation, and minimizing ||w||^2 leads to a "good" classifier. (Figure: the map φ(.) takes points from the input space to the feature space.)

Example Transformation
Define the kernel function K(x, y) as K(x, y) = (x^T y + 1)^2. For 2-D input x = (x_1, x_2), consider the transformation φ(x) = (1, √2 x_1, √2 x_2, x_1^2, x_2^2, √2 x_1 x_2). Then φ(x)^T φ(y) = (x^T y + 1)^2, so the inner product in the feature space can be computed by K without going through the map φ(.).
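The identity is easy to check numerically; a minimal sketch (numpy assumed, test vectors arbitrary):

```python
import numpy as np

def phi(x):
    # Explicit degree-2 feature map for 2-D input
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 * x1, x2 * x2,
                     np.sqrt(2) * x1 * x2])

def K(x, y):
    return (x @ y + 1.0) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(K(x, y), phi(x) @ phi(y))  # both print 4.0
```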

Kernel Trick
The relationship between the kernel function K and the mapping φ(.) is K(x, y) = φ(x)^T φ(y). This is known as the kernel trick. In practice, we specify K, thereby specifying φ(.) indirectly, instead of choosing φ(.) directly. Intuitively, K(x, y) represents our desired notion of similarity between data points x and y, drawn from our prior knowledge. K(x, y) needs to satisfy a technical condition (the Mercer condition) for φ(.) to exist.

Examples of Kernel Functions
- Polynomial kernel with degree d: K(x, y) = (x^T y + 1)^d
- Radial basis function (RBF) kernel with width σ: K(x, y) = exp(-||x - y||^2 / (2σ^2)); closely related to radial basis function neural networks
- Sigmoid with parameters κ and θ: K(x, y) = tanh(κ x^T y + θ); it does not satisfy the Mercer condition for all κ and θ
Research on different kernel functions for different applications is very active.
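The three kernels written as plain functions, a minimal sketch with numpy (the parameter defaults are illustrative choices of mine, not values from the slides):

```python
import numpy as np

def polynomial(x, y, d=2):
    return (x @ y + 1.0) ** d

def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def sigmoid(x, y, kappa=1.0, theta=-1.0):
    # Not a Mercer kernel for all kappa and theta
    return np.tanh(kappa * (x @ y) + theta)

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(polynomial(x, y), rbf(x, y), sigmoid(x, y))
```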

Example of SVM Applications: Handwriting Recognition

Modification Due to the Kernel Function
Change all inner products to kernel functions. For training:
Original: maximize W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i^T x_j)
With kernel function: maximize W(α) = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
(subject to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0 in both cases)

Modification Due to the Kernel Function
For testing, the new data point z is classified as class 1 if f ≥ 0 and as class 2 if f < 0:
Original: f = Σ_j α_{t_j} y_{t_j} (x_{t_j}^T z) + b
With kernel function: f = Σ_j α_{t_j} y_{t_j} K(x_{t_j}, z) + b

Example
Suppose we have five 1-D data points: x_1 = 1, x_2 = 2, x_3 = 4, x_4 = 5, x_5 = 6, with 1, 2, 6 in class 1 and 4, 5 in class 2, so y_1 = 1, y_2 = 1, y_3 = -1, y_4 = -1, y_5 = 1. We use the polynomial kernel of degree 2, K(x, y) = (xy + 1)^2, and set C to 100. We first find the α_i (i = 1, ..., 5) by solving:

maximize Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i x_j + 1)^2
subject to 0 ≤ α_i ≤ 100 and Σ_i α_i y_i = 0

Example
Using a QP solver, we get α_1 = 0, α_2 = 2.5, α_3 = 0, α_4 = 7.333, α_5 = 4.833. Note that the constraints are indeed satisfied. The support vectors are {x_2 = 2, x_4 = 5, x_5 = 6}. The discriminant function is

f(z) = 2.5(1)(2z + 1)^2 + 7.333(-1)(5z + 1)^2 + 4.833(1)(6z + 1)^2 + b = 0.6667 z^2 - 5.333 z + b

b is recovered by solving f(2) = 1, f(5) = -1, or f(6) = 1, as x_2, x_4, x_5 lie on the margin boundaries f(z) = ±1; all give b = 9, so f(z) = 0.6667 z^2 - 5.333 z + 9.
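This worked example can be reproduced end to end; a hedged sketch using scipy's SLSQP routine as the QP solver (scipy is my assumption; the data, kernel, and C come from the slide):

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 4.0, 5.0, 6.0])
y = np.array([1.0, 1.0, -1.0, -1.0, 1.0])
C = 100.0

K = (np.outer(x, x) + 1.0) ** 2        # degree-2 polynomial kernel matrix
Q = np.outer(y, y) * K

def neg_dual(a):                        # minimize the negated dual objective
    return -(a.sum() - 0.5 * a @ Q @ a)

res = minimize(neg_dual, np.zeros(5), bounds=[(0.0, C)] * 5,
               constraints={'type': 'eq', 'fun': lambda a: a @ y})
alpha = res.x
print(alpha.round(3))                   # ~ [0, 2.5, 0, 7.333, 4.833]

def f(z, b):                            # f(z) = sum_i alpha_i y_i K(x_i, z) + b
    return np.sum(alpha * y * (x * z + 1.0) ** 2) + b

b = 1.0 - f(2.0, 0.0)                   # from f(2) = 1
print(round(b, 3))                      # ~ 9
print([round(f(z, b), 3) for z in (2.0, 5.0, 6.0)])  # ~ [1, -1, 1]
```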

Example
(Figure: plot of the discriminant function value f(z) against z, negative over the class 2 region around x = 4 and 5 and positive over the class 1 regions.)

Multi-Class Classification
SVM is fundamentally a two-class classifier. One can change the QP formulation to allow multi-class classification directly. More commonly, the data set is divided into two parts "intelligently" in different ways, a separate SVM is trained for each division, and multi-class classification is done by combining the outputs of all the SVM classifiers:
- Majority rule
- Error-correcting codes
- Directed acyclic graph
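A hedged sketch of one common strategy, assuming scikit-learn: its SVC trains one binary SVM per pair of classes (one-vs-one) and combines them by voting, which matches the majority-rule idea above.

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)   # three classes

# SVC is internally a two-class method: it trains one SVM per pair of
# classes and combines the binary outputs by majority voting
clf = SVC(kernel='rbf').fit(X, y)
print(clf.predict(X[:5]))
```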

Software
A list of SVM implementations can be found online. Some implementations (such as LIBSVM) can handle multi-class classification. SVMlight is among the earliest implementations of SVM. Several MATLAB toolboxes for SVM are also available.

Summary: Steps for Classification
1. Prepare the pattern matrix.
2. Select the kernel function to use.
3. Select the parameters of the kernel function and the value of C. You can use the values suggested by the SVM software, or set apart a validation set to determine them.
4. Execute the training algorithm to obtain the α_i.
5. Classify unseen data using the α_i and the support vectors.
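A minimal end-to-end sketch of these steps with scikit-learn (my choice of tool; the slides do not name one), using cross-validated grid search for the kernel parameter and C:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Step 1: prepare the pattern matrix
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Steps 2-3: pick a kernel, then choose its parameter and C by validation
pipe = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
grid = {'svc__C': [0.1, 1.0, 10.0, 100.0], 'svc__gamma': [0.01, 0.1, 1.0]}
search = GridSearchCV(pipe, grid, cv=5)

# Step 4: run the training algorithm (the alpha_i are found internally)
search.fit(X_tr, y_tr)

# Step 5: classify unseen data with the trained model
print(search.best_params_, search.score(X_te, y_te))
```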

Demonstration
Iris data set; class 1 and class 3 are "merged" in this demo.

Strengths and Weaknesses of SVM
Strengths:
- Training is relatively easy; there are no local optima, unlike in neural networks.
- It scales relatively well to high-dimensional data.
- The tradeoff between classifier complexity and error can be controlled explicitly.
- Non-traditional data such as strings and trees can be used as input to SVM, instead of feature vectors.
Weaknesses:
- A "good" kernel function is needed.

Epsilon Support Vector Regression (ε-SVR)
Linear regression in the feature space. Unlike least-squares regression, the error function is the ε-insensitive loss: for a residual r, L_ε(r) = max(0, |r| - ε). Intuitively, errors smaller than ε are ignored, which leads to sparsity similar to SVM. (Figure: penalty versus value off target for the square loss function and for the ε-insensitive loss function, which is zero inside the tube [-ε, ε].)
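The loss in one line, a sketch with numpy (the residual grid and ε value are illustrative):

```python
import numpy as np

def eps_insensitive(residual, eps=0.5):
    # Zero penalty inside the epsilon tube, linear penalty outside
    return np.maximum(0.0, np.abs(residual) - eps)

r = np.linspace(-2.0, 2.0, 9)
print(eps_insensitive(r))  # compare with the square loss below
print(r ** 2)
```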

Epsilon Support Vector Regression (ε-SVR)
Given a data set {x_1, ..., x_n} with target values {u_1, ..., u_n}, we want to do ε-SVR. The optimization problem is:

minimize (1/2)||w||^2 + C Σ_i (ξ_i + ξ_i*)
subject to u_i - (w^T φ(x_i) + b) ≤ ε + ξ_i, (w^T φ(x_i) + b) - u_i ≤ ε + ξ_i*, and ξ_i, ξ_i* ≥ 0

Similar to SVM, this can be solved as a quadratic programming problem.

Epsilon Support Vector Regression (ε-SVR)
C is a parameter that controls the influence of the error, and the (1/2)||w||^2 term controls the complexity of the regression function; this is similar to ridge regression. After training (solving the QP), we get values of α_i and α_i*, which are both zero if x_i does not contribute to the error function. For a new data point z, f(z) = Σ_i (α_i - α_i*) K(x_i, z) + b.
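A hedged end-to-end ε-SVR sketch, assuming scikit-learn's SVR (its dual_coef_ attribute stores α_i - α_i* for the support vectors; the noisy sine-wave data is made up):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical noisy 1-D regression data
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, (80, 1))
u = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)

reg = SVR(kernel='rbf', C=10.0, epsilon=0.1).fit(X, u)

# Only points on or outside the epsilon tube become support vectors
print("support vector fraction:", len(reg.support_) / len(X))
print("alpha_i - alpha_i* (first few):", reg.dual_coef_[0, :5])

z = np.array([[0.5]])
print("f(z) =", reg.predict(z))  # sum_i (alpha_i - alpha_i*) K(x_i, z) + b
```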

Other Types of Kernel Methods
A lesson learned from SVM: a linear algorithm in the feature space is equivalent to a non-linear algorithm in the input space. Classic linear algorithms can therefore be generalized to non-linear versions by going to the feature space. Kernel principal component analysis, kernel independent component analysis, kernel canonical correlation analysis, kernel k-means, and one-class SVM are some examples.
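For instance, kernel PCA is just linear PCA run in the feature space; a minimal sketch with scikit-learn (my choice of library and data set):

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Concentric circles: no useful linear structure in the input space
X, _ = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# Linear PCA in the RBF feature space = non-linear components in input space
kpca = KernelPCA(n_components=2, kernel='rbf', gamma=10.0)
X_k = kpca.fit_transform(X)
print(X_k.shape)  # (200, 2)
```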

Conclusion
SVM is a useful alternative to neural networks. The two key concepts of SVM are maximizing the margin and the kernel trick. Much active research is taking place in areas related to SVM, and many SVM implementations are available on the web for you to try on your data set!

Resources
machines.org/papers/tutorial-nips.ps.gz
applist.html