An Introduction to Support Vector Machines (Courtesy of Jinwei Gu)

Today: Support Vector Machine (SVM). A classifier derived from statistical learning theory by Vapnik et al. in 1992. SVM became famous when, using pixel images as input, it gave accuracy comparable to sophisticated neural networks in a handwriting recognition task. Currently, SVMs are widely used in object detection and recognition, content-based image retrieval, text recognition, biometrics, speech recognition, etc.

Classification Function. It can be an arbitrary function of x, such as: a decision tree, linear functions (e.g., the perceptron), or nonlinear functions (e.g., feed-forward neural nets).

Linear Function (Linear Separator). g(x) is a linear function in R^n: g(x) = w^T x + b. The set of points where w^T x + b = 0 is a hyperplane in the n-dimensional feature space; it separates the region where w^T x + b > 0 from the region where w^T x + b < 0. Here w and x are vectors (the coefficients w_i and the variables x_i of the hyperplane). The (unit-length) normal vector of the hyperplane is n = w / ||w||. In two dimensions (x1, x2) the boundary is the line w1 x1 + w2 x2 + b = 0.

Vector representation (just to recall): in the figure, x_a, x_b, x_c are points lying on the decision line and w is its normal vector. The line is perpendicular to w, therefore the inner product of w with any vector along the line, e.g. w^T (x_a − x_b), is 0.

Linear separator: f(x, w, b) = sign(w^T x + b). If sign(·) > 0 the point is classified as positive (+1); if sign(·) < 0, as negative (−1). The hyperplane w^T x + b = 0 separates the half-space w^T x + b > 0 from the half-space w^T x + b < 0.
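As a minimal illustration of this decision rule (a sketch, not part of the original slides; the weight vector and bias below are made-up values):

```python
# Linear decision rule f(x, w, b) = sign(w^T x + b) on a couple of toy points.
import numpy as np

def predict(X, w, b):
    """Return +1 / -1 labels for the rows of X using a linear separator."""
    return np.sign(X @ w + b)

# assumed toy boundary: x1 + 2*x2 - 3 = 0
w = np.array([1.0, 2.0])
b = -3.0
X = np.array([[2.0, 2.0],   # w^T x + b =  3 -> +1
              [0.0, 1.0]])  # w^T x + b = -1 -> -1
print(predict(X, w, b))     # [ 1. -1.]
```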

Linear Discriminant Function: how would you classify these points using a linear discriminant function in order to minimize the error rate? There is an infinite number of answers! Which one is the best?

Linear Classifiers: f(x, w, b) = sign(w^T x + b). How would you classify this data? A poorly chosen boundary misclassifies points; in the figure, a negative point ends up on the +1 side.

Large Margin Linear Classifier. The linear discriminant function (classifier) with the maximum margin is the best. The margin is defined as the width by which the boundary could be increased before hitting a data point in the learning set (a "safe zone" around the separator). Why is it the best? It is robust to outliers and thus has stronger generalization ability.

Large Margin Linear Classifier. We are given a set of data points (learning set D) {(x_i, y_i)}, with y_i in {+1, −1} (note: the class labels are +1/−1, NOT 1/0). For a separating hyperplane, w^T x_i + b > 0 when y_i = +1 and w^T x_i + b < 0 when y_i = −1. With a scale transformation* on both w and b, this is equivalent to w^T x_i + b ≥ 1 for y_i = +1 and w^T x_i + b ≤ −1 for y_i = −1. (*A scale transformation is a linear transformation that enlarges or shrinks objects by a scale factor that is the same in all directions.)

Large Margin Linear Classifier. The data points lying exactly on the planes w^T x + b = 1 and w^T x + b = −1 are the support vectors. For a positive support vector x+ and a negative support vector x− we know that w^T x+ + b = 1 and w^T x− + b = −1. The margin is the projection of (x+ − x−) along the direction of w, i.e. onto the unit normal n = w/||w||, so the margin width is M = 2/||w||.
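Spelled out as a short derivation (reconstructed from the two support-vector equalities above):

```latex
% Margin width from  w^T x^+ + b = +1  and  w^T x^- + b = -1
\begin{aligned}
w^\top (x^+ - x^-) &= 2, \\
M = (x^+ - x^-) \cdot \frac{w}{\lVert w \rVert} &= \frac{2}{\lVert w \rVert}.
\end{aligned}
```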

Large Margin Linear Classifier, formulation: maximize the margin 2/||w|| such that y_i(w^T x_i + b) ≥ 1 for every (x_i, y_i) in D. Remember: our objective is finding w and b such that the margin is maximized.

Large Margin Linear Classifier, equivalent formulation: minimize ||w|| such that y_i(w^T x_i + b) ≥ 1 for all i (maximizing 2/||w|| is the same as minimizing ||w||).

Large Margin Linear Classifier, equivalent formulation: minimize ½ w^T w = ½||w||² such that y_i(w^T x_i + b) ≥ 1 for all i; the squared form gives a quadratic objective that is easier to optimize.
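Putting the pieces together, the primal hard-margin problem reads (a standard reconstruction of the formula that did not survive on the slide):

```latex
% Hard-margin SVM, primal form
\begin{aligned}
\min_{w,\,b} \quad & \tfrac{1}{2}\,\lVert w \rVert^2 \\
\text{s.t.} \quad & y_i\,(w^\top x_i + b) \ge 1, \qquad i = 1,\dots,n.
\end{aligned}
```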

Matrix and vector multiplication (a reminder, in the n-dimensional space): the product w^T x is the scalar w1 x1 + w2 x2 + … + wn xn, which is all that is needed to evaluate the constraints above.

Solving the Optimization Problem: minimize ½||w||² subject to y_i(w^T x_i + b) ≥ 1, a quadratic programming problem with linear constraints. It is attacked through the Lagrangian function L(w, b, α) = ½||w||² − Σ_i α_i [y_i(w^T x_i + b) − 1], with Lagrange multipliers α_i ≥ 0.

Lagrangian. Given a function f(x) and a set of constraints c1..cn, a Lagrangian is a function L(f, c1,..,cn, α1,..,αn) that "incorporates" the constraints into the optimization problem. The optimum is at a point where the Karush-Kuhn-Tucker (KKT) conditions hold: (1) the partial derivatives of L with respect to the original variables are zero; (2) the multipliers satisfy α_i ≥ 0 and the constraints hold; (3) α_i times the i-th constraint term equals zero. The third condition is known as the complementarity condition.
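For the SVM Lagrangian introduced above, the KKT conditions take the following concrete form (a standard reconstruction, since the slide's formulas were lost):

```latex
% KKT conditions for  L(w,b,\alpha) = 1/2 ||w||^2 - \sum_i \alpha_i [ y_i (w^T x_i + b) - 1 ]
\begin{aligned}
\frac{\partial L}{\partial w} = 0 \;&\Rightarrow\; w = \sum_i \alpha_i y_i x_i, \\
\frac{\partial L}{\partial b} = 0 \;&\Rightarrow\; \sum_i \alpha_i y_i = 0, \\
\alpha_i \bigl[\, y_i (w^\top x_i + b) - 1 \,\bigr] &= 0, \qquad \alpha_i \ge 0 \quad \text{(complementarity)}.
\end{aligned}
```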

Solving the Optimization Problem: for the SVM, L(w, b, α) = ½ w^T w − Σ_i α_i [y_i(w^T x_i + b) − 1] with α_i ≥ 0. Minimizing L over w and b while maximizing over the multipliers α_i (the term highlighted in red on the original slide) is what produces the dual problem on the next slide.

Lagrangian Dual Problem. Setting ∂L/∂w = 0 and ∂L/∂b = 0 gives w = Σ_i α_i y_i x_i and Σ_i α_i y_i = 0; substituting these back into L yields the dual: maximize Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j subject to Σ_i α_i y_i = 0 and α_i ≥ 0. Each non-zero α_i indicates that the corresponding x_i is a support vector (SV), because of complementarity condition (3).

Why only SVs? The complementarity condition (the third KKT condition, see the previous definition of the Lagrangian) implies that one of the two factors, α_i or [y_i(w^T x_i + b) − 1], is zero. For a non-SV x_i we have y_i(w^T x_i + b) > 1, so α_i must be 0. Therefore only data points on the margin (the SVs) have a non-zero α_i, because they are the only ones for which y_i(w^T x_i + b) = 1.

Final Formulation. To summarize: find α_1 … α_n such that Q(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j is maximized, subject to (1) Σ_i α_i y_i = 0 and (2) α_i ≥ 0 for all i. Again, remember that constraint (2) holds with a strict inequality (α_i > 0) only for the SVs! Q(α) can be computed since we know the y_i, y_j and x_i^T x_j (they come from the pairs of the training set D).

The Optimization Problem Solution. Given a solution α_1 … α_n to the dual problem Q(α), the solution to the primal (i.e., w and b) is w = Σ_i α_i y_i x_i and b = y_k − Σ_i α_i y_i x_i^T x_k for any k with α_k > 0. Each non-zero α_i indicates that the corresponding x_i is a support vector. The classifying function for a new point x is then g(x) = Σ_i α_i y_i x_i^T x + b, where the sum runs over the SVs (note that we don't need w explicitly). Notice that it relies on an inner product x_i^T x between the test point x and the support vectors x_i: only the SVs are needed to classify new instances. Also keep in mind that solving the optimization problem involved computing the inner products x_i^T x_j between all training points.
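A small end-to-end numerical sketch of these formulas (not from the slides): it maximizes Q(α) with a generic constrained solver on the three-point example used on the next slides, then recovers w and b exactly as above. scipy's SLSQP routine stands in here for the specialized QP solvers that real SVM packages use.

```python
# Solve the hard-margin dual Q(alpha) = sum(alpha) - 1/2 sum_ij alpha_i alpha_j y_i y_j x_i^T x_j,
# then recover w and b from the alphas.
import numpy as np
from scipy.optimize import minimize

X = np.array([[-1.0, 0.0], [1.0, -1.0], [1.0, 1.0]])   # toy training points (the Example below)
y = np.array([-1.0, 1.0, 1.0])                          # labels in {+1, -1}

K = (y[:, None] * X) @ (y[:, None] * X).T               # matrix of y_i y_j x_i^T x_j

def neg_Q(alpha):                                        # minimize -Q(alpha)
    return -(alpha.sum() - 0.5 * alpha @ K @ alpha)

cons = {"type": "eq", "fun": lambda a: a @ y}            # sum_i alpha_i y_i = 0
bnds = [(0, None)] * len(y)                              # alpha_i >= 0
res = minimize(neg_Q, np.zeros(len(y)), method="SLSQP",
               bounds=bnds, constraints=[cons])

alpha = res.x
w = (alpha * y) @ X                                      # w = sum_i alpha_i y_i x_i
k = np.argmax(alpha)                                     # index of some alpha_k > 0 (a support vector)
b = y[k] - (alpha * y) @ (X @ X[k])                      # b = y_k - sum_i alpha_i y_i x_i^T x_k

print(w, b)                                              # approximately [1, 0] and 0
print(np.sign(X @ w + b))                                # reproduces the training labels
```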

Example: a training set with one negative point, (−1, 0), and two positive points, (1, −1) and (1, 1). Treating all three as support vectors gives the conditions (−1,0)·w + b = −1, (1,−1)·w + b = 1, (1,1)·w + b = 1.

Example (2): writing (−1,0)·w + b = −1, (1,−1)·w + b = 1 and (1,1)·w + b = 1 componentwise gives −w1 + b = −1, w1 − w2 + b = 1, w1 + w2 + b = 1. The solution is (w1, w2, b) = (1, 0, 0).
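A quick numerical check of this 3-by-3 linear system (a sketch; any linear solver will do):

```python
# Verify the Example (2) system: columns are the coefficients of (w1, w2, b).
import numpy as np

A = np.array([[-1.0,  0.0, 1.0],    # -w1       + b = -1
              [ 1.0, -1.0, 1.0],    #  w1 - w2  + b =  1
              [ 1.0,  1.0, 1.0]])   #  w1 + w2  + b =  1
rhs = np.array([-1.0, 1.0, 1.0])

w1, w2, b = np.linalg.solve(A, rhs)
print(w1, w2, b)                    # 1.0 0.0 0.0  ->  (w1, w2, b) = (1, 0, 0)
```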

Example (3): the resulting decision boundary is the vertical line x1 = 0, and the margin width is 2/||w|| = 2.

Working example 2: a 3-point training set. The maximum-margin weight vector w will be parallel to the shortest line connecting points of the two classes, that is, the line between (1,1) and (2,3) (the two support vectors).

Computing the margin for working example 2 (see the worked computation below).
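A worked computation, assuming (as in the figure) that (1,1) is the negative support vector and (2,3) the positive one:

```latex
% Assumption: (1,1) is the negative SV, (2,3) the positive SV.
% w is parallel to (2,3) - (1,1) = (1,2), so write w = a (1,2).
\begin{aligned}
w^\top (1,1)^\top + b = -1 \;&\Rightarrow\; 3a + b = -1, \\
w^\top (2,3)^\top + b = +1 \;&\Rightarrow\; 8a + b = +1, \\
&\Rightarrow\; a = \tfrac{2}{5},\quad w = \bigl(\tfrac{2}{5}, \tfrac{4}{5}\bigr),\quad b = -\tfrac{11}{5}, \\
\text{margin} = \frac{2}{\lVert w \rVert} &= \frac{2}{\tfrac{2}{5}\sqrt{5}} = \sqrt{5} \approx 2.24.
\end{aligned}
```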

Dataset with noise. Hard margin: so far we have required that all data points be classified correctly, i.e. no training error. What if the training set is noisy? Solution 1: use a more complex separator; this, however, leads to OVERFITTING!

Soft Margin Classification. A better solution: slack variables. Slack variables ξ_i can be added to allow misclassification of difficult or noisy examples; the resulting margin is called soft. The x_i with non-zero ξ_i are the misclassified (or margin-violating) examples.

Large Margin Linear Classifier, new formulation: minimize ½ w^T w + C Σ_i ξ_i such that y_i(w^T x_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0 for all i. Parameter C can be viewed as a way to control over-fitting.

Hard Margin vs. Soft Margin. The old formulation: find w and b such that Φ(w) = ½ w^T w is minimized and, for all (x_i, y_i), y_i(w^T x_i + b) ≥ 1. The new formulation incorporating slack variables: find w and b such that Φ(w) = ½ w^T w + C Σ ξ_i is minimized and, for all (x_i, y_i), y_i(w^T x_i + b) ≥ 1 − ξ_i with ξ_i ≥ 0 for all i. Parameter C can be viewed as a way to control overfitting.
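To see the role of C in practice, here is a hedged sketch using scikit-learn's SVC (a library the slides do not mention); the data is synthetic and the exact support-vector counts will vary:

```python
# Soft-margin SVM: small C tolerates margin violations, large C penalizes them.
# SVC with a linear kernel implements the formulation above (C multiplies the slack sum).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[-2, 0], scale=1.0, size=(50, 2)),
               rng.normal(loc=[+2, 0], scale=1.0, size=(50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(f"C={C:>6}: {clf.n_support_.sum()} support vectors, "
          f"train accuracy {clf.score(X, y):.2f}")
```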

Non-linear SVMs. Datasets that are linearly separable (possibly with some noise) work out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space, for instance from x on a line to (x, x²) in the plane?

Non-linear SVMs: Feature Space. General idea: the original input space can be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x).

The original objects (left side of the schematic) are mapped, i.e., rearranged, into the feature space φ(x) using a set of mathematical functions known as kernels.

Nonlinear SVMs: the Kernel Trick. With this mapping, our discriminant function is now g(x) = Σ_i α_i y_i φ(x_i)^T φ(x) + b (the original formulation was g(x) = Σ_i α_i y_i x_i^T x + b). There is no need to know the mapping explicitly, because we only use the dot product of feature vectors in both training and testing. A kernel function is defined as a function that corresponds to a dot product of two feature vectors in some expanded feature space: K(x_i, x_j) = φ(x_i)^T φ(x_j).

Nonlinear SVMs: the Kernel Trick. An example with 2-dimensional vectors x = [x1, x2]: let K(x_i, x_j) = (1 + x_i^T x_j)². We need to show that K(x_i, x_j) = φ(x_i)^T φ(x_j): (1 + x_i^T x_j)² = 1 + x_i1² x_j1² + 2 x_i1 x_j1 x_i2 x_j2 + x_i2² x_j2² + 2 x_i1 x_j1 + 2 x_i2 x_j2 = [1, x_i1², √2 x_i1 x_i2, x_i2², √2 x_i1, √2 x_i2]^T [1, x_j1², √2 x_j1 x_j2, x_j2², √2 x_j1, √2 x_j2] = φ(x_i)^T φ(x_j), where φ(x) = [1, x1², √2 x1 x2, x2², √2 x1, √2 x2].
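A quick numerical sanity check of this identity (a sketch; the two test vectors are arbitrary):

```python
# Check numerically that (1 + x^T z)^2 == phi(x)^T phi(z) for the phi above.
import numpy as np

def phi(v):
    x1, x2 = v
    return np.array([1.0, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

x = np.array([0.3, -1.2])
z = np.array([2.0,  0.5])

lhs = (1.0 + x @ z) ** 2          # kernel evaluated in the input space
rhs = phi(x) @ phi(z)             # explicit dot product in the feature space
print(np.isclose(lhs, rhs))       # True
```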

Nonlinear SVMs: the Kernel Trick. Examples of commonly used kernel functions: Linear kernel: K(x_i, x_j) = x_i^T x_j. Polynomial kernel of degree p: K(x_i, x_j) = (1 + x_i^T x_j)^p. Gaussian (Radial-Basis Function, RBF) kernel: K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²)). Sigmoid kernel: K(x_i, x_j) = tanh(β0 x_i^T x_j + β1). In general, functions that satisfy Mercer's condition can be kernel functions.
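The same four kernels written out as plain functions (a sketch; the parameter names p, sigma, beta0 and beta1 are ours, not from the slides):

```python
# Common kernel functions on two vectors x and z (1-D numpy arrays).
import numpy as np

def linear_kernel(x, z):
    return x @ z

def polynomial_kernel(x, z, p=2):
    return (1.0 + x @ z) ** p

def rbf_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, z, beta0=1.0, beta1=0.0):
    return np.tanh(beta0 * (x @ z) + beta1)
```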

Nonlinear SVM: Optimization. Formulation (Lagrangian dual problem): find α_1, α_2, …, α_n such that Q(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) is maximized, subject to Σ_i α_i y_i = 0 and α_i ≥ 0 (0 ≤ α_i ≤ C in the soft-margin case). The discriminant function is then g(x) = Σ_i α_i y_i K(x_i, x) + b. The optimization technique is the same.

Support Vector Machine: Algorithm 1. Choose a kernel function 2. Choose a value for C 3. Solve the quadratic programming problem (many software packages available) 4. Construct the discriminant function from the support vectors

Some Issues. Choice of kernel: the Gaussian or polynomial kernel is the default; if it proves ineffective, more elaborate kernels are needed; domain experts can give assistance in formulating appropriate similarity measures. Choice of kernel parameters: e.g. σ in the Gaussian kernel (one heuristic sets σ to the distance between the closest points with different classifications); in the absence of reliable criteria, applications rely on a validation set or cross-validation to set such parameters. Optimization criterion, hard margin vs. soft margin: typically a lengthy series of experiments in which various parameters are tested (see the parameter-search sketch below).
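One common way to run that parameter search (a sketch using scikit-learn, which the slides do not mention; the grid values and dataset are arbitrary):

```python
# Cross-validated search over C and the RBF kernel width (gamma plays the role of 1/(2*sigma^2)).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_, search.best_score_)
```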

Summary: Support Vector Machine. 1. Large margin classifier: better generalization ability and less over-fitting. 2. The kernel trick: map data points to a higher-dimensional space in order to make them linearly separable; since only the dot product is used, we do not need to represent the mapping explicitly.

Additional Resource

LibSVM, a widely used open-source implementation of SVMs (scikit-learn's SVC, used in the sketches above, is built on top of it).