Support Vector Machines


Support Vector Machines Modified from the slides by Dr. Andrew W. Moore http://www.cs.cmu.edu/~awm/tutorials Copyright © 2001, 2003, Andrew W. Moore Nov 23rd, 2001

Linear Classifiers  x → f(x,w,b) → y_est, with f(x,w,b) = sign(w·x - b). Each datapoint is labelled +1 or -1 (the two marker types in the figure). How would you classify this data? Copyright © 2001, 2003, Andrew W. Moore
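To make the decision rule on these slides concrete, here is a minimal NumPy sketch (my addition, with placeholder weights rather than learned ones) of a classifier of the form f(x,w,b) = sign(w·x - b):

```python
# Minimal sketch of the linear decision rule f(x, w, b) = sign(w . x - b).
# The weight vector and bias are arbitrary placeholders, not learned values.
import numpy as np

def linear_classify(X, w, b):
    """Return +1 / -1 labels for each row of X under f(x) = sign(w . x - b)."""
    scores = X @ w - b
    return np.where(scores >= 0, 1, -1)

X = np.array([[2.0, 1.0], [-1.0, -3.0]])   # two example points
w = np.array([1.0, 0.5])                   # placeholder weights
print(linear_classify(X, w, b=0.0))        # e.g. [ 1 -1 ]
```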

Linear Classifiers  f(x,w,b) = sign(w·x - b). Any of these separating lines would be fine... but which is best? Copyright © 2001, 2003, Andrew W. Moore

Classifier Margin  f(x,w,b) = sign(w·x - b). Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint. Copyright © 2001, 2003, Andrew W. Moore

Maximum Margin  f(x,w,b) = sign(w·x - b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM): the Linear SVM. Copyright © 2001, 2003, Andrew W. Moore

Maximum Margin  f(x,w,b) = sign(w·x - b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (called an LSVM): the Linear SVM. Support vectors are the datapoints that the margin pushes up against. Copyright © 2001, 2003, Andrew W. Moore

Why Maximum Margin?
- Intuitively this feels safest: if we've made a small error in the location of the boundary (it's been jolted in its perpendicular direction), this gives us the least chance of causing a misclassification.
- LOOCV is easy, since the model is immune to removal of any non-support-vector datapoints.
- There's some theory (using VC dimension) that is related to (but not the same as) the proposition that this is a good thing.
- Empirically it works very very well.
Copyright © 2001, 2003, Andrew W. Moore

Estimate the Margin  The boundary is the line w·x + b = 0. What is the distance expression for a point x to the line w·x + b = 0? Copyright © 2001, 2003, Andrew W. Moore

Estimate the Margin  Given the boundary w·x + b = 0, what is the expression for the margin (see below)? Copyright © 2001, 2003, Andrew W. Moore
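The two expressions these slides ask for were equation images that did not survive the transcript; the standard answers are the point-to-hyperplane distance and, once the closest points are rescaled so that |w·x + b| = 1, the margin width:

$$ d(\mathbf{x}) = \frac{|\mathbf{w}\cdot\mathbf{x} + b|}{\|\mathbf{w}\|}, \qquad \text{margin} = \frac{2}{\|\mathbf{w}\|}. $$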

Maximize Margin  (figure: the boundary w·x + b = 0 and its margin) Copyright © 2001, 2003, Andrew W. Moore

Maximize Margin  Maximizing the margin around the boundary w·x + b = 0 is a min-max problem, i.e. a game problem. Copyright © 2001, 2003, Andrew W. Moore

Maximize Margin  Strategy: fix the scale of (w, b) so that the closest points satisfy y_k(w·x_k + b) = 1; maximizing the margin 2/||w|| then becomes minimizing ||w||² (formalized on the next slide). Copyright © 2001, 2003, Andrew W. Moore

Maximum Margin Linear Classifier How to solve it? Copyright © 2001, 2003, Andrew W. Moore
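The formulation this slide refers to was an equation image; the standard maximum-margin (hard-margin) problem it is asking us to solve is the quadratic program

$$ \min_{\mathbf{w},\,b}\ \tfrac{1}{2}\|\mathbf{w}\|^{2} \quad\text{subject to}\quad y_k\,(\mathbf{w}\cdot\mathbf{x}_k + b) \ge 1, \qquad k = 1,\dots,R. $$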

Learning via Quadratic Programming  QP is a well-studied class of optimization problems: maximize a quadratic function of some real-valued variables subject to linear constraints. Copyright © 2001, 2003, Andrew W. Moore

Quadratic Programming  Quadratic criterion: find arg max over u of c + dᵀu + ½ uᵀRu, subject to n additional linear inequality constraints (Au ≤ b) and subject to e additional linear equality constraints. Copyright © 2001, 2003, Andrew W. Moore

Quadratic Programming  Quadratic criterion: find arg max over u of c + dᵀu + ½ uᵀRu, subject to n additional linear inequality constraints and e additional linear equality constraints. There exist algorithms for finding such constrained quadratic optima much more efficiently and reliably than gradient ascent. (But they are very fiddly... you probably don't want to write one yourself.) Copyright © 2001, 2003, Andrew W. Moore

Quadratic Programming Copyright © 2001, 2003, Andrew W. Moore
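To make the "you probably don't want to write one yourself" point concrete, here is a minimal sketch (my addition, not code from the slides) that hands the hard-margin primal QP to the off-the-shelf solver cvxopt. It assumes linearly separable data, labels in {-1, +1}, and the w·x + b = 0 convention used on the surrounding slides.

```python
# Hard-margin linear SVM via an off-the-shelf QP solver (illustrative sketch only).
# Decision variables u = [w_1, ..., w_m, b]; objective (1/2)||w||^2;
# constraints y_i (w . x_i + b) >= 1 rewritten as  -y_i [x_i, 1] u <= -1.
import numpy as np
from cvxopt import matrix, solvers

def hard_margin_svm(X, y):
    """X: (R, m) array, y: (R,) array with entries in {-1, +1}. Assumes separable data."""
    R, m = X.shape
    P = np.zeros((m + 1, m + 1))
    P[:m, :m] = np.eye(m)                      # quadratic term acts on w only, not on b
    q = np.zeros(m + 1)
    G = -y[:, None] * np.hstack([X, np.ones((R, 1))])
    h = -np.ones(R)
    sol = solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h))
    u = np.array(sol["x"]).ravel()
    return u[:m], u[m]                         # w, b

# usage: w, b = hard_margin_svm(X, y); labels = np.sign(X_new @ w + b)
```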

Uh-oh! The data are not linearly separable. This is going to be a problem! What should we do? Copyright © 2001, 2003, Andrew W. Moore

Uh-oh! This is going to be a problem! What should we do? Idea 1: find the minimum w·w while minimizing the number of training set errors. Problemette: two things to minimize makes for an ill-defined optimization. Copyright © 2001, 2003, Andrew W. Moore

Uh-oh! This is going to be a problem! What should we do? Idea 1.1: minimize w·w + C·(#train errors), where C is a tradeoff parameter. There's a serious practical problem that's about to make us reject this approach. Can you guess what it is? Copyright © 2001, 2003, Andrew W. Moore

Uh-oh! Idea 1.1: minimize w·w + C·(#train errors), where C is a tradeoff parameter. The problem: this can't be expressed as a Quadratic Programming problem, so solving it may be too slow. (Also, it doesn't distinguish between disastrous errors and near misses.) So… any other ideas? Copyright © 2001, 2003, Andrew W. Moore

Uh-oh! What should we do? Idea 2.0: minimize w·w + C·(distance of error points to their correct place). Copyright © 2001, 2003, Andrew W. Moore

Support Vector Machine (SVM) for Noisy Data  Any problem with the above formulation? Copyright © 2001, 2003, Andrew W. Moore

Support Vector Machine (SVM) for Noisy Data  Balance the trade-off between the margin and the classification errors (the soft-margin problem reconstructed below). Copyright © 2001, 2003, Andrew W. Moore
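The formulation on this slide was an equation image; the standard soft-margin SVM it describes adds a slack variable ε_k per datapoint and trades margin width against classification errors:

$$ \min_{\mathbf{w},\,b,\,\boldsymbol{\varepsilon}}\ \tfrac{1}{2}\|\mathbf{w}\|^{2} + C\sum_{k=1}^{R}\varepsilon_k \quad\text{subject to}\quad y_k(\mathbf{w}\cdot\mathbf{x}_k + b) \ge 1 - \varepsilon_k,\ \ \varepsilon_k \ge 0. $$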

Support Vector Machine for Noisy Data  How do we determine the appropriate value for C? Copyright © 2001, 2003, Andrew W. Moore
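The slides leave the choice of C open; in practice it is usually chosen by cross-validation. A small scikit-learn sketch (my addition; the synthetic dataset is just a placeholder):

```python
# Choose the soft-margin tradeoff C by 5-fold cross-validation (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)  # placeholder data
search = GridSearchCV(SVC(kernel="linear"), {"C": [0.01, 0.1, 1, 10, 100]}, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```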

The Dual Form of QP  Maximize the dual objective (reconstructed below) subject to its constraints; then define w and b, and classify with f(x,w,b) = sign(w·x - b). Copyright © 2001, 2003, Andrew W. Moore
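The dual objective and constraints shown on this slide were equation images; the standard dual of the (soft-margin) linear SVM is

$$ \max_{\boldsymbol{\alpha}}\ \sum_{k=1}^{R}\alpha_k \;-\; \tfrac{1}{2}\sum_{k=1}^{R}\sum_{l=1}^{R} \alpha_k\alpha_l\, y_k y_l\, (\mathbf{x}_k\cdot\mathbf{x}_l) \quad\text{subject to}\quad 0 \le \alpha_k \le C,\ \ \sum_{k=1}^{R}\alpha_k y_k = 0, $$

after which w = Σ_k α_k y_k x_k. (For the hard-margin case the upper bound C disappears.)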

The Dual Form of QP  Maximize the dual objective subject to its constraints; then define w and b. Copyright © 2001, 2003, Andrew W. Moore

An Equivalent QP  Maximize the dual objective subject to its constraints, then define w and b. Datapoints with αk > 0 will be the support vectors... so this sum only needs to be over the support vectors. Copyright © 2001, 2003, Andrew W. Moore

Support Vectors  αi = 0 for non-support vectors; αi > 0 for support vectors. The decision boundary is determined only by those support vectors! Copyright © 2001, 2003, Andrew W. Moore

The Dual Form of QP  Maximize the dual objective subject to its constraints; then define w, and classify with f(x,w,b) = sign(w·x - b). How do we determine b? Copyright © 2001, 2003, Andrew W. Moore

An Equivalent QP: Determining b  Fix w; finding b then becomes a linear programming problem! Copyright © 2001, 2003, Andrew W. Moore
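One common way to recover b once w is fixed (my note; not necessarily the exact linear program this slide has in mind) uses any support vector sitting exactly on the margin, i.e. any k with 0 < α_k < C, for which y_k(w·x_k + b) = 1:

$$ b = y_k - \mathbf{w}\cdot\mathbf{x}_k, $$

typically averaged over all such support vectors for numerical stability.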

An Equivalent QP  Why did I tell you about this equivalent QP? (1) It's a formulation that QP packages can optimize more quickly. (2) Because of further jaw-dropping developments you're about to learn. Datapoints with αk > 0 will be the support vectors, so the sum defining w only needs to be over the support vectors; then classify with f(x,w,b) = sign(w·x - b). Copyright © 2001, 2003, Andrew W. Moore

Suppose we're in 1 dimension. What would SVMs do with this data? (figure: labelled points along the x-axis) Copyright © 2001, 2003, Andrew W. Moore

Suppose we're in 1 dimension. Not a big surprise: (figure: the positive "plane" and negative "plane" bracketing the maximum-margin boundary) Copyright © 2001, 2003, Andrew W. Moore

Harder 1-dimensional dataset  That's wiped the smirk off SVM's face. What can be done about this? Copyright © 2001, 2003, Andrew W. Moore

Harder 1-dimensional dataset Remember how permitting non-linear basis functions made linear regression so much nicer? Let’s permit them here too x=0 Copyright © 2001, 2003, Andrew W. Moore

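A toy sketch of the idea on this slide (my addition, with made-up data): lift each 1-D point x into the basis z = (x, x²); the previously non-separable points become linearly separable in the lifted space.

```python
# Lift 1-D points into (x, x^2) so a linear SVM can separate inner from outer points.
import numpy as np
from sklearn.svm import SVC

x = np.array([-3.0, -2.0, 2.0, 3.0, -0.5, 0.0, 0.5])   # toy data: outer vs inner points
y = np.array([  1,    1,   1,   1,   -1,  -1,  -1])
Z = np.column_stack([x, x ** 2])                        # basis expansion z_k = (x_k, x_k^2)

clf = SVC(kernel="linear").fit(Z, y)
x_new = np.array([1.0, -2.5])
print(clf.predict(np.column_stack([x_new, x_new ** 2])))  # expected: [-1  1]
```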

Common SVM basis functions
- zk = (polynomial terms of xk of degree 1 to q)
- zk = (radial basis functions of xk)
- zk = (sigmoid functions of xk)
This is sensible. Is that the end of the story? No… there's one more trick!
Copyright © 2001, 2003, Andrew W. Moore

Quadratic Basis Functions  Φ(x) contains: the constant term 1, the linear terms √2·x1, ..., √2·xm, the pure quadratic terms x1², ..., xm², and the quadratic cross-terms √2·x1x2, ..., √2·x(m-1)xm. The number of terms (assuming m input dimensions) is (m+2)-choose-2 = (m+2)(m+1)/2 = (as near as makes no difference) m²/2. You may be wondering what those √2's are doing. You should be happy that they do no harm; you'll find out why they're there soon. Copyright © 2001, 2003, Andrew W. Moore

QP (old)  Maximize the dual objective subject to its constraints; then define w and b, and classify with f(x,w,b) = sign(w·x - b). Copyright © 2001, 2003, Andrew W. Moore

QP with basis functions  Most important change: x → Φ(x), i.e. every dot product xk·xl becomes Φ(xk)·Φ(xl). Maximize the dual objective subject to its constraints; then define w, and classify with f(x,w,b) = sign(w·Φ(x) - b). Copyright © 2001, 2003, Andrew W. Moore

QP with basis functions  We must do R²/2 dot products to get this matrix ready. Each dot product now requires m²/2 additions and multiplications, so the whole thing costs R²m²/4. Yeeks! …or does it? Then classify with f(x,w,b) = sign(w·Φ(x) - b). Copyright © 2001, 2003, Andrew W. Moore

Quadratic Dot Products  Expanding term by term, Φ(a)·Φ(b) = 1 + 2·Σi ai·bi + Σi ai²·bi² + 2·Σi<j ai·aj·bi·bj. Copyright © 2001, 2003, Andrew W. Moore

Quadratic Dot Products  Just out of casual, innocent interest, let's look at another function of a and b: (a·b + 1)². Copyright © 2001, 2003, Andrew W. Moore

Quadratic Dot Products  Expanding (a·b + 1)² = (a·b)² + 2(a·b) + 1 gives exactly the expression above: they're the same! And (a·b + 1)² is only O(m) to compute! Copyright © 2001, 2003, Andrew W. Moore
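A quick numerical sanity check of this claim (my addition): build the quadratic basis Φ explicitly, with the √2 factors from the earlier slide, and compare Φ(a)·Φ(b) with (a·b + 1)².

```python
# Verify that Phi(a) . Phi(b) == (a . b + 1)^2 for the quadratic basis with sqrt(2) factors.
import numpy as np
from itertools import combinations

def phi(x):
    cross = [np.sqrt(2) * x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([[1.0], np.sqrt(2) * x, x ** 2, cross])

rng = np.random.default_rng(0)
a, b = rng.standard_normal(5), rng.standard_normal(5)
print(np.allclose(phi(a) @ phi(b), (a @ b + 1) ** 2))   # True
```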

QP with Quintic basis functions  Maximize the dual objective subject to its constraints; then define w, and classify with f(x,w,b) = sign(w·Φ(x) - b). Copyright © 2001, 2003, Andrew W. Moore

QP with Quadratic basis functions  Most important change: the dot products are replaced by a kernel, K(a,b) = (a·b + 1)² for the quadratic basis. Maximize the dual objective subject to its constraints; then define w, and classify with f(x,w,b) = sign(K(w,x) - b). Copyright © 2001, 2003, Andrew W. Moore

Higher Order Polynomials

| Polynomial | Φ(x) | Cost to build Qkl matrix traditionally | Cost if 100 inputs | Φ(a)·Φ(b) | Cost to build Qkl matrix sneakily | Cost if 100 inputs |
|---|---|---|---|---|---|---|
| Quadratic | All m²/2 terms up to degree 2 | m²R²/4 | 2,500 R² | (a·b+1)² | mR²/2 | 50 R² |
| Cubic | All m³/6 terms up to degree 3 | m³R²/12 | 83,000 R² | (a·b+1)³ | mR²/2 | 50 R² |
| Quartic | All m⁴/24 terms up to degree 4 | m⁴R²/48 | 1,960,000 R² | (a·b+1)⁴ | mR²/2 | 50 R² |

Copyright © 2001, 2003, Andrew W. Moore

SVM Kernel Functions  K(a,b) = (a·b + 1)^d is an example of an SVM kernel function. Beyond polynomials there are other very high dimensional basis functions that can be made practical by finding the right kernel function, e.g. a Radial-Basis-style kernel function and a Neural-net-style kernel function (see below). Copyright © 2001, 2003, Andrew W. Moore
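The two kernel formulas on this slide were images; their usual forms are given below (the parameters σ, κ, δ are the conventional ones, so treat the exact parameterization as indicative rather than as the slide's own notation):

$$ K_{\text{RBF}}(\mathbf{a},\mathbf{b}) = \exp\!\left(-\frac{\|\mathbf{a}-\mathbf{b}\|^{2}}{2\sigma^{2}}\right), \qquad K_{\text{NN}}(\mathbf{a},\mathbf{b}) = \tanh\!\big(\kappa\,\mathbf{a}\cdot\mathbf{b} - \delta\big). $$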

Kernel Tricks  Replacing the dot product with a kernel function. Not all functions are kernel functions: they need to be decomposable as K(a,b) = Φ(a)·Φ(b). Could K(a,b) = (a-b)³ be a kernel function? Could K(a,b) = (a-b)⁴ - (a+b)² be a kernel function? Copyright © 2001, 2003, Andrew W. Moore

Kernel Tricks  Mercer's condition: to expand a kernel function K(x,y) into a dot product, i.e. K(x,y) = Φ(x)·Φ(y), K(x,y) has to be a positive semi-definite function, i.e. for any function f(x) whose ∫f(x)² dx is finite, the inequality ∫∫ f(x) K(x,y) f(y) dx dy ≥ 0 holds. Could the functions from the previous slide be kernel functions? Copyright © 2001, 2003, Andrew W. Moore

Kernel Tricks  Pro: introduces nonlinearity into the model; computationally cheap. Con: still has potential overfitting problems. Copyright © 2001, 2003, Andrew W. Moore

Nonlinear Kernel (I) Copyright © 2001, 2003, Andrew W. Moore

Nonlinear Kernel (II) Copyright © 2001, 2003, Andrew W. Moore

Kernelize Logistic Regression  How can we introduce nonlinearity into logistic regression? Copyright © 2001, 2003, Andrew W. Moore

Kernelize Logistic Regression  The representer theorem. Copyright © 2001, 2003, Andrew W. Moore
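A one-line statement of the idea (my paraphrase of the representer theorem, not the slide's own equations): the optimal weight vector lies in the span of the training points, so the linear score, and hence the logistic model, can be written purely in terms of kernel evaluations:

$$ \mathbf{w} = \sum_{i=1}^{R}\alpha_i\,\phi(\mathbf{x}_i) \;\;\Longrightarrow\;\; p(y\mid\mathbf{x}) = \frac{1}{1+\exp\!\big(-y\,(\sum_{i}\alpha_i K(\mathbf{x}_i,\mathbf{x}) + b)\big)}. $$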

Overfitting in SVM  (figure: training error and testing error curves) Copyright © 2001, 2003, Andrew W. Moore

SVM Performance Anecdotally they work very very well indeed. Example: They are currently the best-known classifier on a well-studied hand-written-character recognition benchmark Another Example: Andrew knows several reliable people doing practical real-world work who claim that SVMs have saved them when their other favorite classifiers did poorly. There is a lot of excitement and religious fervor about SVMs as of 2001. Despite this, some practitioners are a little skeptical. Copyright © 2001, 2003, Andrew W. Moore

Diffusion Kernel  A kernel function describes the correlation or similarity between two data points. Suppose we have a function s(x,y) that describes the similarity between two data points; assume it is non-negative and symmetric. How can we generate a kernel function based on this similarity function? A graph theory approach… Copyright © 2001, 2003, Andrew W. Moore

Diffusion Kernel  Create a graph for the data points: each vertex corresponds to a data point, and the weight of each edge is the similarity s(x,y). From this graph, build the graph Laplacian. Property of the Laplacian: it is negative semi-definite. Copyright © 2001, 2003, Andrew W. Moore
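One common convention for the graph Laplacian built from a similarity s (my reconstruction; the slide's own definition was an image) is

$$ L_{ij} = \begin{cases} s(\mathbf{x}_i,\mathbf{x}_j) & i \ne j,\\[2pt] -\sum_{k\ne i} s(\mathbf{x}_i,\mathbf{x}_k) & i = j, \end{cases} $$

which is negative semi-definite, matching the property stated above.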

Diffusion Kernel  Consider a simple Laplacian. What do these matrices represent? A diffusion kernel. Copyright © 2001, 2003, Andrew W. Moore


Diffusion Kernel: Properties  Positive definite. Local relationships in L induce global relationships. Works for undirected weighted graphs with similarities. How do we compute the diffusion kernel? Copyright © 2001, 2003, Andrew W. Moore

Computing Diffusion Kernel  Singular value decomposition of the Laplacian L. What is L²? Copyright © 2001, 2003, Andrew W. Moore

Computing Diffusion Kernel  What about Lⁿ? Then compute the diffusion kernel. Copyright © 2001, 2003, Andrew W. Moore
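Putting the last two slides together (a reconstruction of the elided equations): if L = UΛUᵀ is the decomposition of the symmetric Laplacian, every power of L shares the same eigenvectors, and the diffusion kernel is the matrix exponential

$$ L^{n} = U\,\Lambda^{n}\,U^{\top}, \qquad K = e^{\beta L} = \lim_{n\to\infty}\Big(I + \tfrac{\beta L}{n}\Big)^{n} = U\,e^{\beta\Lambda}\,U^{\top}, $$

whose eigenvalues e^{βλ_i} are strictly positive, hence the positive-definiteness noted earlier.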

Doing multi-class classification  SVMs can only handle two-class outputs (i.e. a categorical output variable with arity 2). What can be done? Answer: with output arity N, learn N SVMs:
- SVM 1 learns "Output == 1" vs "Output != 1"
- SVM 2 learns "Output == 2" vs "Output != 2"
- ...
- SVM N learns "Output == N" vs "Output != N"
Then to predict the output for a new input, just predict with each SVM and find out which one puts the prediction the furthest into the positive region (a sketch follows below).
Copyright © 2001, 2003, Andrew W. Moore
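A minimal sketch of exactly this one-vs-rest recipe (my addition; the iris data is only a stand-in for an N-class problem):

```python
# One SVM per class ("Output == c" vs "Output != c"); predict with the SVM whose
# decision value pushes the input furthest into its positive region.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                                  # placeholder 3-class data
models = {c: SVC(kernel="linear").fit(X, np.where(y == c, 1, -1))  # one binary SVM per class
          for c in np.unique(y)}

def predict(x_new):
    scores = {c: m.decision_function(x_new.reshape(1, -1))[0] for c, m in models.items()}
    return max(scores, key=scores.get)

print(predict(X[0]))   # expected: 0, the true class of the first sample
```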

Ranking Problem  Consider the problem of ranking essays, with three ranking categories: good, OK, bad. Given an input document, predict its ranking category. How should we formulate this problem? A simple multi-class solution: treat each ranking category as an independent class. But there is something missing here… we lose the ordinal relationship between the classes! Copyright © 2001, 2003, Andrew W. Moore

Ordinal Regression  'good', 'OK', and 'bad' ordered along a direction w. Which choice of w is better? How could we formulate this problem? Copyright © 2001, 2003, Andrew W. Moore

Ordinal Regression  What are the two decision boundaries? What is the margin for ordinal regression? Maximize the margin. Copyright © 2001, 2003, Andrew W. Moore

Ordinal Regression How do we solve this monster ? Copyright © 2001, 2003, Andrew W. Moore

Ordinal Regression The same old trick To remove the scaling invariance, set Now the problem is simplified as: Copyright © 2001, 2003, Andrew W. Moore

Ordinal Regression  The noisy case. Is this sufficient? Copyright © 2001, 2003, Andrew W. Moore

Ordinal Regression  (figure: 'good', 'OK', and 'bad' regions along the projection direction w) Copyright © 2001, 2003, Andrew W. Moore

References
- An excellent tutorial on VC-dimension and Support Vector Machines: C. J. C. Burges. "A tutorial on support vector machines for pattern recognition." Data Mining and Knowledge Discovery, 2(2):121-167, 1998. http://citeseer.nj.nec.com/burges98tutorial.html
- The VC/SRM/SVM Bible (not for beginners, including myself): Vladimir Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
- Software: SVM-light, http://svmlight.joachims.org/, free download.
Copyright © 2001, 2003, Andrew W. Moore