Support Vector Machines


Support Vector Machines Piyush Kumar

Perceptrons revisited. Two classes, Class 1 labeled (+1) and Class 2 labeled (-1), separated by a hyperplane. Is that separating hyperplane unique?

Which one is the best? A perceptron can output any one of many separating hyperplanes.

Perceptrons: what went wrong? Slow convergence; can overfit; can't do complicated functions easily; theoretical guarantees are not as strong.

Perceptron: the first NN. Proposed by Frank Rosenblatt in 1957. Neural-net researchers accused Rosenblatt of promising 'too much'. Numerous variants exist, and it also helps to study LP. One of the simplest neural networks.

From Perceptrons to SVMs: margins, linearization, kernels, core-sets and support vectors, and solvers as simple as perceptrons!

Support Vector Machines Margin

Classification margin. The distance from an example x_i to the separator is r = |w · x_i + b| / ||w||. The examples closest to the hyperplane are the support vectors. The margin ρ of the separator is the width of separation between the classes.
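As a concrete illustration (my own sketch, not from the slides), the distance and margin can be computed for a given w and b, assuming the data sits in a NumPy array X with one example per row:

    import numpy as np

    def distances_to_hyperplane(X, w, b):
        # Geometric distance of each row of X to the hyperplane w.x + b = 0.
        return np.abs(X @ w + b) / np.linalg.norm(w)

    def margin(X, w, b):
        # Width of separation: twice the distance of the closest examples
        # (the support vectors) to the separator.
        return 2 * distances_to_hyperplane(X, w, b).min()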

Support Vector Machines Maximizing the margin is good according to intuition and PAC theory.

Support Vector Machines: this implies that only the support vectors are important; the other training examples are ignorable. It leads to simple classifiers, and hence better ones? (Here, simple = large margin.)

Let's start some math. We have N samples (x_j, y_j), where the y_j = +/- 1 are the labels. Can we find a hyperplane that separates the two classes (as labeled by y)? That is: w · x_j + b > 0 for all j such that y_j = +1, and w · x_j + b < 0 for all j such that y_j = -1.
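Equivalently (a standard restatement that is not written out in the transcript), the two sign conditions combine into a single inequality:

    \exists\, w, b \quad \text{such that} \quad y_j \,(w \cdot x_j + b) > 0 \quad \text{for all } j = 1, \dots, N.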

Further assumption 1 (which we will relax later!): let's assume that the hyperplane we are looking for passes through the origin.

Further assumption 2 (relax, now!): let's assume that we are looking for a halfspace that contains a set of points. The two-class problem reduces to this by replacing each x_j with y_j x_j, i.e. mirroring the negative class through the origin.

Let's relax FA 1 now. "Homogenize" the coordinates by adding a new coordinate to the input. Think of it as lifting all the red and blue points into one higher dimension: from 2D to 3D it is just the x-y plane shifted to z = 1. This takes care of the "bias", i.e. our assumption that the halfspace can pass through the origin.
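A minimal sketch of this homogenization step (my own illustration, assuming a NumPy matrix X with one sample per row):

    import numpy as np

    def homogenize(X):
        # Append a constant 1 coordinate to every sample, so that a hyperplane
        # w.x + b = 0 in the original space becomes [w, b].x' = 0 through the origin.
        ones = np.ones((X.shape[0], 1))
        return np.hstack([X, ones])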

Further assumption 3 (relax now!): assume all points lie on the unit sphere! If they do not after applying the transformations for FA 1 and FA 2, normalize them so that they do.

What did we want? Maximize the margin. What does it mean in the new space?

What's the new optimization problem? Maximize ρ subject to x_i · w >= ρ for all i (note that we have gotten rid of the y's by mirroring around the origin). Here w is a unit vector: ||w|| = 1.

Same problem: min 1/ρ subject to x_i · ((1/ρ)w) >= 1. Let v = (1/ρ) w. Then the constraint becomes x_i · v >= 1, and the objective min 1/ρ = min ||(1/ρ) w|| = min ||v||, which has the same minimizer as min ||v||^2.

New formulation: min ||v||^2 subject to v · x_i >= 1 for all i. Using MATLAB (or any QP solver), this is a piece of cake to solve. The decision rule is sign(v · x). Only for the support vectors does v · x_i = 1 hold with equality.
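A minimal sketch of this QP in Python with cvxpy rather than MATLAB (my own illustration; X is assumed to already be the homogenized, label-mirrored data matrix):

    import numpy as np
    import cvxpy as cp

    def hard_margin_svm(X):
        # Solve min ||v||^2 subject to X v >= 1 (elementwise).
        n, d = X.shape
        v = cp.Variable(d)
        problem = cp.Problem(cp.Minimize(cp.sum_squares(v)), [X @ v >= 1])
        problem.solve()
        return v.value  # decision rule: sign(x . v)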

Support Vector Machines are linear learning machines, like perceptrons. They map the data non-linearly to a higher dimension to overcome the linearity constraint, and they select between hyperplanes using the margin as the criterion (this is what perceptrons don't do). From learning theory, maximum margin is good.

Another reformulation (we will revisit this soon): unlike perceptrons, SVMs have a unique solution, but the resulting quadratic program (QP) is harder to solve.

Support Vector Machines: there are very simple algorithms to solve SVMs (as simple as perceptrons). If you are interested in learning those, come and talk to me.

Another twist: linearization. If the data is separable by, say, a sphere, how would you use an SVM to separate it? (What about ellipsoids?)

Linearization, a.k.a. feature expansion (Delaunay!??). Lift the points to a paraboloid in one higher dimension: for instance, if the data is in 2D, (x, y) -> (x, y, x^2 + y^2).

Linearization: note that when we replace x by φ(x), the decision boundary changes from w · x = 0 to w · φ(x) = 0. This gives us non-linear separators in the original space, compared to linear separators, whenever φ is non-linear (as in the last example). Another feature expansion example: (x, y) -> (x^2, xy, y^2, x, y). What kind of separators does that give?
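A minimal sketch of both feature maps (my own illustration, not from the deck):

    import numpy as np

    def lift_paraboloid(X):
        # (x, y) -> (x, y, x^2 + y^2): spheres in 2D become hyperplane sections in 3D.
        return np.hstack([X, (X ** 2).sum(axis=1, keepdims=True)])

    def lift_quadratic(X):
        # (x, y) -> (x^2, xy, y^2, x, y): a linear separator here is a conic in 2D.
        x, y = X[:, 0], X[:, 1]
        return np.column_stack([x**2, x*y, y**2, x, y])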

Linearization: the more features, the more power, but there is a danger of overfitting. When there are a lot of features (sometimes even infinitely many), we can use the "kernel trick" to solve the optimization problem faster. Let's look back at the optimization for a moment…

Lagrange Multipliers

Lagrangian function

At optimum

More precisely

The optimization Problem Revisited

Removing v

Support vectors: v is a linear combination of 'some' of the examples, the support vectors. More likely than not, if we see too many support vectors, we are overfitting; simple and short classifiers are preferable.

Substitution

Gram Matrix
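The formulas on the last few slides were images that did not survive the transcript; a hedged reconstruction of the standard derivation for this formulation (minimizing (1/2)||v||^2, which is equivalent to minimizing ||v||^2, subject to v · x_i >= 1) is:

    L(v, \alpha) = \tfrac{1}{2}\|v\|^2 - \sum_i \alpha_i \,(v \cdot x_i - 1), \qquad \alpha_i \ge 0

    \frac{\partial L}{\partial v} = 0 \;\Rightarrow\; v = \sum_i \alpha_i x_i

    \max_{\alpha \ge 0} \;\; \sum_i \alpha_i - \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j \,(x_i \cdot x_j) \;=\; \max_{\alpha \ge 0} \;\; \mathbf{1}^\top \alpha - \tfrac{1}{2}\, \alpha^\top G\, \alpha, \qquad G_{ij} = x_i \cdot x_j

Substituting v = Σ_i α_i x_i back also recovers the decision surface purely in terms of dot products.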

The decision surface Recovered

What is the Gram matrix reduction good for? The kernel trick: even if the number of features is infinite, G is still only N x N, and hence the optimization problem remains solvable. We can sometimes compute G without ever computing the feature vectors explicitly, by redefining the dot product in the feature space.

Recall

The kernel matrix. The trick the ML community uses for linearization is a kernel function that redefines inner products (and hence distances) between points. The optimization problem no longer needs φ to be explicitly evaluated; as long as we can compute K(x_i, x_j) = φ(x_i) · φ(x_j) for any two mapped points, that is enough.
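As an illustration (my own, not from the slides), the Gaussian (RBF) kernel computes φ(x_i) · φ(x_j) for an infinite-dimensional φ without ever forming it:

    import numpy as np

    def rbf_kernel_matrix(X, gamma=1.0):
        # K[i, j] = exp(-gamma * ||x_i - x_j||^2): an N x N Gram matrix
        # for an implicit infinite-dimensional feature map.
        sq_norms = (X ** 2).sum(axis=1)
        sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
        return np.exp(-gamma * sq_dists)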

Example Kernels
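The kernel formulas on this slide did not survive the transcript; the standard examples are presumably of this form (linear, polynomial, Gaussian/RBF):

    K_{\text{linear}}(x, z) = x \cdot z, \qquad
    K_{\text{poly}}(x, z) = (x \cdot z + c)^d, \qquad
    K_{\text{rbf}}(x, z) = \exp\!\big(-\gamma \|x - z\|^2\big)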

The decision Surface?

A demo using libsvm. Some implementations of SVMs: libsvm, svmlight, svmtorch.
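A minimal sketch of such a demo (my own, using scikit-learn's SVC, which wraps libsvm, instead of the libsvm command-line tools; the toy data is hypothetical):

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical toy data: two Gaussian blobs, labeled -1 and +1.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.array([-1] * 50 + [+1] * 50)

    clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)
    print("number of support vectors:", clf.support_vectors_.shape[0])
    print("prediction at the origin:", clf.predict([[0.0, 0.0]]))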

Checkerboard Dataset

k-Nearest Neighbor Algorithm

LSVM on Checkerboard

Conclusions. SVMs are a step towards improving perceptrons: they use a large margin for good generalization. To make large feature expansions affordable, we can use the Gram-matrix formulation of the optimization problem (or use kernels). SVMs are popular classifiers because they achieve good accuracy on real-world data.

Geometric solvers for SVMs: the Frank-Wolfe algorithm, a.k.a. Gilbert's algorithm.

Recall the Minkowski sum, the sweeping of one convex object with another, defined as: Minkowski Sum(A, B) = { a + b | a ∈ A, b ∈ B }.

Minkowski difference, defined as: Minkowski Difference(A, B) = { a - b | a ∈ A, b ∈ B } = Minkowski Sum(A, -B). We can write the distance between two objects as: Distance(A, B) = min { ||a - b|| : a ∈ A, b ∈ B } = min { ||c|| : c ∈ A - B }. A and B intersect iff A - B contains the origin! The distance between A and B is given by the point of minimum norm in A - B!

Again! A and B intersect iff A - B contains the origin! The distance between A and B is given by the point of minimum norm in A - B: distance(A, B) = min_{a ∈ A, b ∈ B} ||a - b|| = min_{c ∈ Minkowski-Difference(A, B)} ||c||. If A and B are disjoint, the minimizing c is a point on the boundary of the Minkowski difference.
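For finite point sets this identity can be checked by brute force (a toy sketch of mine; note that it computes the distance between the point sets themselves, not between their convex hulls, which is what Gilbert's algorithm below approximates):

    import numpy as np

    def minkowski_difference(A, B):
        # All pairwise differences a - b for point sets A (n x d) and B (m x d).
        return (A[:, None, :] - B[None, :, :]).reshape(-1, A.shape[1])

    def set_distance(A, B):
        # distance(A, B) = minimum norm over the Minkowski difference.
        return np.linalg.norm(minkowski_difference(A, B), axis=1).min()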

Minkowski Difference Examples (picture by Eric Larsen).

Frank-Wolfe Algorithm, a.k.a. Gilbert's Algorithm (a sketch follows below):
1. Start with some p in P = A - B.
2. Find the point q in P that is extreme in the direction -p (i.e. minimizes the inner product with p).
3. Let r be the point on the segment pq closest to the origin.
4. If r is a good enough approximation, stop. Else set p = r and go to step 2.
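A toy sketch of this loop for a finite point set P (my own illustration of the steps as I read them; the extreme point is found by brute force):

    import numpy as np

    def gilbert_min_norm_point(P, tol=1e-6, max_iter=1000):
        # Approximate the minimum-norm point of conv(P); P is an n x d array.
        p = P[0].copy()                       # step 1: start with some point of P
        for _ in range(max_iter):
            q = P[np.argmin(P @ p)]           # step 2: extreme point of P in direction -p
            d = p - q
            if p @ d <= tol:                  # stop when the linearized improvement is tiny
                break
            t = min(1.0, (p @ d) / (d @ d))   # step 3: closest point to origin on segment [p, q]
            p = p - t * d                     # step 4: update and repeat
        return p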

Frank-Wolfe: at each step the objective function is linearized, and then a step is taken in a direction that reduces the objective while maintaining feasibility. The algorithm can be seen as a generalization of the primal simplex algorithm for LP. http://www.math.chalmers.se/Math/Grundutb/CTH/tma946/0203/fw_eng.pdf

Frank Wolfe

Gilbert's Algorithm: on the chalkboard. Gives: running time linear in the input size and in 1/ε, and optimal core-sets for SVMs. With away steps: linear convergence.