Support Vector Machines
Reading: Ben-Hur and Weston, “A User’s Guide to Support Vector Machines” (linked from class web page)

Notation
Assume a binary classification problem: h(x) ∈ {−1, +1}.
Instances are represented by vectors x ∈ ℝ^n.
Training examples: x_k = (x_1, x_2, …, x_n).
Training set: S = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m) | (x_k, y_k) ∈ ℝ^n × {+1, −1}}.
Questions on notation...

Here, assume positive and negative instances are to be separated by the hyperplane w · x + b = 0, where b is the bias.
Equation of the separating line in two dimensions: w_1 x_1 + w_2 x_2 + b = 0.
[Figure: positive and negative training points separated by a line in the (x_1, x_2) plane]

Intuition: The best hyperplane for future generalization will “maximally” separate the examples (assuming new examples will come from the same distribution)

Definition of Margin
The margin is defined as the distance from the separating hyperplane to the nearest training example.
Note: Sometimes the margin is defined as twice this distance.

Vapnik (1979) showed that the hyperplane maximizing the margin of S will have minimal VC dimension in the set of all consistent hyperplanes, and will thus be optimal.

Finding this maximum-margin hyperplane is an optimization problem!

Support vectors: Training examples that lie on the margin.

Without changing the problem, we can rescale our data to set a = 1 (i.e., rescale w and b so that w · x + b = ±1 for the support vectors).

[Figure: the separating hyperplane w · x + b = 0, the parallel lines w · x + b = ±1 through the support vectors, and the length of the margin]

w is perpendicular to the decision boundary.

The length of the margin is 1 / ||w||. So to maximize the margin, we need to minimize ||w||.

Minimizing ||w||
Find w and b by doing the following minimization:
  minimize (1/2) ||w||²
  subject to y_k (w · x_k + b) ≥ 1 for k = 1, ..., m.
This is a quadratic optimization problem.
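Not from the slides: a minimal sketch of how this quadratic program is solved in practice, using scikit-learn's SVC with a linear kernel and a very large C to approximate the hard-margin problem. The toy data here are illustrative, not the class dataset.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative, linearly separable toy data (an assumption, not from the slides)
X = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0],
              [1.0, 1.0], [2.0, 1.0], [1.0, 2.0]])
y = np.array([-1, -1, -1, +1, +1, +1])

clf = SVC(kernel="linear", C=1e6)   # very large C approximates the hard margin
clf.fit(X, y)

w = clf.coef_[0]                    # weight vector w
b = clf.intercept_[0]               # bias b
print("w =", w, " b =", b)
print("support vectors:", clf.support_vectors_)
print("margin length 1/||w|| =", 1.0 / np.linalg.norm(w))
```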

Dual Representation
It turns out that w can be expressed as a linear combination of the training examples:
  w = Σ_k α_k x_k
where each α_k carries the sign of its example's class label, and α_k = 0 for examples that are not support vectors. The results of the SVM training algorithm (involving solving a quadratic programming problem) are the α_k and the bias b.

SVM Classification
Classify a new example x as follows:
  h(x) = sgn( Σ_{i=1}^{m} α_i (x_i · x) + b )
where |S| = m (S is the training set). Note that the dot product is a kind of “similarity measure”.
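A minimal sketch of this decision rule (the helper name and arguments are mine, not from the slides); it follows the slides' convention that each α_i already carries the sign of its example's class label:

```python
import numpy as np

def svm_classify(x, training_points, alphas, b):
    """h(x) = sgn( sum_i alpha_i * (x_i . x) + b ).

    alphas are signed (their sign encodes the class label) and are zero
    for every training example that is not a support vector.
    """
    s = sum(a * np.dot(xi, x) for xi, a in zip(training_points, alphas)) + b
    return 1 if s >= 0 else -1
```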

Example

Example
Input to SVM optimizer: training points with columns x_1, x_2, class. [Table of points not shown]

Output from SVM optimizer:
  Support vector    α
  (-1, 0)          -0.208
  (1, 1)            0.416
  (0, -1)          -0.208
  b = -0.376

Weight vector: compute this.
Separation line: sketch this.

Example (continued)
Weight vector: w = Σ_k α_k x_k = −0.208·(−1, 0) + 0.416·(1, 1) − 0.208·(0, −1) = (0.624, 0.624)
Separation line: 0.624 x_1 + 0.624 x_2 − 0.376 = 0, i.e., x_2 = −x_1 + 0.603
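A quick numeric check of this computation (pure arithmetic using the numbers from the optimizer output above):

```python
import numpy as np

# Support vectors and (signed) alphas from the SVM optimizer output above
sv = np.array([[-1.0, 0.0], [1.0, 1.0], [0.0, -1.0]])
alpha = np.array([-0.208, 0.416, -0.208])
b = -0.376

w = (alpha[:, None] * sv).sum(axis=0)   # w = sum_k alpha_k * x_k
print("w =", w)                         # -> [0.624 0.624]

# Points on the separation line satisfy w . x + b = 0
x1 = 1.0
x2 = -(w[0] * x1 + b) / w[1]
print("point on the line at x_1 = 1:", (x1, round(x2, 3)))   # -> (1.0, -0.397)
```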

Example
Classifying a new point x: compute h(x) = sgn( Σ_k α_k (x_k · x) + b ), equivalently sgn(w · x + b). [The specific point and worked numbers were shown on the slide]

svm_light Demo

In-Class Exercises

Non-linearly separable training examples
What if the training examples are not linearly separable?

Use an old trick: find a function that maps points to a higher-dimensional space (“feature space”) in which they are linearly separable, and do the classification in that higher-dimensional space.

Need to find a function Φ that will perform such a mapping: Φ : ℝ^n → F.
Then we can find a hyperplane in the higher-dimensional feature space F, and do classification using that hyperplane.
[Figure: points that are not linearly separable in the (x_1, x_2) plane become separable after mapping to (x_1, x_2, x_3) space]
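A minimal sketch of such a mapping (the particular Φ below is my own illustrative choice, not necessarily the one pictured): lift 2-D points into 3-D by adding the squared radius as a third coordinate, so that an inner cluster can be cut off from a surrounding class by a plane.

```python
import numpy as np

def phi(x):
    """Illustrative feature map Phi: R^2 -> R^3 (an assumption, not from the slides)."""
    x1, x2 = x
    return np.array([x1, x2, x1**2 + x2**2])   # third coordinate = squared radius

def classify_in_feature_space(x, w, b):
    """Linear classification in the 3-D feature space: sgn(w . Phi(x) + b)."""
    return 1 if np.dot(w, phi(x)) + b >= 0 else -1

# E.g., the horizontal plane x_3 = 1 (w = (0, 0, 1), b = -1) separates points
# inside the unit circle from points outside it.
print(classify_in_feature_space(np.array([0.2, 0.3]), np.array([0.0, 0.0, 1.0]), -1.0))  # -1 (inside)
print(classify_in_feature_space(np.array([2.0, 0.0]), np.array([0.0, 0.0, 1.0]), -1.0))  # +1 (outside)
```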

Exercise
Find Φ : ℝ^2 → F to create a 3-dimensional feature space in which XOR is linearly separable. I.e., find Φ(x_1, x_2) = (x_1, x_2, x_3) so that the points (0, 0), (0, 1), (1, 0), (1, 1) become separable.
[Figure: the four XOR points in the (x_1, x_2) plane, and empty (x_1, x_2, x_3) axes for the answer]

Problem:
– Recall that classification of instance x is expressed in terms of dot products of x and support vectors.
– The quadratic programming problem of finding the support vectors and coefficients also depends only on dot products between training examples.

– So if each x_i is replaced by Φ(x_i) in these procedures, we will have to calculate Φ(x_i) for each i, as well as calculate a lot of dot products Φ(x_i) · Φ(x_j).
– In general, if the feature space F is high dimensional, Φ(x_i) · Φ(x_j) will be expensive to compute.

Second trick:
– Suppose that there were some magic function k(x_i, x_j) = Φ(x_i) · Φ(x_j) such that k is cheap to compute even though Φ(x_i) · Φ(x_j) is expensive to compute.
– Then we wouldn’t need to compute the dot product directly; we’d just need to compute k during both the training and testing phases.
– The good news is: such k functions exist! They are called kernel functions, and they come from the theory of integral operators.

Example: Polynomial kernel
Suppose x = (x_1, x_2) and z = (z_1, z_2). Then the degree-2 polynomial kernel is
  k(x, z) = (x · z)² = (x_1 z_1 + x_2 z_2)² = x_1² z_1² + 2 x_1 z_1 x_2 z_2 + x_2² z_2²
          = (x_1², √2 x_1 x_2, x_2²) · (z_1², √2 z_1 z_2, z_2²) = Φ(x) · Φ(z),
with Φ(x) = (x_1², √2 x_1 x_2, x_2²). So k computes a dot product in a 3-dimensional feature space without ever forming Φ(x) explicitly.
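A quick numeric check of this identity (a sketch; the explicit map Φ is the standard one shown above):

```python
import numpy as np

def poly2_kernel(x, z):
    """Degree-2 polynomial kernel k(x, z) = (x . z)^2."""
    return np.dot(x, z) ** 2

def phi(x):
    """Explicit feature map with k(x, z) = phi(x) . phi(z)."""
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(poly2_kernel(x, z))        # (1*3 + 2*(-1))^2 = 1.0
print(phi(x) @ phi(z))           # same value via the explicit map: 1.0
```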

Example
Does the degree-2 polynomial kernel make XOR linearly separable?
[Figure: the four XOR points (0, 0), (0, 1), (1, 0), (1, 1) in the (x_1, x_2) plane]

Recap of SVM algorithm
Given training set S = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m) | (x_i, y_i) ∈ ℝ^n × {+1, −1}}:
1. Choose a map Φ : ℝ^n → F, which maps x_i to a higher-dimensional feature space. (Solves the problem that the data might not be linearly separable in the original space.)
2. Choose a cheap-to-compute kernel function k(x, z) = Φ(x) · Φ(z). (Solves the problem that in high-dimensional spaces, dot products are very expensive to compute.)

4. Apply a quadratic programming procedure (using the kernel function k) to find a hyperplane (w, b) with w = Σ_i α_i Φ(x_i), where the Φ(x_i)'s are support vectors, the α_i's are coefficients, and b is a bias, such that (w, b) is the hyperplane maximizing the margin of the training set in feature space F.

5. Now, given a new instance x, find the classification of x by computing
  h(x) = sgn( Σ_i α_i k(x_i, x) + b ).
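Not from the slides: a minimal end-to-end sketch of this recipe using scikit-learn, where choosing kernel='poly' plays the role of steps 1-2, fit runs the quadratic programming of step 4, and predict performs step 5. The XOR-style data are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative XOR-style data (an assumption, not the class dataset)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, +1, +1, -1])

# Steps 1-2: pick a kernel (inhomogeneous degree-2 polynomial); step 4: solve the QP
clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1e6)
clf.fit(X, y)

# Step 5: classify new instances via the kernel expansion sum_i alpha_i k(x_i, x) + b
print(clf.predict(np.array([[0.9, 0.9], [0.1, 0.9]])))
print("support vectors:", clf.support_vectors_)
```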

More on Kernels
So far we’ve seen kernels that map instances in ℝ^n to instances in ℝ^z where z > n.
One way to create a kernel: figure out the appropriate feature space Φ(x), and find a kernel function k which defines the inner product on that space.
More practically, we usually don’t know the appropriate feature space Φ(x). What people do in practice is either:
1. Use one of the “classic” kernels (e.g., polynomial), or
2. Define their own function that is appropriate for their task, and show that it qualifies as a kernel.

How to define your own kernel
Given training data (x_1, x_2, ..., x_n), the algorithm for SVM learning uses the kernel matrix (also called the Gram matrix):
  K_ij = k(x_i, x_j) for each pair of training examples.
We can choose some function k and compute the kernel matrix K using the training data. We just have to guarantee that our kernel defines an inner product on some feature space. Not as hard as it sounds.

What counts as a kernel?
Mercer’s Theorem: If the kernel matrix K is symmetric positive semidefinite, it defines a kernel, that is, it defines an inner product in some feature space.
We don’t even have to know what that feature space is! It can have a huge number of dimensions.
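A minimal sketch of this check in code (helper names are mine; the RBF kernel is just one familiar candidate): build the Gram matrix on the training data and verify it is symmetric with no negative eigenvalues. Note this checks the condition only for the given data, not for every possible dataset.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """A familiar valid kernel, used here as the candidate k."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def gram_matrix(X, k):
    """K[i, j] = k(x_i, x_j) for all pairs of training examples."""
    n = len(X)
    return np.array([[k(X[i], X[j]) for j in range(n)] for i in range(n)])

X = np.random.randn(20, 2)                              # illustrative training data
K = gram_matrix(X, rbf_kernel)

print("symmetric:", np.allclose(K, K.T))
print("min eigenvalue:", np.linalg.eigvalsh(K).min())   # should be >= 0 (up to roundoff)
```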

Structural Kernels In domains such as natural language processing and bioinformatics, the similarities we want to capture are often structural (e.g., parse trees, formal grammars). An important area of kernel methods is defining structural kernels that capture this similarity (e.g., sequence alignment kernels, tree kernels, graph kernels, etc.)
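Not from the slides: one concrete structural kernel is the k-spectrum kernel for sequences (used in bioinformatics), which counts shared substrings of length k; it is a valid kernel because it is the inner product of substring-count vectors.

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    """k-spectrum kernel: inner product of length-k substring count vectors."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[sub] * ct[sub] for sub in cs if sub in ct)

# Shared 3-mers of the two strings: ATT, TTA, TAC -> kernel value 3
print(spectrum_kernel("GATTACA", "ATTACCA", k=3))
```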

Design criteria – we want kernels to be:
– valid – satisfy the Mercer condition of positive semidefiniteness
– good – embody the “true similarity” between objects
– appropriate – generalize well
– efficient – the computation of k(x, x’) is feasible

Example: Watson used tree kernels and SVMs to classify question types for Jeopardy! questions. (From Moschitti et al., 2011.)