Support Vector Machines. Reading: Textbook, Chapter 5; Ben-Hur and Weston, "A User's Guide to Support Vector Machines" (linked from class web page)

Notation. Assume a binary classification problem.
– Instances are represented by vectors x = (x_1, x_2, …, x_n) ∈ ℝ^n.
– Training examples: S = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m) | (x_i, y_i) ∈ ℝ^n × {+1, -1}}.
– Hypothesis: a function h: ℝ^n → {+1, -1}, i.e., h(x) = h(x_1, x_2, …, x_n) ∈ {+1, -1}.

Here, assume positive and negative instances are to be separated by the hyperplane w·x + b = 0 (in two dimensions, the line w_1 x_1 + w_2 x_2 + b = 0). [Figure: positive and negative examples in the (x_1, x_2) plane, separated by this line.]

Intuition: the best hyperplane (for future generalization) will “maximally” separate the examples

Definition of Margin. The margin is the width of the widest band, centered on the separating hyperplane, that contains no training examples (equivalently, twice the distance from the hyperplane to the closest example). [Figure: the margin around the separating line.]

Vapnik (1979) showed that the hyperplane maximizing the margin of S will have minimal VC dimension in the set of all consistent hyperplanes, and will thus be optimal.

Definition of Margin Vapnik (1979) showed that the hyperplane maximizing the margin of S will have minimal VC dimension in the set of all consistent hyperplanes, and will thus be optimal. This is an optimization problem!

Note that the hyperplane is defined as w·x + b = 0. To make the math easier, we will rescale w and b so that the hyperplane is halfway in between the closest positive and negative examples, with w·x + b = +1 at the closest positive example(s) and w·x + b = -1 at the closest negative example(s).

Geometry of the Margin. [Figure: the separating hyperplane w·x + b = 0 with the two margin hyperplanes w·x + b = +1 and w·x + b = -1 on either side.]

In this case, the margin is 2 / ||w||. So to maximize the margin, we need to minimize ||w||.

Minimizing ||w||. Find w and b by doing the following minimization: minimize ||w||^2 / 2 subject to y_i (w·x_i + b) ≥ 1 for all i. This is a quadratic optimization problem. Use "standard optimization tools" to solve it.
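The following sketch is not from the slides; the tiny data set and the use of scipy's general-purpose constrained optimizer are my own assumptions for illustration. It simply hands the primal problem above to a "standard optimization tool":

```python
# A minimal sketch (not the course's code): solving the hard-margin primal problem
#   minimize (1/2)||w||^2   subject to   y_i (w . x_i + b) >= 1 for all i
# with a generic constrained optimizer on a tiny, made-up separable data set.
import numpy as np
from scipy.optimize import minimize

X = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0], [2.0, 2.0]])  # assumed toy data
y = np.array([-1.0, -1.0, 1.0, 1.0])

def objective(p):
    w = p[:-1]                      # p packs (w_1, ..., w_n, b)
    return 0.5 * np.dot(w, w)

constraints = [
    {"type": "ineq",                # scipy convention: fun(p) >= 0
     "fun": lambda p, xi=xi, yi=yi: yi * (np.dot(p[:-1], xi) + p[-1]) - 1.0}
    for xi, yi in zip(X, y)
]

res = minimize(objective, x0=np.zeros(X.shape[1] + 1), constraints=constraints)
w, b = res.x[:-1], res.x[-1]
print("w =", w, " b =", b)          # maximal-margin hyperplane w . x + b = 0
```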

Dual formulation: It turns out that w can be expressed as a linear combination of a small subset of the training examples x_i, namely those that lie exactly on the margin (at minimum distance to the hyperplane): w = Σ_i α_i y_i x_i, where the sum runs over the x_i that lie exactly on the margin. These training examples are called "support vectors". They carry all the relevant information about the classification problem.

The results of the SVM training algorithm (which involves solving a quadratic programming problem) are the coefficients α_i and the bias b. The support vectors are all x_i such that α_i > 0.

For a new example x, we can now classify x using the support vectors: h(x) = sign( Σ_i α_i y_i (x_i·x) + b ), where the sum runs over the support vectors. This is the resulting SVM classifier.

In-class exercise

SVM review. Equation of line: w_1 x_1 + w_2 x_2 + b = 0. Define the margin using the canonical hyperplanes w·x + b = +1 and w·x + b = -1. Margin distance: 2 / ||w||. To maximize the margin, we minimize ||w|| subject to the constraint that positive examples fall on one side of the margin, and negative examples on the other side: y_i (w·x_i + b) ≥ 1 for all i. We can relax this constraint using "slack variables".
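As an illustration of the slack-variable relaxation (this sketch and its made-up data are not from the slides; it assumes scikit-learn's soft-margin SVC, whose C parameter penalizes slack), note how the fitted hyperplane and the number of support vectors change with C:

```python
# Sketch: the effect of the slack penalty C in scikit-learn's soft-margin SVM.
# A small C tolerates margin violations; a very large C approximates hard margin.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + [2, 2],     # made-up data: two noisy blobs
               rng.randn(20, 2) - [2, 2]])
y = np.array([1] * 20 + [-1] * 20)

for C in (0.01, 1.0, 1e6):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print("C =", C,
          "| support vectors:", len(clf.support_vectors_),
          "| w =", np.round(clf.coef_[0], 2),
          "| b =", np.round(clf.intercept_[0], 2))
```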

SVM review. To do the optimization, we use the dual formulation: maximize Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i·x_j) subject to α_i ≥ 0 and Σ_i α_i y_i = 0. The results of the optimization "black box" are the α_i and b. The support vectors are all x_i such that α_i ≠ 0.
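A rough sketch of solving this dual with a generic solver (again my own assumption, not the course's "black box"; the toy data match the earlier primal sketch):

```python
# Rough sketch: solving the hard-margin dual
#   maximize  sum_i a_i - (1/2) sum_ij a_i a_j y_i y_j (x_i . x_j)
#   subject to  a_i >= 0  and  sum_i a_i y_i = 0
# with scipy, on the same made-up data as the primal sketch above.
import numpy as np
from scipy.optimize import minimize

X = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
Q = (y[:, None] * X) @ (y[:, None] * X).T       # Q_ij = y_i y_j (x_i . x_j)

def neg_dual(a):                                # minimize the negated dual objective
    return 0.5 * a @ Q @ a - a.sum()

m = len(y)
res = minimize(neg_dual, x0=np.zeros(m),
               bounds=[(0.0, None)] * m,        # alpha_i >= 0
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
alpha = res.x
w = ((alpha * y)[:, None] * X).sum(axis=0)      # w = sum_i alpha_i y_i x_i
i = int(np.argmax(alpha))                       # an example with alpha_i > 0
b = y[i] - w @ X[i]                             # from y_i (w . x_i + b) = 1 on the margin
print("alpha =", np.round(alpha, 3), " w =", np.round(w, 3), " b =", round(float(b), 3))
```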

SVM review. Once the optimization is done, we can classify a new example x as follows: h(x) = sign( Σ_i α_i y_i (x_i·x) + b ). That is, classification is done entirely through a linear combination of dot products with training examples. This is a "kernel" method.

Example

Example. Input to SVM optimizer: a table of training points with columns x_1, x_2, class.

Example. Input to SVM optimizer: the training points (x_1, x_2, class) above. Output from SVM optimizer:
  Support vector    α
  (-1, 0)          -0.208
  (1, 1)            0.416
  (0, -1)          -0.208
  b = -0.376

Example. With the output above, the weight vector is w = Σ_i α_i x_i (using the signed α values above) = -0.208·(-1, 0) + 0.416·(1, 1) + (-0.208)·(0, -1) = (0.624, 0.624).

Example. Separation line: 0.624 x_1 + 0.624 x_2 - 0.376 = 0, i.e., x_2 ≈ 0.60 - x_1.

Example. Classifying a new point x: compute h(x) = sign( Σ_i α_i (x_i·x) + b ) using the support vectors, signed α values, and b above.
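A quick numeric check of the example's numbers; the test point x_new is my own illustrative choice, not the point used in class:

```python
# Recomputing the example above with the slide's support vectors, signed alphas,
# and bias b. The test point x_new is illustrative only.
import numpy as np

support_vectors = np.array([[-1.0, 0.0], [1.0, 1.0], [0.0, -1.0]])
alphas = np.array([-0.208, 0.416, -0.208])      # signed alphas from the slide
b = -0.376

w = (alphas[:, None] * support_vectors).sum(axis=0)
print("w =", w)                                 # -> [0.624 0.624]

x_new = np.array([2.0, 2.0])                    # assumed test point
score = alphas @ (support_vectors @ x_new) + b  # sum_i alpha_i (x_i . x) + b
print("h(x_new) =", int(np.sign(score)))        # +1: positive side of the line
```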

Non-linearly separable training examples What if the training examples are not linearly separable?

Non-linearly separable training examples. What if the training examples are not linearly separable? Use an old trick: find a function that maps points to a higher-dimensional space ("feature space") in which they are linearly separable, and do the classification in that higher-dimensional space.

Need to find a function Φ that will perform such a mapping: Φ: ℝ^n → F. Then we can find a hyperplane in the higher-dimensional feature space F, and do classification using that hyperplane. [Figure: non-separable points in the (x_1, x_2) plane become separable in an (x_1, x_2, x_3) feature space.]

Example. Find a 3-dimensional feature space in which XOR is linearly separable. [Figure: the XOR points (0, 0), (0, 1), (1, 0), (1, 1) in the (x_1, x_2) plane, and a target (x_1, x_2, x_3) feature space.]
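One candidate mapping (my assumption; the class may have used a different one) is Φ(x_1, x_2) = (x_1, x_2, x_1·x_2): the product coordinate lifts (1, 1) away from the plane of the other three points. A quick check with a linear SVM in the mapped space:

```python
# Sketch: XOR becomes linearly separable under the assumed mapping
# phi(x_1, x_2) = (x_1, x_2, x_1 * x_2); a linear SVM in the mapped space
# then classifies all four points correctly.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])                    # XOR labels

phi = np.column_stack([X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
clf = SVC(kernel="linear", C=1e6).fit(phi, y)   # large C ~ hard margin
print(clf.predict(phi))                         # -> [-1  1  1 -1]
```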

Problem:
– Recall that classification of an instance x is expressed in terms of dot products between x and the support vectors.
– The quadratic programming problem of finding the support vectors and coefficients likewise depends only on dot products between training examples, not on the training examples themselves outside of those dot products.

– So if each x_i is replaced by Φ(x_i) in these procedures, we will have to calculate Φ(x_i) for each i, as well as a lot of dot products Φ(x_i)·Φ(x_j).
– But in general, if the feature space F is high-dimensional, Φ(x_i)·Φ(x_j) will be expensive to compute.

Second trick:
– Suppose that there were some magic function k(x_i, x_j) = Φ(x_i)·Φ(x_j) such that k is cheap to compute even though Φ(x_i)·Φ(x_j) is expensive to compute.
– Then we wouldn't need to compute the dot product directly; we'd just need to compute k during both the training and testing phases.
– The good news is: such k functions exist! They are called "kernel functions", and they come from the theory of integral operators.

Example: polynomial kernel. Suppose x = (x_1, x_2) and z = (z_1, z_2). Then the degree-2 polynomial kernel is k(x, z) = (x·z)^2 = (x_1 z_1 + x_2 z_2)^2 = x_1^2 z_1^2 + 2 x_1 x_2 z_1 z_2 + x_2^2 z_2^2 = Φ(x)·Φ(z), where Φ(x) = (x_1^2, √2·x_1 x_2, x_2^2). So k computes a dot product in a 3-dimensional feature space without ever forming Φ(x).
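A quick numeric check of this identity (the two vectors are arbitrary choices of mine):

```python
# Numeric check: (x . z)^2 equals phi(x) . phi(z) for
# phi(x_1, x_2) = (x_1^2, sqrt(2) x_1 x_2, x_2^2). The vectors are arbitrary.
import numpy as np

def phi(v):
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

kernel_value = np.dot(x, z) ** 2           # cheap: one dot product in input space
explicit_value = np.dot(phi(x), phi(z))    # expensive route: map, then dot product
print(kernel_value, explicit_value)        # both 1.0
```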

Example. Does the degree-2 polynomial kernel make XOR linearly separable? [Figure: the XOR points (0, 0), (0, 1), (1, 0), (1, 1) in the (x_1, x_2) plane.]
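One way to check empirically (a sketch assuming scikit-learn's polynomial kernel (γ x·z + coef0)^degree with coef0 = 0, which matches the homogeneous (x·z)^2 kernel above):

```python
# Sketch: fit an SVM with a degree-2 polynomial kernel directly on the XOR points.
# scikit-learn's poly kernel is (gamma * x.z + coef0)^degree; coef0 = 0 keeps it
# homogeneous, matching the (x.z)^2 kernel above.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])

clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=0.0, C=1e6).fit(X, y)
print(clf.predict(X))                      # all four XOR points classified correctly
```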

Recap of SVM algorithm. Given a training set S = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m) | (x_i, y_i) ∈ ℝ^n × {+1, -1}}:
1. Choose a map Φ: ℝ^n → F, which maps the x_i to a higher-dimensional feature space. (Solves the problem that the data might not be linearly separable in the original space.)
2. Choose a cheap-to-compute kernel function k(x, z) = Φ(x)·Φ(z). (Solves the problem that in high-dimensional spaces, dot products are very expensive to compute.)

3. Apply the quadratic programming procedure (using the kernel function k) to find a hyperplane (w, b) with w = Σ_i α_i y_i Φ(x_i), where the Φ(x_i) are the support vectors, the α_i are coefficients, and b is a bias, such that (w, b) is the hyperplane maximizing the margin of the training set in feature space F.

4. Now, given a new instance x, find the classification of x by computing h(x) = sign( Σ_i α_i y_i k(x_i, x) + b ), where the sum runs over the support vectors.
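Putting the recap together as a sketch; the library, the RBF kernel choice, and the synthetic data are all my own assumptions for illustration, not the course's prescribed setup:

```python
# Sketch of the whole recipe: choose a kernel, let the QP solver inside SVC find
# the alphas, support vectors, and b, then classify a new instance x via
# sign( sum_i alpha_i y_i k(x_i, x) + b ). Data and kernel choice are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(30, 2) + [1.5, 1.5],
               rng.randn(30, 2) - [1.5, 1.5]])
y = np.array([1] * 30 + [-1] * 30)

clf = SVC(kernel="rbf", gamma=0.5, C=1.0).fit(X, y)   # steps 1-3
x_new = np.array([[0.5, 1.0]])                        # step 4: a new instance
print("predicted class:", clf.predict(x_new)[0])
print("decision value :", clf.decision_function(x_new)[0])
print("support vectors:", clf.support_vectors_.shape[0])
```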

HW 2