Machine Learning – Lecture 7
Statistical Learning Theory & SVMs
13.11.2013
Bastian Leibe, RWTH Aachen
Many slides adapted from B. Schiele

Announcements
- Exam: Please do not forget to register for the exam! Deadline for the registration:
- Note for Bachelor students: It is now also possible for you to register for Master exams! Either personally in ZPA or electronically via vZPA.

Course Outline
- Fundamentals (2 weeks): Bayes Decision Theory, Probability Density Estimation
- Discriminative Approaches (5 weeks): Linear Discriminant Functions, Statistical Learning Theory & SVMs, Ensemble Methods & Boosting, Randomized Trees, Forests & Ferns
- Generative Models (4 weeks): Bayesian Networks, Markov Random Fields

Recap: Classification as Dimensionality Reduction
- Interpret linear classification as a projection onto a lower-dimensional space.
- Learning problem: try to find the projection vector w that maximizes class separation.
(Figure: projections giving bad vs. good class separation. Image source: C.M. Bishop, 2006)

Recap: Fisher's Linear Discriminant Analysis
- Maximize the distance between classes, minimize the distance within a class.
- Criterion: J(w) = (wᵀ S_B w) / (wᵀ S_W w), where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix.
- The optimal solution for w can be obtained as w ∝ S_W⁻¹ (m₂ − m₁).
- Classification function: y(x) = wᵀx + w₀, with the decision based on the sign of y(x).
(Figure: projection direction w for two classes. Slide adapted from Ales Leonardis)
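A minimal NumPy sketch of the two-class Fisher criterion above. The function and variable names (X1, X2, the midpoint threshold w0) are illustrative choices, not part of the lecture; the threshold placement at the projected midpoint of the class means is an assumption for the sketch.

```python
import numpy as np

def fisher_lda(X1, X2):
    """Two-class Fisher LDA: w proportional to S_W^{-1} (m2 - m1)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of the per-class scatter matrices
    S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(S_W, m2 - m1)
    # Threshold placed at the projected midpoint of the class means (an assumption)
    w0 = -0.5 * w @ (m1 + m2)
    return w, w0

# Usage: y(x) = w @ x + w0; assign class 2 if y(x) > 0, else class 1.
```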

Recap: Probabilistic Discriminative Models
- Consider models of the form p(C₁|φ) = y(φ) = σ(wᵀφ) with σ(a) = 1 / (1 + exp(−a)).
- This model is called logistic regression.
- Properties: probabilistic interpretation, but a discriminative method: it only focuses on the decision hyperplane.
- Advantageous for high-dimensional spaces; requires fewer parameters than explicitly modeling p(φ|C_k) and p(C_k).

Recap: Logistic Regression
- Let's consider a data set {φₙ, tₙ} with n = 1,…,N, where φₙ = φ(xₙ) and tₙ ∈ {0, 1}.
- With yₙ = p(C₁|φₙ), we can write the likelihood as p(t|w) = Πₙ yₙ^{tₙ} (1 − yₙ)^{1−tₙ}.
- Define the error function as the negative log-likelihood:
  E(w) = −ln p(t|w) = −Σₙ [ tₙ ln yₙ + (1 − tₙ) ln(1 − yₙ) ]
- This is the so-called cross-entropy error function.
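A small sketch of the cross-entropy error for a given weight vector, assuming a design matrix Phi (one row per φₙ) and targets t in {0, 1}; the clipping constant eps is an implementation detail added here to avoid log(0), not part of the slide.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cross_entropy_error(w, Phi, t, eps=1e-12):
    """E(w) = -sum_n [ t_n ln y_n + (1 - t_n) ln(1 - y_n) ], with y_n = sigmoid(w^T phi_n)."""
    y = sigmoid(Phi @ w)
    y = np.clip(y, eps, 1.0 - eps)   # avoid log(0) for numerically saturated outputs
    return -np.sum(t * np.log(y) + (1.0 - t) * np.log(1.0 - y))
```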

Recap: Iterative Methods for Estimation
- Gradient descent (1st order): simple and general, but relatively slow to converge and has problems with some functions.
- Newton-Raphson (2nd order): w⁽ⁿᵉʷ⁾ = w⁽ᵒˡᵈ⁾ − H⁻¹ ∇E(w), where H = ∇∇E(w) is the Hessian matrix, i.e. the matrix of second derivatives.
- Local quadratic approximation to the target function; faster convergence.

Recap: Iteratively Reweighted Least Squares
- Update equations: w⁽ⁿᵉʷ⁾ = w⁽ᵒˡᵈ⁾ − (Φᵀ R Φ)⁻¹ Φᵀ (y − t), with R = diag(yₙ(1 − yₙ)).
- Very similar form to the pseudo-inverse (normal equations), but now with a non-constant weighting matrix R (which depends on w).
- Need to apply the normal equations iteratively: Iteratively Reweighted Least Squares (IRLS).
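A sketch of the IRLS (Newton-Raphson) update for logistic regression, assuming a design matrix Phi of shape (N, M) and targets t in {0, 1}. The fixed iteration count and the tiny ridge added to the Hessian are assumptions made here for numerical robustness, not part of the slides.

```python
import numpy as np

def irls_logistic(Phi, t, n_iters=20):
    """IRLS for logistic regression:
    w <- w - (Phi^T R Phi)^{-1} Phi^T (y - t), with R = diag(y_n (1 - y_n))."""
    N, M = Phi.shape
    w = np.zeros(M)
    for _ in range(n_iters):
        y = 1.0 / (1.0 + np.exp(-(Phi @ w)))
        R = np.diag(y * (1.0 - y))
        H = Phi.T @ R @ Phi + 1e-8 * np.eye(M)   # Hessian, plus a tiny ridge (assumption)
        w = w - np.linalg.solve(H, Phi.T @ (y - t))
    return w
```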

A Note on Error Functions
- Ideal misclassification error function (black curve): this is what we want to approximate.
- Unfortunately, it is not differentiable: the gradient is zero for misclassified points, so we cannot minimize it by gradient descent.
(Figure: ideal misclassification error. Image source: Bishop, 2006)

A Note on Error Functions
- Squared error, as used in least-squares classification: very popular, leads to closed-form solutions.
- However, it is sensitive to outliers due to the squared penalty, and it penalizes "too correct" data points.
- Generally does not lead to good classifiers.
(Figure: ideal misclassification error vs. squared error. Image source: Bishop, 2006)

A Note on Error Functions
- Cross-entropy error: the minimizer of this error is given by the posterior class probabilities.
- Convex error function, a unique minimum exists.
- Robust to outliers: the error increases only roughly linearly.
- But no closed-form solution, requires iterative estimation.
(Figure: ideal misclassification error vs. cross-entropy and squared error. Image source: Bishop, 2006)

Overview: Error Functions
- Ideal misclassification error: this is what we would like to optimize, but we cannot compute gradients here.
- Quadratic error: easy to optimize, closed-form solutions exist, but not robust to outliers.
- Cross-entropy error: the minimizer is given by the posterior class probabilities; convex error function with a unique minimum; but no closed-form solution, requires iterative estimation.
- Analysis tool to compare classification approaches.

Topics of This Lecture
- Statistical Learning Theory: generalization and overfitting, empirical and actual risk, VC dimension, Empirical Risk Minimization, Structural Risk Minimization
- Linear Support Vector Machines (SVMs): linearly separable case, Lagrange multipliers, Lagrangian (primal) formulation, dual formulation, discussion

Generalization and Overfitting
- Goal: predict class labels of new observations; train the classification model on a limited training set.
- The further we optimize the model parameters, the more the training error will decrease.
- However, at some point the test error will go up again: overfitting to the training set!
(Figure: training error keeps decreasing while test error rises again. Image source: B. Schiele)

Example: Linearly Separable Data
- Overfitting is often a problem with linearly separable data.
- Which of the many possible decision boundaries is correct? All of them have zero error on the training set...
- However, they will most likely result in different predictions on novel test data, i.e. different generalization performance.
- How do we select the classifier with the best generalization performance?

Cross-Validation
- Popular technique: split the available data into training and validation sets.
- Estimate the generalization error based on the error on the validation set.
- Choose the model with minimal validation error.
- E.g. 5-fold cross-validation: split the data into 5 parts, train on 4 parts and test on the held-out part, and repeat for all 5 splits.
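A minimal sketch of k-fold cross-validation, assuming NumPy arrays X and t and a scikit-learn-style model with fit/predict; model_factory is a hypothetical callable that returns a fresh model per fold.

```python
import numpy as np

def k_fold_cv_error(model_factory, X, t, k=5, seed=0):
    """Estimate the generalization error by k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = model_factory()
        model.fit(X[train], t[train])
        errors.append(np.mean(model.predict(X[val]) != t[val]))
    return np.mean(errors)   # average validation error over the k splits
```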

Statistical Learning Theory
- Goal: generalization ability. Choose the model that has the lowest probability of misclassification on all data, not just the training data.
- Statistical learning theory is the formal treatment of the question: "How can we control the generalization capability of a learning machine?"
- Emphasis on theory, in contrast to often-used heuristics.

A Broader View on Statistical Learning
Formal treatment: statistical learning theory for supervised learning.
- Environment: assumed stationary, i.e. the data x have an unknown but fixed probability density.
- Teacher: specifies for each data point x the desired classification y (where y may be subject to noise ν).
- Learning machine: represented by a class of functions f(x; α) with parameters α, which produce for each x an output y = f(x; α).

Statistical Learning Theory
Supervised learning (from the learning machine's view):
- Selection of a specific function f(x; α).
- Given: training examples (x₁, y₁), …, (x_N, y_N).
- Goal: the desired response y shall be approximated optimally.
- Measuring the optimality: loss function L(y, f(x; α)); example: quadratic loss L(y, f(x; α)) = (y − f(x; α))².

Risk
Measuring the "optimality":
- Measure the optimality by the risk (= expected loss). Difficulty: how should the risk be estimated?
- Practical way: the empirical risk, measured on the training/validation set:
  R_emp(α) = (1/N) Σₙ L(yₙ, f(xₙ; α))
- Example: quadratic loss function L(y, f(x; α)) = (y − f(x; α))².
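A tiny sketch of the empirical risk for a generic model f(x; params); the function and parameter names are illustrative, and quadratic loss is used as the default, matching the example above.

```python
import numpy as np

def empirical_risk(f, params, X, y, loss=lambda y, fx: (y - fx) ** 2):
    """R_emp(alpha) = (1/N) * sum_n L(y_n, f(x_n; alpha)); quadratic loss by default."""
    preds = np.array([f(x, params) for x in X])
    return np.mean(loss(y, preds))
```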

Risk
However, what we are really interested in is the actual risk (= expected risk):
  R(α) = ∫ L(y, f(x; α)) dP(x, y)
- P(x, y) is the probability distribution of (x, y); it is fixed, but typically unknown.
- In general, we cannot compute the actual risk directly!
- The expected risk is the expectation of the error over all data, i.e. it is the expected value of the generalization error.

Summary: Risk
- Actual risk: advantage - a measure of the generalization ability; disadvantage - in general, we do not know P(x, y).
- Empirical risk: disadvantage - no direct measure of the generalization ability; advantage - does not depend on P(x, y).
- We typically know learning algorithms which minimize the empirical risk.
- Hence there is strong interest in the connection between both types of risk.

Empirical Risk Minimization Principle
- Enforce conditions on the learning machine; necessary and sufficient condition: uniform convergence.
- Compute the empirical risk based on the training data.
- Minimizing the empirical risk guarantees that we minimize the actual risk in the limit of N → ∞.

Empirical Risk Minimization Principle
- Empirical risk minimization: this is what we have been doing implicitly so far...
- Advantage: we have formally defined when generalization can be expected (i.e. when the actual risk is minimized); the necessary and sufficient condition is uniform convergence.
- Disadvantage: those results only hold in the limit of N → ∞. In reality, the number of training examples N is limited. (Training data is expensive!)

Statistical Learning Theory
Idea: compute an upper bound on the actual risk based on the empirical risk,
  R(α) ≤ R_emp(α) + ε(N, p*, h)
where N is the number of training examples, p* is the probability that the bound is correct, and h is the capacity of the learning machine ("VC dimension").
- Side note: this idea of specifying a bound that only holds with a certain probability is explored in a branch of learning theory called "Probably Approximately Correct" (PAC) learning.

VC Dimension
- The Vapnik-Chervonenkis dimension is a measure of the capacity of a learning machine.
- Formal definition: if a given set of points can be labeled in all possible ways, and for each labeling a member of the set {f(α)} can be found which correctly assigns those labels, we say that the set of points is shattered by the set of functions.
- The VC dimension of the set of functions {f(α)} is defined as the maximum number of training points that can be shattered by {f(α)}.

VC Dimension
Interpretation as a two-player game:
- Opponent's turn: he says a number N.
- Our turn: we specify a set of N points {x₁, …, x_N}.
- Opponent's turn: he gives us a labeling of {x₁, …, x_N}, i.e. an element of {0,1}^N.
- Our turn: we specify a function f(α) which correctly classifies all N points.
- If we can do that for all 2^N possible labelings, then the VC dimension is at least N.

VC Dimension
Example: the VC dimension of all oriented lines in R² is 3.
1. Shattering 3 points with an oriented line is possible.
2. More difficult to show: it is not possible to shatter 4 points (XOR)...
More generally: the VC dimension of all hyperplanes in Rⁿ is n + 1.
(Image source: C. Burges, 1998)
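A brute-force illustration of the shattering argument above, assuming the 3 points are in general position: for every one of the 2^N labelings, check linear separability via a small LP feasibility problem (scipy.optimize.linprog). The particular point sets and the LP formulation are choices made for this sketch, not taken from the lecture.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, labels):
    """Feasibility LP: find (w, b) with labels_i * (w . x_i + b) >= 1 for all i."""
    N, D = X.shape
    # Variables: [w_1..w_D, b]; constraints: -y_i * (w . x_i + b) <= -1
    A_ub = -labels[:, None] * np.hstack([X, np.ones((N, 1))])
    b_ub = -np.ones(N)
    res = linprog(c=np.zeros(D + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (D + 1))
    return res.success

X3 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])       # 3 points: shatterable
X4 = np.array([[0, 0], [1, 1], [1, 0], [0, 1]], float)    # 4 points: XOR labeling fails

for X in (X3, X4):
    ok = all(linearly_separable(X, np.array(lab, float) * 2 - 1)
             for lab in itertools.product([0, 1], repeat=len(X)))
    print(len(X), "points shattered by oriented lines in R^2:", ok)
```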

VC Dimension
- Intuitive feeling (unfortunately wrong): the VC dimension has a direct connection with the number of parameters.
- Counterexample: f(x; α) = sgn(sin(αx)) has just a single parameter α, yet infinite VC dimension.
- Proof sketch: choose the points xᵢ = 10⁻ⁱ; for any labeling of them, a suitable α can be found (see Burges, 1998).

Upper Bound on the Risk
Important result (Vapnik 1979, 1995): with probability (1 − η), the following bound holds:
  R(α) ≤ R_emp(α) + sqrt( (h (ln(2N/h) + 1) − ln(η/4)) / N )
where the second term is the "VC confidence".
- This bound is independent of the underlying distribution P(x, y)!
- Typically, we cannot compute the left-hand side (the actual risk).
- If we know h (the VC dimension), we can however easily compute the risk bound on the right-hand side.
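A small sketch evaluating the VC confidence term and the resulting risk bound from the formula above; the default η = 0.05 and the example values of h and N are arbitrary choices for illustration.

```python
import numpy as np

def vc_confidence(h, N, eta=0.05):
    """sqrt( (h*(ln(2N/h) + 1) - ln(eta/4)) / N ): the VC confidence term."""
    return np.sqrt((h * (np.log(2.0 * N / h) + 1.0) - np.log(eta / 4.0)) / N)

def risk_bound(empirical_risk, h, N, eta=0.05):
    """Upper bound on the actual risk, holding with probability 1 - eta."""
    return empirical_risk + vc_confidence(h, N, eta)

# e.g. zero training error, VC dimension h = 3, N = 1000 training samples:
print(risk_bound(0.0, h=3, N=1000))
```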

Upper Bound on the Risk
(Figure: trade-off between the empirical risk and the VC confidence as a function of the VC dimension h; the bound on the actual risk is minimized at intermediate capacity.)

Structural Risk Minimization
Principle:
- Given a series of n models fᵢ(x; α) with increasing VC dimension.
- For each of the models, minimize the empirical risk, yielding r₁ ≥ r₂ ≥ … ≥ rₙ (richer models achieve lower training error).
- Choose the model that minimizes the upper bound (the right-hand side of the risk bound).
- This is in general not the model that minimizes the empirical risk.
Remarks:
- This algorithm is formally justified.
- The result is useful whenever the bound is indeed "tight".

Structural Risk Minimization
How can we implement this?
- Classic approach: keep the VC confidence constant and minimize the empirical risk. The VC confidence can be kept constant by controlling the model parameters.
- Support Vector Machines (SVMs): keep the empirical risk constant and minimize the VC confidence. In fact, R_emp = 0 for separable data. Control the VC confidence by adapting the VC dimension (controlling the "capacity" of the classifier).

Topics of This Lecture
- Statistical Learning Theory: generalization and overfitting, empirical and actual risk, VC dimension, Empirical Risk Minimization, Structural Risk Minimization
- Linear Support Vector Machines (SVMs): linearly separable case, Lagrange multipliers, Lagrangian (primal) formulation, dual formulation, discussion

Revisiting Our Previous Example...
- How to select the classifier with the best generalization performance?
- Intuitively, we would like to select the classifier which leaves maximal "safety room" for future data points.
- This can be obtained by maximizing the margin between positive and negative data points.
- It can be shown that the larger the margin, the lower the corresponding classifier's VC dimension.
The SVM takes up this idea:
- It searches for the classifier with maximum margin.
- Formulation as a convex optimization problem: possible to find the globally optimal solution!

Support Vector Machine (SVM)
Let's first consider linearly separable data:
- N training data points {(xₙ, tₙ)}, n = 1, …, N
- Target values tₙ ∈ {−1, +1}
- Hyperplane separating the data: wᵀx + b = 0

Support Vector Machine (SVM)
- Margin of the hyperplane: d₊ + d₋, where d₊ is the distance to the nearest positive training example and d₋ the distance to the nearest negative training example.
- We can always choose w, b such that d₊ = d₋ = 1/‖w‖.
(Image source: C. Burges, 1998)

Support Vector Machine (SVM)
- Since the data are linearly separable, there exists a hyperplane with
  wᵀxₙ + b ≥ +1 for tₙ = +1  and  wᵀxₙ + b ≤ −1 for tₙ = −1.
- Combined in one equation, this can be written as tₙ(wᵀxₙ + b) ≥ 1, the canonical representation of the decision hyperplane.
- The equation will hold with equality exactly for the points on the margin; by definition, there will always be at least one such point.

Support Vector Machine (SVM)
- We can choose w such that the points closest to the hyperplane satisfy wᵀxₙ + b = +1 or wᵀxₙ + b = −1.
- The distance between those two hyperplanes is then the margin: 2/‖w‖.
- We can therefore find the hyperplane with maximal margin by minimizing ‖w‖².

Support Vector Machine (SVM)
Optimization problem:
- Find the hyperplane satisfying arg min_{w,b} (1/2)‖w‖² under the constraints tₙ(wᵀxₙ + b) ≥ 1 for all n.
- Quadratic programming problem with linear constraints.
- Can be formulated using Lagrange multipliers.
- Who is already familiar with Lagrange multipliers? Let's look at a real-life example...
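To make the primal problem concrete, here is a toy sketch that solves it directly with scipy.optimize.minimize (SLSQP) on a small separable data set. This is not an efficient or robust SVM solver, just an illustration of the objective and constraints; the data and function names are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def linear_svm_primal(X, t):
    """minimize 0.5*||w||^2  subject to  t_n (w . x_n + b) >= 1 for all n."""
    N, D = X.shape

    def objective(wb):
        w = wb[:D]
        return 0.5 * w @ w

    constraints = [{'type': 'ineq',
                    'fun': lambda wb, n=n: t[n] * (wb[:D] @ X[n] + wb[D]) - 1.0}
                   for n in range(N)]
    res = minimize(objective, x0=np.zeros(D + 1),
                   constraints=constraints, method='SLSQP')
    return res.x[:D], res.x[D]

# Toy separable data (assumed for illustration)
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
t = np.array([1.0, 1.0, -1.0, -1.0])
w, b = linear_svm_primal(X, t)
print("w =", w, " b =", b, " margin =", 2.0 / np.linalg.norm(w))
```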

Recap: Lagrange Multipliers
- Problem: we want to maximize K(x) subject to the constraint f(x) = 0.
- Example: we want to get as close as possible to the action, but there is a fence.
- How should we move? We want to follow the gradient ∇K, but we can only move parallel to the fence, i.e. at the optimum ∇K must be parallel to ∇f.
(Slide adapted from Mario Fritz)

Recap: Lagrange Multipliers
- Problem: we want to maximize K(x) subject to the constraint f(x) = 0.
- Example: we want to get as close as possible, but there is a fence.
- How should we move? Optimize the Lagrangian L(x, λ) = K(x) + λ f(x).

Recap: Lagrange Multipliers
- Now let's look at constraints of the form f(x) ≥ 0.
- Example: there might be a hill from which we can see better...
- Optimize L(x, λ) = K(x) + λ f(x).
- Two cases: either the solution lies on the boundary, so f(x) = 0 for some λ > 0; or the solution lies inside f(x) > 0, so the constraint is inactive and λ = 0.
- In both cases: λ f(x) = 0.

Recap: Lagrange Multipliers
- Same setting, constraints of the form f(x) ≥ 0: either the solution lies on the boundary (f(x) = 0, λ > 0) or inside (f(x) > 0, constraint inactive, λ = 0); in both cases λ f(x) = 0.
- This is summarized by the Karush-Kuhn-Tucker (KKT) conditions:
  λ ≥ 0,  f(x) ≥ 0,  λ f(x) = 0.

SVM – Lagrangian Formulation
- Find the hyperplane minimizing (1/2)‖w‖² under the constraints tₙ(wᵀxₙ + b) − 1 ≥ 0 for all n.
- Lagrangian formulation: introduce positive Lagrange multipliers aₙ ≥ 0, n = 1, …, N.
- Minimize the Lagrangian ("primal form")
  L_p = (1/2)‖w‖² − Σₙ aₙ [ tₙ(wᵀxₙ + b) − 1 ],
  i.e. find w, b, and a = (a₁, …, a_N) such that L_p is minimized with respect to w, b and maximized with respect to the aₙ.

SVM – Lagrangian Formulation
- Lagrangian primal form:
  L_p = (1/2)‖w‖² − Σₙ aₙ [ tₙ(wᵀxₙ + b) − 1 ]
- The solution of L_p needs to fulfill the KKT conditions (necessary and sufficient conditions here):
  aₙ ≥ 0,  tₙ(wᵀxₙ + b) − 1 ≥ 0,  aₙ [ tₙ(wᵀxₙ + b) − 1 ] = 0.

SVM – Solution (Part 1)
- Solution for the hyperplane: w = Σₙ aₙ tₙ xₙ, i.e. w is computed as a linear combination of the training examples.
- Because of the KKT condition aₙ [ tₙ(wᵀxₙ + b) − 1 ] = 0, the following must also hold: aₙ > 0 only for training data points for which tₙ(wᵀxₙ + b) = 1.
- Only some of the data points actually influence the decision boundary!

SVM – Support Vectors
- The training points for which aₙ > 0 are called "support vectors".
- Graphical interpretation: the support vectors are the points on the margin; they define the margin and thus the hyperplane.
- Robustness to "too correct" points!
(Image source: C. Burges, 1998)

SVM – Solution (Part 2)
- To define the decision boundary, we still need to know b.
- Observation: any support vector xₙ satisfies tₙ(wᵀxₙ + b) = 1 (from the KKT conditions).
- Using tₙ² = 1, we can derive b = tₙ − wᵀxₙ.
- In practice, it is more robust to average over all support vectors:
  b = (1/N_S) Σ_{n ∈ S} ( tₙ − wᵀxₙ ).
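A one-line sketch of this averaging step, assuming w, the support-vector matrix X_sv (one row per support vector), and their targets t_sv are already available as NumPy arrays.

```python
import numpy as np

def svm_bias(w, X_sv, t_sv):
    """b = average of (t_n - w . x_n) over all support vectors (more robust than using one)."""
    return np.mean(t_sv - X_sv @ w)
```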

SVM – Discussion (Part 1)
Linear SVM:
- Linear classifier; approximative implementation of the SRM principle.
- In the case of separable data, the SVM produces an empirical risk of zero with minimal value of the VC confidence (i.e. a classifier minimizing the upper bound on the actual risk).
- SVMs thus have a "guaranteed" generalization capability.
- Formulation as a convex optimization problem: globally optimal solution!
Primal form formulation:
- The solution of a quadratic programming problem in M variables is in O(M³).
- Here: D variables, so O(D³).
- Problem: scaling with high-dimensional data ("curse of dimensionality").

SVM – Dual Formulation
- Improving the scaling behavior: rewrite L_p in a dual form.
- Using the constraint ∂L_p/∂b = 0, i.e. Σₙ aₙ tₙ = 0, the term b·Σₙ aₙ tₙ vanishes and we obtain
  L_p = (1/2)‖w‖² − Σₙ aₙ tₙ wᵀxₙ + Σₙ aₙ.

SVM – Dual Formulation
- Using the constraint ∂L_p/∂w = 0, i.e. w = Σₙ aₙ tₙ xₙ, we obtain
  L_p = (1/2)‖w‖² − Σₙ Σₘ aₙ aₘ tₙ tₘ (xₘᵀxₙ) + Σₙ aₙ.

SVM – Dual Formulation
- Applying (1/2)‖w‖² = (1/2) wᵀw and again using w = Σₙ aₙ tₙ xₙ, we get
  (1/2) wᵀw = (1/2) Σₙ Σₘ aₙ aₘ tₙ tₘ (xₘᵀxₙ).
- Inserting this, we get the Wolfe dual
  L_d(a) = Σₙ aₙ − (1/2) Σₙ Σₘ aₙ aₘ tₙ tₘ (xₘᵀxₙ).

SVM – Dual Formulation
- Maximize L_d(a) = Σₙ aₙ − (1/2) Σₙ Σₘ aₙ aₘ tₙ tₘ (xₙᵀxₘ)
  under the conditions aₙ ≥ 0 for all n and Σₙ aₙ tₙ = 0.
- The hyperplane is given by the N_S support vectors:
  w = Σ_{n=1..N_S} aₙ tₙ xₙ.
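A toy sketch solving this Wolfe dual with scipy.optimize.minimize (SLSQP): maximize L_d(a) by minimizing its negative under the bound and equality constraints, then recover w and b as above. The data, tolerances, and solver choice are assumptions for illustration, not a production SVM solver.

```python
import numpy as np
from scipy.optimize import minimize

def linear_svm_dual(X, t):
    """Maximize L_d(a) = sum(a) - 0.5 * sum_nm a_n a_m t_n t_m (x_n . x_m)
    subject to a_n >= 0 and sum_n a_n t_n = 0."""
    N = len(X)
    K = X @ X.T                       # Gram matrix of dot products
    P = np.outer(t, t) * K

    def neg_dual(a):
        return 0.5 * a @ P @ a - a.sum()

    res = minimize(neg_dual, x0=np.zeros(N),
                   bounds=[(0.0, None)] * N,
                   constraints=[{'type': 'eq', 'fun': lambda a: a @ t}],
                   method='SLSQP')
    a = res.x
    sv = a > 1e-6                     # support vectors: a_n > 0 (numerical threshold)
    w = (a[sv] * t[sv]) @ X[sv]       # w = sum_n a_n t_n x_n
    b = np.mean(t[sv] - X[sv] @ w)    # average b over the support vectors
    return w, b, a

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
t = np.array([1.0, 1.0, -1.0, -1.0])
w, b, a = linear_svm_dual(X, t)
print("support vectors:", np.flatnonzero(a > 1e-6), " w =", w, " b =", b)
```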

SVM – Discussion (Part 2)
Dual form formulation:
- In going to the dual, we now have a problem in N variables (aₙ).
- Isn't this worse? We penalize large training sets! However...
1. SVMs have sparse solutions: aₙ ≠ 0 only for the support vectors. This makes it possible to construct efficient algorithms, e.g. Sequential Minimal Optimization (SMO), with runtime between O(N) and O(N²).
2. We have avoided the dependency on the dimensionality. This makes it possible to work with infinite-dimensional feature spaces by using suitable basis functions φ(x). (Next lecture...)
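In practice one would use an off-the-shelf solver; for example, scikit-learn's SVC wraps libsvm, which uses an SMO-type algorithm, and exposes the sparse dual solution. A short sketch, assuming scikit-learn is installed; the very large C is an assumption used here to approximate the hard-margin case, and the toy data is invented.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
t = np.array([1, 1, -1, -1])

clf = SVC(kernel='linear', C=1e6)     # very large C approximates the hard margin
clf.fit(X, t)

print("number of support vectors:", clf.n_support_.sum())
print("support vectors:", clf.support_vectors_)
print("w =", clf.coef_.ravel(), " b =", clf.intercept_[0])
```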

Historical Importance
- Handwritten digit recognition on the US Postal Service database.
- Standard benchmark task for many learning algorithms.

Historical Importance
USPS benchmark:
- 2.5% error: human performance
Different learning algorithms:
- 16.2% error: decision tree (C4.5)
- 5.9% error: (best) 2-layer neural network
- 5.1% error: LeNet 1, a (massively hand-tuned) 5-layer network
Different SVMs (we will see those in the next lecture...):
- 4.0% error: polynomial kernel (p = 3, 274 support vectors)
- 4.1% error: Gaussian kernel (σ = 0.3, 291 support vectors)

So Far...
- We only looked at the linearly separable case: the current problem formulation has no solution if the data are not linearly separable! We need to introduce some tolerance to outlier data points.
- We only looked at linear decision boundaries: this is not sufficient for many applications. We need to generalize the ideas to non-linear boundaries.
- Next lecture...

References and Further Reading
- More information on SVMs can be found in Chapter 7.1 of Bishop's book: Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
- Additional information about statistical learning theory and a more in-depth introduction to SVMs are available in the following tutorial: C. Burges, A Tutorial on Support Vector Machines for Pattern Recognition, Data Mining and Knowledge Discovery, Vol. 2(2), pp. 121-167, 1998.