Binary Classification Problem: Learn a Classifier from the Training Set. Given a training dataset S = {(x^i, y_i) | x^i ∈ R^n, y_i ∈ {-1, +1}, i = 1, ..., m}. Main goal: predict the unseen class label for new data. Find a function f: R^n → R by learning from the data. The simplest such function is linear: f(x) = w^T x + b, with the classifier sign(f(x)).
Binary Classification Problem: Linearly Separable Case. (Figure: benign and malignant samples in the plane, separated by a hyperplane with the benign class on one side and the malignant class on the other.)
Support Vector Machines: Maximizing the Margin between Bounding Planes. (Figure: the two bounding planes x^T w = γ + 1 and x^T w = γ - 1; the distance between them, the margin, is 2/||w||_2.)
Why Do We Maximize the Margin? (Based on Statistical Learning Theory.) Structural Risk Minimization (SRM): the expected risk is bounded above by the empirical risk (training error) plus a VC (capacity) bound. Maximizing the margin keeps the capacity term small, which tightens this bound.
Summary of the Notation. Let S = {(x^1, y_1), ..., (x^m, y_m)} be a training dataset, represented by the matrices A ∈ R^{m×n}, whose i-th row is (x^i)^T, and D ∈ R^{m×m}, the diagonal matrix with D_ii = y_i ∈ {-1, +1}. The separability condition y_i(w^T x^i - γ) ≥ 1, i = 1, ..., m, is then equivalent to D(Aw - eγ) ≥ e, where e denotes the vector of ones.
Support Vector Classification (Linearly Separable Case, Primal). The hyperplane (w, γ) is determined by solving the minimization problem: min_{w,γ} (1/2)||w||_2^2 subject to D(Aw - eγ) ≥ e. It realizes the maximal-margin hyperplane with geometric margin 1/||w||_2 (so the distance between the bounding planes is 2/||w||_2).
Support Vector Classification (Linearly Separable Case, Dual Form). The dual problem of the previous MP: max_α e^T α - (1/2) α^T D A A^T D α subject to e^T D α = 0, α ≥ 0. Applying the KKT optimality conditions, we have w = A^T D α. But where is γ? Don't forget the complementarity conditions: for any i with α_i > 0, y_i(A_i w - γ) = 1, so γ can be recovered from any such support vector.
Dual Representation of SVM (Key of Kernel Methods). The hypothesis is determined by (α*, γ*): h(x) = sign(⟨A^T D α*, x⟩ - γ*) = sign(Σ_{i=1}^m y_i α_i* ⟨x^i, x⟩ - γ*). Only the support vectors (α_i* > 0) contribute, and the training data enter only through inner products.
Soft Margin SVM (Nonseparable Case). If the data are not linearly separable, the primal problem is infeasible and the dual problem is unbounded above. Introduce a nonnegative slack variable ξ_i for each training point: y_i(w^T x^i - γ) ≥ 1 - ξ_i, ξ_i ≥ 0. The resulting inequality system is always feasible, e.g. take w = 0, γ = 0, ξ = e.
Robust Linear Programming: Preliminary Approach to SVM. min_{w,γ,ξ} e^T ξ s.t. D(Aw - eγ) + ξ ≥ e, ξ ≥ 0 (LP), where ξ is the nonnegative slack (error) vector. The term e^T ξ, the 1-norm measure of the error vector, is called the training error. For the linearly separable case, ξ = 0 at the solution of (LP).
Support Vector Machine Formulations (Two Different Measures of Training Error). 2-Norm Soft Margin: min_{w,γ,ξ} (1/2)||w||_2^2 + (C/2)||ξ||_2^2 s.t. D(Aw - eγ) + ξ ≥ e. 1-Norm Soft Margin (Conventional SVM): min_{w,γ,ξ} (1/2)||w||_2^2 + C e^T ξ s.t. D(Aw - eγ) + ξ ≥ e, ξ ≥ 0.
Tuning Procedure: How to Determine C? Split off a validation (tuning) set, train the SVM for a range of candidate values of C, and measure correctness on the validation set; an overly large C tends to overfit the training data. The final value of the parameter C is the one with the maximum validation (testing) set correctness!
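As a concrete illustration of this tuning procedure (not part of the original lecture), here is a minimal sketch using scikit-learn's SVC and GridSearchCV on synthetic data; the grid of C values and the toy dataset are assumptions made for the example:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy data: two Gaussian blobs (stand-in for a real training set).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + 2, rng.randn(50, 2) - 2])
y = np.hstack([np.ones(50), -np.ones(50)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Search C over a logarithmic grid; each candidate is scored by
# cross-validated accuracy on the training portion only.
grid = GridSearchCV(SVC(kernel="linear"),
                    param_grid={"C": np.logspace(-3, 3, 7)},
                    cv=5)
grid.fit(X_train, y_train)
print("best C:", grid.best_params_["C"])
print("test set correctness:", grid.best_estimator_.score(X_test, y_test))
```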
Lagrangian Dual Problem. Primal: min_x f(x) subject to g_i(x) ≤ 0, i = 1, ..., k. Dual: max_{α ≥ 0} θ(α), where θ(α) = inf_x L(x, α) and L(x, α) = f(x) + Σ_i α_i g_i(x) is the Lagrangian with multipliers α_i ≥ 0.
1-Norm Soft Margin SVM: Dual Formulation. The Lagrangian for the 1-norm soft margin problem is L(w, γ, ξ, α, r) = (1/2)||w||_2^2 + C e^T ξ + α^T (e - D(Aw - eγ) - ξ) - r^T ξ, where α ≥ 0 and r ≥ 0. Setting the partial derivatives with respect to the primal variables to zero gives: ∂L/∂w = 0 ⇒ w = A^T D α; ∂L/∂γ = 0 ⇒ e^T D α = 0; ∂L/∂ξ = 0 ⇒ Ce - α - r = 0, i.e. 0 ≤ α ≤ Ce.
Substituting w = A^T D α, e^T D α = 0, and r = Ce - α back into L eliminates the primal variables and yields the dual objective e^T α - (1/2) α^T D A A^T D α.
Dual Maximization Problem for the 1-Norm Soft Margin. Dual: max_α e^T α - (1/2) α^T D A A^T D α s.t. e^T D α = 0, 0 ≤ α ≤ Ce. The corresponding KKT complementarity conditions: α_i [y_i(w^T x^i - γ) - 1 + ξ_i] = 0 and (C - α_i) ξ_i = 0 for all i.
Slack Variables for the 1-Norm Soft Margin SVM. A non-zero slack ξ_i can only occur when α_i = C. The contribution of an outlier to the decision rule is therefore at most C. The trade-off between accuracy and regularization is controlled directly by C. The points for which 0 < α_i < C lie exactly on the bounding planes, i.e. y_i(w^T x^i - γ) = 1; this is what lets us recover γ, as illustrated in the sketch below.
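To see these KKT conditions in practice, here is a small sketch (an illustration, not the lecture's code) that fits a linear soft-margin SVM with scikit-learn and checks that support vectors with 0 < α_i < C sit on the bounding planes; the toy data are assumed for the example:

```python
import numpy as np
from sklearn.svm import SVC

# Toy data (stand-in for a real training set).
rng = np.random.RandomState(1)
X = np.vstack([rng.randn(40, 2) + 1.5, rng.randn(40, 2) - 1.5])
y = np.hstack([np.ones(40), -np.ones(40)])

C = 1.0
clf = SVC(kernel="linear", C=C).fit(X, y)

# dual_coef_ holds y_i * alpha_i for the support vectors only.
alpha = np.abs(clf.dual_coef_).ravel()
on_planes = clf.support_[alpha < C - 1e-8]   # 0 < alpha_i < C
at_bound  = clf.support_[alpha >= C - 1e-8]  # alpha_i = C (non-zero slack allowed)
print(len(on_planes), "points on the bounding planes,", len(at_bound), "at the bound C")

# Points with 0 < alpha_i < C satisfy y_i (w . x_i + b) = 1 (up to tolerance),
# which is exactly how the threshold is recovered.
w = clf.coef_.ravel()
print(y[on_planes] * (X[on_planes] @ w + clf.intercept_))
```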
Two-Spiral Dataset. (Figure: 94 white dots and 94 red dots forming two interleaved spirals.)
Learning in Feature Space (Could Simplify the Classification Task). Learning in a high-dimensional space could degrade generalization performance; this phenomenon is called the curse of dimensionality. By using a kernel function that represents the inner product of training examples in feature space, we never need to know the nonlinear map explicitly, nor even the dimensionality of the feature space. There is no free lunch, however: we must deal with a huge and dense kernel matrix. A reduced kernel can avoid this difficulty.
Linear Machine in Feature Space. Let φ: X → F be a nonlinear map from the input space to some feature space. The classifier will be of the form (primal): f(x) = ⟨w, φ(x)⟩ + b. In the dual form: f(x) = Σ_{i=1}^m α_i y_i ⟨φ(x^i), φ(x)⟩ + b.
Kernel: Represent the Inner Product in Feature Space. Definition: a kernel is a function K: X × X → R such that K(x, z) = ⟨φ(x), φ(z)⟩ for all x, z ∈ X, where φ maps X into an (inner product) feature space F. The classifier then becomes: f(x) = Σ_{i=1}^m α_i y_i K(x^i, x) + b.
A Simple Example of a Kernel. Polynomial kernel of degree 2: K(x, z) = ⟨x, z⟩^2 for x, z ∈ R^2, and the nonlinear map φ: R^2 → R^3 defined by φ(x) = (x_1^2, x_2^2, √2 x_1 x_2). Then ⟨φ(x), φ(z)⟩ = (x_1 z_1 + x_2 z_2)^2 = ⟨x, z⟩^2 = K(x, z). There are many other nonlinear maps ψ that satisfy the same relation ⟨ψ(x), ψ(z)⟩ = ⟨x, z⟩^2.
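The identity above is easy to verify numerically; the following sketch (illustrative, with an assumed helper phi) compares the kernel value with the explicit feature-space inner product:

```python
import numpy as np

def phi(x):
    # Explicit degree-2 feature map for x in R^2: (x1^2, x2^2, sqrt(2)*x1*x2).
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

lhs = (x @ z) ** 2          # kernel value K(x, z) = <x, z>^2
rhs = phi(x) @ phi(z)       # inner product in feature space
print(lhs, rhs)             # both equal 1.0 here: (1*3 + 2*(-1))^2 = 1
```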
Power of the Kernel Technique. Consider a nonlinear map φ: R^n → R^p that consists of distinct features of all the monomials of degree d. Then p = C(n + d - 1, d), which grows combinatorially. For example, n = 100 and d = 4 already give C(103, 4) = 4,421,275 features. Is it necessary to compute φ(x) explicitly? No: we only need to know the inner products ⟨φ(x), φ(z)⟩, and this can be achieved by evaluating a kernel such as ⟨x, z⟩^d directly in the input space.
Kernel Technique: Based on Mercer's Condition (1909). The value of the kernel function represents the inner product of two training points in feature space. Kernel functions merge two steps: 1. map the input data from input space to feature space (which might be infinite-dimensional); 2. compute the inner product in the feature space.
More Examples of Kernels. Polynomial kernel (d a positive integer): K(x, z) = (⟨x, z⟩ + b)^d. Linear kernel: K(x, z) = ⟨x, z⟩ (the polynomial kernel with d = 1, b = 0). Gaussian (radial basis) kernel: K(x, z) = exp(-μ ||x - z||_2^2), μ > 0. The (i, j)-entry of the kernel matrix K(A, A^T), namely K(x^i, x^j), represents the "similarity" of the data points x^i and x^j.
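For instance, a Gaussian kernel matrix can be formed directly from pairwise squared distances; the sketch below (with an assumed helper gaussian_kernel and an assumed parameter mu) does exactly that:

```python
import numpy as np

def gaussian_kernel(A, B, mu=0.1):
    # K(A, B)_{ij} = exp(-mu * ||A_i - B_j||^2): the (i, j) entry measures
    # the similarity between row i of A and row j of B.
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-mu * sq)

A = np.random.RandomState(0).randn(5, 3)
K = gaussian_kernel(A, A)
print(K.shape)        # (5, 5); diagonal entries are exactly 1
```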
Nonlinear 1-Norm Soft Margin SVM in Dual Form. Linear SVM: max_α e^T α - (1/2) α^T D A A^T D α s.t. e^T D α = 0, 0 ≤ α ≤ Ce. Nonlinear SVM: replace the inner-product matrix A A^T by the kernel matrix K(A, A^T): max_α e^T α - (1/2) α^T D K(A, A^T) D α s.t. e^T D α = 0, 0 ≤ α ≤ Ce.
1-Norm Support Vector Machines: Good for Feature Selection. Solve the mathematical program, for some C > 0: min_{w,γ,ξ} ||w||_1 + C e^T ξ s.t. D(Aw - eγ) + ξ ≥ e, ξ ≥ 0, where D denotes the diagonal matrix of +1 or -1 class-membership labels. The 1-norm on w drives many of its components exactly to zero, so the surviving nonzero components select the relevant features. By splitting w = p - q with p, q ≥ 0, this is equivalent to solving a linear program (see the sketch after this slide).
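A quick way to see the feature-selection effect is scikit-learn's LinearSVC with an L1 penalty on w; note that this solves a closely related formulation (squared hinge loss on the errors rather than the LP above), so treat it purely as an illustration on assumed synthetic data:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Data with 20 features, only the first 2 of which are informative.
rng = np.random.RandomState(0)
X = rng.randn(200, 20)
y = np.sign(X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(200))

# penalty="l1" puts a 1-norm on w, so many components of w are driven
# exactly to zero; the nonzero components are the selected features.
clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=0.1).fit(X, y)
w = clf.coef_.ravel()
print("selected features:", np.flatnonzero(np.abs(w) > 1e-6))
```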
SVM as an Unconstrained Minimization Problem. Consider the 2-norm-error SVM (QP): min_{w,γ,ξ} (C/2)||ξ||_2^2 + (1/2)(||w||_2^2 + γ^2) s.t. D(Aw - eγ) + ξ ≥ e. At the solution of (QP): ξ = (e - D(Aw - eγ))_+, where (·)_+ replaces negative components by zero. Hence (QP) is equivalent to the nonsmooth SVM: min_{w,γ} (C/2)||(e - D(Aw - eγ))_+||_2^2 + (1/2)(||w||_2^2 + γ^2). This changes (QP) into an unconstrained mathematical program and reduces the number of variables from (n + 1 + m) to (n + 1).
Smoothing the Plus Function: Integrate the Sigmoid. Step function: s(x) = 1 if x > 0, 0 otherwise. Sigmoid function: σ(x, α) = 1/(1 + e^{-αx}), a smooth approximation of the step function. p-function: p(x, α) = x + (1/α) log(1 + e^{-αx}), obtained by integrating the sigmoid. Plus function: (x)_+ = max{x, 0}; the p-function is its smooth approximation.
SSVM: Smooth Support Vector Machine. Replacing the plus function (·)_+ in the nonsmooth SVM by the smooth p(·, α) gives our SSVM: min_{w,γ} (C/2)||p(e - D(Aw - eγ), α)||_2^2 + (1/2)(||w||_2^2 + γ^2). The solution of SSVM converges to the solution of the nonsmooth SVM as α goes to infinity. (Typically, a moderate fixed value of α is used in practice.)
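This convergence can be checked numerically: the gap between p(x, α) and (x)_+ is at most log(2)/α, attained at x = 0, so it vanishes as α grows. A small illustrative sketch:

```python
import numpy as np

def plus(x):
    return np.maximum(x, 0.0)

def p_func(x, alpha):
    # p(x, alpha) = x + (1/alpha) * log(1 + exp(-alpha*x)),
    # written with logaddexp for numerical stability.
    return x + np.logaddexp(0.0, -alpha * x) / alpha

x = np.linspace(-3, 3, 601)
for alpha in (1, 5, 25, 100):
    gap = np.max(np.abs(p_func(x, alpha) - plus(x)))
    print(f"alpha = {alpha:4d}   max |p - plus| = {gap:.6f}")
# The maximum gap is log(2)/alpha, so it shrinks to 0 as alpha grows,
# which is why SSVM converges to the nonsmooth SVM.
```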
Newton-Armijo Method: Quadratic Approximation of SSVM. The sequence {(w^k, γ^k)}, generated by solving a quadratic approximation of SSVM at each step, converges to the unique solution of SSVM at a quadratic rate. In practice it converges in 6 to 8 iterations. At each iteration we solve a linear system of n + 1 equations in n + 1 variables, so the complexity depends on the dimension of the input space. A stepsize may need to be selected (Armijo rule).
Newton-Armijo Algorithm. Start with any (w^0, γ^0) ∈ R^{n+1}. Having (w^k, γ^k), stop if the gradient of the SSVM objective Φ vanishes, ∇Φ(w^k, γ^k) = 0; else: (i) Newton direction: solve ∇²Φ(w^k, γ^k) d^k = -∇Φ(w^k, γ^k) (the iteration converges globally and quadratically to the unique solution in a finite number of steps); (ii) Armijo stepsize: choose λ_k ∈ {1, 1/2, 1/4, ...} such that Armijo's rule (sufficient decrease) is satisfied, and set (w^{k+1}, γ^{k+1}) = (w^k, γ^k) + λ_k d^k.
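The following is a minimal sketch of such a Newton-Armijo iteration for the smooth SSVM objective above, written from the formulas on these slides rather than taken from the authors' code; the function names and the default values of C and α are assumptions for illustration:

```python
import numpy as np
from scipy.special import expit   # numerically stable sigmoid

def p_func(r, alpha):
    # Smooth plus function: p(r, alpha) = r + (1/alpha) * log(1 + exp(-alpha*r)).
    return r + np.logaddexp(0.0, -alpha * r) / alpha

def ssvm_pieces(z, A, d, C, alpha):
    # z = [w; gamma], A: m x n data matrix, d: labels in {-1, +1}.
    m, n = A.shape
    w, gamma = z[:n], z[n]
    r = 1.0 - d * (A @ w - gamma)                   # r = e - D(Aw - e*gamma)
    p = p_func(r, alpha)
    s = expit(alpha * r)                            # p'(r) = sigmoid(alpha*r)
    J = np.hstack([-(d[:, None] * A), d[:, None]])  # Jacobian of r w.r.t. (w, gamma)
    f = 0.5 * C * (p @ p) + 0.5 * (z @ z)           # SSVM objective
    grad = C * (J.T @ (p * s)) + z
    H = C * (J.T @ ((s**2 + p * alpha * s * (1 - s))[:, None] * J)) + np.eye(n + 1)
    return f, grad, H

def ssvm_newton_armijo(A, d, C=1.0, alpha=5.0, tol=1e-8, max_iter=50):
    n = A.shape[1]
    z = np.zeros(n + 1)
    for _ in range(max_iter):
        f, g, H = ssvm_pieces(z, A, d, C, alpha)
        if np.linalg.norm(g) < tol:                 # stop when the gradient vanishes
            break
        step = np.linalg.solve(H, -g)               # (i) Newton direction
        lam = 1.0                                   # (ii) Armijo backtracking stepsize
        while lam > 1e-10:
            f_new, _, _ = ssvm_pieces(z + lam * step, A, d, C, alpha)
            if f_new <= f + 1e-4 * lam * (g @ step):
                break
            lam *= 0.5
        z = z + lam * step
    return z[:n], z[n]                              # classifier: sign(x @ w - gamma)
```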
Nonlinear Smooth SVM. Nonlinear classifier: f(x) = sign(K(x^T, A^T) D u - γ) = sign(Σ_{j=1}^m u_j y_j K(x, x^j) - γ). Replace Aw in the SSVM by the nonlinear kernel term K(A, A^T) D u: min_{u,γ} (C/2)||p(e - D(K(A, A^T) D u - eγ), α)||_2^2 + (1/2)(||u||_2^2 + γ^2). Use the Newton-Armijo algorithm to solve the problem; each iteration now solves m + 1 linear equations in m + 1 variables. The nonlinear classifier depends only on the data points with nonzero coefficients u_j.
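Assuming the sketches above (the hypothetical gaussian_kernel and ssvm_newton_armijo helpers, and a data matrix A with label vector d), the nonlinear case can be obtained by feeding the routine the matrix K(A, A^T)D in place of A, since the returned "w" then plays the role of u; an illustrative fragment:

```python
# Nonlinear SSVM via the linear sketch above (illustrative only):
# the data matrix becomes K(A, A') D and the weight vector becomes u.
K = gaussian_kernel(A, A, mu=0.1)                 # m x m kernel matrix
u, gamma = ssvm_newton_armijo(K * d[None, :], d, C=1.0, alpha=5.0)

def predict(x):
    # Nonlinear classifier: sign( sum_j K(x, x_j) d_j u_j - gamma ).
    k = gaussian_kernel(x[None, :], A, mu=0.1).ravel()
    return np.sign(k @ (d * u) - gamma)
```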
Conclusion. An overview of SVMs for classification. SSVM: a new formulation of the support vector machine as a smooth unconstrained minimization problem, which can be solved by a fast Newton-Armijo algorithm; no optimization (LP, QP) package is needed. There are many important issues not addressed in this lecture, such as: How do we solve the conventional SVM? How do we select the parameters (C, kernel parameters)? How do we deal with massive datasets?
Perceptron: Linear Threshold Unit (LTU). (Figure: inputs x_0 = 1, x_1, ..., x_n with weights w_0, w_1, ..., w_n feed a summation node Σ_{i=0}^n w_i x_i followed by the threshold function g.) Output: o(x) = 1 if Σ_{i=0}^n w_i x_i > 0, and -1 otherwise.
Possibilities for the function g. Sign function: sign(x) = +1 if x > 0, -1 if x ≤ 0. Step function: step(x) = 1 if x > threshold, 0 if x ≤ threshold (in the picture above, threshold = 0). Sigmoid (logistic) function: sigmoid(x) = 1/(1 + e^{-x}). Adding an extra input with activation x_0 = 1 and weight w_0 = -T (called the bias weight) is equivalent to having a threshold at T; this way we can always assume a zero threshold.
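For reference, the three choices of g written out in code (a trivial sketch; the zero threshold follows the convention above):

```python
import numpy as np

def sign_fn(x):
    # +1 if x > 0, -1 otherwise
    return np.where(x > 0, 1.0, -1.0)

def step_fn(x, threshold=0.0):
    # 1 if x > threshold, 0 otherwise
    return np.where(x > threshold, 1.0, 0.0)

def sigmoid_fn(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sign_fn(x), step_fn(x), sigmoid_fn(x).round(3))
```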
Using a Bias Weight to Standardize the Threshold. (Figure: a unit with inputs 1, x_1, x_2 and weights -T, w_1, w_2.) The test w_1 x_1 + w_2 x_2 < T is equivalent to w_1 x_1 + w_2 x_2 - T < 0.
Perceptron Learning Rule (worked example). (Figure: a sequence of panels showing the decision boundary before and after each update. Each panel shows the current weight vector, e.g. w = [0.25, -0.1, 0.5] with boundary x_2 = 0.2 x_1 - 0.5; a labeled example (x, t); the computed output, for instance (x, t) = ([-1, -1], 1) gives o = sgn(0.25 + 0.1 - 0.5) = -1, a mistake; and the updated weight vector after applying the learning rule. Among the weight vectors shown are w = [0.2, -0.2, -0.2], [0.2, 0.2, 0.2], and [-0.2, -0.4, -0.2].)
The Perceptron Algorithm (Rosenblatt, 1956). Given a linearly separable training set S = {(x^1, y_1), ..., (x^m, y_m)}, a learning rate η ∈ R^+, and the initial weight vector and bias w^0 = 0, b_0 = 0; let R = max_{1≤i≤m} ||x^i||.
The Perceptron Algorithm (Primal Form). Repeat: for i = 1 to m: if y_i(⟨w^k, x^i⟩ + b_k) ≤ 0 then w^{k+1} = w^k + η y_i x^i, b_{k+1} = b_k + η y_i R^2, k = k + 1; until no mistakes are made within the for loop. Return (w^k, b_k). What is k? It is the total number of mistakes (updates) made by the algorithm.
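A direct transcription of this primal form into code might look as follows (a sketch following the pseudocode above; the function name and defaults are assumptions):

```python
import numpy as np

def perceptron_primal(X, y, eta=0.1, max_epochs=1000):
    # X: m x n data matrix, y: labels in {-1, +1}.
    m, n = X.shape
    w, b, k = np.zeros(n), 0.0, 0
    R = np.max(np.linalg.norm(X, axis=1))
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(m):
            if y[i] * (X[i] @ w + b) <= 0:       # mistake on (x_i, y_i)
                w = w + eta * y[i] * X[i]
                b = b + eta * y[i] * R**2
                k += 1
                mistakes += 1
        if mistakes == 0:                        # a full pass with no mistakes
            break
    return w, b, k                               # k = total number of updates
```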
The Perceptron Algorithm Stops in Finitely Many Steps. Theorem (Novikoff). Let S be a non-trivial training set, and let R = max_{1≤i≤m} ||x^i||. Suppose there exists a vector w_opt with ||w_opt|| = 1 and a margin γ > 0 such that y_i(⟨w_opt, x^i⟩ + b_opt) ≥ γ for i = 1, ..., m. Then the number of mistakes made by the on-line perceptron algorithm on S is at most (2R/γ)^2.
Proof of Finite Termination. Proof: let ŵ = (w, b) and x̂ = (x, R) be the augmented weight and input vectors. The algorithm starts with the augmented weight vector ŵ^0 = 0 and updates it at each mistake. Let ŵ^{t-1} be the augmented weight vector prior to the t-th mistake. The t-th update is performed when y_i ⟨ŵ^{t-1}, x̂^i⟩ ≤ 0, where (x^i, y_i) ∈ S is the point incorrectly classified by ŵ^{t-1}.
Update Rule of the Perceptron. The t-th update is ŵ^t = ŵ^{t-1} + η y_i x̂^i. Taking the inner product with ŵ_opt = (w_opt, b_opt/R): ⟨ŵ_opt, ŵ^t⟩ = ⟨ŵ_opt, ŵ^{t-1}⟩ + η y_i ⟨ŵ_opt, x̂^i⟩ ≥ ⟨ŵ_opt, ŵ^{t-1}⟩ + ηγ, so by induction ⟨ŵ_opt, ŵ^t⟩ ≥ t ηγ.
Similarly, ||ŵ^t||^2 = ||ŵ^{t-1}||^2 + 2 η y_i ⟨ŵ^{t-1}, x̂^i⟩ + η^2 ||x̂^i||^2 ≤ ||ŵ^{t-1}||^2 + 2 η^2 R^2, since the middle term is nonpositive at a mistake and ||x̂^i||^2 ≤ 2R^2; hence ||ŵ^t||^2 ≤ 2 t η^2 R^2. Combining the two bounds, t ηγ ≤ ⟨ŵ_opt, ŵ^t⟩ ≤ ||ŵ_opt|| ||ŵ^t|| ≤ √2 ||ŵ_opt|| η R √t, and since ||ŵ_opt||^2 = 1 + b_opt^2/R^2 ≤ 2, this gives t ≤ (2R/γ)^2.
The Perceptron Algorithm (Dual Form). Given a linearly separable training set S, set α = 0, b = 0, and R = max_{1≤i≤m} ||x^i||. Repeat: for i = 1 to m: if y_i(Σ_{j=1}^m α_j y_j ⟨x^j, x^i⟩ + b) ≤ 0 then α_i = α_i + 1, b = b + y_i R^2; until no mistakes are made within the for loop. Return (α, b).
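A corresponding sketch of the dual form (again an illustration rather than the lecture's code); note that the training data enter only through the precomputed Gram matrix:

```python
import numpy as np

def perceptron_dual(X, y, max_epochs=1000):
    # X: m x n data matrix, y: labels in {-1, +1}.
    m = X.shape[0]
    alpha, b = np.zeros(m), 0.0
    R = np.max(np.linalg.norm(X, axis=1))
    G = X @ X.T                                  # Gram matrix: G_ij = <x_i, x_j>
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(m):
            if y[i] * (np.sum(alpha * y * G[:, i]) + b) <= 0:
                alpha[i] += 1                    # one more mistake on x_i
                b += y[i] * R**2
                mistakes += 1
        if mistakes == 0:
            break
    # alpha.sum() is the total number of updates; alpha_i = 0 means x_i
    # was never misclassified during training.
    return alpha, b
```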
What Do We Get from the Dual Form of the Perceptron Algorithm? The number of updates equals Σ_{i=1}^m α_i = ||α||_1 ≤ (2R/γ)^2. α_i > 0 implies that the training point (x^i, y_i) has been misclassified at least once during the training process. α_i = 0 implies that removing the training point (x^i, y_i) will not affect the final result. The training data only appear in the algorithm through the entries of the Gram matrix G ∈ R^{m×m}, defined by G_{ij} = ⟨x^i, x^j⟩.