1
A tutorial about SVM
Omer Boehm (omerb@il.ibm.com)
2
Outline
Introduction
Classification
Perceptron
SVM for linearly separable data
SVM for almost linearly separable data
SVM for non-linearly separable data
3
Introduction
Machine learning is a branch of artificial intelligence: a scientific discipline concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data. An important task of machine learning is classification. Classification is also referred to as pattern recognition.
4
Example
Objects (feature vectors) are fed to a learning machine, which assigns each one to a class (approve / deny):

Name     Income    Debt     Married  Age
Shelley  60,000    1,000    No       30
Elad     200,000            Yes      80
Dan      20,000                      25
Alona    100,000   10,000            40
5
Types of learning problems
Supervised learning (n classes, n > 1): classification, regression
Unsupervised learning (no labeled classes): clustering (building equivalence classes), density estimation
6
Approximation problems
Supervised learning:
Regression – learn a continuous function from input samples. Example: stock prediction. Input – a future date; output – the stock price; training – information on the stock price over the last period.
Classification – learn a separation function from inputs to discrete classes. Example: Optical Character Recognition (OCR). Input – images of digits; output – labels 0-9; training – labeled images of digits.
In fact, these are approximation problems.
7
Regression
8
Classification
9
Density estimation
10
What makes learning difficult
Given the following examples, how should we draw the line?
11
What makes learning difficult
Which one is most appropriate?
12
What makes learning difficult
The hidden test points
13
What is Learning (mathematically)?
We would like to ensure that small changes in an input point relative to a training point will not result in a jump to a different classification. Such an approximation is called a stable approximation. As a rule of thumb, small derivatives ensure a stable approximation.
14
Stable vs. Unstable approximation
Lagrange approximation (unstable): given points (x_0, y_0), …, (x_n, y_n), we find the unique polynomial of degree n that passes through the given points.
Spline approximation (stable): given points (x_0, y_0), …, (x_n, y_n), we find a piecewise approximation by third-degree polynomials such that they pass through the given points, have common tangents at the division points, and in addition have matching second derivatives there.
15
What would be the best choice?
The “simplest” solution: a solution where the distance from each example is as small as possible and where the derivative is as small as possible.
16
Vector Geometry Just in case ….
17
Dot product
The dot product of two vectors a = (a_1, …, a_n) and b = (b_1, …, b_n) is defined as a · b = Σ_i a_i b_i.
An example: (1, 2, 3) · (4, −5, 6) = 1·4 + 2·(−5) + 3·6 = 12.
18
Dot product
a · b = |a| |b| cos θ, where |a| denotes the length (magnitude) of a.
Unit vector: â = a / |a|, so that |â| = 1.
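As a quick illustration (not from the original slides), a minimal NumPy sketch of these formulas, with arbitrarily chosen example vectors:

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([2.0, 1.0])

dot = a @ b                          # sum of element-wise products: 3*2 + 4*1 = 10
length_a = np.linalg.norm(a)         # |a| = 5
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))   # a.b = |a||b|cos(theta)
unit_a = a / length_a                # unit vector in the direction of a, |unit_a| = 1

print(dot, length_a, cos_theta, unit_a)
```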
19
Plane/Hyperplane
A hyperplane can be defined by: three points; two vectors; or a normal vector and a point.
20
Plane/Hyperplane
Let n be a vector perpendicular (normal) to the hyperplane H.
Let r_0 be the position vector of some known point P_0 in the plane. A point P with position vector r is in the plane iff the vector drawn from P_0 to P is perpendicular to n.
Two vectors are perpendicular iff their dot product is zero, so the hyperplane H can be expressed as n · (r − r_0) = 0, i.e. n · r = n · r_0.
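A small sketch of this membership test in NumPy, with an assumed normal n and known point x0 (example values, not from the slides):

```python
import numpy as np

n  = np.array([1.0, 2.0, -1.0])   # normal vector to the hyperplane H (example values)
x0 = np.array([0.0, 1.0, 2.0])    # position vector of a known point in the plane

def in_plane(x, tol=1e-9):
    """A point x lies in H iff the vector from x0 to x is perpendicular to n."""
    return abs(n @ (x - x0)) < tol

print(in_plane(x0))                                 # True: x0 itself is in the plane
print(in_plane(x0 + np.array([2.0, -1.0, 0.0])))    # True: (2,-1,0) is orthogonal to n
print(in_plane(x0 + n))                             # False: moving along the normal leaves H
```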
21
Classification
22
Solving approximation problems
First we define the family of approximating functions F = {f(x; θ)}.
Next we define the cost function J(θ); this function tells how well f(·; θ) performs the required approximation.
Having done this, the approximation/classification consists of solving the minimization problem min_θ J(θ).
A first necessary condition (after Fermat) is that the gradient vanishes: ∇J(θ) = 0.
As we know, it is always possible to apply Newton-Raphson to this condition and get a sequence of approximations θ_{k+1} = θ_k − [∇²J(θ_k)]⁻¹ ∇J(θ_k).
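For concreteness, a one-dimensional sketch of this procedure on an assumed toy cost function (not from the slides): Newton-Raphson is applied to the necessary condition J'(θ) = 0.

```python
# One-dimensional illustration: minimize J(theta) by applying Newton-Raphson
# to the necessary condition J'(theta) = 0.  (Toy cost, chosen for illustration.)
def J(theta):   return theta**4 + theta**2 - 3*theta
def dJ(theta):  return 4*theta**3 + 2*theta - 3
def d2J(theta): return 12*theta**2 + 2

theta = 2.0                        # initial guess
for k in range(20):
    step = dJ(theta) / d2J(theta)  # Newton-Raphson update on the gradient
    theta -= step
    if abs(step) < 1e-12:
        break

print(theta, dJ(theta))            # theta with (numerically) zero derivative
```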
23
Classification
A classifier is a function or an algorithm that maps every possible input (from a legal set of inputs) to a finite set of categories.
X is the input space; x ∈ X is a data point from the input space. A typical input space is high-dimensional, for example X ⊆ R^d with large d; x is also called a feature vector.
Ω is a finite set of categories to which the input data points belong: Ω = {1, 2, …, C}; its elements are called labels.
24
Classification
Y is a finite set of decisions – the output set of the classifier. The classifier is a function f: X → Y.
25
The Perceptron
26
Perceptron - Frank Rosenblatt (1957)
Linear separation of the input space
27
Perceptron algorithm
Start: the weight vector w_0 is generated randomly; set k = 0.
Test: a vector x from the training set is selected randomly;
  if x belongs to the positive class and w_k · x > 0, go to Test;
  if x belongs to the positive class and w_k · x ≤ 0, go to Add;
  if x belongs to the negative class and w_k · x < 0, go to Test;
  if x belongs to the negative class and w_k · x ≥ 0, go to Subtract.
Add: w_{k+1} = w_k + x, k = k + 1, go to Test.
Subtract: w_{k+1} = w_k − x, k = k + 1, go to Test.
28
Perceptron algorithm – shorter version
Update rule for the (k+1)-th iteration (one iteration per misclassified data point), with labels y_i ∈ {−1, +1}: w_{k+1} = w_k + y_i x_i whenever y_i (w_k · x_i) ≤ 0.
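A minimal NumPy sketch of this update rule, assuming labels in {−1, +1}, a bias absorbed by appending a constant feature, and a small synthetic data set:

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Perceptron learning: w <- w + y_i * x_i whenever x_i is misclassified.
    X: (n_samples, n_features) array, y: labels in {-1, +1}.
    Only terminates with zero errors if the data is linearly separable."""
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # absorb the bias term
    w = np.zeros(X.shape[1])                       # (or a random initialization)
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:                 # misclassified (or on the boundary)
                w += yi * xi                       # add/subtract the point
                mistakes += 1
        if mistakes == 0:                          # converged: all points correct
            break
    return w

# Tiny separable example (assumed data, for illustration only)
X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))   # should reproduce y
```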
29
Perceptron – visualization (intuition)
30
Perceptron – visualization (intuition)
31
Perceptron – visualization (intuition)
32
Perceptron – visualization (intuition)
33
Perceptron – visualization (intuition)
34
Perceptron – analysis
The solution is a linear combination of the training points: w = Σ_i α_i y_i x_i with α_i ≥ 0.
It only uses informative points (mistake driven); the coefficient of a point reflects its ‘difficulty’.
The perceptron learning algorithm does not terminate if the learning set is not linearly separable (e.g. XOR).
35
Support Vector Machines
36
Advantages of SVM (Vladimir Vapnik, 1979, 1998)
Exhibits good generalization.
Confidence measures, etc. can be implemented.
The hypothesis has an explicit dependence on the data (via the support vectors).
Learning involves optimization of a convex function (no false minima, unlike neural networks).
Few parameters are required for tuning the learning machine (unlike neural networks, where the architecture and various parameters must be found).
37
Advantages of SVM
From the perspective of statistical learning theory, the motivation for considering binary-classifier SVMs comes from theoretical bounds on the generalization error. These generalization bounds have two important features:
38
Advantages of SVM The upper bound on the generalization error does not depend on the dimensionality of the space. The bound is minimized by maximizing the margin, i.e. the minimal distance between the hyperplane separating the two classes and the closest data-points of each class.
39
Basic scenario - Separable data set
40
Basic scenario – define margin
41
In an arbitrary-dimensional space, a separating hyperplane can be written as w · x + b = 0,
where w is the normal. The decision function would be f(x) = sign(w · x + b).
42
Note that the argument of the sign in f(x) = sign(w · x + b) is invariant under a rescaling of the form w → λw, b → λb.
Implicitly the scale can be fixed by defining the points with |w · x + b| = 1 as the support vectors (canonical hyperplanes).
43
The task is to select w and b so that the training data can be described as:
w · x_i + b ≥ +1 for y_i = +1,
w · x_i + b ≤ −1 for y_i = −1.
These can be combined into: y_i (w · x_i + b) ≥ 1 for all i.
45
The margin is given by the projection of the vector (x_+ − x_−), joining a closest positive and a closest negative point, onto the unit normal vector to the hyperplane, i.e. w/||w||. So the (Euclidean) distance can be formed as (x_+ − x_−) · w/||w||.
46
Note that x_+ lies on w · x + b = +1, i.e. w · x_+ + b = +1. Similarly for x_−: w · x_− + b = −1. Subtracting the two results in w · (x_+ − x_−) = 2.
47
The margin can therefore be put as 2/||w||.
We can convert the problem to: minimize J(w) = (1/2)||w||², subject to the constraints y_i (w · x_i + b) ≥ 1 for all i.
J(w) is a quadratic function, thus there is a single global minimum.
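As an illustration of this quadratic program, here is a sketch that solves the hard-margin primal on an assumed toy separable data set with SciPy's general-purpose SLSQP solver (a dedicated QP solver would normally be used):

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (assumed for illustration)
X = np.array([[2.0, 2.0], [2.0, 3.0], [0.0, 0.0], [-1.0, 0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def objective(v):                 # v = (w1, w2, b); J(w) = 1/2 ||w||^2
    w = v[:2]
    return 0.5 * w @ w

constraints = [                   # y_i (w . x_i + b) - 1 >= 0 for every i
    {"type": "ineq", "fun": (lambda v, xi=xi, yi=yi: yi * (v[:2] @ xi + v[2]) - 1.0)}
    for xi, yi in zip(X, y)
]

res = minimize(objective, x0=np.zeros(3), method="SLSQP", constraints=constraints)
w, b = res.x[:2], res.x[2]
print(w, b)                       # maximum-margin hyperplane
print(y * (X @ w + b))            # all constraints >= 1; support vectors sit at exactly 1
```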
48
Lagrange multipliers
Problem definition: maximize f(x, y) subject to g(x, y) = c.
A new variable λ, called a ‘Lagrange multiplier’, is used to define Λ(x, y, λ) = f(x, y) − λ(g(x, y) − c).
49
Lagrange multipliers
50
Lagrange multipliers
51
Lagrange multipliers - example
Maximize f(x, y) subject to g(x, y) = c. Formally set Λ(x, y, λ) = f(x, y) − λ(g(x, y) − c) and set the derivatives ∂Λ/∂x, ∂Λ/∂y, ∂Λ/∂λ to 0. Combining the first two equations yields a relation between x and y; substituting it into the last equation (the constraint) gives the candidate points; evaluating the objective function f on these yields the constrained maximum.
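The slide's concrete example is not reproduced here, so the following SymPy sketch carries out the same mechanical steps on an assumed textbook example, maximizing f(x, y) = x + y subject to x² + y² = 1:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = x + y                          # objective (assumed example, not from the slide)
g = x**2 + y**2 - 1                # constraint g(x, y) - c = 0

Lagr = f - lam * g                 # Lambda(x, y, lambda) = f - lambda * (g - c)
eqs = [sp.diff(Lagr, v) for v in (x, y, lam)]   # set all partial derivatives to 0

solutions = sp.solve(eqs, [x, y, lam], dict=True)
for s in solutions:
    print(s, 'f =', f.subs(s))     # f = ±sqrt(2); the constrained maximum is sqrt(2)
```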
52
Lagrange multipliers - example
53
Primal problem: minimize (1/2)||w||² s.t. y_i (w · x_i + b) ≥ 1 for all i.
Introduce Lagrange multipliers α_i ≥ 0 associated with the constraints. The solution to the primal problem is equivalent to determining the saddle point of the function:
L(w, b, α) = (1/2)||w||² − Σ_i α_i [y_i (w · x_i + b) − 1].
54
At the saddle point, L has a minimum with respect to w and b, requiring:
∂L/∂w = 0 ⇒ w = Σ_i α_i y_i x_i,
∂L/∂b = 0 ⇒ Σ_i α_i y_i = 0.
55
Primal-Dual
Primal: minimize L(w, b, α) with respect to w and b, subject to α_i ≥ 0.
Substitute w = Σ_i α_i y_i x_i and Σ_i α_i y_i = 0.
Dual: maximize L_D(α) = Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j) with respect to α, subject to α_i ≥ 0 and Σ_i α_i y_i = 0.
56
Solving QP using dual problem
Maximize L_D(α) = Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j), constrained to α_i ≥ 0 and Σ_i α_i y_i = 0.
We have n new variables α_1, …, α_n, one for each data point. This is a convex quadratic optimization problem, and we run a QP solver to get α and then w = Σ_i α_i y_i x_i.
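One possible way to hand this dual to a generic QP solver is sketched below with the cvxopt package and assumed toy data; since cvxopt minimizes (1/2)αᵀPα + qᵀα, the maximization is negated:

```python
import numpy as np
from cvxopt import matrix, solvers

# Toy linearly separable data (assumed for illustration)
X = np.array([[2.0, 2.0], [2.0, 3.0], [0.0, 0.0], [-1.0, 0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n = len(y)

K = X @ X.T                                   # Gram matrix of dot products x_i . x_j
P = matrix(np.outer(y, y) * K)                # P_ij = y_i y_j (x_i . x_j)
q = matrix(-np.ones(n))                       # maximize sum(alpha) -> minimize -sum(alpha)
G = matrix(-np.eye(n))                        # -alpha_i <= 0, i.e. alpha_i >= 0
h = matrix(np.zeros(n))
A = matrix(y.reshape(1, -1))                  # equality constraint sum_i alpha_i y_i = 0
b = matrix(np.zeros(1))

solvers.options['show_progress'] = False
sol = solvers.qp(P, q, G, h, A, b)
alpha = np.ravel(sol['x'])

w = (alpha * y) @ X                           # w = sum_i alpha_i y_i x_i
print(alpha)                                  # non-zero entries mark the support vectors
print(w)
```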
57
b can be determined from the optimal α and the Karush-Kuhn-Tucker (KKT) conditions α_i [y_i (w · x_i + b) − 1] = 0. For any support vector x_s (a data point with α_s > 0, residing on the margin) this implies y_s (w · x_s + b) = 1, i.e. b = y_s − w · x_s; in practice b is taken as the average of this value over all support vectors.
58
For every data point i, one of the following must hold: α_i = 0, or α_i > 0 and y_i (w · x_i + b) = 1.
Many α_i = 0 – a sparse solution. Data points with α_i > 0 are support vectors. The optimal hyperplane is completely defined by the support vectors.
59
SVM - The classification
Given a new data point z, find its label y: y = sign(w · z + b) = sign(Σ_i α_i y_i (x_i · z) + b).
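Continuing the dual sketch above, a small helper (with an assumed tolerance for picking support vectors) recovers b and classifies a new point without ever forming w explicitly:

```python
import numpy as np

def svm_predict(z, X, y, alpha, tol=1e-6):
    """Classify point z with y = sign( sum_i alpha_i y_i (x_i . z) + b )."""
    sv = alpha > tol                                   # indices of the support vectors
    # b = y_s - sum_i alpha_i y_i (x_i . x_s), averaged over all support vectors
    b = np.mean([y[s] - np.sum(alpha[sv] * y[sv] * (X[sv] @ X[s]))
                 for s in np.where(sv)[0]])
    return np.sign(np.sum(alpha[sv] * y[sv] * (X[sv] @ z)) + b)

# e.g. with X, y, alpha from the QP sketch above:
# print(svm_predict(np.array([1.5, 2.0]), X, y, alpha))
```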
60
Extended scenario - Non-Separable data set
61
Data is most likely not separable (inconsistencies, outliers, noise), but a linear classifier may still be appropriate.
SVM can be applied in this non-separable case; the data should be almost linearly separable.
62
SVM with slacks
Use non-negative slack variables ξ_i ≥ 0, one per data point.
Change the constraints from y_i (w · x_i + b) ≥ 1 to y_i (w · x_i + b) ≥ 1 − ξ_i.
ξ_i is a measure of the deviation from the ideal position for sample i.
63
SVM with slacks
We would like to minimize J(w, ξ) = (1/2)||w||² + C Σ_i ξ_i, constrained to y_i (w · x_i + b) ≥ 1 − ξ_i and ξ_i ≥ 0.
The parameter C is a regularization term, which provides a way to control over-fitting:
if C is small, we allow a lot of samples not in the ideal position;
if C is large, we want to have very few samples not in the ideal position.
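A sketch of this trade-off with scikit-learn's linear SVC on assumed synthetic overlapping data: a small C tolerates many margin violations (many support vectors), a large C tolerates few:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two noisy, overlapping classes (assumed synthetic data)
X = np.vstack([rng.randn(50, 2) + [2, 2], rng.randn(50, 2) - [2, 2]])
y = np.hstack([np.ones(50), -np.ones(50)])

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel='linear', C=C).fit(X, y)
    # small C: many slack violations, many support vectors; large C: few
    print(f"C={C:<6} support vectors={clf.n_support_.sum()} "
          f"train accuracy={clf.score(X, y):.2f}")
```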
64
SVM with slacks - Dual formulation
Maximize L_D(α) = Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j), constrained to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0.
65
SVM - non linear mapping
Cover’s theorem: “a pattern-classification problem cast in a high-dimensional space non-linearly is more likely to be linearly separable than in a low-dimensional space.”
Example: a one-dimensional space that is not linearly separable can be lifted to a two-dimensional space, e.g. with φ(x) = (x, x²).
66
SVM - non linear mapping
Solve a non-linear classification problem with a linear classifier:
project the data x to a high dimension using a function φ;
find a linear discriminant function g(y) = w · y + b for the transformed data y = φ(x);
the final non-linear discriminant function is g(x) = w · φ(x) + b.
In the lifted space (2D in the example) the discriminant function is linear; in the original space (1D) the discriminant function is NOT linear.
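A small sketch of this 1D example with assumed data: lifting with φ(x) = (x, x²) and then running a linear SVM in the lifted space:

```python
import numpy as np
from sklearn.svm import SVC

# 1D data: the two classes interleave, so no single threshold separates them
x = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])
y = np.array([-1, -1, 1, 1, -1, -1])

phi = np.column_stack([x, x**2])                # lift: phi(x) = (x, x^2)

clf = SVC(kernel='linear', C=1e6).fit(phi, y)   # linear separation in the lifted space
print(clf.score(phi, y))                        # 1.0: separable after the lift
w, b = clf.coef_[0], clf.intercept_[0]
print(w, b)   # discriminant w1*x + w2*x^2 + b: linear in 2D, quadratic in the original 1D
```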
67
SVM - non linear mapping
Any linear classifier can be used after lifting the data into a higher-dimensional space. However, we will then have to deal with the “curse of dimensionality”: poor generalization to test data and computationally expensive training.
SVM handles the “curse of dimensionality” problem:
enforcing the largest margin permits good generalization – it can be shown that generalization in SVM is a function of the margin, independent of the dimensionality;
computation in the higher-dimensional case is performed only implicitly, through the use of kernel functions.
68
Non linear SVM - kernels
Recall: the data points appear only in dot products x_i · x_j.
If x is mapped to a high-dimensional space F using a map φ, the high-dimensional product φ(x_i) · φ(x_j) is needed.
The dimensionality of the space F is not necessarily important; we may not even know the map φ explicitly.
69
Kernel
A kernel is a function that returns the value of the dot product between the images of its two arguments: K(x_1, x_2) = φ(x_1) · φ(x_2).
Given a function K, it is possible to verify that it is a kernel.
Now we only need to compute K(x_1, x_2) instead of φ(x_1) · φ(x_2). The “kernel trick”: we do not need to perform operations in the high-dimensional space explicitly.
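A quick numeric check of the kernel trick for the degree-2 polynomial kernel in 2D (the explicit 6-dimensional feature map below is a standard construction, shown for illustration): K(x, z) = (x · z + 1)² equals the dot product of the images, without the images ever having to be constructed.

```python
import numpy as np

def phi(v):
    """Explicit feature map for the degree-2 polynomial kernel in 2D."""
    x1, x2 = v
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1**2, x2**2,
                     np.sqrt(2) * x1 * x2])

def K(x, z):
    """Kernel: the same value computed directly in the original 2D space."""
    return (x @ z + 1.0) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(phi(x) @ phi(z), K(x, z))    # identical values: 4.0 and 4.0
```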
70
Kernel Matrix The central structure in kernel machines
The kernel (Gram) matrix K, with K_ij = K(x_i, x_j), contains all the necessary information for the learning algorithm. It fuses information about the data AND the kernel, and has many interesting properties:
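For instance, the kernel matrix of a few assumed random points under an RBF kernel (σ = 1) can be built and inspected directly:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 3)                             # 5 assumed data points in R^3

# RBF kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), with sigma = 1
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / 2.0)

print(np.allclose(K, K.T))                      # symmetric
print(np.linalg.eigvalsh(K).min() >= -1e-10)    # eigenvalues >= 0: positive semi-definite
```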
71
Mercer’s Theorem The kernel matrix is Symmetric Positive Definite
Any symmetric positive definite matrix can be regarded as a kernel matrix, that is, as an inner product matrix in some space.
Every (semi)positive definite, symmetric function is a kernel: i.e. there exists a mapping φ such that it is possible to write K(x_1, x_2) = φ(x_1) · φ(x_2).
72
Examples of kernels
Some common choices (both satisfying Mercer’s condition):
Polynomial kernel: K(x_1, x_2) = (x_1 · x_2 + 1)^p.
Gaussian radial basis function (RBF): K(x_1, x_2) = exp(−||x_1 − x_2||² / (2σ²)).
73
Polynomial Kernel - example
74
Applying - non linear SVM
Start with data x_1, …, x_m, which lives in a feature space of dimension n.
Choose a kernel K corresponding to some function φ, which takes a data point to a higher-dimensional space.
Find the largest-margin linear discriminant function in the higher-dimensional space by using a quadratic programming package to solve:
maximize Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j), subject to 0 ≤ α_i ≤ C and Σ_i α_i y_i = 0.
75
Applying - non linear SVM
The weight vector w in the high-dimensional space: w = Σ_i α_i y_i φ(x_i).
The linear discriminant function of largest margin in the high-dimensional space: g(y) = w · y + b = Σ_i α_i y_i φ(x_i) · y + b.
The non-linear discriminant function in the original space: g(x) = Σ_i α_i y_i φ(x_i) · φ(x) + b = Σ_i α_i y_i K(x_i, x) + b.
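Putting the pieces together with an off-the-shelf solver: a sketch using scikit-learn's SVC with an RBF kernel on assumed synthetic ring-shaped data; decision_function evaluates the non-linear discriminant g(x) = Σ_i α_i y_i K(x_i, x) + b.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Assumed synthetic data: an inner blob surrounded by an outer ring (not linearly separable)
theta = rng.uniform(0, 2 * np.pi, 100)
outer = np.column_stack([3 * np.cos(theta), 3 * np.sin(theta)]) + 0.2 * rng.randn(100, 2)
inner = 0.7 * rng.randn(100, 2)
X = np.vstack([outer, inner])
y = np.hstack([-np.ones(100), np.ones(100)])

clf = SVC(kernel='rbf', C=1.0, gamma=0.5).fit(X, y)
print(clf.score(X, y))                # close to 1.0 despite the non-linear boundary
print(clf.n_support_)                 # number of support vectors per class
print(clf.decision_function(X[:3]))   # g(x) = sum_i alpha_i y_i K(x_i, x) + b
```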
76
Applying - non linear SVM
77
SVM summary
Advantages:
based on nice theory;
excellent generalization properties;
the objective function has no local minima;
can be used to find non-linear discriminant functions;
the complexity of the classifier is characterized by the number of support vectors rather than by the dimensionality of the transformed space.
Disadvantages:
it is not clear how to select a kernel function in a principled manner;
training tends to be slower than other methods (in the non-linear case).