Support Vector Machines
Summer Course: Data Mining. Support Vector Machines and other penalization classifiers. Presenter: Georgi Nalbantov. August 2009
Contents
Purpose
Linear Support Vector Machines
Nonlinear Support Vector Machines
(Theoretical justifications of SVM)
Marketing Examples
Other penalization classification methods
Conclusion and Q & A (some extensions)
Purpose
Task to be solved (The Classification Task): classify cases (customers) into "type 1" or "type 2" on the basis of some known attributes (characteristics).
Chosen tool to solve this task: Support Vector Machines
The Classification Task
Given data on explanatory and explained variables, where the explained variable can take two values {−1, +1}, find a function that gives the "best" separation between the "−1" cases and the "+1" cases:
Given: (x1, y1), …, (xm, ym) ∈ R^n × {−1, +1}
Find: f : R^n → {−1, +1}
"Best function" = the one whose expected error on unseen data (xm+1, ym+1), …, (xm+k, ym+k) is minimal.
Existing techniques to solve the classification task: Linear and Quadratic Discriminant Analysis; Logit choice models (Logistic Regression); Decision trees, Neural Networks, Least Squares SVM
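As an illustration of this task, here is a minimal sketch that fits a classifier on labelled training data and estimates its error on held-out data. The synthetic data and the use of scikit-learn are assumptions added for this write-up, not part of the original slides.

```python
# Minimal sketch of the classification task: fit f on (x_i, y_i) and
# estimate its error on unseen data. Synthetic data; scikit-learn assumed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # two attributes per case
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)         # labels in {-1, +1}

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

f = SVC(kernel="linear").fit(X_train, y_train)     # candidate "best function"
test_error = np.mean(f.predict(X_test) != y_test)  # estimated error on unseen data
print(f"estimated error on unseen data: {test_error:.3f}")
```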
Support Vector Machines: Definition
Support Vector Machines are a non-parametric tool for classification/regression.
Support Vector Machines are used for prediction rather than description purposes.
Support Vector Machines were developed by Vapnik and co-workers.
Linear Support Vector Machines
A direct marketing company wants to sell a new book: "The Art History of Florence" (Nissan Levin and Jacob Zahavi, in Lattin, Carroll and Green, 2003). Problem: how to identify buyers and non-buyers using two variables: months since last purchase and number of art books purchased.
[Figure: buyers (∆) and non-buyers (●); axes: months since last purchase vs. number of art books purchased.]
Linear SVM: Separable Case
Main idea of SVM: separate the groups by a line. However, there are infinitely many lines that have zero training error… which line shall we choose?
[Figure: buyers (∆) and non-buyers (●); axes: months since last purchase vs. number of art books purchased.]
Linear SVM: Separable Case
SVM use the idea of a margin around the separating line. The thinner the margin, the more complex the model. The best line is the one with the largest margin.
[Figure: the margin around the separating line; buyers (∆) and non-buyers (●); axes: months since last purchase vs. number of art books purchased.]
Linear SVM: Separable Case
The line having the largest margin is w1x1 + w2x2 + b = 0, where
x1 = months since last purchase
x2 = number of art books purchased
Note: w1xi1 + w2xi2 + b ≥ +1 for buyers i (∆)
w1xj1 + w2xj2 + b ≤ –1 for non-buyers j (●)
[Figure: the separating line w1x1 + w2x2 + b = 0 with the margin boundaries w1x1 + w2x2 + b = +1 and w1x1 + w2x2 + b = –1; axes: x1 = months since last purchase, x2 = number of art books purchased.]
Linear SVM: Separable Case
The width of the margin is 2 / ||w||, the distance between the lines w1x1 + w2x2 + b = +1 and w1x1 + w2x2 + b = –1.
Note: maximizing the margin is therefore equivalent to minimizing ||w||.
[Figure: the margin between the lines w1x1 + w2x2 + b = +1 and w1x1 + w2x2 + b = –1 around the separating line; axes: x1 = months since last purchase, x2 = number of art books purchased.]
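The formula on the original slide did not survive the conversion; the short derivation below restates the standard textbook argument for the margin width.

```latex
% Distance between the two margin boundaries.
% Take x_+ on w^T x + b = +1 and x_- on w^T x + b = -1.
% Subtracting the two equations gives w^T (x_+ - x_-) = 2.
% Projecting x_+ - x_- onto the unit normal w / \|w\| yields the margin width:
\[
\text{margin width} \;=\; \frac{w^{\top}(x_{+} - x_{-})}{\lVert w \rVert} \;=\; \frac{2}{\lVert w \rVert},
\]
% so maximizing the margin is equivalent to minimizing \lVert w \rVert
% (or, more conveniently, \tfrac{1}{2}\lVert w \rVert^{2}).
```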
Linear SVM: Separable Case
Maximizing the margin is equivalent to minimizing ||w||. The optimization problem for SVM is therefore:
minimize (1/2) ||w||² = (1/2)(w1² + w2²)
subject to: w1xi1 + w2xi2 + b ≥ +1 for buyers i (∆)
w1xj1 + w2xj2 + b ≤ –1 for non-buyers j (●)
[Figure: the margin obtained by the optimal line; axes: x1, x2.]
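A minimal sketch of this optimization problem, solved directly as a quadratic program. The toy data and the use of the cvxpy library are assumptions added here, not part of the original slides.

```python
# Hard-margin linear SVM as a quadratic program (sketch; cvxpy assumed).
import cvxpy as cp
import numpy as np

# Toy, linearly separable data: columns = (months since last purchase, art books purchased)
X = np.array([[2.0, 5.0], [3.0, 4.0], [1.0, 6.0],     # buyers (y = +1)
              [10.0, 0.0], [12.0, 1.0], [9.0, 0.0]])  # non-buyers (y = -1)
y = np.array([1, 1, 1, -1, -1, -1])

w = cp.Variable(2)
b = cp.Variable()

# minimize (1/2)||w||^2 subject to y_i (w'x_i + b) >= 1 for all i
objective = cp.Minimize(0.5 * cp.sum_squares(w))
constraints = [cp.multiply(y, X @ w + b) >= 1]
cp.Problem(objective, constraints).solve()

print("w =", w.value, " b =", b.value, " margin width =", 2 / np.linalg.norm(w.value))
```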
Linear SVM: Separable Case
"Support vectors" are the points that lie on the boundaries of the margin. The decision surface (line) is determined only by the support vectors; all other points are irrelevant.
[Figure: the support vectors lying on the margin boundaries; axes: x1, x2.]
Linear SVM: Nonseparable Case
Non-separable case: there is no line that separates the two groups without error. Here, SVM minimize
L(w,C) = Complexity + C × Errors = (1/2) ||w||² + C Σ ξ
subject to: w1xi1 + w2xi2 + b ≥ +1 – ξi for buyers i (∆)
w1xj1 + w2xj2 + b ≤ –1 + ξj for non-buyers j (●)
ξi, ξj ≥ 0
That is, SVM simultaneously maximize the margin and minimize the training errors (the slacks ξ).
Training set: 1000 targeted customers.
[Figure: buyers (∆) and non-buyers (●) that cannot be separated without error; the line w1x1 + w2x2 + b = 1 and the margin are shown; axes: x1, x2.]
Linear SVM: The Role of C
Vary both complexity and empirical error via C … by affecting the optimal w and the optimal number of training errors:
Bigger C: increased complexity (thinner margin); smaller number of errors (better fit on the data).
Smaller C: decreased complexity (wider margin); bigger number of errors (worse fit on the data).
[Figure: fitted line and margin for C = 1 and for C = 5 on the same data; axes: x1, x2.]
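A sketch of the role of C on synthetic overlapping data; the data and the use of scikit-learn are assumptions added here, not part of the original slides.

```python
# Role of C (sketch): larger C -> narrower margin, typically fewer training errors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=[0, 0], scale=1.0, size=(50, 2)),
               rng.normal(loc=[2, 2], scale=1.0, size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

for C in (1, 5):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_.ravel()
    margin_width = 2 / np.linalg.norm(w)            # wider margin <=> smaller ||w||
    train_errors = np.sum(clf.predict(X) != y)      # empirical (training) errors
    print(f"C={C}: margin width={margin_width:.2f}, training errors={train_errors}")
```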
Bias – Variance trade-off
From Regression into Classification
We have a linear model, such as y = wx + b. We have to estimate this relation using our training data set, keeping in mind the so-called "accuracy", or "0–1", loss function (our evaluation criterion). The training data set we have consists of many observations, for instance:
Output (y)   Input (x)
-1           0.2
 1           0.5
 1           0.7
 …           …
-1          -0.7
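A small sketch of the 0–1 (accuracy) loss mentioned above, applied to the toy training data on this slide; the thresholding rule sign(wx + b) and the particular w, b values are assumptions used purely for illustration.

```python
# 0-1 loss: fraction of sign disagreements between prediction and label.
import numpy as np

x = np.array([0.2, 0.5, 0.7, -0.7])      # inputs from the slide's toy training data
y = np.array([-1, 1, 1, -1])             # outputs (labels)

def zero_one_loss(y_true, y_pred):
    """Fraction of misclassified observations (the '0-1' loss)."""
    return np.mean(y_true != y_pred)

# Illustrative linear rule: predict the sign of w*x + b (w, b chosen by hand here).
w, b = 3.0, -1.0
y_pred = np.sign(w * x + b)
print("0-1 loss:", zero_one_loss(y, y_pred))
```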
From Regression into Classification
We have a linear model, such as y = wx + b. We have to estimate this relation using our training data set, keeping in mind the so-called "accuracy", or "0–1", loss function (our evaluation criterion). The training data set consists of many observations (see the table on the previous slide).
[Figure: the training data plotted with y on the vertical axis (levels +1 and –1) against x; the fitted line, its "margin", and a support vector are indicated.]
From Regression into Classification: Support Vector Machines
A flatter line corresponds to greater penalization; equivalently, a smaller slope corresponds to a bigger "margin".
[Figure: two fitted lines for the toy data, with the levels y = +1 and y = –1 and the "margin" indicated.]
From Regression into Classification: Support Vector Machines
Flatter line corresponds to greater penalization; equivalently, smaller slope corresponds to bigger margin.
[Figure: the same relation shown for two inputs x1 and x2, with the "margin" indicated.]
Nonlinear SVM: Nonseparable Case
Mapping into a higher-dimensional space. Optimization task: minimize L(w,C) subject to the margin constraints, now stated in the transformed space.
[Figure: the two classes (∆ and ●) in the original (x1, x2) space, where they are not linearly separable.]
Nonlinear SVM: Nonseparable Case
Map the data into a higher-dimensional space: here from R² into R³.
[Figure: the four points (1,1), (–1,1), (–1,–1), (1,–1) in the (x1, x2) plane, belonging to two classes (∆ and ●) that are not linearly separable, and their images in the three-dimensional space.]
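A sketch of one possible explicit mapping into R³. The specific map φ(x1, x2) = (x1, x2, x1·x2) and the XOR-like class assignment are assumptions for illustration; they are not necessarily the mapping used on the original slide.

```python
# Explicit feature map from R^2 to R^3: phi(x1, x2) = (x1, x2, x1*x2).
# With an XOR-like labelling of the four corner points, the classes become
# linearly separable in the transformed space (by the sign of the third coordinate).
import numpy as np

X = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
y = np.array([1, -1, 1, -1])             # assumed XOR-like labels (illustration only)

def phi(X):
    return np.column_stack([X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])

Z = phi(X)
print(Z)
# In R^3 the hyperplane z3 = 0 separates the classes: sign(z3) equals y here.
print(np.sign(Z[:, 2]) == y)             # -> [ True  True  True  True ]
```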
Nonlinear SVM: Nonseparable Case
Find the optimal hyperplane in the transformed space.
[Figure: the mapped points of the two classes (∆ and ●) in the three-dimensional space, separated by a hyperplane.]
Nonlinear SVM: Nonseparable Case
Observe the decision surface in the original space (optional).
[Figure: the nonlinear decision surface induced in the original (x1, x2) space, separating the ∆ and ● points.]
Nonlinear SVM: Nonseparable Case
Dual formulation of the (primal) SVM minimization problem: the primal problem (in w, b and the slacks ξ) and its dual (in the multipliers α) are shown side by side, each with its own constraints; see the formulas below.
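The formulas themselves did not survive the conversion of the slides; the block below restates the standard soft-margin primal and its dual, which is what this slide presents.

```latex
% Primal (soft-margin SVM):
\[
\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{m}\xi_i \\
\text{subject to}\quad & y_i\,(w^{\top}x_i + b) \ge 1 - \xi_i, \qquad \xi_i \ge 0, \qquad i = 1,\dots,m.
\end{aligned}
\]

% Dual:
\[
\begin{aligned}
\max_{\alpha}\quad & \sum_{i=1}^{m}\alpha_i - \tfrac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m} \alpha_i\alpha_j\, y_i y_j\, x_i^{\top}x_j \\
\text{subject to}\quad & 0 \le \alpha_i \le C, \qquad \sum_{i=1}^{m}\alpha_i y_i = 0.
\end{aligned}
\]
```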
Nonlinear SVM: Nonseparable Case
In the dual formulation, the data enter only through the inner products xi'xj, which can be replaced by a kernel function K(xi, xj). The kernel computes the inner product in the transformed space without ever forming the mapping explicitly.
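A brief sketch of the kernel idea. The RBF kernel, the synthetic data, and the use of scikit-learn are assumptions chosen for illustration (the Ketchup example later in the deck also uses an RBF kernel).

```python
# Kernel trick sketch: replace inner products x_i'x_j by K(x_i, x_j),
# here the RBF kernel K(x, z) = exp(-gamma * ||x - z||^2). scikit-learn assumed.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0, 1, -1)   # a nonlinearly separable pattern

gamma = 0.5                                               # gamma = 1 / (2 * sigma^2)
K = rbf_kernel(X, X, gamma=gamma)                         # Gram matrix of kernel values

# Fitting on the precomputed Gram matrix is equivalent to SVC(kernel="rbf", gamma=gamma).
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("training accuracy:", clf.score(K, y))
```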
Strengths and Weaknesses of SVM
Strengths of SVM:
Training is relatively easy
No local minima
It scales relatively well to high-dimensional data
Trade-off between classifier complexity and error can be controlled explicitly via C
Robustness of the results
The "curse of dimensionality" is avoided
Weaknesses of SVM:
What is the best trade-off parameter C?
Need a good transformation of the original space
The Ketchup Marketing Problem
Two types of ketchup: Heinz and Hunts.
Seven attributes: Feature Heinz, Feature Hunts, Display Heinz, Display Hunts, Feature&Display Heinz, Feature&Display Hunts, and the log price difference between Heinz and Hunts.
Training data: 2498 cases (89.11% Heinz is chosen).
Test data: 300 cases (88.33% Heinz is chosen).
The Ketchup Marketing Problem
Choose a kernel mapping: linear kernel, polynomial kernel, or RBF kernel.
Do a (5-fold) cross-validation procedure to find the best combination of the manually adjustable parameters (here: C and σ).
[Figure: cross-validation mean squared errors of the SVM with RBF kernel over a grid of C and σ values, from min to max.]
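A sketch of such a grid search. The synthetic stand-in data, scikit-learn, and the particular parameter grid are assumptions; note that scikit-learn parameterizes the RBF kernel by gamma = 1/(2σ²) rather than by σ directly.

```python
# 5-fold cross-validation over C and the RBF kernel width (sketch; scikit-learn assumed).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 7))                      # stand-in for the 7 ketchup attributes
y = np.where(X[:, 6] + 0.5 * X[:, 0] > 0, 1, -1)   # stand-in for the Heinz/Hunts choice

sigmas = np.array([0.5, 1.0, 2.0, 4.0])
param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": 1.0 / (2.0 * sigmas ** 2),            # gamma = 1 / (2 * sigma^2)
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("best cross-validated accuracy:", search.best_score_)
```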
The Ketchup Marketing Problem – Training Set
Model: Linear Discriminant Analysis (training set). Hit rate: 89.51%.
Actual group      Predicted Hunts    Predicted Heinz    Total
Hunts (count)     68                 204                272
Hunts (%)         25.00%             75.00%             100.00%
Heinz (count)     58                 2168               2226
Heinz (%)         2.61%              97.39%             100.00%
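For clarity, a small sketch showing how the hit rate on this slide follows from the confusion-matrix counts above (the numbers are taken directly from the table).

```python
# Hit rate = correctly classified cases / all cases, using the LDA training-set counts above.
import numpy as np

confusion = np.array([[68, 204],     # actual Hunts: predicted Hunts, predicted Heinz
                      [58, 2168]])   # actual Heinz: predicted Hunts, predicted Heinz

hit_rate = np.trace(confusion) / confusion.sum()
print(f"hit rate: {hit_rate:.2%}")   # -> 89.51%
```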
The Ketchup Marketing Problem – Training Set
Model: Logit Choice Model (training set). Hit rate: 77.79%.
Actual group      Predicted Hunts    Predicted Heinz    Total
Hunts (count)     214                58                 272
Hunts (%)         78.68%             21.32%             100.00%
Heinz (count)     497                1729               2226
Heinz (%)         22.33%             77.67%             100.00%
The Ketchup Marketing Problem – Training Set
Model: Support Vector Machines (training set). Hit rate: 99.08%.
Actual group      Predicted Hunts    Predicted Heinz    Total
Hunts (count)     255                17                 272
Hunts (%)         93.75%             6.25%              100.00%
Heinz (count)     6                  2220               2226
Heinz (%)         0.27%              99.73%             100.00%
The Ketchup Marketing Problem – Training Set
Model: Majority Voting (always predict Heinz, the majority class; training set). Hit rate: 89.11%.
Actual group      Predicted Hunts    Predicted Heinz    Total
Hunts (count)     0                  272                272
Hunts (%)         0%                 100%               100.00%
Heinz (count)     0                  2226               2226
Heinz (%)         0%                 100%               100.00%
The Ketchup Marketing Problem – Test Set
Model: Linear Discriminant Analysis (test set). Hit rate: 88.33%.
Actual group      Predicted Hunts    Predicted Heinz    Total
Hunts (count)     3                  32                 35
Hunts (%)         8.57%              91.43%             100.00%
Heinz (count)     3                  262                265
Heinz (%)         1.13%              98.87%             100.00%
The Ketchup Marketing Problem – Test Set
Model: Logit Choice Model (test set). Hit rate: 77%.
Actual group      Predicted Hunts    Predicted Heinz    Total
Hunts (count)     29                 6                  35
Hunts (%)         82.86%             17.14%             100.00%
Heinz (count)     63                 202                265
Heinz (%)         23.77%             76.23%             100.00%
The Ketchup Marketing Problem – Test Set
Model: Support Vector Machines (test set). Hit rate: 95.67%.
Actual group      Predicted Hunts    Predicted Heinz    Total
Hunts (count)     25                 10                 35
Hunts (%)         71.43%             28.57%             100.00%
Heinz (count)     3                  262                265
Heinz (%)         1.13%              98.87%             100.00%
Part II: Penalized classification and regression methods
Support Hyperplanes
Nearest Convex Hull classifier
Soft Nearest Neighbor
Application: an example Support Vector Regression financial study
Conclusion
Classification: Support Hyperplanes
Consider a (separable) binary classification case: training data (+,-) and a test point x. There are infinitely many hyperplanes that are semi-consistent (= commit no error) with the training data.
Classification: Support Hyperplanes
For the classification of the test point x, use the farthest-away hyperplane that is semi-consistent with the training data: the support hyperplane of x. The SH decision surface consists of points that each have two support hyperplanes.
[Figure: the training points (+ and –), the test point x, and its support hyperplane.]
Classification: Support Hyperplanes
Toy Problem Experiment with Support Hyperplanes and Support Vector Machines
Support Vector Machines and Support Hyperplanes
[Figure: comparison of the Support Vector Machines and Support Hyperplanes decision boundaries on the toy problem.]
Support Vector Machines and Nearest Convex Hull classification
[Figure: comparison of the Support Vector Machines and Nearest Convex Hull classification decision boundaries on the toy problem.]
Support Vector Machines and Soft Nearest Neighbor
[Figure: comparison of the Support Vector Machines and Soft Nearest Neighbor decision boundaries on the toy problem.]
Classification: Support Hyperplanes
[Figure: Support Hyperplanes decision boundary under bigger penalization.]
Classification: Nearest Convex Hull classification
[Figure: Nearest Convex Hull classification decision boundary under bigger penalization.]
Classification: Soft Nearest Neighbor
[Figure: Soft Nearest Neighbor decision boundary under bigger penalization.]
Classification: Support Vector Machines, Nonseparable Case
Classification: Support Hyperplanes, Nonseparable Case
Classification: Nearest Convex Hull classification, Nonseparable Case
Classification: Soft Nearest Neighbor, Nonseparable Case
Summary: Penalization Techniques for Classification
Penalization methods for classification: Support Vector Machines (SVM), Support Hyperplanes (SH), Nearest Convex Hull classification (NCH), and Soft Nearest Neighbour (SNN). In all cases, the classification of a test point x is determined using the hyperplane h. Equivalently, x is labelled +1 (–1) if it is farther away from the set S– (S+).
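As an illustration of the "distance to a set" idea behind Nearest Convex Hull classification, here is a minimal sketch that labels a test point by its distance to the convex hull of each class. The toy data and the use of cvxpy are assumptions added here; the other three methods are not reproduced.

```python
# Nearest Convex Hull classification sketch: label x by the nearer class convex hull.
# Toy data; cvxpy assumed. Distance to a hull is found as a small quadratic program.
import cvxpy as cp
import numpy as np

def dist_to_convex_hull(x, points):
    """Euclidean distance from x to the convex hull of the given points."""
    lam = cp.Variable(points.shape[0], nonneg=True)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(points.T @ lam - x)),
                      [cp.sum(lam) == 1])
    prob.solve()
    return np.sqrt(prob.value)

S_plus = np.array([[2.0, 2.0], [3.0, 1.5], [2.5, 3.0]])       # class +1 training points
S_minus = np.array([[-1.0, -1.0], [-2.0, 0.0], [0.0, -2.0]])  # class -1 training points
x = np.array([1.0, 1.0])                                      # test point

# Label +1 if x is nearer to the hull of S_plus (i.e., farther from S_minus), else -1.
label = +1 if dist_to_convex_hull(x, S_plus) < dist_to_convex_hull(x, S_minus) else -1
print("predicted label:", label)
```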
Conclusion
Support Vector Machines (SVM) can be applied to binary and multi-class classification problems.
SVM behave robustly in multivariate problems.
Further research in various marketing areas is needed to justify or refute the applicability of SVM.
Support Vector Regression (SVR) can also be applied.