
Slide 1: Support Vector Machines. Andrew W. Moore, Professor, School of Computer Science, Carnegie Mellon University. www.cs.cmu.edu/~awm, awm@cs.cmu.edu, 412-268-7599. Nov 23rd, 2001. Copyright © 2001, 2003, Andrew W. Moore. Note to other teachers and users of these slides: Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew's tutorials: http://www.cs.cmu.edu/~awm/tutorials. Comments and corrections gratefully received.

Slides 2-5: Linear Classifiers. A linear classifier f: x → y_est, with f(x, w, b) = sign(w·x − b). [Each slide shows the same two-class dataset (points labeled +1 and −1) split by a different candidate separating line.] How would you classify this data?

Slide 6: Linear Classifiers. f(x, w, b) = sign(w·x − b). Any of these would be fine... but which is best?
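The decision rule on these slides is simple enough to state in a few lines of code. A minimal sketch (the weights, offset, and query points below are made-up illustrative values, not from the lecture):

```python
import numpy as np

def linear_classify(X, w, b):
    """Predict +1 or -1 for each row of X using f(x, w, b) = sign(w.x - b)."""
    return np.sign(X @ w - b)

w = np.array([1.0, 2.0])                     # hypothetical weight vector
b = 0.5                                      # hypothetical offset
X = np.array([[1.0, 1.0], [-1.0, -0.5]])     # two query points
print(linear_classify(X, w, b))              # -> [ 1. -1.]
```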

Slide 7: Classifier Margin. f(x, w, b) = sign(w·x − b). Define the margin of a linear classifier as the width by which the boundary could be increased before hitting a datapoint.

Slide 8: Maximum Margin. f(x, w, b) = sign(w·x − b). The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM, called a linear SVM (LSVM).

Slide 9: Maximum Margin. The maximum margin linear classifier is the linear classifier with the, um, maximum margin. This is the simplest kind of SVM (a linear SVM, or LSVM). The support vectors are the datapoints that the margin pushes up against.

Slide 10: Why Maximum Margin? f(x, w, b) = sign(w·x − b); the support vectors are the datapoints that the margin pushes up against.
1. Intuitively this feels safest.
2. If we've made a small error in the location of the boundary (it's been jolted in its perpendicular direction), this gives us the least chance of causing a misclassification.
3. Leave-one-out cross-validation (LOOCV) is easy, since the model is immune to removal of any non-support-vector datapoint.
4. There's some theory (using VC dimension) that is related to (but not the same as) the proposition that this is a good thing.
5. Empirically it works very, very well.

Slide 11: Estimate the Margin. What is the expression for the distance from a point x to the line w·x + b = 0? (Answer: d(x) = |w·x + b| / ‖w‖.)
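A quick numeric check of that distance formula (a sketch; the hyperplane and point are made-up values):

```python
import numpy as np

def distance_to_hyperplane(x, w, b):
    """Distance from point x to the hyperplane w.x + b = 0."""
    return abs(w @ x + b) / np.linalg.norm(w)

w, b = np.array([3.0, 4.0]), -5.0
x = np.array([2.0, 1.0])
print(distance_to_hyperplane(x, w, b))   # |6 + 4 - 5| / 5 = 1.0
```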

Slide 12: Estimate the Margin. What is the expression for the margin? (It is the distance from the boundary to the closest datapoint: margin = min_i |w·x_i + b| / ‖w‖.)

Slide 13: Maximize Margin. [Equation slide: choose w and b to maximize the margin min_i |w·x_i + b| / ‖w‖.]

Slide 14: Maximize Margin. Min-max problem → game problem.

Slide 15: Maximize Margin. Strategy: remove the min by fixing the scale of w and b so that the closest datapoints satisfy |w·x_i + b| = 1; the margin then equals 1/‖w‖, and maximizing it is equivalent to minimizing w·w.

Slide 16: Maximum Margin Linear Classifier. [Formulation: minimize w·w subject to y_i (w·x_i + b) ≥ 1 for every training point i.] How do we solve it?

Slide 17: Learning via Quadratic Programming. QP is a well-studied class of algorithms for maximizing a quadratic function of some real-valued variables subject to linear constraints.

Slide 18: Quadratic Programming. Find arg max over u of the quadratic criterion c + dᵀu + uᵀRu / 2, subject to n linear inequality constraints (a_i·u ≤ b_i for i = 1..n) and subject to e additional linear equality constraints (a_j·u = b_j for j = n+1..n+e).

Slide 19: Quadratic Programming. Same quadratic criterion and constraints as the previous slide. There exist algorithms for finding such constrained quadratic optima much more efficiently and reliably than gradient ascent. (But they are very fiddly... you probably don't want to write one yourself.)
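As an illustration of handing the maximum-margin problem to an off-the-shelf QP solver, here is a minimal sketch that solves the hard-margin primal (minimize ½‖w‖² subject to y_i (w·x_i + b) ≥ 1) with cvxopt on a tiny made-up separable dataset; the data and the choice of package are assumptions for illustration, not part of the lecture:

```python
import numpy as np
from cvxopt import matrix, solvers

X = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.5]])  # toy points
y = np.array([1.0, 1.0, -1.0, -1.0])                                # labels in {+1, -1}
R, m = X.shape

# Optimization variable u = [w_1, ..., w_m, b]; the quadratic term penalizes only w.
P = matrix(np.diag([1.0] * m + [0.0]))
q = matrix(np.zeros(m + 1))
# y_i (w.x_i + b) >= 1  rewritten in cvxopt's "G u <= h" form:  -y_i [x_i, 1] u <= -1.
G = matrix(-y[:, None] * np.hstack([X, np.ones((R, 1))]))
h = matrix(-np.ones(R))

sol = solvers.qp(P, q, G, h)
u = np.array(sol["x"]).ravel()
w, b = u[:m], u[m]
print("w =", w, "b =", b)
```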

Slide 20: Quadratic Programming. [Equations on this slide were not captured in the transcript.]

Slide 21: Uh-oh! [The data are no longer linearly separable.] This is going to be a problem! What should we do?

Slide 22: Uh-oh! This is going to be a problem! What should we do? Idea 1: find the minimum w·w while also minimizing the number of training-set errors. Problemette: having two things to minimize makes for an ill-defined optimization.

Slide 23: Uh-oh! Idea 1.1: minimize w·w + C (#train errors), where C is a tradeoff parameter. There's a serious practical problem that's about to make us reject this approach. Can you guess what it is?

Slide 24: Uh-oh! Idea 1.1: minimize w·w + C (#train errors). The problem: this can't be expressed as a quadratic programming problem, so solving it may be too slow. (Also, it doesn't distinguish between disastrous errors and near misses.) So... any other ideas?

Slide 25: Uh-oh! Idea 2.0: minimize w·w + C (distance of error points to their correct place).

Slide 26: Support Vector Machine (SVM) for Noisy Data. [Soft-margin formulation with slack variables for the error points.] Any problem with the above formulation?

Slide 27: Support Vector Machine (SVM) for Noisy Data. Balance the trade-off between the margin and the classification errors.

Slide 28: Support Vector Machine for Noisy Data. How do we determine an appropriate value for C?
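The lecture leaves the choice of C open; in practice a common recipe (an assumption here, not from the slides) is to pick C by cross-validation over a grid, for example with scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)  # stand-in data
search = GridSearchCV(SVC(kernel="linear"),
                      param_grid={"C": [0.01, 0.1, 1, 10, 100]},  # arbitrary example grid
                      cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```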

Slide 29: The Dual Form of QP. Maximize Σ_k α_k − ½ Σ_k Σ_l α_k α_l Q_kl, where Q_kl = y_k y_l (x_k·x_l), subject to the constraints 0 ≤ α_k ≤ C for all k and Σ_k α_k y_k = 0. Then define w = Σ_k α_k y_k x_k (and obtain b from a datapoint on the margin). Then classify with f(x, w, b) = sign(w·x − b).

Slide 30: The Dual Form of QP. Maximize the same dual objective, subject to the same constraints; then define w = Σ_k α_k y_k x_k.

Slide 31: An Equivalent QP. Same dual objective and constraints. Datapoints with α_k > 0 will be the support vectors... so the sum w = Σ_k α_k y_k x_k only needs to run over the support vectors.

Slide 32: Support Vectors. α_i = 0 for non-support vectors; α_i > 0 for support vectors. The decision boundary is determined only by the support vectors!
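Most SVM libraries expose exactly these quantities after training. A small sketch with scikit-learn on synthetic data (the dataset and parameters are assumptions for illustration):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=60, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

print(clf.support_)               # indices of the support vectors (alpha_i > 0)
print(clf.support_vectors_)       # the support vectors themselves
print(clf.dual_coef_)             # y_i * alpha_i for each support vector
print(clf.coef_, clf.intercept_)  # w and b recovered from the dual solution
```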

Slide 33: The Dual Form of QP. Same dual objective and constraints as Slide 29; define w = Σ_k α_k y_k x_k, then classify with f(x, w, b) = sign(w·x − b). How do we determine b?

Slide 34: An Equivalent QP: Determining b. With w fixed, finding b is a linear programming problem!

Slide 35: An Equivalent QP. Same dual objective and constraints; datapoints with α_k > 0 will be the support vectors, so the sum defining w only needs to be over the support vectors. Why did I tell you about this equivalent QP?
1. It's a formulation that QP packages can optimize more quickly.
2. Because of further jaw-dropping developments you're about to learn.

Slide 36: Suppose we're in 1 dimension. [Data points along a single axis, with x = 0 marked.] What would SVMs do with this data?

Slide 37: Suppose we're in 1 dimension. Not a big surprise: the boundary sits between a positive 'plane' and a negative 'plane'.

Slide 38: Harder 1-dimensional dataset. That's wiped the smirk off SVM's face. What can be done about this?

Slides 39-40: Harder 1-dimensional dataset. Remember how permitting non-linear basis functions made linear regression so much nicer? Let's permit them here too.

Slide 41: Common SVM basis functions.
z_k = (polynomial terms of x_k of degree 1 to q)
z_k = (radial basis functions of x_k)
z_k = (sigmoid functions of x_k)
This is sensible. Is that the end of the story? No... there's one more trick!

Slide 42: Quadratic Basis Functions. Φ(x) contains a constant term, the linear terms, the pure quadratic terms, and the quadratic cross-terms. Number of terms (assuming m input dimensions) = (m+2)-choose-2 = (m+2)(m+1)/2 ≈ (as near as makes no difference) m²/2. You may be wondering what those √2's are doing. You should be happy that they do no harm; you'll find out why they're there soon.

Slide 43: QP (old). Maximize Σ_k α_k − ½ Σ_k Σ_l α_k α_l Q_kl where Q_kl = y_k y_l (x_k·x_l), subject to 0 ≤ α_k ≤ C and Σ_k α_k y_k = 0. Then define w = Σ_{k: α_k>0} α_k y_k x_k and classify with f(x, w, b) = sign(w·x − b).

Slide 44: QP with basis functions. Maximize the same dual objective with Q_kl = y_k y_l (Φ(x_k)·Φ(x_l)), subject to the same constraints; define w = Σ_{k: α_k>0} α_k y_k Φ(x_k) and classify with f(x, w, b) = sign(w·Φ(x) − b). Most important change: x → Φ(x).

Slide 45: QP with basis functions. We must do R²/2 dot products to get this matrix ready. Each dot product requires m²/2 additions and multiplications, so the whole thing costs R²m²/4. Yeeks! ...or does it?

Slide 46: Quadratic Dot Products. Expanding the quadratic feature map: Φ(a)·Φ(b) = 1 + 2 Σ_i a_i b_i + Σ_i a_i² b_i² + 2 Σ_{i<j} a_i a_j b_i b_j.

Slide 47: Quadratic Dot Products. Just out of casual, innocent interest, let's look at another function of a and b: (a·b + 1)² = (a·b)² + 2(a·b) + 1 = Σ_i a_i² b_i² + 2 Σ_{i<j} a_i a_j b_i b_j + 2 Σ_i a_i b_i + 1.

Slide 48: Quadratic Dot Products. They're the same! And (a·b + 1)² is only O(m) to compute!
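A short numerical spot-check of that identity (a sketch; the explicit feature map below is one standard ordering of the quadratic basis, with the √2 factors the slides hinted at):

```python
import numpy as np

def phi_quadratic(x):
    """Explicit quadratic feature map whose dot product equals (x.z + 1)**2."""
    m = len(x)
    feats = [1.0]                                              # constant term
    feats += [np.sqrt(2.0) * x[i] for i in range(m)]           # linear terms
    feats += [x[i] ** 2 for i in range(m)]                     # pure quadratic terms
    feats += [np.sqrt(2.0) * x[i] * x[j]                       # cross terms
              for i in range(m) for j in range(i + 1, m)]
    return np.array(feats)

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -1.0, 2.0])
print(phi_quadratic(a) @ phi_quadratic(b))   # O(m^2) features: 30.25
print((a @ b + 1.0) ** 2)                    # O(m) kernel evaluation: 30.25
```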

Slide 49: QP with Quadratic basis functions. Same dual, with Q_kl = y_k y_l (Φ(x_k)·Φ(x_l)); classify with f(x, w, b) = sign(w·Φ(x) − b). We must still do R²/2 dot products to get this matrix ready, but each dot product now requires only m additions and multiplications.

Slide 50: Higher-Order Polynomials.
Quadratic: Φ(x) = all m²/2 terms up to degree 2; cost to build the Q_kl matrix traditionally: m²R²/4 (2,500 R² for 100 inputs); Φ(a)·Φ(b) = (a·b + 1)²; cost to build the Q_kl matrix sneakily: mR²/2 (50 R² for 100 inputs).
Cubic: Φ(x) = all m³/6 terms up to degree 3; traditional cost: m³R²/12 (83,000 R² for 100 inputs); Φ(a)·Φ(b) = (a·b + 1)³; sneaky cost: mR²/2 (50 R² for 100 inputs).
Quartic: Φ(x) = all m⁴/24 terms up to degree 4; traditional cost: m⁴R²/48 (1,960,000 R² for 100 inputs); Φ(a)·Φ(b) = (a·b + 1)⁴; sneaky cost: mR²/2 (50 R² for 100 inputs).

Slide 51: QP with Quintic basis functions. Same dual and classifier f(x, w, b) = sign(w·Φ(x) − b). We must do R²/2 dot products to get this matrix ready; in 100-d, each dot product now needs about 103 operations instead of 75 million. But there are still worrying things lurking away. What are they?

Slide 52: QP with Quintic basis functions. The worrying things:
1. The fear of overfitting with this enormous number of terms.
2. The evaluation phase (doing a set of predictions on a test set) will be very expensive. (Why?)

Slide 53: QP with Quintic basis functions. The evaluation phase is expensive because each w·Φ(x) (see below) needs 75 million operations. What can be done? The use of maximum margin magically makes this not a problem. (Not always!)

Slide 54: QP with Quintic basis functions. The use of maximum margin makes evaluation cheap: since w = Σ_{k: α_k>0} α_k y_k Φ(x_k), evaluating w·Φ(x) = Σ_{k: α_k>0} α_k y_k (x_k·x + 1)⁵ needs only about Sm operations (S = #support vectors).

Slide 55: QP with Quintic basis functions. Same dual; define w from the support vectors and classify with f(x, w, b) = sign(w·Φ(x) − b).

Slide 56: QP with Quadratic basis functions, written with a kernel. Same dual with Q_kl = y_k y_l K(x_k, x_l), subject to the same constraints; then classify with f(x, w, b) = sign(K(w, x) − b). Most important change: the dot product is replaced by the kernel function K.

Slide 57: SVM Kernel Functions. K(a, b) = (a·b + 1)^d is an example of an SVM kernel function. Beyond polynomials there are other very high dimensional basis functions that can be made practical by finding the right kernel function. Radial-basis-style kernel function: K(a, b) = exp(−‖a − b‖² / (2σ²)). Neural-net-style kernel function: K(a, b) = tanh(κ a·b − δ). (σ, κ, δ are parameters to be chosen, e.g. by cross-validation.)
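These kernels map directly onto the options most SVM libraries expose. A small sketch with scikit-learn (the dataset and parameter values are arbitrary examples, and note sklearn's polynomial kernel is (γ a·b + coef0)^degree rather than exactly (a·b + 1)^d):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

poly = SVC(kernel="poly", degree=3, coef0=1.0, C=1.0).fit(X, y)  # (gamma a.b + 1)^3
rbf  = SVC(kernel="rbf", gamma=0.5, C=1.0).fit(X, y)             # exp(-gamma ||a - b||^2)
sig  = SVC(kernel="sigmoid", coef0=-1.0, C=1.0).fit(X, y)        # tanh(gamma a.b + coef0)

for name, clf in [("poly", poly), ("rbf", rbf), ("sigmoid", sig)]:
    print(name, clf.score(X, y))                                 # training accuracy
```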

Slide 58: Kernel Tricks. Replacing the dot product with a kernel function. Not all functions are kernel functions: they need to be decomposable as K(a, b) = Φ(a)·Φ(b). Could K(a, b) = (a − b)³ be a kernel function? Could K(a, b) = (a − b)⁴ − (a + b)² be a kernel function?

Slide 59: Kernel Tricks. Mercer's condition: to expand a kernel function K(x, y) into a dot product, i.e. K(x, y) = Φ(x)·Φ(y), K(x, y) has to be a positive semi-definite function, i.e., for any function f(x) whose ∫ f(x)² dx is finite, the inequality ∫∫ f(x) K(x, y) f(y) dx dy ≥ 0 holds. Could the example function on the slide be a kernel function?
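In the finite-sample view, Mercer's condition amounts to requiring every Gram matrix K_ij = K(x_i, x_j) to be symmetric positive semi-definite. A numeric spot-check (a sketch, not a proof; the candidate functions and data are made up):

```python
import numpy as np

def gram_matrix(kernel, X):
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

def looks_psd(kernel, X, tol=1e-8):
    """True if the Gram matrix on X has no eigenvalue below -tol."""
    K = gram_matrix(kernel, X)
    return bool(np.all(np.linalg.eigvalsh((K + K.T) / 2.0) >= -tol))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
print(looks_psd(lambda a, b: (a @ b + 1.0) ** 2, X))            # polynomial kernel: True
print(looks_psd(lambda a, b: float(np.linalg.norm(a - b)), X))  # plain distance: False
```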

Slide 60: Kernel Tricks. Pro: introduces nonlinearity into the model; computationally cheap. Con: still has potential overfitting problems.

Slides 61-62: Nonlinear Kernel (I) and (II). [Figures showing decision boundaries produced by nonlinear kernels; not captured in the transcript.]

Slide 63: Overfitting in SVM. [Plot of training error versus testing error.]

Slide 64: SVM Performance. Anecdotally they work very, very well indeed. Example: they are currently the best-known classifier on a well-studied hand-written-character recognition benchmark. Another example: Andrew knows several reliable people doing practical real-world work who claim that SVMs have saved them when their other favorite classifiers did poorly. There is a lot of excitement and religious fervor about SVMs as of 2001. Despite this, some practitioners are a little skeptical.

Slide 65: Kernelize Logistic Regression. How can we introduce nonlinearity into logistic regression?

Slide 66: Kernelize Logistic Regression. Representer theorem: the optimal weight vector can be written as a combination of the training points in feature space, w = Σ_i α_i Φ(x_i), so the model depends on the data only through kernel evaluations K(x_i, x).
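A rough sketch of what that looks like in code (an illustration of the representer-theorem form f(x) = Σ_i α_i K(x_i, x), trained by plain gradient descent on the regularized logistic loss; the kernel, step size, and regularization are arbitrary choices, not the lecture's exact formulation):

```python
import numpy as np

def rbf_kernel_matrix(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_kernel_logreg(X, y, sigma=1.0, lam=1e-2, lr=0.1, iters=500):
    """y in {+1, -1}; returns dual coefficients alpha for f(x) = sum_i alpha_i K(x_i, x)."""
    K = rbf_kernel_matrix(X, X, sigma)
    alpha = np.zeros(len(X))
    for _ in range(iters):
        f = K @ alpha
        p = 1.0 / (1.0 + np.exp(y * f))          # sigmoid(-y_i f_i), per-example weight
        grad = -K @ (y * p) + lam * (K @ alpha)  # grad of logistic loss + (lam/2) a'Ka
        alpha -= lr * grad / len(X)
    return alpha

def predict(alpha, X_train, X_new, sigma=1.0):
    return np.sign(rbf_kernel_matrix(X_new, X_train, sigma) @ alpha)
```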

Slide 67: Diffusion Kernel. A kernel function describes the correlation or similarity between two data points. Suppose we have a function s(x, y) that describes the similarity between two data points; assume it is non-negative and symmetric. How can we generate a kernel function based on this similarity function? A graph-theory approach...

Slide 68: Diffusion Kernel. Create a graph for the data points: each vertex corresponds to a data point, and the weight of each edge is the similarity s(x, y). Graph Laplacian: L_ij = s(x_i, x_j) for i ≠ j and L_ii = −Σ_{j≠i} s(x_i, x_j). Property of this Laplacian: it is negative semi-definite.

Slides 69-70: Diffusion Kernel. Consider a simple Laplacian L, and consider its powers L², L³, ... What do these matrices represent? They lead to a diffusion kernel.

Slide 71: Diffusion Kernel: Properties. Positive definite. Local relationships in L induce global relationships. Works for undirected weighted graphs with similarities. How do we compute the diffusion kernel K = e^{βL}?

Slide 72: Computing the Diffusion Kernel. Take the eigendecomposition (singular value decomposition) of the Laplacian, L = UΛUᵀ. What is L²? (L² = UΛ²Uᵀ.)

Slide 73: Computing the Diffusion Kernel. What about Lⁿ? (Lⁿ = UΛⁿUᵀ.) Compute the diffusion kernel by exponentiating the eigenvalues: K = e^{βL} = U e^{βΛ} Uᵀ.
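A small numeric illustration of that computation (a sketch assuming NumPy/SciPy and the slides' convention that L is the negative semi-definite Laplacian built from the similarity matrix):

```python
import numpy as np
from scipy.linalg import expm

W = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.5],
              [0.2, 0.5, 0.0]])        # symmetric similarities s(x_i, x_j)
L = W - np.diag(W.sum(axis=1))         # negative semi-definite graph Laplacian
beta = 1.0

K_direct = expm(beta * L)              # matrix exponential

# Same thing via the eigendecomposition L = U diag(lam) U^T:
lam, U = np.linalg.eigh(L)
K_eig = U @ np.diag(np.exp(beta * lam)) @ U.T
print(np.allclose(K_direct, K_eig))    # True: just exponentiate the eigenvalues
```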

Slide 74: Doing multi-class classification. SVMs can only handle two-class outputs (i.e. a categorical output variable with arity 2). What can be done? Answer: with output arity N, learn N SVMs:
SVM 1 learns "Output == 1" vs "Output != 1"
SVM 2 learns "Output == 2" vs "Output != 2"
...
SVM N learns "Output == N" vs "Output != N"
Then, to predict the output for a new input, predict with each SVM and see which one puts the prediction the furthest into the positive region. (A sketch of this scheme follows.)
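A hedged code sketch of that one-vs-rest recipe using scikit-learn (class labels 0..N−1; the kernel and C are arbitrary example settings):

```python
import numpy as np
from sklearn.svm import SVC

def fit_one_vs_rest(X, y, n_classes, C=1.0, kernel="rbf"):
    """Train one SVM per class: class k (+1) vs. the rest (-1)."""
    return [SVC(C=C, kernel=kernel).fit(X, np.where(y == k, 1, -1))
            for k in range(n_classes)]

def predict_one_vs_rest(models, X):
    """Pick the class whose SVM pushes the point furthest into its positive region."""
    scores = np.column_stack([m.decision_function(X) for m in models])
    return scores.argmax(axis=1)
```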

Slide 75: References. An excellent tutorial on VC-dimension and support vector machines: C. J. C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, 2(2):121-167, 1998. http://citeseer.nj.nec.com/burges98tutorial.html. The VC/SRM/SVM bible (not for beginners, including myself): Vladimir Vapnik, Statistical Learning Theory, Wiley-Interscience, 1998. Software: SVM-light, http://svmlight.joachims.org/, free download.

Slide 76: Ranking Problem. Consider the problem of ranking essays. Three ranking categories: good, OK, bad. Given an input document, predict its ranking category. How should we formulate this problem? A simple multi-class solution: treat each ranking category as an independent class. But there is something missing here... we lose the ordinal relationship between the classes!

Slide 77: Ordinal Regression. [Figure: 'good', 'OK', and 'bad' examples projected onto two candidate directions w and w'.] Which choice is better? How could we formulate this problem?

Slides 78-80: Ordinal Regression. What are the two decision boundaries? What is the margin for ordinal regression? Maximize the margin.

Slide 81: Ordinal Regression. How do we solve this monster?

Slide 82: Ordinal Regression. The same old trick: to remove the scaling invariance, fix the scale of w by setting the margin-defining constraints to 1. Now the problem simplifies to a QP analogous to the classification case. [Equations on the slide were not captured in the transcript.]

Slide 83: Ordinal Regression. The noisy case. Is this sufficient?

Slide 84: Ordinal Regression. [Figure: 'good', 'OK', and 'bad' examples ordered along a single direction w, separated by two thresholds.]

