1
Ti5216100 MACHINE VISION SUPPORT VECTOR MACHINES Maxim Mikhnevich Pavel Stepanov Pankaj Sharma Ivan Ryzhov Sergey Vlasov 2006-2007
2
Content
1. Where does the Support Vector Machine come from?
2. Relationship between Machine Vision and Pattern Recognition (the place of SVM in the whole system)
3. Application areas of Support Vector Machines
4. The classification problem
5. Linear classifiers
6. The non-separable case
7. The kernel trick
8. Advantages and disadvantages
3
Out of the presentation:
Lagrange theorem
Kuhn-Tucker theorem
Quadratic Programming
We don't go into deep math.
4
History The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs
5
Relationship between Machine Vision and Pattern Recognition Our task during this presentation is to show that SVM is one of the best classifiers
6
Application Areas (just several examples)
7
Application Areas (cont.)
9
Geometrical Interpretation of how the SVM separates the face and non-face classes. The patterns are real support vectors obtained after training the system. Notice the small number of total support vectors and the fact that a higher proportion of them correspond to non-faces.
10
Basic definitions from a technical viewpoint:
Feature
Feature space
Hyperplane
Margin
11
Problem: binary classification
Learning collection:
- vectors x1, …, xn – our documents (objects)
- labels y1, …, yn ∈ {-1, 1}
Our goal is to find the optimal hyperplane!
12
Linear classifiers
w·xi > b  =>  yi = +1
w·xi < b  =>  yi = -1
Maximum margin linear classifier:
w·xi - b >= +1  =>  yi = +1
w·xi - b <= -1  =>  yi = -1
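The decision rule above fits in a few lines of code. A minimal sketch in Python/NumPy, with an illustrative toy learning collection and a hand-picked hyperplane (none of these values come from the slides):

import numpy as np

# Toy learning collection: vectors x_i and labels y_i in {-1, +1}
# (illustrative values only).
X = np.array([[2.0, 2.0], [3.0, 3.5], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

# A separating hyperplane w.x - b = 0, chosen by hand for this toy data.
w = np.array([1.0, 1.0])
b = 0.0

# Linear classifier: predict +1 if w.x - b > 0, otherwise -1.
predictions = np.sign(X @ w - b)
print(predictions)                     # [ 1.  1. -1. -1.]
print(bool(np.all(predictions == y)))  # True: w, b separate the toy data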
13
Linear classifiers (cont.)
(a) - a separating hyperplane with a small margin
(b) - a separating hyperplane with a larger margin
A better generalization capability is expected from (b)!
14
Margin width
Let's take any two points, x+ on H1 and x- on H2.
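Writing this step out explicitly, using the canonical hyperplanes H1: w·x - b = +1 and H2: w·x - b = -1:

\[
w\cdot x^{+} - b = +1, \qquad w\cdot x^{-} - b = -1
\;\;\Rightarrow\;\; w\cdot(x^{+} - x^{-}) = 2
\;\;\Rightarrow\;\; \text{margin} = \frac{w}{\|w\|}\cdot(x^{+} - x^{-}) = \frac{2}{\|w\|} .
\]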
15
Formalization
Our aim is to find the widest margin!
Optimization criterion and constraints: see the formulation below.
Number of constraints = number of pairs (xi, yi)
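The standard hard-margin formulation referred to here is:

\[
\min_{w,\,b}\ \tfrac{1}{2}\,\|w\|^{2}
\qquad \text{s.t.} \qquad
y_i\,(w\cdot x_i - b) \ge 1, \quad i = 1,\dots,n .
\]

Maximizing the margin 2/||w|| is equivalent to minimizing ||w||^2/2, with one constraint per training pair.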
16
Noise and Penalties
Optimization criterion and constraints: see the soft-margin formulation below, where ξi >= 0.
Number of constraints = 2 * number of pairs (xi, yi)
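The standard soft-margin formulation, with slack variables ξi and a penalty parameter C, is:

\[
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\,\|w\|^{2} + C\sum_{i=1}^{n}\xi_i
\qquad \text{s.t.} \qquad
y_i\,(w\cdot x_i - b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1,\dots,n ,
\]

which gives two constraints per training pair.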
17
First great idea
This gives us a way to find the linear classifier: the wider the margin and the smaller the sum of errors, the better. We have now reduced the problem of finding a linear classifier to a Quadratic Programming problem.
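In practice the QP is rarely solved by hand. A minimal sketch, assuming scikit-learn is available (its SVC with a linear kernel solves this QP internally); the toy data is illustrative:

import numpy as np
from sklearn.svm import SVC

X = np.array([[2.0, 2.0], [3.0, 3.5], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1.0)   # C is the penalty for margin violations
clf.fit(X, y)

print(clf.coef_, clf.intercept_)    # the learned w and b
print(clf.support_vectors_)         # the training points that define the margin
print(clf.predict([[1.0, 1.0]]))    # -> [1]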
18
How to solve our problem
1. Construct the Lagrangian
2. Use the Kuhn-Tucker theorem
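For reference, the standard Lagrangian of the hard-margin problem, with multipliers αi >= 0:

\[
L(w, b, \alpha) = \tfrac{1}{2}\,\|w\|^{2}
 - \sum_{i=1}^{n} \alpha_i \bigl[\, y_i\,(w\cdot x_i - b) - 1 \bigr],
\qquad \alpha_i \ge 0 .
\]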
19
How to solve our problem Our solution is:
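In the standard derivation (stated here for reference), setting the derivatives of the Lagrangian to zero gives

\[
w = \sum_{i=1}^{n} \alpha_i\, y_i\, x_i, \qquad \sum_{i=1}^{n} \alpha_i\, y_i = 0,
\qquad
f(x) = \operatorname{sign}\!\Bigl(\sum_{i=1}^{n} \alpha_i\, y_i\,(x_i\cdot x) - b\Bigr),
\]

where only the support vectors have αi > 0.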
20
Second great idea
Choose a mapping φ into an extended (feature) space. After that we can define a new function, called the kernel: K(x, x') = φ(x)·φ(x'). Find the linear margin (w, b) in the extended space; now we have our hyperplane in the initial space.
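A minimal sketch of how the kernel enters the decision function, assuming the dual coefficients αi and the offset b have already been found by the QP solver (function and variable names are illustrative):

import numpy as np

def polynomial_kernel(a, b, d=2):
    # K(a, b) = (a.b + 1)^d: a dot product in the extended space,
    # computed without ever building phi(a) and phi(b) explicitly.
    return (np.dot(a, b) + 1.0) ** d

def decision(x, X_train, y_train, alpha, b, kernel=polynomial_kernel):
    # f(x) = sign( sum_i alpha_i y_i K(x_i, x) - b )
    s = sum(a_i * y_i * kernel(x_i, x)
            for a_i, y_i, x_i in zip(alpha, y_train, X_train))
    return np.sign(s - b)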
21
Second great idea - extend our space
Solution of the XOR problem with the help of Support Vector Machines (by increasing the dimension of our space), or a different example:
22
Examples (Extend our space)
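A minimal sketch of the XOR example mentioned above, using one common choice of extension, the added coordinate x1*x2 (the mapping and the hyperplane below are illustrative, not taken from the slides):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])     # XOR labels: not linearly separable in 2-D

def phi(x):
    # Extend the space: (x1, x2) -> (x1, x2, x1*x2)
    return np.array([x[0], x[1], x[0] * x[1]])

X_ext = np.array([phi(x) for x in X])

# In the extended space the hyperplane x1 + x2 - 2*x1*x2 = 0.5 separates XOR.
w, b = np.array([1.0, 1.0, -2.0]), 0.5
print(np.sign(X_ext @ w - b))    # [-1.  1.  1. -1.]  == y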
23
SVM Kernel Functions
K(a, b) = (a·b + 1)^d is an example of an SVM kernel function.
Beyond polynomials there are other very high dimensional basis functions that can be made practical by finding the right kernel function.
Radial-basis-style kernel function: K(a, b) = exp(-|a - b|^2 / (2σ^2))
Neural-net-style kernel function: K(a, b) = tanh(κ a·b - δ)
σ, κ and δ are magic parameters that must be chosen by a model selection method such as CV or VCSRM.
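The three kernels above, written out as plain functions; σ, κ and δ are the "magic parameters" that would be tuned by cross-validation (a sketch, with arbitrary default values):

import numpy as np

def poly_kernel(a, b, d=3):
    # (a.b + 1)^d
    return (np.dot(a, b) + 1.0) ** d

def rbf_kernel(a, b, sigma=1.0):
    # exp(-|a - b|^2 / (2 sigma^2))
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def neural_net_kernel(a, b, kappa=1.0, delta=0.0):
    # tanh(kappa * a.b - delta)
    return np.tanh(kappa * np.dot(a, b) - delta)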
24
Advantages and Disadvantages
25
References
1. V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
2. http://www.support-vector-machines.org/
3. http://en.wikipedia.org/wiki/Support_vector_machine
4. R.O. Duda and P.E. Hart, Pattern Classification (2nd ed.).
5. A.W. Moore, Support Vector Machines, tutorial, http://www.autonlab.org/tutorials/svm.html
6. B. Schölkopf, C.J.C. Burges, and A.J. Smola, Advances in Kernel Methods - Support Vector Learning, MIT Press, Cambridge, Mass., 1998.