
1 Classification Problem: 2-Category Linearly Separable Case
[Figure: two classes of points, A+ (Benign) and A- (Malignant), separated by a plane]

2 Support Vector Machines: Maximizing the Margin between Bounding Planes
[Figure: classes A+ and A- with the two bounding planes and the margin between them]

3 Algebra of the Classification Problem: 2-Category Linearly Separable Case
• Given m points in the n-dimensional real space Rⁿ
• Represented by an m×n matrix A, where each row Aᵢ belongs to A+ or A-
• Membership of each point in the classes is specified by an m×m diagonal matrix D: Dᵢᵢ = +1 if Aᵢ ∈ A+ and Dᵢᵢ = -1 if Aᵢ ∈ A-
• Separate A+ and A- by two bounding planes x'w = γ + 1 and x'w = γ - 1 such that: Aᵢw ≥ γ + 1 for Dᵢᵢ = +1 and Aᵢw ≤ γ - 1 for Dᵢᵢ = -1
• More succinctly: D(Aw - eγ) ≥ e, where e is a vector of ones (see the sketch below)
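A minimal NumPy sketch of this notation; the 2-D toy points and the plane (w, γ) are illustrative assumptions, only the matrices A, D, e and the test D(Aw - eγ) ≥ e come from the slide:

```python
import numpy as np

# m = 4 points in n = 2 dimensional space, stacked as rows of A (m x n).
A = np.array([[2.0, 2.0], [3.0, 3.0],    # two points in A+
              [0.0, 0.0], [-1.0, 0.5]])  # two points in A-
# D is the m x m diagonal matrix of class labels: +1 for A+, -1 for A-.
D = np.diag([1.0, 1.0, -1.0, -1.0])
e = np.ones(4)                           # vector of ones

# A separating plane x'w = gamma with bounding planes x'w = gamma +/- 1.
w, gamma = np.array([1.0, 1.0]), 2.0

# The points are correctly separated iff D(Aw - e*gamma) >= e holds.
print(D @ (A @ w - e * gamma) >= e)      # -> [ True  True  True  True ]
```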

4 Support Vector Classification (Linearly Separable Case)
Let S = {(x¹, y₁), ..., (xᵐ, yₘ)}, yᵢ ∈ {-1, +1}, be a linearly separable training sample, represented by the matrices A ∈ Rᵐˣⁿ (rows xⁱ) and D = diag(y₁, ..., yₘ).

5 Support Vector Classification (Linearly Separable Case, Primal)
The hyperplane (w, γ) that solves the minimization problem
  min over (w, γ) of (1/2)‖w‖₂²  subject to  D(Aw - eγ) ≥ e
realizes the maximal margin hyperplane, with geometric margin 1/‖w‖₂.
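To make the primal concrete, here is a hedged sketch that hands this QP to SciPy's generic SLSQP solver (a choice of convenience, not from the slides), on the same made-up toy data as above:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[2.0, 2.0], [3.0, 3.0], [0.0, 0.0], [-1.0, 0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])           # diagonal of D
m, n = A.shape

# Optimization variable z = (w_1, ..., w_n, gamma).
obj  = lambda z: 0.5 * z[:n] @ z[:n]           # (1/2)||w||^2
cons = {"type": "ineq",                        # D(Aw - e*gamma) - e >= 0
        "fun": lambda z: y * (A @ z[:n] - z[n]) - 1.0}

res = minimize(obj, np.zeros(n + 1), constraints=[cons], method="SLSQP")
w, gamma = res.x[:n], res.x[n]
print("w =", w, "gamma =", gamma, "margin =", 1.0 / np.linalg.norm(w))
```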

6 Support Vector Classification (Linearly Separable Case, Dual Form)
The dual problem of the previous mathematical program:
  max over α of e'α - (1/2)α'DAA'Dα  subject to  e'Dα = 0, α ≥ 0
Applying the KKT optimality conditions, we have w = A'Dα. But where is γ?
Don't forget the complementarity conditions: αᵢ[Dᵢᵢ(Aᵢw - γ) - 1] = 0, so for any αᵢ > 0 we get Dᵢᵢ(Aᵢw - γ) = 1, i.e. γ = Aᵢw - Dᵢᵢ.
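A sketch of solving this dual with the same generic solver, then recovering w = A'Dα and γ from a support vector exactly as the complementarity condition above prescribes (toy data again assumed):

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[2.0, 2.0], [3.0, 3.0], [0.0, 0.0], [-1.0, 0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
m = len(y)
Q = np.outer(y, y) * (A @ A.T)                 # DAA'D

obj  = lambda a: 0.5 * a @ Q @ a - a.sum()     # SciPy minimizes, so negate
cons = {"type": "eq", "fun": lambda a: y @ a}  # e'D alpha = 0
alpha = minimize(obj, np.zeros(m), bounds=[(0, None)] * m,
                 constraints=[cons], method="SLSQP").x

# KKT: w = A'D alpha; gamma from any support vector (alpha_i > 0),
# where complementarity gives D_ii(A_i w - gamma) = 1.
w = A.T @ (y * alpha)
i = int(np.argmax(alpha))                      # a support vector index
gamma = A[i] @ w - y[i]
print("alpha =", alpha, "w =", w, "gamma =", gamma)
```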

7 Dual Representation of SVM (Key of Kernel Methods: w = A'Dα* = Σᵢ α*ᵢ yᵢ xⁱ)
The hypothesis is determined by (α*, γ*):
  h(x) = sign(⟨w, x⟩ - γ*) = sign(Σᵢ α*ᵢ yᵢ ⟨xⁱ, x⟩ - γ*)
The data enter the classifier only through inner products ⟨xⁱ, x⟩.

8 Compute the Geometric Margin via the Dual Solution
• The geometric margin is 1/‖w‖₂ and ‖w‖₂² = e'α*; hence we can compute the margin as (e'α*)^(-1/2) using only the dual solution α*
• Use KKT again (in the dual)! Don't forget e'Dα* = 0 and the complementarity conditions: summing α*ᵢ[Dᵢᵢ(Aᵢw - γ) - 1] = 0 over i gives ‖w‖₂² = α*'DAw = e'α*
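A tiny sketch of this identity; the values of α and w below are hypothetical, chosen to satisfy ‖w‖₂² = e'α, not computed from real data:

```python
import numpy as np

alpha = np.array([0.0, 0.25, 0.25, 0.0])   # hypothetical dual solution
w     = np.array([0.5, 0.5])               # hypothetical, ||w||^2 = 0.5 = e'alpha
print(1.0 / np.sqrt(alpha.sum()))          # margin from the dual: sqrt(2)
print(1.0 / np.linalg.norm(w))             # margin from the primal: sqrt(2)
```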

9 Soft Margin SVM (Nonseparable Case)
• If the data are not linearly separable:
  • the primal problem is infeasible
  • the dual problem is unbounded above
• Introduce a slack variable ξᵢ ≥ 0 for each training point: Dᵢᵢ(Aᵢw - γ) ≥ 1 - ξᵢ
• The inequality system D(Aw - eγ) ≥ e - ξ is always feasible, e.g. w = 0, γ = 0, ξ = e

10 [Figure-only slide: no text recovered]

11 Two Different Measures of Training Error
2-Norm Soft Margin:
  min over (w, γ, ξ) of (1/2)‖w‖₂² + (C/2)‖ξ‖₂²  subject to  D(Aw - eγ) ≥ e - ξ
1-Norm Soft Margin:
  min over (w, γ, ξ) of (1/2)‖w‖₂² + C e'ξ  subject to  D(Aw - eγ) ≥ e - ξ, ξ ≥ 0

12 2-Norm Soft Margin Dual Formulation
The Lagrangian for the 2-norm soft margin:
  L(w, γ, ξ, α) = (1/2)w'w + (C/2)ξ'ξ + α'[e - ξ - D(Aw - eγ)], where α ≥ 0
The partial derivatives with respect to the primal variables equal zero:
  ∂L/∂w = w - A'Dα = 0,  ∂L/∂γ = e'Dα = 0,  ∂L/∂ξ = Cξ - α = 0

13 Dual Maximization Problem for the 2-Norm Soft Margin
Dual:
  max over α of e'α - (1/2)α'(DAA'D + I/C)α  subject to  e'Dα = 0, α ≥ 0
• The corresponding KKT complementarity: αᵢ[Dᵢᵢ(Aᵢw - γ) - 1 + ξᵢ] = 0
• Use the above conditions to find γ*: for any α*ᵢ > 0, since ξᵢ = αᵢ/C, we get γ* = Aᵢw - Dᵢᵢ(1 - α*ᵢ/C); a sketch follows below
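A sketch of this 2-norm soft margin dual on made-up, non-separable toy data (one negative point sits between the two positive ones); C = 10 is an arbitrary choice:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[2.0, 2.0], [3.0, 3.0], [0.2, 0.1], [2.5, 2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])           # last point is an outlier
m, C = len(y), 10.0
Q = np.outer(y, y) * (A @ A.T) + np.eye(m) / C # DAA'D + I/C

obj  = lambda a: 0.5 * a @ Q @ a - a.sum()
cons = {"type": "eq", "fun": lambda a: y @ a}
alpha = minimize(obj, np.zeros(m), bounds=[(0, None)] * m,
                 constraints=[cons], method="SLSQP").x

# KKT: w = A'D alpha, xi = alpha/C, and for any alpha_i > 0
# complementarity D_ii(A_i w - gamma) = 1 - alpha_i/C yields gamma.
w = A.T @ (y * alpha)
i = int(np.argmax(alpha))
gamma = A[i] @ w - y[i] * (1.0 - alpha[i] / C)
print("w =", w, "gamma =", gamma)
```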

14 Linear Machine in Feature Space
Let φ: X → F be a nonlinear map from the input space to some feature space.
The classifier will be in the form (primal):
  f(x) = ⟨w, φ(x)⟩ - γ
Make it in the dual form:
  f(x) = Σᵢ αᵢ yᵢ ⟨φ(xⁱ), φ(x)⟩ - γ

15 Kernel: Represent the Inner Product in Feature Space
The classifier will become:
  f(x) = Σᵢ αᵢ yᵢ K(xⁱ, x) - γ
Definition: A kernel is a function K: X × X → R such that K(x, z) = ⟨φ(x), φ(z)⟩, where φ: X → F.
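Some standard concrete kernels as a sketch; the slide only defines the general notion, so these specific choices are common examples, not taken from the slides:

```python
import numpy as np

def linear_kernel(x, z):
    return x @ z                                # phi = identity

def polynomial_kernel(x, z, degree=2, c=1.0):
    return (x @ z + c) ** degree                # implicit monomial features

def gaussian_kernel(x, z, sigma=1.0):
    # Implicit feature space is infinite dimensional.
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(linear_kernel(x, z), polynomial_kernel(x, z), gaussian_kernel(x, z))
```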

16 Introduce the Kernel into the Dual Formulation
Let S = {(x¹, y₁), ..., (xᵐ, yₘ)} be a linearly separable training sample in the feature space implicitly defined by the kernel K. The SV classifier is determined by the α* that solves
  max over α of e'α - (1/2)α'DKDα  subject to  e'Dα = 0, α ≥ 0
where K is the Gram matrix with entries Kᵢⱼ = K(xⁱ, xʲ).
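A small sketch of assembling the quadratic term DKD of this dual from a Gram matrix; the points and the degree-2 polynomial kernel are illustrative assumptions:

```python
import numpy as np

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, -1.0, 1.0])
K = (X @ X.T + 1.0) ** 2                  # degree-2 polynomial kernel, all pairs
Q = np.outer(y, y) * K                    # DKD: the dual's quadratic term
print(Q)
```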

17 Kernel Technique Based on Mercer's Condition (1909)
The value of the kernel function represents the inner product in feature space. Kernel functions merge two steps:
1. map the input data from the input space to the feature space (which might be infinite-dimensional)
2. compute the inner product in the feature space
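A sketch verifying this two-steps-in-one claim for the degree-2 homogeneous polynomial kernel on R², whose explicit feature map φ(x) = (x₁², x₂², √2·x₁x₂) is small enough to write out:

```python
import numpy as np

def phi(x):                                     # explicit feature map
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2.0) * x[0] * x[1]])

x, z = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print((x @ z) ** 2)                             # kernel: one step
print(phi(x) @ phi(z))                          # map + inner product: two steps
```

Both lines print 1.0: the kernel evaluates the feature-space inner product without ever forming φ(x).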

18 Mercer's Condition Guarantees the Convexity of the QP
Let X = {x¹, ..., xᵐ} be a finite input space and K(x, z) a symmetric function on X. Then K(x, z) is a kernel function if and only if the Gram matrix K = [K(xⁱ, xʲ)]ᵢⱼ is positive semi-definite.
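A sketch of this finite-space test: build the Gram matrix for an assumed Gaussian kernel on made-up points and check symmetry and positive semi-definiteness numerically:

```python
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
# Gaussian Gram matrix: K_ij = exp(-||x_i - x_j||^2 / 2).
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2) / 2.0)

eigs = np.linalg.eigvalsh(K)                    # eigenvalues of symmetric K
print("symmetric:", np.allclose(K, K.T))
print("PSD:", np.all(eigs >= -1e-10))           # tolerate round-off
```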

19 Introduce the Kernel in the Dual Formulation for the 2-Norm Soft Margin
Consider the feature space implicitly defined by the kernel K. Suppose α* solves the QP problem:
  max over α of e'α - (1/2)α'(DKD + I/C)α  subject to  e'Dα = 0, α ≥ 0
• Use the above conditions to find γ* (detailed on the next slide)
• Then the decision rule is defined by sign(Σᵢ α*ᵢ yᵢ K(xⁱ, x) - γ*); a sketch follows below
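Putting the pieces together, a sketch that trains this kernelized 2-norm soft margin classifier end to end; the Gaussian kernel, toy data, and C = 10 are all assumptions of the sketch:

```python
import numpy as np
from scipy.optimize import minimize

def k(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

X = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [2.1, 1.9]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
m, C = len(y), 10.0

K = np.array([[k(a, b) for b in X] for a in X])      # Gram matrix
Q = np.outer(y, y) * K + np.eye(m) / C               # DKD + I/C

obj  = lambda a: 0.5 * a @ Q @ a - a.sum()
cons = {"type": "eq", "fun": lambda a: y @ a}
alpha = minimize(obj, np.zeros(m), bounds=[(0, None)] * m,
                 constraints=[cons], method="SLSQP").x

# gamma from complementarity at any alpha_i > 0 (next slide).
i = int(np.argmax(alpha))
gamma = (alpha * y) @ K[:, i] - y[i] * (1.0 - alpha[i] / C)

def decide(x):   # decision rule: sign(sum_i alpha_i y_i K(x_i, x) - gamma)
    return np.sign(sum(a * yi * k(xi, x) for a, yi, xi in zip(alpha, y, X)) - gamma)

print([decide(x) for x in X])                        # should recover y
```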

20 Introduce the Kernel in the Dual Formulation for the 2-Norm Soft Margin (cont.)
γ* is chosen so that, for any α*ᵢ > 0,
  yᵢ(Σⱼ α*ⱼ yⱼ K(xʲ, xⁱ) - γ*) = 1 - ξᵢ, with ξᵢ = α*ᵢ/C
Because: the KKT complementarity condition α*ᵢ[yᵢ(Σⱼ α*ⱼ yⱼ K(xʲ, xⁱ) - γ*) - 1 + ξᵢ] = 0 and ξ = α*/C.

21 Geometric Margin in Feature Space for the 2-Norm Soft Margin
• The geometric margin in the feature space is defined by (e'α* - α*'α*/C)^(-1/2)
• Why? Because w = Σᵢ α*ᵢ yᵢ φ(xⁱ) gives ‖w‖₂² = α*'DKDα*, and summing the KKT complementarity conditions (using e'Dα* = 0 and ξ = α*/C) yields ‖w‖₂² = e'α* - α*'α*/C
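A one-liner sketch of this margin formula on a hypothetical dual solution (α and C below are made-up values, with C matching the earlier sketches):

```python
import numpy as np

alpha, C = np.array([0.4, 0.0, 0.4, 0.0]), 10.0
w_norm_sq = alpha.sum() - (alpha @ alpha) / C   # ||w||^2 = e'alpha - alpha'alpha/C
print("margin =", 1.0 / np.sqrt(w_norm_sq))
```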

22 Discussion About C for the 2-Norm Soft Margin
• The only difference between the "hard margin" and the 2-norm soft margin is the objective function of the optimization problem
• Compare the dual quadratic terms: α'DKDα (hard margin) versus α'(DKD + I/C)α (2-norm soft margin); as C → ∞ the two coincide
• A larger C will give you a smaller margin in the feature space
• A smaller C will give you a better numerical condition, since the Hessian DKD + I/C is better conditioned

