Efficient Convex Relaxation for Transductive Support Vector Machine

Zenglin Xu¹, Rong Jin², Jianke Zhu¹, Irwin King¹, and Michael R. Lyu¹

¹ {zlxu, jkzhu, king, lyu}@cse.cuhk.edu.hk
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
² rongjin@cse.msu.edu
Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824

1. Introduction

We consider the problem of Support Vector Machine transduction, a combinatorial problem whose computational complexity is exponential in the number of unlabeled examples. Although several studies have been devoted to Transductive SVM (TSVM), they suffer either from (1) high computational complexity, or (2) solutions that are only locally optimal.

Contributions

(1) Unlike the semi-definite relaxation [Xu et al., 2005], which approximates TSVM by dropping the rank constraint, the proposed approach approximates TSVM by its dual problem, which provides a tighter approximation.
(2) The proposed algorithm involves fewer free parameters and therefore significantly improves efficiency, reducing the worst-case computational complexity from O(n^{6.5}) to O(n^{4.5}).

2. Problem

Following [Lanckriet et al., 2004], the dual form of SVM can be written as

    \omega(\mathbf{y}) = \max_{\alpha} \; 2\alpha^{\top}\mathbf{e} - \alpha^{\top}\big(K \circ \mathbf{y}\mathbf{y}^{\top}\big)\alpha \quad \text{s.t.} \; 0 \le \alpha \le C\mathbf{e},

where K is the kernel matrix, \mathbf{e} is the vector of all ones, and \circ denotes the element-wise product. TSVM searches for the label assignment \mathbf{y} \in \{-1,+1\}^n that minimizes \omega(\mathbf{y}), subject to the labeled examples keeping their given labels and to a final constraint, the balance constraint, which avoids assigning all the data to one of the classes.

Replacing \mathbf{y}\mathbf{y}^{\top} by a matrix M \succeq 0 with \mathrm{diag}(M) = \mathbf{e}, dropping the rank-one constraint, and applying the Schur complement, the following semi-definite programming (SDP) approximation is achieved in [Xu et al., 2005]:

    variable scale: O(n^2); worst-case time complexity: O(n^{6.5}).
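To make the traditional relaxation concrete, here is a minimal sketch of such an SDP in the style of [Xu et al., 2005], written with cvxpy. The Schur-complement LMI bounds the dual objective by t; the function name rtsvm_sdp, the dictionary of labeled points, and the particular encoding of the balance constraint (using e^T M e = (e^T y)^2) are our assumptions for illustration, not the authors' code.

```python
# A minimal sketch of the traditional semi-definite relaxation (RTSVM-style),
# assuming the box-constrained SVM dual from Section 2. Hypothetical names.
import numpy as np
import cvxpy as cp

def rtsvm_sdp(K, labeled, C=1.0, eps=0.1):
    """K: (n, n) symmetric kernel matrix; labeled: dict {index: +1 or -1}."""
    n = K.shape[0]
    M = cp.Variable((n, n), symmetric=True)  # relaxes y y^T; rank(M)=1 dropped
    t = cp.Variable()
    delta = cp.Variable(n, nonneg=True)      # multipliers for alpha <= C
    nu = cp.Variable(n, nonneg=True)         # multipliers for alpha >= 0
    u = np.ones(n) - delta + nu

    # Schur complement of  t - 2*C*sum(delta) - u^T (K o M)^{-1} u >= 0,
    # i.e. the LMI that enforces omega(M) <= t.
    lmi = cp.bmat([[cp.multiply(K, M), cp.reshape(u, (n, 1))],
                   [cp.reshape(u, (1, n)),
                    cp.reshape(t - 2 * C * cp.sum(delta), (1, 1))]])
    cons = [(lmi + lmi.T) / 2 >> 0,          # symmetrize explicitly for cvxpy
            M >> 0, cp.diag(M) == 1]
    # Labeled examples keep their labels: M_ij = y_i * y_j.
    for i, yi in labeled.items():
        for j, yj in labeled.items():
            cons.append(M[i, j] == yi * yj)
    # One possible balance encoding: e^T M e = (e^T y)^2 <= (eps*n)^2.
    cons.append(cp.sum(M) <= (eps * n) ** 2)
    prob = cp.Problem(cp.Minimize(t), cons)
    prob.solve(solver=cp.SCS)                # any SDP-capable solver works
    return M.value
```

The O(n^2)-sized matrix variable M is exactly what makes this relaxation expensive, which motivates the efficient relaxation of Section 3.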
3. Efficient Relaxation for TSVM

We propose an improved SDP relaxation algorithm for TSVM. Instead of relaxing \mathbf{y}\mathbf{y}^{\top}, we work with the dual problem: we introduce a new variable z that couples the label vector \mathbf{y} with the dual variables \alpha, so that the objective depends on \mathbf{y} only through z. Based on the result that the set of such z admits a simple convex description, the explicit combinatorial constraint on \mathbf{y} is dropped. The dual problem then becomes a convex program; to solve it, we derive a theorem by calculating the Lagrangian of the above non-convex problem.

    variable scale: O(n); worst-case time complexity: O(n^{4.5}).

The decision function can then be written in closed form.

Discussion

Our proposed convex approximation method provides a tighter approximation than the previous method: the prediction function implements the conjugate of the conjugate (the biconjugate) of the prediction function f(x), which is the convex envelope of f(x). The solution of the proposed algorithm is also related to that of harmonic functions: we first propagate the class labels from the labeled examples to the unlabeled ones by one term, and then adjust the predicted labels by a scaling factor (a toy sketch of this two-step view closes the poster).

4. Experimental Results and Conclusion

To evaluate the performance of the proposed method, we perform experiments on several data sets of different scales. The results table compares the classification performance of supervised SVM with a variety of approximation methods for TSVM.

Note: (1) Due to its high computational complexity, the traditional SDP-relaxation method is evaluated only on the small data sets "IBM-s" and "Course-s"; its accuracies are 68.57±22.73 and 64.03±7.65, respectively, both worse than those of the proposed CTSVM method. (2) SVM-light fails to converge on Banana within one week.

The timing figure compares the training time of the two relaxations: computation time of the proposed convex relaxation approach for TSVM (CTSVM) and the traditional semi-definite relaxation approach for TSVM (RTSVM) versus the number of unlabeled examples, on the Course data set with 20 labeled examples.

Conclusion: the experimental results demonstrate that the proposed algorithm effectively reduces the time complexity of convex relaxation for TSVM and improves transductive classification accuracy.

Acknowledgments

The authors thank the anonymous reviewers and PC members for their valuable suggestions and comments to improve this paper.
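The Discussion relates the solution to harmonic functions: labels are first propagated from the labeled to the unlabeled examples, then rescaled before thresholding. The toy sketch below illustrates that two-step view with the classical harmonic-function solve of Zhu et al. (2003); it is not the paper's exact decision function, and the similarity matrix W, the scaling factor gamma, and all names are assumptions for illustration.

```python
# Toy illustration of the propagate-then-adjust view of transductive prediction.
# NOT the paper's decision function; a harmonic-function analogue (Zhu et al., 2003).
import numpy as np

def propagate_labels(W, y_l, labeled, unlabeled, gamma=1.0):
    """W: (n, n) symmetric similarity matrix; y_l: +/-1 labels of `labeled`."""
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian D - W
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    W_ul = W[np.ix_(unlabeled, labeled)]
    # Step 1: propagate class labels from labeled to unlabeled examples
    # (harmonic solution: L_uu f_u = W_ul y_l).
    f_u = np.linalg.solve(L_uu, W_ul @ y_l)
    # Step 2: adjust the soft predictions by a scaling factor, then threshold.
    return np.sign(gamma * f_u)

# Usage on a tiny 4-point similarity graph (points 0 and 3 labeled +1 and -1):
W = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.9, 0.0, 0.2, 0.1],
              [0.1, 0.2, 0.0, 0.8],
              [0.0, 0.1, 0.8, 0.0]])
print(propagate_labels(W, np.array([1.0, -1.0]), [0, 3], [1, 2]))  # -> [ 1. -1.]
```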