Announcements
- Project teams should be decided today! Otherwise, you will work alone.
- If you have any questions or uncertainty about the project, talk to me by appointment.
- By tomorrow, you should have a working version of the text classifier!
Support Vector Machine (III)
Rong Jin
Recap: Support Vector Machine (SVM) for Noisy Data
- With noisy data, no perfect linear decision boundary exists.
- Balance the trade-off between a large margin and few classification errors.
(Figure: training points labeled +1 and -1 on either side of a linear decision boundary.)
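The margin/error trade-off on this slide is standardly written as the soft-margin objective (a standard formulation, supplied here since the slide's equation did not survive extraction):

```latex
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\;
\frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad
y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 - \xi_i, \qquad \xi_i \ge 0,
```

where the slack variable $\xi_i$ measures the classification error on example $i$ and the constant $C$ controls the trade-off: larger $C$ penalizes errors more, smaller $C$ favors a larger margin.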
Recap: Dual Problem
- The original (primal) optimization problem for SVM has parameters w and b.
- The vector w lives in the feature space: one weight per feature.
- #parameters = #features, which may be too many when the number of features is large.
- Text classification: 100,000 word features means 100,000 parameters!
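As a concrete picture of "one weight per word feature", a primal bag-of-words classifier can be sketched as follows; the vocabulary, weights, and bias below are invented purely for illustration:

```python
# Sketch: a primal linear text classifier keeps one weight per word feature.
# A real vocabulary would have ~100,000 entries; this tiny one is made up.
w = {"goal": 1.2, "match": 0.8, "election": -1.5, "senate": -1.1}
b = 0.1

def score(doc_counts):
    # w . x + b over a sparse bag-of-words vector {word: count}
    return sum(w.get(word, 0.0) * cnt for word, cnt in doc_counts.items()) + b

doc = {"goal": 2, "match": 1}   # a short "sports" document
print(score(doc))               # 2*1.2 + 1*0.8 + 0.1 = 3.3
```

The classifier predicts +1 when the score is positive and -1 otherwise; every word in the vocabulary contributes its own weight, which is exactly why the primal parameter count tracks the feature count.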
Recap: Dual Problem
- Represent w as a linear combination of the training examples: $\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i$.
- New parameter space: every training example $\mathbf{x}_i$ gets a weight $\alpha_i$.
- #parameters: #features $\to$ #training examples.
- Why is a linear combination of training examples sufficient? Any component of w orthogonal to every training example leaves all $y_i(\mathbf{w}\cdot\mathbf{x}_i + b)$ unchanged and only increases $\|\mathbf{w}\|$, so the optimal w must lie in the span of the training examples.
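The representation can be checked directly: a score computed with the recovered w equals the score computed entirely in the dual form, so w never needs to be stored explicitly. The alphas, labels, and points below are invented for illustration:

```python
# Sketch: recovering w from dual coefficients, w = sum_i alpha_i y_i x_i.
# The toy alphas, labels, and points are made up for this illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

X = [(1.0, 2.0), (2.0, 1.0), (-1.0, -1.0)]   # training examples
y = [+1, +1, -1]                              # labels
alpha = [0.5, 0.0, 0.5]                       # alpha_i = 0: not a support vector

# w = sum_i alpha_i * y_i * x_i (a linear combination of training examples)
d = len(X[0])
w = [sum(alpha[i] * y[i] * X[i][j] for i in range(len(X))) for j in range(d)]

# The primal score w . x equals the dual score sum_i alpha_i y_i (x_i . x).
x_new = (0.5, 0.5)
primal = dot(w, x_new)
dual = sum(alpha[i] * y[i] * dot(X[i], x_new) for i in range(len(X)))
print(w, primal, dual)   # w = [1.0, 1.5]; both scores equal 1.25
```

Note that the example with alpha = 0 contributes nothing to either score, previewing the support-vector property on the next slide.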
Recap: Dual Problem
- Maximize $\sum_i \alpha_i - \frac{1}{2}\sum_i\sum_j \alpha_i \alpha_j y_i y_j (\mathbf{x}_i \cdot \mathbf{x}_j)$
- Subject to: $0 \le \alpha_i \le C$ for all $i$, and $\sum_i \alpha_i y_i = 0$.
- This is a quadratic programming (QP) problem.
- Non-support-vector training points get $\alpha_i = 0$; only support vectors have $\alpha_i > 0$.
- No single training example can dominate: each weight is capped at $\alpha_i \le C$.
- Weights for positive and negative examples are balanced: $\sum_{i: y_i = +1} \alpha_i = \sum_{i: y_i = -1} \alpha_i$.
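These properties can be observed on a tiny example. The sketch below maximizes the dual objective by projected gradient ascent; to keep it short, the bias term b is dropped so the only constraints are the box constraints $0 \le \alpha_i \le C$ (a real solver would also enforce $\sum_i \alpha_i y_i = 0$). The dataset, step size, and iteration count are all choices made for this illustration:

```python
# Sketch: projected gradient ascent on the (bias-free) SVM dual.
# Toy data: two easy points far from the boundary, two margin points.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

X = [(2.0, 0.0), (4.0, 0.0), (-2.0, 0.0), (-4.0, 0.0)]
y = [+1, +1, -1, -1]
n = len(X)
C = 10.0                      # cap: no single example's weight can exceed C
K = [[dot(X[i], X[j]) for j in range(n)] for i in range(n)]

alpha = [0.0] * n
eta = 0.01                    # step size, small enough for this toy problem
for _ in range(5000):
    # gradient of the dual objective: 1 - y_i * sum_j alpha_j y_j K_ij
    grad = [1.0 - y[i] * sum(alpha[j] * y[j] * K[i][j] for j in range(n))
            for i in range(n)]
    # ascent step, then project each alpha_i back onto the box [0, C]
    alpha = [min(C, max(0.0, alpha[i] + eta * grad[i])) for i in range(n)]

# The easy points (4,0) and (-4,0) end with alpha_i = 0; only the margin
# points (2,0) and (-2,0) are support vectors, with alpha_i = 0.125 each.
print([round(a, 3) for a in alpha])
```

On this symmetric dataset the positive and negative alphas also come out balanced ($\sum_i \alpha_i y_i = 0$) even though the constraint was not enforced; with a bias term, that balance is exactly the equality constraint of the QP above.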