Support Vector Machines for Multiple-Instance Learning
Authors: S. Andrews, I. Tsochantaridis, and T. Hofmann (Advances in Neural Information Processing Systems 15, 2002, pp. 577-584)
Presentation by BH Shen to the Machine Learning Research Lab, ASU, 09/19/2006
Outline
– SVM (Support Vector Machine)
– Maximum Pattern Margin Formulation
– Maximum Bag Margin Formulation
– Heuristics
– Simulation results of some other learning algorithms for MIL
Problem Instance
For supervised learning, we are given a set of labeled instances
$\{(x_1, y_1), \dots, (x_n, y_n)\}$, $x_i \in \mathbb{R}^d$, $y_i \in \{-1, +1\}$.
For MIL, we are given a set of labeled bags of instances
$\{(B_1, Y_1), \dots, (B_m, Y_m)\}$, $B_I \subseteq \mathbb{R}^d$, $Y_I \in \{-1, +1\}$,
where a bag is labeled positive iff at least one of its instances is positive: $Y_I = \max_{i \in B_I} y_i$.
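Not on the original slide: a minimal Python sketch of how the two kinds of training data might be represented (variable names are illustrative, not from the paper).

```python
import numpy as np

# Supervised learning: one label per instance.
X = np.array([[0.2, 1.1], [1.5, 0.3]])  # instances x_i
y = np.array([+1, -1])                   # instance labels y_i

# MIL: one label per bag; a bag is positive iff at least one of its
# instances is positive (instance labels inside positive bags are unobserved).
bags = [np.array([[0.2, 1.1], [0.9, 0.4]]),  # bag B_1
        np.array([[1.5, 0.3], [0.1, 0.2]])]  # bag B_2
bag_labels = np.array([+1, -1])              # bag labels Y_I
```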
SVM: To find a Max Margin Classifier
Find the classifier that gives the least chance of causing a misclassification if we have made a small error in locating the boundary.
SVM: To find a Max Margin Classifier
The margin of the classifier is the width of the gap between the boundaries of the two classes.
SVM: To find a Max Margin Classifier
Support vectors are the data points that lie on the boundaries of the half-spaces.
SVM
The half-spaces define the feasible regions for the data points.
SVM
Soft margin: errors are allowed, to resolve the infeasibility issue for data points that cannot be separated.
SVM: Constraints
The per-class constraints
$\langle w, x_i \rangle + b \ge +1$ for $y_i = +1$ and $\langle w, x_i \rangle + b \le -1$ for $y_i = -1$
are combined into
$y_i(\langle w, x_i \rangle + b) \ge 1$ for all $i$.
SVM: Objective function
Margin: $\dfrac{2}{\|w\|}$
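Not on the original slide: a short derivation of the margin width, assuming the two boundary hyperplanes are $\langle w, x \rangle + b = \pm 1$.

```latex
% Take any x^+ on the +1 plane and any x^- on the -1 plane:
%   <w, x^+> + b = +1,   <w, x^-> + b = -1   =>   <w, x^+ - x^-> = 2.
% Projecting x^+ - x^- onto the unit normal w/||w|| gives the width:
\text{margin} \;=\; \Big\langle \tfrac{w}{\|w\|},\; x^{+} - x^{-} \Big\rangle \;=\; \frac{2}{\|w\|}.
```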
SVM: Objective function
Maximizing $2/\|w\|$ is the same as minimizing $\tfrac{1}{2}\|w\|^2$. We would also like to minimize the sum of training-set errors, which is captured by the slack variables $\xi_i$.
SVM: Primal Formulation
Quadratic minimization problem:
$$\min_{w, b, \xi} \; \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i$$
subject to
$$y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i, \qquad \xi_i \ge 0 \quad \forall i.$$
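Not on the original slide: a minimal sketch of this primal as a quadratic program, assuming the cvxpy modeling library (the function name soft_margin_svm is illustrative).

```python
import numpy as np
import cvxpy as cp

def soft_margin_svm(X, y, C=1.0):
    """Solve the soft-margin SVM primal QP directly.

    X: (n, d) array of instances; y: (n,) array of +1/-1 labels.
    """
    n, d = X.shape
    w = cp.Variable(d)
    b = cp.Variable()
    xi = cp.Variable(n, nonneg=True)  # slack variables
    objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi))
    # y_i (<w, x_i> + b) >= 1 - xi_i for all i
    constraints = [cp.multiply(y, X @ w + b) >= 1 - xi]
    cp.Problem(objective, constraints).solve()
    return w.value, b.value
```

For anything beyond small data sets one would use the dual (kernelized) formulation or a dedicated SVM solver; the QP above simply mirrors the primal for clarity.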
Maximum Pattern Margin
Modification to SVM: the instance labels in positive bags become unknown integer variables, constrained so that at least one instance in each positive bag is positive.
Pattern Margin: Primal Formulation
Mixed integer problem:
$$\min_{\{y_i\}} \min_{w, b, \xi} \; \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i$$
subject to
$$y_i(\langle w, x_i \rangle + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad y_i \in \{-1, +1\},$$
$$\sum_{i \in B_I} \frac{1 + y_i}{2} \ge 1 \;\; \text{for every bag with } Y_I = +1, \qquad y_i = -1 \;\; \text{for all } i \in B_I \text{ with } Y_I = -1.$$
Heuristics
Idea: alternate the following two steps:
– For fixed integer variables, solve the associated quadratic problem for the optimal discriminant function.
– For a given discriminant function, update one, several, or all integer variables so as to (locally) minimize the objective function.
Heuristic for Maximum Pattern Margin
Initialize $y_i = Y_I$ for every instance $i$ in bag $B_I$. Repeat: train an SVM with the imputed labels; re-impute $y_i = \operatorname{sgn}(\langle w, x_i \rangle + b)$ for the instances in positive bags; if a positive bag ends up with no positive instance, set the label of its highest-scoring instance to $+1$. Stop when the imputed labels no longer change. (A sketch of this loop follows.)
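Not on the original slide: a minimal sketch of this heuristic (mi-SVM) in Python, assuming scikit-learn's SVC; the function name mi_svm and the data layout are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def mi_svm(bags, bag_labels, C=1.0, max_iter=50):
    """Alternate SVM training and instance-label imputation.

    bags: list of (n_I, d) arrays; bag_labels: (m,) array of +1/-1.
    """
    X = np.vstack(bags)
    # Which bag each instance belongs to.
    bag_idx = np.concatenate([np.full(len(B), I) for I, B in enumerate(bags)])
    # Initialize every instance with its bag's label.
    y = np.concatenate([np.full(len(B), bag_labels[I]) for I, B in enumerate(bags)])

    for _ in range(max_iter):
        clf = SVC(kernel="linear", C=C).fit(X, y)
        scores = clf.decision_function(X)
        y_new = y.copy()
        for I, label in enumerate(bag_labels):
            if label != 1:
                continue  # instances in negative bags stay -1
            members = np.where(bag_idx == I)[0]
            # Re-impute labels in the positive bag from the classifier.
            y_new[members] = np.where(scores[members] > 0, 1, -1)
            if not np.any(y_new[members] == 1):
                # Enforce: at least one positive instance per positive bag.
                y_new[members[np.argmax(scores[members])]] = 1
        if np.array_equal(y_new, y):
            break  # imputed labels stabilized
        y = y_new
    return clf
```

Note that only the labels inside positive bags are ever updated; negative-bag instances keep the label $-1$ throughout, matching the mixed-integer constraints above.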
Maximum Bag Margin
The bag label is represented by the instance with maximum distance from a given separating hyperplane; the bag margin is $\gamma_I = Y_I \max_{i \in B_I} (\langle w, x_i \rangle + b)$. Select a "witness" data point from each positive bag as the delegate of the bag. For the given witnesses, apply SVM.
Bag Margin: Primal Formulation
Mixed integer formulation given in the paper, with a witness selector $s(I) \in B_I$ for each positive bag:
$$\min_{s} \min_{w, b, \xi} \; \tfrac{1}{2}\|w\|^2 + C \sum_I \xi_I$$
subject to
$$-(\langle w, x_i \rangle + b) \ge 1 - \xi_I \;\; \text{for all } i \in B_I \text{ with } Y_I = -1,$$
$$\langle w, x_{s(I)} \rangle + b \ge 1 - \xi_I \;\; \text{for } Y_I = +1, \qquad \xi_I \ge 0.$$
A (better?) alternative constraint for a positive bag might be …
Heuristic for Maximum Bag Margin
Initialize each positive bag's witness as the centroid of the bag. Repeat: train an SVM on the witnesses (labeled $+1$) together with all instances of the negative bags (labeled $-1$); re-select each positive bag's witness as its instance with the highest decision value. Stop when the selections no longer change. (A sketch of this loop follows.)
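Not on the original slide: a minimal sketch of this heuristic (MI-SVM) in Python, again assuming scikit-learn's SVC; the function name maxbag_svm is illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def maxbag_svm(bags, bag_labels, C=1.0, max_iter=50):
    """Alternate witness selection and SVM training."""
    pos = [I for I, lab in enumerate(bag_labels) if lab == 1]
    neg_X = np.vstack([B for B, lab in zip(bags, bag_labels) if lab == -1])
    # Initialize each positive bag's witness as the bag centroid.
    witnesses = [bags[I].mean(axis=0) for I in pos]

    for _ in range(max_iter):
        X = np.vstack([np.vstack(witnesses), neg_X])
        y = np.concatenate([np.ones(len(pos)), -np.ones(len(neg_X))])
        clf = SVC(kernel="linear", C=C).fit(X, y)
        # Re-select each witness: the bag's highest-scoring instance.
        new_witnesses = [bags[I][np.argmax(clf.decision_function(bags[I]))]
                         for I in pos]
        if all(np.array_equal(a, b) for a, b in zip(new_witnesses, witnesses)):
            break  # witness selection stabilized
        witnesses = new_witnesses
    return clf
```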
Simulation results
Accuracy for other MIL Methods
(Table: accuracy of Gaussian, Regression, Quadratic, and Rule-based supervised learning algorithms (non-MIL).)
Source: "Supervised versus Multiple Instance Learning: An Empirical Comparison" by S. Ray and M. Craven, 22nd International Conference on Machine Learning (ICML), 2005.
Conclusion from the table
– Different inductive biases are appropriate for different MI problems.
– Ordinary supervised learning algorithms learn accurate models in many MI settings.
– Some MI algorithms learn consistently better than their supervised-learning counterparts.
The End