Kullback-Leibler Boosting. Ce Liu, Heung-Yeung Shum, Microsoft Research Asia. CVPR 2003. Presented by Derek Hoiem.
RealBoost Review
- Start with a candidate feature set
- Initialize the training-sample weights
- Loop:
  - Add the feature that minimizes the error bound
  - Reweight the training examples, giving more weight to misclassified ones
  - Assign the weak classifier a weight according to its weighted training error
- Exit the loop after N features have been added
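The loop above can be sketched with threshold stumps on a toy 2-D problem (a minimal illustration, not the paper's implementation; the stump form, the threshold grid, and the weight-update details are our choices):

```python
import numpy as np

def realboost(X, y, n_features=5):
    """Toy RealBoost-style loop: pick the best threshold stump under the
    current sample weights, then reweight so misclassified examples count
    more. Labels y are in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # initialize sample weights
    ensemble = []
    for _ in range(n_features):
        best = None
        for j in range(d):                       # candidate features
            for t in np.unique(X[:, j]):         # candidate thresholds
                pred = np.where(X[:, j] > t, 1.0, -1.0)
                err = np.sum(w[pred != y])       # weighted training error
                if best is None or err < best[0]:
                    best = (err, j, t, pred)
        err, j, t, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # classifier weight from error
        w *= np.exp(-alpha * y * pred)           # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, t))
    return ensemble

def predict(ensemble, X):
    """Sign of the alpha-weighted sum of stump votes."""
    score = sum(a * np.where(X[:, j] > t, 1.0, -1.0) for a, j, t in ensemble)
    return np.sign(score)
```

On separable toy data a few rounds suffice to fit the training set exactly.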
The Basic Idea of KLBoosting
Similar to RealBoost, except:
- Features are general linear projections
- Optimal features are generated, not drawn from a fixed pool
- KL divergence is used to select features
- The combining coefficients are tuned more finely
Linear Features
- KLBoosting: general linear projections of the image patch
- VJ AdaBoost: Haar-like rectangle features (a restricted subset of linear projections)
What makes a feature good?
- KLBoosting: maximize the KL divergence between the class-conditional distributions of the feature's responses
- RealBoost: minimize the upper bound on classification error
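The KL criterion can be sketched by histogramming a candidate projection's responses on each class and taking the symmetric KL divergence (the bin count and smoothing constant are illustrative assumptions, not the paper's):

```python
import numpy as np

def kl_score(feature, X_pos, X_neg, bins=32):
    """Score a linear feature (projection direction) by the symmetric KL
    divergence between the class-conditional histograms of its responses --
    the selection criterion KLBoosting uses in place of RealBoost's
    error bound."""
    rp = X_pos @ feature                 # responses on the positive class
    rn = X_neg @ feature                 # responses on the negative class
    lo = min(rp.min(), rn.min())
    hi = max(rp.max(), rn.max())
    edges = np.linspace(lo, hi, bins + 1)
    eps = 1e-10                          # smoothing to avoid log(0)
    p, _ = np.histogram(rp, edges); p = p / p.sum() + eps
    q, _ = np.histogram(rn, edges); q = q / q.sum() + eps
    # symmetric KL: KL(p||q) + KL(q||p)
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
```

A projection along the separating direction scores much higher than an uninformative one, which is exactly what makes it a good feature under this criterion.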
Creating the feature set: Sequential 1-D Optimization
- Begin with a large initial set of features (linear projections)
- Choose the top L features according to KL divergence
- Initial feature = weighted sum of the L features
- Search for the optimal feature along the directions of the L features
Example (figures: 2-D scatter of samples with feature directions)
- Initial feature set
- Top two features by KL divergence: w1, w2
- Initial feature f0: KL-weighted combination of w1 and w2
- Optimize over w1: f1 = f0 + B*w1, B in [-a1, a1]
- Optimize over w2: f2 = f1 + B*w2, B in [-a2, a2] (and repeat…)
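The sequence above can be sketched as a coordinate line search: start from a KL-weighted combination of the top-L directions, then repeatedly search along each w_i. The Gaussian KL scorer, the step grid, and the sweep count below are our simplifications of the paper's procedure:

```python
import numpy as np

def gauss_kl(w, X_pos, X_neg):
    """Symmetric KL between 1-D Gaussians fitted to the two classes'
    projections onto w (a cheap stand-in for a histogram-based KL)."""
    rp, rn = X_pos @ w, X_neg @ w
    m1, s1 = rp.mean(), rp.std() + 1e-10
    m2, s2 = rn.mean(), rn.std() + 1e-10
    kl_pq = np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5
    kl_qp = np.log(s1 / s2) + (s2**2 + (m1 - m2)**2) / (2 * s1**2) - 0.5
    return kl_pq + kl_qp

def sequential_1d_opt(top_feats, kl, X_pos, X_neg, a=1.0, steps=21, sweeps=2):
    """Refine a feature by line searches along each of the top-L directions,
    as on the slides: f <- f + B*w_i with B in [-a, a], keeping the B that
    maximizes the KL score. 'kl' is any callable scoring a direction."""
    W = np.asarray(top_feats)
    # initial feature f0: KL-weighted combination of the top-L features
    scores = np.array([kl(w, X_pos, X_neg) for w in W])
    f = (scores[:, None] * W).sum(axis=0)
    f /= np.linalg.norm(f)
    for _ in range(sweeps):
        for w in W:                          # optimize over each w_i in turn
            cands = [f + B * w for B in np.linspace(-a, a, steps)]
            cands = [c / np.linalg.norm(c) for c in cands]
            f = max(cands, key=lambda c: kl(c, X_pos, X_neg))
    return f
```

Because B = 0 is always a candidate, each line search can only keep or improve the KL score of the current feature.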
Creating the feature set First three features Selecting the first feature
Creating the feature set (continued)
Classification
F(x) = sign( Σ_k α_k log [ p(f_k(x)) / q(f_k(x)) ] ), where p and q are the class-conditional densities of the feature responses. KLBoosting learns the coefficients α_k; in RealBoost, α_k = ½.
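One way to realize this decision rule is to tabulate each feature's log-likelihood ratio from class-conditional histograms of its responses (the binning and smoothing here are illustrative choices, not the paper's):

```python
import numpy as np

def llr_table(responses_pos, responses_neg, bins=16):
    """Histogram one feature's responses on each class and return the bin
    edges plus the per-bin log-likelihood ratio log p(f|pos) / p(f|neg)."""
    lo = min(responses_pos.min(), responses_neg.min())
    hi = max(responses_pos.max(), responses_neg.max())
    edges = np.linspace(lo, hi, bins + 1)
    eps = 1e-6                                   # smoothing to avoid log(0)
    p, _ = np.histogram(responses_pos, edges); p = p / p.sum() + eps
    q, _ = np.histogram(responses_neg, edges); q = q / q.sum() + eps
    return edges, np.log(p / q)

def classify(x, features, tables, alphas):
    """Strong classifier: sign of the alpha-weighted sum of each feature's
    log-likelihood ratio at its response f_k(x). With all alphas = 1/2
    this reduces to the RealBoost combination rule."""
    score = 0.0
    for w, (edges, llr), a in zip(features, tables, alphas):
        f = x @ w                                # feature response
        k = np.clip(np.searchsorted(edges, f) - 1, 0, len(llr) - 1)
        score += a * llr[k]
    return np.sign(score)
```

Responses outside the training range are clipped to the outermost bins, a common practical choice.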
Parameter Learning
With each added feature k:
- Fix α1..αk-1 at their current optimal values
- Initialize αk to 0
- Minimize recognition error on the training set
- Solve using a greedy algorithm
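A minimal sketch of the greedy coefficient tuning, assuming a grid-based coordinate search over 0-1 training error (the grid, the sweep count, and the loss are our choices, not the paper's):

```python
import numpy as np

def greedy_alphas(H, y, grid=np.linspace(0.0, 2.0, 41), sweeps=3):
    """Greedy/coordinate search for the combining coefficients: sweep each
    alpha_k over a grid while the others are held fixed, keeping the value
    that minimizes 0-1 training error.
    H: (n_samples, n_features) weak-classifier responses; y in {-1, +1}."""
    n, K = H.shape
    alphas = np.zeros(K)                         # each new alpha starts at 0

    def err(a):
        return np.mean(np.sign(H @ a) != y)      # 0-1 training error

    for _ in range(sweeps):
        for k in range(K):
            trial = alphas.copy()
            best_v, best_e = alphas[k], err(alphas)
            for v in grid:
                trial[k] = v
                e = err(trial)
                if e < best_e:                   # keep strict improvements
                    best_v, best_e = v, e
            alphas[k] = best_v
    return alphas
```

With one perfectly informative response column the search drives the training error to zero and leaves the uninformative coefficient at its initial value.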
KLBoost vs. AdaBoost (1,024 candidate features for AdaBoost)
Face detection: candidate features (52,400 / 2,800 / 450)
Face detection: training samples
- 8,760 faces plus their mirror images
- 2,484 non-face images (1.34 billion patches)
- The cascaded classifier allows bootstrapping
Face detection: final features
- Top ten: global, semantic
- Then: global, not semantic
- Then: local
Results (ROC comparison vs. Schneiderman (2003))
Test time: 0.4 sec per 320x240 image
Comments
- Training time?
- Which improves performance most:
  - generating optimal features?
  - KL-based feature selection?
  - optimizing the α coefficients?