1
An Efficient Approach to Learning Inhomogeneous Gibbs Models
Ziqiang Liu, Hong Chen, Heung-Yeung Shum, Microsoft Research Asia
CVPR 2003
Presented by Derek Hoiem
2
Overview
Build histograms of 1-D projections of the data
Feature selection: maximize the KL divergence between the estimated and true distributions
The 1-D histograms for a feature are computed from the training data and from MCMC samples
Fast solution via a good starting point and importance sampling
3
Maximum Entropy Principle
The model p(x) should match the true distribution f(x) on the statistics of the observed features, but should be as random as possible over all other dimensions
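The formulas on this slide were images and are not preserved; the following is a sketch of the standard constrained maximum-entropy formulation being referred to, with f(x) the true data distribution and phi_j the selected feature statistics (notation assumed):

```latex
% Maximize entropy subject to matching the observed feature statistics of f(x).
\max_{p}\; H(p) = -\int p(x)\,\log p(x)\,dx
\quad \text{s.t.}\quad
\mathbb{E}_{p}\big[\phi_j(x)\big] = \mathbb{E}_{f}\big[\phi_j(x)\big],\;\; j = 1,\dots,K,
\qquad \int p(x)\,dx = 1.
```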
4
Gibbs Distribution and KL-Divergence
The solution to the maximum-entropy problem is a Gibbs distribution; its parameters Λ minimize the KL divergence from the true distribution:
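The equations themselves did not survive; a reconstruction consistent with the maximum-entropy setup above (Z(Λ) denotes the partition function; the sign convention in the exponent is an assumption):

```latex
% Lagrangian solution of the maximum-entropy problem: a Gibbs distribution.
p(x;\Lambda) = \frac{1}{Z(\Lambda)}\,
  \exp\!\Big(-\textstyle\sum_{j=1}^{K}\big\langle \lambda_j,\;\phi_j(x)\big\rangle\Big),
\qquad \Lambda = \{\lambda_1,\dots,\lambda_K\},
% and the optimal parameters minimize the divergence from the true distribution f(x):
\Lambda^{*} = \arg\min_{\Lambda}\; KL\big(f \,\|\, p(\cdot\,;\Lambda)\big)
            = \arg\min_{\Lambda} \int f(x)\,\log\frac{f(x)}{p(x;\Lambda)}\,dx.
```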
5
Inhomogeneous Gibbs Model
Gaussian and mixture-of-Gaussian models are deemed inadequate
Use vector-valued features (histograms)
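A sketch of how the vector-valued features enter the model, assuming (as the overview slide suggests) that each feature is the histogram of a 1-D projection w_j^T x, so that both lambda_j and the feature are vectors over histogram bins:

```latex
% Inhomogeneous Gibbs model with histogram features of 1-D projections.
p(x;\Lambda) = \frac{1}{Z(\Lambda)}\,
  \exp\!\Big(-\textstyle\sum_{j=1}^{K}\big\langle \lambda_j,\; H\big(w_j^{\top}x\big)\big\rangle\Big),
\qquad H(\cdot) \in \mathbb{R}^{B} \ \text{a $B$-bin histogram (indicator) vector.}
```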
6
Approximate Information Gain and KL-Divergence
The effectiveness of a candidate feature is defined by the reduction in KL divergence it produces
The information gain is approximated by holding the old parameters constant, with a closed form for a vector-valued feature
Key contribution: the approximation yields both the gain and a good starting point for the new parameter
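The closed form itself is not preserved; the sketch below shows the kind of result the "gain" and "starting point" labels point at, under the assumption that the new feature approximately decouples from the already-selected ones. Here h_obs and h_p are the observed and current-model histograms of the candidate feature, and b indexes bins:

```latex
% Approximate information gain of a candidate histogram feature, and the
% parameter value that attains it (a good starting point for learning).
\text{gain}(\phi) \;\approx\; KL\big(h_{\mathrm{obs}}\,\|\,h_{p}\big)
  = \sum_{b} h_{\mathrm{obs}}(b)\,\log\frac{h_{\mathrm{obs}}(b)}{h_{p}(b)},
\qquad
\lambda^{(0)}_{b} = \log\frac{h_{p}(b)}{h_{\mathrm{obs}}(b)}.
```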
7
Estimating Λ: Importance Sampling
Obtain reference samples x_ref by MCMC from the starting-point distribution
Update Λ by reweighting the reference samples (importance sampling):
[Figure: parameter estimation with a bad starting point vs. a good starting point]
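The update formula on the slide is not preserved; below is a minimal numpy sketch of the kind of importance-sampled update being described: reference samples drawn once by MCMC from the starting-point model are reweighted to estimate the current model's histograms, and Λ follows the gradient of KL(f || p). Function names, the sign convention (−⟨λ, H⟩ in the exponent), the learning rate, and the iteration count are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def bin_indicators(proj, edges):
    """One-hot bin membership for each projected sample: (n,) -> (n, n_bins)."""
    idx = np.clip(np.digitize(proj, edges) - 1, 0, len(edges) - 2)
    onehot = np.zeros((proj.size, len(edges) - 1))
    onehot[np.arange(proj.size), idx] = 1.0
    return onehot

def unnorm_log_p(X, W, Lam, edges):
    """Log of the unnormalized Gibbs density: -sum_j <lambda_j, H_j(x)>."""
    logp = np.zeros(X.shape[0])
    for j, (w, lam) in enumerate(zip(W, Lam)):
        logp -= bin_indicators(X @ w, edges[j]) @ lam
    return logp

def fit_lambda(X_obs, X_ref, W, Lam0, edges, lr=0.1, n_iters=200):
    """Gradient descent on KL(f || p); model expectations are estimated by
    importance-reweighting the fixed reference samples X_ref (drawn by MCMC
    from the starting-point model with parameters Lam0)."""
    Lam = [lam.copy() for lam in Lam0]
    # Target (observed) histogram of each feature, from the training data.
    h_obs = [bin_indicators(X_obs @ w, edges[j]).mean(axis=0)
             for j, w in enumerate(W)]
    log_p_ref = unnorm_log_p(X_ref, W, Lam0, edges)  # fixed reference density
    for _ in range(n_iters):
        # Self-normalized importance weights  p(x; Lam) / p(x; Lam0).
        log_w = unnorm_log_p(X_ref, W, Lam, edges) - log_p_ref
        w_is = np.exp(log_w - log_w.max())
        w_is /= w_is.sum()
        for j, wj in enumerate(W):
            # Model histogram under the current Lam, via the reweighted samples.
            h_model = w_is @ bin_indicators(X_ref @ wj, edges[j])
            # d KL(f||p) / d lambda_j = h_obs - h_model under the sign above.
            Lam[j] -= lr * (h_obs[j] - h_model)
    return Lam
```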
8
A Toy Success Story
[Figure: the true distribution, the reference (initial) distribution, and the optimized estimate]
9
Caricature Generation: Representation
Learn a mapping from photo to caricature
Active appearance models:
Photos: shape + texture (44-D after PCA)
Caricature: shape (25-D after PCA)
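A sketch of the joint coefficient vector that the mapping operates on; the dimensions are from the slide, but the split and whether the Gibbs model is learned directly over this 69-D joint vector or over a further-reduced space (the next slide mentions 18-D) are assumptions for illustration:

```latex
% Joint AAM coefficient vector: photo appearance plus caricature shape.
x = \big(\,x_{\mathrm{photo}}\in\mathbb{R}^{44},\;\; x_{\mathrm{caric}}\in\mathbb{R}^{25}\,\big)
  \in \mathbb{R}^{69}.
```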
10
Caricature Generation: Learning
Gain(1) = .447, Gain(17) = .196
100,000 reference samples
8 hours on a 1.4 GHz, 256 MB machine vs. 24 hours on a 667 MHz machine
18-D estimate
Draw samples from:
Approximate to:
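The two trailing equations are not preserved; one plausible reading, stated as an assumption rather than as the paper's formula, is that samples are drawn from the learned Gibbs model conditioned on the input photo's coefficients and the output caricature is approximated by a summary of those samples (e.g., their mean):

```latex
% Assumed reconstruction of the generation step (not the paper's exact formula).
x^{(n)}_{\mathrm{caric}} \sim p\big(x_{\mathrm{caric}} \,\big|\, x_{\mathrm{photo}};\,\Lambda\big),
\qquad
\hat{x}_{\mathrm{caric}} \approx \frac{1}{N}\sum_{n=1}^{N} x^{(n)}_{\mathrm{caric}}.
```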
11
Caricature Generation: Results
13
Comments
The efficiency analysis claims a 100x speedup, but the reported timings suggest only about a 33% speedup in practice