
1 Fields of Experts: A Framework for Learning Image Priors 2006. 7. 10 (Mon) Young Ki Baik, Computer Vision Lab.

2 Fields of Experts References
- On the Spatial Statistics of Optical Flow. Stefan Roth and Michael J. Black (ICCV 2005)
- Fields of Experts: A Framework for Learning Image Priors. Stefan Roth and Michael J. Black (CVPR 2005)
- Products of Experts. G. Hinton (ICANN 1999)
- Training Products of Experts by Minimizing Contrastive Divergence. G. Hinton (Neural Computation 2002)
- Sparse Coding with an Overcomplete Basis Set. B. Olshausen and D. Field (Vision Research 1997)

3 Fields of Experts Contents
- Introduction
- Products of Experts
- Fields of Experts
- Application: Image denoising
- Summary

4 Fields of Experts Introduction (Image denoising)
Classical spatial filters: Gaussian, mean, median, ...
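For reference, a minimal sketch (not from the slides) of applying these classical spatial filters to a noisy image with NumPy/SciPy; the test image and noise level are made up for illustration.

    import numpy as np
    from scipy import ndimage

    # Synthetic test image corrupted by additive Gaussian noise (illustrative values).
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))          # simple gradient image
    noisy = clean + rng.normal(0.0, 0.1, clean.shape)

    # The classical spatial filters mentioned on the slide.
    gauss_denoised  = ndimage.gaussian_filter(noisy, sigma=1.5)  # Gaussian smoothing
    mean_denoised   = ndimage.uniform_filter(noisy, size=3)      # mean (box) filter
    median_denoised = ndimage.median_filter(noisy, size=3)       # median filter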

5 Fields of Experts Introduction (Image denoising)

6 Fields of Experts Introduction
Target: develop a framework for learning rich, generic image priors (potential functions) that capture the statistics of natural scenes.
Special features:
- Builds on sparse coding methods and Products of Experts (PoE).
- Extends Products of Experts to an MRF (Markov Random Field) model with learned potential functions, in order to solve the problems of the conventional PoE.

7 Fields of Experts Sparse Coding
Sparse coding represents an image patch as a linear combination of learned filters (or bases), so that the image probability can be expressed with a small number of parameters. An example is the mixture model (next slides).
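For intuition, a minimal NumPy sketch (not from the slides) of representing a patch as a linear combination of a learned basis; the basis and patch here are random placeholders, and a plain least-squares fit stands in for a real sparse solver.

    import numpy as np

    rng = np.random.default_rng(1)
    patch = rng.normal(size=25)            # a 5x5 image patch, flattened
    bases = rng.normal(size=(25, 10))      # 10 "learned" basis vectors (placeholder values)

    # Coefficients of the linear combination patch ~= bases @ coeffs.
    # A real sparse-coding method would additionally penalize non-zero coefficients.
    coeffs, *_ = np.linalg.lstsq(bases, patch, rcond=None)
    reconstruction = bases @ coeffs
    print("reconstruction error:", np.linalg.norm(patch - reconstruction))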

8 Fields of Experts Products of Experts: Mixture model
Build a model of a complicated data distribution by combining several simple models. A mixture model takes a weighted sum of the component distributions: each distribution is scaled down and they are added together.
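A minimal one-dimensional sketch (assumed example, not from the slides) of a two-component Gaussian mixture evaluated on a grid:

    import numpy as np

    def gaussian_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    x = np.linspace(-5.0, 5.0, 1001)
    dx = x[1] - x[0]

    # Weighted sum of two simple densities; the weights must sum to 1.
    mixture = 0.7 * gaussian_pdf(x, -1.0, 1.0) + 0.3 * gaussian_pdf(x, 2.0, 0.5)
    print("integrates to ~1:", (mixture * dx).sum())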

9 Fields of Experts Products of Experts: Mixture model
Mixture models are very inefficient in high-dimensional spaces.

10 Fields of Experts Products of Experts: PoE model
Build a model of a complicated data distribution by combining several simple models, but now multiply the distributions together and renormalize. The product is much sharper than the individual distributions: multiply the densities together at every point and then renormalize.
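A minimal sketch (assumed example) contrasting the product model with the mixture above: two broad expert densities are multiplied pointwise and then renormalized on the grid, yielding a sharper combined density.

    import numpy as np

    def gaussian_pdf(x, mu, sigma):  # same helper as in the mixture sketch
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    x = np.linspace(-5.0, 5.0, 1001)
    dx = x[1] - x[0]
    expert1 = gaussian_pdf(x, -1.0, 2.0)
    expert2 = gaussian_pdf(x, 1.0, 2.0)

    # Multiply the densities at every point, then renormalize so the result integrates to 1.
    product = expert1 * expert2
    product /= (product * dx).sum()
    # The product has a smaller spread than either expert (it is "sharper").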

11 Fields of Experts Products of Experts: PoE model
PoEs work well on high-dimensional distributions. A normalization term is needed to convert the product of the individual densities into a combined density.

12 Fields of Experts Products of Experts
Geoffrey E. Hinton (Products of Experts): most perceptual systems produce a sharp posterior distribution on a high-dimensional manifold. The PoE model is therefore very efficient for solving vision problems.

13 Fields of Experts Products of Experts: PoE framework for vision problems
Learning sparse topographic representation with products of Student-t distributions. M. Welling, G. Hinton, and S. Osindero (NIPS 2003)

14 Fields of Experts Products of Experts: PoE framework for vision problems
Experts: Student-t distribution. The responses of linear filters applied to natural images typically resemble Student-t experts.
Learning sparse topographic representation with products of Student-t distributions. M. Welling, G. Hinton, and S. Osindero (NIPS 2003)
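The formula is not reproduced in the transcript; in the cited work each expert is a Student-t-shaped function of a linear filter response, which can be written as

    \phi_i(J_i^\top \mathbf{x}; \alpha_i) = \left( 1 + \tfrac{1}{2} (J_i^\top \mathbf{x})^2 \right)^{-\alpha_i}

where J_i is a linear filter, x is the image patch, and alpha_i > 0 is a learned parameter.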

15 Fields of Experts Products of Experts: PoE framework for vision problems
Probability density in Gibbs form.
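The equation itself did not survive the transcript; following the cited papers, the patch-based PoE density and its Gibbs form can be reconstructed as

    p(\mathbf{x}) = \frac{1}{Z(\Theta)} \prod_{i=1}^{N} \phi_i(J_i^\top \mathbf{x}; \alpha_i)
                  = \frac{1}{Z(\Theta)} \exp\!\bigl(-E_{\mathrm{PoE}}(\mathbf{x}, \Theta)\bigr),
    \qquad
    E_{\mathrm{PoE}}(\mathbf{x}, \Theta) = -\sum_{i=1}^{N} \log \phi_i(J_i^\top \mathbf{x}; \alpha_i)

where Z(Theta) is the normalization term and Theta = {J_i, alpha_i} are the model parameters.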

16 Fields of Experts Products of Experts: Problems
The PoE is a patch-based method. To cover the whole image region, the patch would have to be set to the entire image, or to a collection of patches at specific locations.

17 Fields of Experts Products of Experts: Problems
- The number of parameters to learn would be too large.
- The model would only work for one specific image size and would not generalize to other image sizes.
- The model would not be translation invariant, which is a desirable property for generic image priors.

18 Fields of Experts Key idea
Combining the PoE with MRF models.

19 Fields of Experts Key idea
Define a neighborhood system that connects all nodes in an m x m rectangular region; each such region defines a maximal clique in the graph.
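To make the clique structure concrete, a small NumPy sketch (assumed illustration, not from the slides) that enumerates all overlapping m x m regions of an image; each window corresponds to one maximal clique x_(k).

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    m = 5                          # clique size (illustrative value)
    image = np.zeros((64, 64))     # placeholder image
    # All overlapping m x m windows; shape is (H - m + 1, W - m + 1, m, m).
    cliques = sliding_window_view(image, (m, m))
    print(cliques.shape)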

20 Fields of Experts The Hammersley-Clifford theorem
By the Hammersley-Clifford theorem, the probability density of the graphical model can be written as a Gibbs distribution. Translation invariance of the MRF model amounts to assuming that the potential function is the same for all cliques.

21 Fields of Experts Potential function
The potential function is learned from training images. Probability density of a full image under the FoE.
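Reconstructed from the FoE paper (the equation was not captured in the transcript), the full-image prior applies the same experts to every m x m clique x_(k):

    p(\mathbf{x}) = \frac{1}{Z(\Theta)} \prod_{k} \prod_{i=1}^{N} \phi_i\bigl(J_i^\top \mathbf{x}_{(k)}; \alpha_i\bigr)

where x_(k) denotes the pixels of the k-th clique, the J_i are the learned filters, and the product over k runs over all overlapping cliques of the image.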

22 Fields of Experts Learning
The parameters alpha_i and the filters J_i can be learned from a set of training images by maximizing the likelihood; maximizing the likelihood for the PoE and for the FoE model works in the same way. Gradient ascent is performed on the log-likelihood.
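The update rule is not in the transcript; in the FoE paper, gradient ascent on the log-likelihood takes the form below, with the intractable expectation over the model distribution approximated by MCMC samples (contrastive divergence):

    \delta \theta_i = \eta \left[ \left\langle \frac{\partial E_{\mathrm{FoE}}}{\partial \theta_i} \right\rangle_{p} - \left\langle \frac{\partial E_{\mathrm{FoE}}}{\partial \theta_i} \right\rangle_{X} \right]

where <.>_X is the average over the training data X, <.>_p the expectation under the model p, theta_i stands for alpha_i or J_i, and eta is the learning rate.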

23 Fields of Experts Application: Image denoising
Given an observed noisy image y, find the true image x that maximizes the posterior probability. Assumption: the true image has been corrupted by additive, i.i.d. Gaussian noise with zero mean and known standard deviation.
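Written out (reconstructed from the cited paper rather than the transcript), the posterior combines the Gaussian likelihood with the FoE prior:

    p(\mathbf{x} \mid \mathbf{y}) \propto p(\mathbf{y} \mid \mathbf{x}) \, p(\mathbf{x}),
    \qquad
    p(\mathbf{y} \mid \mathbf{x}) \propto \prod_{j} \exp\!\left( -\frac{(y_j - x_j)^2}{2\sigma^2} \right)

where the product runs over all pixels j, sigma is the known noise standard deviation, and p(x) is the FoE prior from the previous slides.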

24 Fields of Experts Application: Image denoising
To maximize the posterior probability, gradient ascent is performed on the logarithm of the posterior probability, which requires the gradient of the log-likelihood and the gradient of the log-prior.
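The two gradients are not reproduced in the transcript; reconstructed from the FoE paper they read

    \nabla_{\mathbf{x}} \log p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{\sigma^2} (\mathbf{y} - \mathbf{x}),
    \qquad
    \nabla_{\mathbf{x}} \log p(\mathbf{x}) = \sum_{i=1}^{N} J_i^{-} * \psi_i(J_i * \mathbf{x})

where * denotes convolution, J_i^- is the filter J_i mirrored around its center, and psi_i(y) = d log phi_i(y; alpha_i) / dy.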

25 Fields of Experts Application: Image denoising
The gradient ascent denoising algorithm.
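A compact sketch of the resulting iteration, under the stated Gaussian-noise assumption; the filters, alpha values, step size eta, and likelihood weight lam below are illustrative placeholders (in the paper the filters and alphas are learned, and the update follows the two gradients above).

    import numpy as np
    from scipy import ndimage

    def denoise_foe(y, filters, alphas, sigma, lam=1.0, eta=0.005, n_iters=200):
        """Gradient ascent on log p(x|y) under an FoE-style Student-t prior (sketch)."""
        x = y.copy()
        for _ in range(n_iters):
            # Gradient of the log-prior: sum_i J_i^- * psi_i(J_i * x),
            # implemented as a correlation followed by its adjoint (a convolution).
            grad_prior = np.zeros_like(x)
            for J, alpha in zip(filters, alphas):
                r = ndimage.correlate(x, J, mode='nearest')    # filter responses
                psi = -alpha * r / (1.0 + 0.5 * r ** 2)        # d/dr log phi(r; alpha)
                grad_prior += ndimage.convolve(psi, J, mode='nearest')
            # Gradient of the log-likelihood for additive Gaussian noise.
            grad_lik = (y - x) / sigma ** 2
            # eta must be small relative to sigma**2 for the update to stay stable.
            x = x + eta * (grad_prior + lam * grad_lik)
        return x

    # Illustrative call with made-up filters (the real J_i and alpha_i are learned from data).
    rng = np.random.default_rng(0)
    noisy = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)) + rng.normal(0.0, 0.1, (64, 64))
    toy_filters = [rng.normal(size=(3, 3)) for _ in range(3)]
    denoised = denoise_foe(noisy, toy_filters, alphas=[1.0, 1.0, 1.0], sigma=0.1)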

26 Fields of Experts Application: Image denoising
Results figure: a) original image, b) noisy image, c) denoised image.

27 Fields of Experts Summary
Contributions:
- Points out a limitation of the conventional Products of Experts: PoE focuses on modeling small image patches rather than defining a prior model over an entire image.
- Proposes the FoE, which models the prior probability of an entire image in terms of a random field with overlapping cliques, whose potentials are represented as a PoE.

