Variational Bayesian Image Processing on Stochastic Factor Graphs
Xin Li, Lane Dept. of CSEE, West Virginia University

Outline
- Statistical modeling of natural images: from old-fashioned local models to newly proposed nonlocal models
- Factor-graph-based image modeling: a powerful framework unifying local and nonlocal approaches
- EM-based inference on stochastic factor graphs
- Applications and experimental results: denoising, inpainting, interpolation, post-processing, inverse halftoning, deblurring, ...

Cast Signal/Image Processing Under a Bayesian Framework
- Image restoration (Besag et al. 1991)
- Image denoising (Simoncelli & Adelson 1996)
- Interpolation (MacKay 1992) and super-resolution (Schultz & Stevenson 1996)
- Inverse halftoning (Wong 1995)
- Image segmentation (Bouman & Shapiro 1994)
Notation: x is the unobservable data and y the observed data; the image prior is the focus of this talk, while the likelihood varies from application to application.
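In standard notation (implied by the x/y/prior/likelihood terms above but not written out in the transcript), the Bayesian casting amounts to MAP estimation:

    \hat{x}_{\mathrm{MAP}} = \arg\max_x \; p(x \mid y) = \arg\max_x \; p(y \mid x)\, p(x)

where p(x) is the image prior and p(y | x) is the application-dependent likelihood (for instance, a common assumption is y = Hx + n with a known degradation operator H and additive noise n).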

Statistical Modeling of Natural Images: the Pursuit of a Good Prior
Local models:
- Markov Random Field (MRF) and its extensions (e.g., 2D Kalman filtering, Fields-of-Experts)
- Sparsity-based: DCT, wavelets, steerable pyramids, geometric wavelets (edgelets, curvelets, ridgelets, bandelets)
Nonlocal models:
- Bilateral filtering (Tomasi et al., ICCV 1998)
- Texture synthesis (Efros & Leung, ICCV 1999)
- Exemplar-based inpainting (Criminisi et al., TIP 2004)
- Nonlocal means denoising (Buades et al., CVPR 2005)
- Total Least-Squares denoising (Hirakawa & Parks, TIP 2006)
- Block-matching 3D (BM3D) denoising (Dabov et al., TIP 2007)

Introducing a New Language of Factor Graphs
Why factor graphs?
- The most general form of graphical probability models (both MRFs and Bayesian networks can be converted to FGs)
- Widely used in computer science and engineering (forward-backward algorithm, Viterbi algorithm, turbo decoding, Pearl's belief propagation algorithm, Kalman filter [1])
What is a factor graph?
- A bipartite graph that expresses which variables are arguments of which local functions
- Factor/function nodes (solid squares) vs. variable nodes (empty circles)
[Figure: a toy factor graph L: F -> V with variable nodes B1-B8 and factor nodes f1-f4, connected as f1-{B1,B2,B4}, f2-{B3,B6}, f3-{B5,B7}, f4-{B7,B8}]
[1] Kschischang, F. R., Frey, B. J., and Loeliger, H.-A., "Factor graphs and the sum-product algorithm," IEEE Transactions on Information Theory, vol. 47, no. 2, Feb. 2001.
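To make the bipartite structure concrete, here is a minimal Python sketch (an illustration, not code from the talk) that stores the toy graph above simply as the mapping L: F -> V from factor nodes to the variable nodes they touch:

    # Each factor node lists the variable nodes (patch indices) that are
    # arguments of its local function; groupings mirror the toy example above.
    factor_to_vars = {
        "f1": [1, 2, 4],
        "f2": [3, 6],
        "f3": [5, 7],
        "f4": [7, 8],
    }

    def factors_of_variable(v, L=factor_to_vars):
        """Return the factor nodes connected to variable node v."""
        return [f for f, vs in L.items() if v in vs]

    print(factors_of_variable(7))  # ['f3', 'f4'] -- B7 is shared by two factors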

Variable Nodes = Image Patches
- Neuroscience: the receptive fields of neighboring cells in the human visual system overlap heavily
- Engineering: the patch has appeared under many names, such as windows in digital filters, blocks in JPEG, and the support of wavelet bases
[Figure cited from D. Hubel, "Eye, Brain and Vision", 1988]
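To make the patches-as-variable-nodes convention concrete, the following sketch (illustrative names and parameters, assuming NumPy and a grayscale image) collects the overlapping patches that later slides treat as variable nodes B_i:

    import numpy as np

    def extract_patches(img, patch=8, step=4):
        """Collect overlapping patch vectors from a 2-D grayscale image.

        step < patch gives the overlap mentioned on the slide; each row
        of the returned matrix is one vectorized patch.
        """
        H, W = img.shape
        patches, coords = [], []
        for i in range(0, H - patch + 1, step):
            for j in range(0, W - patch + 1, step):
                patches.append(img[i:i + patch, j:j + patch].ravel())
                coords.append((i, j))
        return np.array(patches), coords

    # Example usage on a random test image
    P, coords = extract_patches(np.random.rand(64, 64))
    print(P.shape)  # (number_of_patches, patch*patch)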

Factorization: the Art of Statistical Image Modeling
- Wavelet-based statistical models: geometric proximity defines the neighborhood (domain-Markovian)
- Locally linear embedding [1]: perceptual similarity defines the neighborhood (range-Markovian)
[1] S. T. Roweis and L. K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, no. 5500, p. 2323, 22 December 2000.

Unification Using Factor Graphs
[Figure: three factor-graph topologies over patches, illustrating naive Bayesian factorization (DCT/wavelet-based models), MRF-based factorization, and kNN/k-means clustering (nonlocal image models)]

A Manifold Interpretation of the Nonlocal Image Prior
[Figure: similar patches B0, B1, ..., Bk viewed as points on a manifold M embedded in R^N]
How can the sparsity of a representation be maximized?
- Conventional wisdom: adapt the basis to the signal (e.g., basis pursuit, matching pursuit)
- New proposal: adapt the signal to the basis (by probing its underlying organizing principle)
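A minimal sketch of the "adapt the signal to the basis" idea (illustrative, not the talk's implementation): instead of designing a new basis, gather the k patches most similar to a reference patch, so that a fixed transform applied to the resulting stack yields sparse coefficients. This grouping step is reused in the inference sketch a few slides below.

    import numpy as np

    def knn_patch_group(patches, ref_idx, k=16):
        """Indices of the k patches closest (Euclidean distance) to patches[ref_idx].

        The reference patch itself (distance 0) is always included in the group.
        """
        d = np.sum((patches - patches[ref_idx]) ** 2, axis=1)
        return np.argsort(d)[:k]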

Organizing Principle: the Latent Variable L
[Figure: a stochastic factor graph over a 4x4 array of patches B11-B44; the latent variable L determines which patches each factor node (fA, fB, fC) connects to; the likelihood P(y|x) covers image denoising, inpainting, coding, halftoning and deblurring, while a sparsifying transform acts on each patch group]
"Nature is not economical of structures but organizing principles." - Stanislaw M. Ulam

Maximum-Likelihood Estimation of Graph Structure
[Block diagram: for each factor node f_j, the patches B0, B1, ..., Bk selected by the current estimate of L are packed into a 3-D array D, passed through a forward transform, coring, and an inverse transform, then unpacked into 2-D patch estimates; the estimates of L and x are updated in turn, looping over every factor node f_j, given the likelihood P(y|x)]
A variational interpretation of this EM-based inference on FGs is given in the paper.
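The block diagram suggests an iteration of roughly the following shape. The sketch below is a hedged guess at one such pass (assumptions: NumPy/SciPy, a separable DCT over the 3-D stack as the forward transform, hard thresholding as coring, uniform averaging when unpacking overlapping patches), reusing the extract_patches and knn_patch_group sketches above; it is not the released MATLAB implementation.

    import numpy as np
    from scipy.fft import dctn, idctn

    def sfg_pass(x_hat, patches_fn, group_fn, thresh):
        """One EM-style pass: re-estimate the grouping L on the current image
        estimate, then denoise each group by transform-domain coring."""
        P, coords = patches_fn(x_hat)
        acc = np.zeros(x_hat.shape)
        wts = np.zeros(x_hat.shape)
        psz = int(np.sqrt(P.shape[1]))

        for ref in range(P.shape[0]):                   # loop over every factor node f_j
            idx = group_fn(P, ref)                      # update the estimate of L
            stack = P[idx].reshape(len(idx), psz, psz)  # pack into 3-D array D
            coef = dctn(stack, norm="ortho")            # forward transform
            coef[np.abs(coef) < thresh] = 0.0           # coring (hard thresholding)
            stack = idctn(coef, norm="ortho")           # inverse transform
            for n, m in enumerate(idx):                 # unpack into 2-D patches
                i, j = coords[m]
                acc[i:i + psz, j:j + psz] += stack[n]
                wts[i:i + psz, j:j + psz] += 1.0
        # update the estimate of x by averaging overlapping patch estimates
        return np.where(wts > 0, acc / np.maximum(wts, 1e-8), x_hat)

Starting from x_hat = y (the degraded observation) and repeating the pass alternately refines L and x; the denoising results later in the talk use k-means grouping with 20 iterations.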

Problem 1: Image Denoising
[Tables: PSNR (dB) and SSIM performance comparison among different schemes for 12 test images at σw = 100]
[Figure: original image, noisy observation (σw = 100), BM3D result (kNN grouping, 2 iterations) and SFG result (k-means grouping, 20 iterations)]

Problem 2: Image Recovery
[Figures: test images test1-test6 (inputs and recovered results), with PSNR (dB) and SSIM performance comparisons of DCT, FoE, EXP, BM3D [1], LSP and SFG]
Local models: DCT, FoE and LSP; nonlocal models: EXP, BM3D [1] and SFG
[1] Our own extension of BM3D into image recovery

Problem 3: Resolution Enhancement
[Figure: four test cases comparing bicubic interpolation, NEDI [1] and FG against the input x and original y; the PSNR values shown are 28.70, 27.34, 28.19, 31.76, 32.36, 32.63, 34.71, 34.45, 37.35, 18.81, 15.37 and 16.45 dB]
[1] X. Li and M. Orchard, "New edge-directed interpolation," IEEE TIP, 2001.

Problem 4: Irregular Interpolation
[Figure: four test cases (a fraction of pixels kept in the input x) comparing DT, KR and FG [1] against the original y; the PSNR values shown include 29.06, 31.56, 34.96, 31.16, 36.51, 17.90, 18.49, 29.25, 26.04, 24.63 and 29.91 dB]
DT: Delaunay-triangulation-based (griddata under MATLAB); KR: kernel-regression-based (Takeda et al., IEEE TIP 2007, without parameter optimization)
[1] X. Li, "Patch-based image interpolation: algorithms and applications," Inter. Workshop on Local and Non-Local Approximation (LNLA).

Problem 5: Post-processing
[Figure: maximum-likelihood (ML) decoding vs. maximum a posteriori (MAP) decoding]
- JPEG-decoded at a rate of 0.32 bpp (PSNR = 32.07 dB) vs. SFG-enhanced at 0.32 bpp (PSNR = 33.22 dB)
- SPIHT-decoded at a rate of 0.20 bpp (PSNR = 26.18 dB) vs. SFG-enhanced at 0.20 bpp (PSNR = 27.33 dB)

Problem 6: Inverse Halftoning
[Figure: without the nonlocal prior [1] (PSNR = 31.84 dB, SSIM = 0.8390) vs. with the nonlocal prior (PSNR = 32.82 dB, SSIM = 0.8515)]
[1] Available from the Image Halftoning Toolbox released by UT-Austin researchers.

Conclusions and Perspectives
- Despite the rich structures in natural images, the underlying organizing principle is simple (self-similarity)
- We have shown how similarity can lead to sparsity in a nonlinear representation of images
- The FG is only one mathematical language for expressing this principle (the multifractal formalism is another)
- Image processing (low-level vision) could benefit from data clustering (higher-level vision): how does the human visual cortex learn to decode the latent variable L through unsupervised learning?
Reproducible Research: MATLAB codes accompanying this work are available at (more will be added)