
Shaojie Zhuo, Dong Guo, Terence Sim. School of Computing, National University of Singapore. CVPR 2010. Reporter: 周 澄 (A.J.), 01/16/2011. Key words: image deblurring, flash/no-flash technique, blur kernel estimation, sharp image reconstruction

Outline: Goal; Major contributions; Related work; Flash deblurring framework (MAP optimization); Practical implementation; Results; Limitations; Future work

Goal: Deblur a shaken image using a corresponding flash image, both captured with a conventional hand-held camera.

Major contributions: Propose a novel approach that adds a flash image as a key constraint. Handles both flash artifacts and deconvolution artifacts well, thanks to the additional constraints introduced. High-quality results: insensitive to noise, preserves fine image details.

Related work: Non-blind image deconvolution (single-image methods). Definition: given the estimated blur kernel, the second step is to reconstruct a sharp image from the blurred image.

Related work: [Fergus et al. TOG 2006] Removing camera shake from a single photograph. Used a variational Bayes inference method with natural image statistics to estimate the motion blur kernel.

Related work: [Jia CVPR 2007] Single image motion deblurring using transparency. Investigated the relationship between object boundary transparency and image motion blur, and estimated the blur kernel from the alpha matte of moving objects.

Related work: [Shan et al. TOG 2008] High-quality motion deblurring from a single image. Formulated deblurring as a MAP problem and proposed a high-order-derivative image noise model and a local image prior to avoid the trivial solution.

Related work: The methods of Fergus, Jia, and Shan can all obtain accurate kernels when the blur is small, but fail under large shake and heavy noise.

Related work: Wiener filtering and Richardson-Lucy (RL) deconvolution. [Levin et al. CVPR 2009] Understanding and evaluating blind deconvolution algorithms.

Related work: Wiener filtering and RL deconvolution suffer from deconvolution artifacts such as amplified noise and ringing.

Related work: Regularization methods, introduced to reduce artifacts. [Wang et al. SIIMS 2008] A new alternating minimization algorithm for total variation image reconstruction. [Yuan et al. TOG 2008] Progressive inter-scale and intra-scale non-blind image deconvolution.

Related work: Regularization methods lose fine image details, since image details cannot be separated properly from artifacts such as noise or ringing.

Related work: Additional hardware (hybrid camera): a high-resolution still camera plus a low-resolution video camera. Some image details are still lost due to non-invertible motion blur.

Related work: Multiple-image solutions. [Yuan et al. TOG 2007] Image deblurring with blurred/noisy image pairs. [Chen et al. CVPR 2008] Robust dual motion deblurring. [Tai et al. CVPR 2005] Local color transfer via probabilistic segmentation by expectation-maximization.

Related work: Flash/no-flash techniques. [Agrawal et al. TOG 2005] Removing photography artifacts using gradient projection and flash-exposure sampling. [Petschnigg et al. TOG 2004] Digital photography with flash and no-flash image pairs. [Eisemann et al. TOG 2004] Flash photography enhancement via intrinsic relighting.

Related work: Traditional flash/no-flash techniques need good alignment between the two images.

Flash deblurring framework. Input: a blurred image B and a corresponding flash image F. Output: a visually pleasing sharp image I with the fewest flash artifacts and deconvolution artifacts.

Flash deblurring framework. Problem formulation: given the blurred image B and the flash image F, estimate a blur kernel K and a sharp image I such that I, K, and B satisfy the convolution model and the gradients of I are close to those of F (gradients are discussed later).

Maximum-a-posteriori (MAP) framework: alternate between kernel estimation and sharp image reconstruction.

The convolution model: B = I * K + n, where n is the image noise, modeled as independent and identically distributed (i.i.d.) Gaussian noise.
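As a minimal sketch of this convolution model (pure Python, 1D for clarity; the paper works with 2D images, and the `blur` helper below is a hypothetical illustration, not the paper's code):

```python
import random

def blur(latent, kernel, noise_sigma=0.0, rng=None):
    """Synthesize B = I * K + n with circular 1D convolution
    and i.i.d. Gaussian noise (illustrative toy helper)."""
    rng = rng or random.Random(0)
    n = len(latent)
    blurred = []
    for x in range(n):
        acc = sum(k * latent[(x - dx) % n] for dx, k in enumerate(kernel))
        blurred.append(acc + rng.gauss(0.0, noise_sigma))
    return blurred

# A unit impulse blurred by a box kernel reproduces the kernel:
print(blur([1.0, 0.0, 0.0, 0.0], [0.5, 0.5]))  # [0.5, 0.5, 0.0, 0.0]
```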

MAP optimization. Problem formulation: (I, K) = argmin over I, K of L(B|I,K) + L(I|F) + L(K), where L(·) = −log p(·); the three terms are the likelihood, the flash gradient constraint, and the kernel prior defined below.

MAP optimization. Likelihood term (SSD): L(B|I,K) = ||I * K − B||², the sum of squared differences between the estimated kernel convolution result and the blurred image.

MAP optimization. Kernel prior term: a sparsity prior that penalizes the kernel entries raised to the power α, where α ≤ 1 (α = 0.8 is used in the paper).
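The exact form of this prior is an assumption here, but the standard hyper-Laplacian penalty consistent with the α ≤ 1 description shows why it favors sparse kernels: spreading the same mass over more entries raises the penalty.

```python
def kernel_prior(k, alpha=0.8):
    # Hyper-Laplacian sparsity penalty on kernel entries:
    # sum_i k_i^alpha with alpha <= 1 favors sparse kernels
    # (assumed form, consistent with the slide's description).
    return sum(ki ** alpha for ki in k)

sparse = [1.0, 0.0, 0.0]          # all mass on one entry
spread = [1/3, 1/3, 1/3]          # same total mass, spread out
print(kernel_prior(sparse) < kernel_prior(spread))  # True
```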

Flash gradient constraint. Key idea: the robust flash gradient constraint encourages the gradients of the reconstructed image to be close to those of F, while at locations of flash artifacts, ambient shadows, or noise it allows the gradients to differ, avoiding flash artifacts and keeping the ambient illumination.

MAP optimization. Flash gradient constraint: observation. (Figure: 1D scanlines of intensities and gradients in the R channel of the three images. In the gradient plot, ∇I (cyan) is much closer to ∇F (magenta), which acts as a guide to reconstruct the sharp image I.)

MAP optimization. Flash gradient constraint: L(I|F) = Σ ρ(∇I − ∇F), where ρ(x) = log(1 + x²/(2ε²)) is the Lorentzian robust estimator and ε is a predefined constant.
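The constant inside the estimator is an assumption here, but the qualitative behavior is what matters: a sketch of the Lorentzian penalty, which is near-quadratic for small residuals but grows only logarithmically for large ones, so outlying flash-gradient differences (flash artifacts, shadows) are tolerated rather than heavily punished.

```python
import math

def lorentzian(x, eps=0.1):
    # Lorentzian robust penalty: ~quadratic for |x| << eps,
    # only logarithmic growth for large |x| (assumed constants).
    return math.log(1.0 + 0.5 * (x / eps) ** 2)

small = lorentzian(0.05)          # near-quadratic region
large = lorentzian(10.0)          # outlier: penalty saturates
quad = (10.0 / 0.1) ** 2 / 2.0    # what a quadratic would charge
print(small, large, quad)         # large is far below quad
```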

MAP optimization. Objective function: min over I, K of ||I * K − B||² + λ_f Σ ρ(∇I − ∇F) + λ_k Σ K^α, combining the likelihood term, the flash gradient constraint, and the kernel prior term, where λ_f and λ_k are used to balance the three terms.

Flash deblurring framework: the maximum-a-posteriori (MAP) framework alternates between kernel estimation and sharp image reconstruction.

Sharp image reconstruction: fixing K, we estimate I by minimizing the objective over I, with the re-weighted least-squares weight for each pixel i recomputed from the current residual at each iteration.

Kernel estimation: fixing I, we estimate K by minimizing the objective over K. Both subproblems can be solved by iteratively re-weighted least squares (IRLS).
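IRLS replaces a robust objective by a sequence of weighted least-squares problems, with the weights recomputed from the current residuals. A toy 1D instance, assuming a Lorentzian penalty with unit ε (not the paper's actual subproblem): estimating a robust mean, where the standard weight ρ'(r)/r down-weights outliers.

```python
def irls_robust_mean(data, iters=20):
    # Minimize sum_i rho(x_i - mu), rho Lorentzian with eps = 1,
    # by IRLS: each step solves a weighted mean with weights
    # proportional to rho'(r_i) / r_i = 1 / (1 + r_i^2 / 2).
    mu = sum(data) / len(data)  # ordinary least-squares start
    for _ in range(iters):
        weights = [1.0 / (1.0 + 0.5 * (x - mu) ** 2) for x in data]
        mu = sum(w * x for w, x in zip(weights, data)) / sum(weights)
    return mu

# The outlier 100.0 barely moves the robust estimate:
print(irls_robust_mean([1.0, 1.1, 0.9, 100.0]))  # close to 1.0
```

An ordinary mean of the same data would be 25.75; the re-weighting drives the outlier's influence toward zero over the iterations.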

The two steps alternate until the change in K falls below a threshold. To avoid local minima when the blur kernel is large, kernel estimation is performed in a coarse-to-fine manner in scale space.
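A toy analogue of the coarse-to-fine strategy, under the simplifying assumption that the "kernel" is a pure 1D shift: an exhaustive search at the coarse scale gives a rough estimate cheaply, which then seeds a narrow search at the fine scale, sidestepping the local minima a fine-scale-only search could fall into.

```python
def downsample(x):
    # average adjacent pairs: one level of a scale-space pyramid
    return [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]

def ssd(a, b, s):
    # sum of squared differences between a circularly shifted by s and b
    n = len(a)
    return sum((a[(i - s) % n] - b[i]) ** 2 for i in range(n))

def coarse_to_fine_shift(a, b):
    # coarse level: exhaustive search at half resolution
    A, B = downsample(a), downsample(b)
    s_coarse = min(range(len(A)), key=lambda s: ssd(A, B, s))
    # fine level: refine only around the upsampled coarse estimate
    cands = [(2 * s_coarse + d) % len(a) for d in (-1, 0, 1)]
    return min(cands, key=lambda s: ssd(a, b, s))

a = [0.0] * 16
a[3], a[4], a[5] = 1.0, 2.0, 1.0          # a small bump
b = [a[(i - 5) % 16] for i in range(16)]  # a shifted by 5
print(coarse_to_fine_shift(a, b))         # 5
```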

Three common artifacts: 1. flash artifact regions; 2. regions where λ_f is set large to suppress noise or ringing artifacts; 3. over-saturated regions.

Build a mask image M to locally adjust the weight of the flash gradient constraint.

Flash artifact detection: based on the residual ||K * I − F||². Only the flash shadow edges need to be marked manually, since the shadow regions still contain useful gradients.
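A minimal sketch of the automatic part of this idea, with a hypothetical `build_mask` helper and illustrative threshold values (the paper additionally relies on manually marked shadow edges, which this sketch omits): wherever the blurred-vs-flash residual is large, the flash gradient constraint gets a small weight.

```python
def build_mask(residuals, thresh, low_weight=0.01):
    # Down-weight the flash gradient constraint wherever the
    # residual |K * I - F|^2 is large (likely a flash artifact);
    # keep full weight elsewhere. thresh and low_weight are
    # illustrative values, not the paper's.
    return [low_weight if r > thresh else 1.0 for r in residuals]

print(build_mask([0.1, 5.0, 0.2], thresh=1.0))  # [1.0, 0.01, 1.0]
```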

Add in the sparse gradient constraint; the objective function then applies the mask M to the flash gradient term, where "∘" denotes the pixel-wise multiplication operator. Solved by IRLS.

Practical implementation: take the flash image first, then capture the blurred image in high-speed capture mode. As the time between the two shots is small, the motion between them is essentially a translation, which only causes a shift in the estimated blur kernel. Therefore, no image alignment is required.
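The claim that a translation is absorbed by the kernel can be checked directly: for (circular) convolution, translating the scene before blurring equals translating the blurred result, i.e. it is equivalent to a shifted kernel. A 1D sketch (the actual model is 2D):

```python
def cconv(x, k):
    # circular 1D convolution (stand-in for the 2D blur model)
    n = len(x)
    return [sum(kj * x[(i - j) % n] for j, kj in enumerate(k))
            for i in range(n)]

def shift(x, d):
    # circular translation by d samples
    n = len(x)
    return [x[(i - d) % n] for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0]
k = [0.5, 0.25, 0.25]
# Translating the scene before blurring equals shifting the
# blurred result, so the translation only shifts the kernel:
print(cconv(shift(x, 2), k) == shift(cconv(x, k), 2))  # True
```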

Results – kernel RMS error

Results

Results

Limitations: It cannot handle spatially varying motion blur; additional alignment would be needed (e.g., busy traffic). The exposure times of the flash/no-flash images should be close (otherwise temporal incoherence). The two images should share the same aperture value (different focus may generate blur artifacts).

Future work: Extend to video (coherence preservation). Support hybrid systems, such as combining with an IR depth sensor or stereo sensor, to obtain more information from the image.