Presentation transcript:

Fast Removal of Non-uniform Camera Shake
By Michael Hirsch, Christian J. Schuler, Stefan Harmeling and Bernhard Schölkopf
Max Planck Institute for Intelligent Systems, Tübingen, Germany
ICCV 2011
Presented by Bhargava, EE, IISc

Motivation
State-of-the-art methods for removing camera shake model the blur as a linear combination of homographically transformed versions of the true image
– these methods are computationally demanding
– they take a lot of memory to store the large transformation matrices
Simpler uniform-blur models are fast but not accurate, since camera shake leads to non-uniform image blur

Reasons for image blur
– Camera motion during longer exposure times, e.g., in low-light situations
– Hand-held photography

Related Work
– For the case of uniform blur, the blur is modeled as a space-invariant convolution, which yields impressive results in both speed and quality
– This model is only sufficient if the camera shake is a translation within the sensor plane, without any rotations

Related Work
– Fergus et al. in 2006 combined the variational approach of Miskin and MacKay with natural image statistics
– Shan et al., Cho and Lee, and Xu et al. refined that approach using carefully chosen regularization and fast optimization techniques
– Joshi et al. in 2010 exploited motion sensor information to recover the true trajectory of the camera during the shake

Related Work contd.
– Whyte et al. in 2010 proposed Projective Motion Path Blur (PMPB) to model non-uniform blur
– Hirsch et al. in 2010 proposed Efficient Filter Flow (EFF) in the context of imaging through air turbulence

Projective Motion Path Blur
– The blur is the result of integrating all intermediate images the camera "sees" along the trajectory of the camera shake
– These intermediate images are differently projected copies (i.e. homographies) of the true sharp scene
– This rules out sets of blur kernels that do not correspond to a valid camera motion

Efficient Filter Flow
– By a position-dependent combination of a set of localized blur kernels, the EFF framework is able to express smoothly varying blur while still being linear in its parameters
– Making use of the FFT, an EFF transformation can be computed almost as efficiently as an ordinary convolution
– The EFF framework does not impose any global camera-motion constraint on the non-uniform blur, which makes kernel estimation from a single image a delicate task

Fast forward model for camera shake
– Combines the structural constraints of PMPB models with the EFF framework
– EFF: g = Σ_r a^(r) ∗ (w^(r) · f), where a^(r) is the r-th blur kernel, w^(r) is the corresponding weight (window) image, f is the sharp image and g is the blurred image
– Can be implemented efficiently with the FFT
– If the patches are chosen with sufficient overlap and the a^(r) are distinct, the blur varies gradually from pixel to pixel
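Below is a minimal NumPy sketch of the EFF forward model above, assuming grayscale images, circular (FFT) boundary handling and windows w^(r) that sum to one at every pixel; it is illustrative only, not the authors' implementation.

```python
# Minimal sketch of the EFF forward model g = sum_r a^(r) * (w^(r) . f).
# Assumptions: grayscale image, circular convolution via FFT, windows sum to 1.
import numpy as np
from numpy.fft import fft2, ifft2

def eff_forward(f, kernels, windows):
    """Apply a spatially varying blur: kernels[r] is a small 2D kernel a^(r),
    windows[r] is a weight image w^(r) of the same shape as f."""
    g = np.zeros_like(f, dtype=float)
    for a, w in zip(kernels, windows):
        # window the sharp image, then convolve with the local kernel via FFT
        a_pad = np.zeros_like(f, dtype=float)
        a_pad[:a.shape[0], :a.shape[1]] = a
        # center the kernel so the blur introduces no global shift
        a_pad = np.roll(a_pad, (-(a.shape[0] // 2), -(a.shape[1] // 2)), axis=(0, 1))
        g += np.real(ifft2(fft2(w * f) * fft2(a_pad)))
    return g
```

Because the windows overlap and sum to one, the effective blur interpolates smoothly between neighboring kernels while the model stays linear in the kernels.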

Fast forward model for camera shake contd.
– To restrict the possible blurs of EFF to camera shake, we create a basis for the blur kernels a^(r) using homographies
– Apply all possible homographies only once to a grid of single-pixel dots
– Possible camera shakes can then be generated by linearly combining different homographies of the point grid
– The homographies can be precomputed without knowledge of the blurry image g

Non-uniform PSF

Non-uniform PSF contd.
– Let p be the image of delta peaks, where the peaks are located exactly at the centers of the patches
– The center of a patch is determined by the center of the support of the corresponding weight image w^(r)
– Generate different views p_θ = H_θ(p) of the point grid p by applying a homography H_θ
– Chopping these views p_θ into local blur kernels b_θ^(r), one for each patch, we obtain a basis for the local blur kernels
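The basis construction can be sketched as follows, assuming OpenCV for the homography warps; the kernel size and the assumption that all patch centers lie away from the image border are illustrative simplifications.

```python
# Illustrative sketch of the kernel basis: warp an image of delta peaks with
# each homography and cut out a small window around each patch center.
import numpy as np
import cv2

def kernel_basis(point_grid, homographies, centers, ksize=17):
    """Return basis[theta][r]: the local kernel b_theta^(r) around centers[r]
    in the view generated by homographies[theta] (a 3x3 matrix)."""
    h, w = point_grid.shape
    half = ksize // 2
    basis = []
    for H_theta in homographies:
        view = cv2.warpPerspective(point_grid.astype(np.float32), H_theta, (w, h))
        kernels = []
        for (cy, cx) in centers:  # patch centers given as (row, col)
            kernels.append(view[cy - half:cy + half + 1, cx - half:cx + half + 1])
        basis.append(kernels)
    return basis
```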

Fast forward model for camera shake
– PMPB: g = Σ_θ μ_θ H_θ(f)
– θ indexes a set of homographies
– μ_θ determines the relevance of the corresponding homography for the overall blur
– Fast forward model: g = Σ_r ( Σ_θ μ_θ b_θ^(r) ) ∗ (w^(r) · f)
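Combining the two previous sketches, a hypothetical forward_model routine would mix the basis kernels with the weights μ_θ and hand the result to eff_forward from the sketch above; the function names are assumptions for illustration.

```python
# Sketch of the combined forward model: each local kernel is a linear
# combination of the homography basis, a^(r) = sum_theta mu_theta * b_theta^(r).
def forward_model(f, mu, basis, windows):
    n_patches = len(basis[0])
    kernels = [sum(mu[t] * basis[t][r] for t in range(len(mu)))
               for r in range(n_patches)]
    return eff_forward(f, kernels, windows)
```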

Run-time comparison of fast forward model

Relative error of a homographically transformed image

Deconvolution of non-stationary blurs
Given a photograph g that has been blurred by non-uniform camera shake, we recover the unknown sharp image f in two phases:
– a blur estimation phase for non-stationary PSFs
– sharp image recovery using a non-blind deconvolution procedure tailored to non-stationary blurs

Blur estimation phase
Recover the motion undertaken by the camera during exposure, given only the blurry photo:
– a prediction step to reduce blur and enhance image quality by a combination of shock and bilateral filtering
– a blur parameter estimation step to find the camera motion parameters
– latent image estimation via non-blind deconvolution

Prediction step
Shock filters
– The filtering process is the evolution of the initial image u_0(x, y) through u(x, y, t), t > 0, into a steady-state solution u_∞(x, y) as t tends to ∞
– The processed image is piecewise smooth and non-oscillatory, and the jumps occur across zeros of an elliptic operator (edge detector)
– The algorithm is relatively fast and easy to program
Bilateral filtering
– Smooths images while preserving edges, by means of a nonlinear combination of nearby image values
– It combines gray levels or colors based on both their geometric closeness and their photometric similarity
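A rough sketch of this prediction step, assuming OpenCV's bilateral filter and a basic Osher-Rudin-style shock filter; all parameter values are illustrative and not taken from the paper.

```python
# Prediction step sketch: bilateral filtering to suppress noise, then a few
# shock-filter iterations to sharpen edges. Input: grayscale image in [0, 1].
import numpy as np
import cv2

def predict(latent, n_iter=10, dt=0.1):
    u = cv2.bilateralFilter(latent.astype(np.float32), 5, 0.1, 2.0)
    for _ in range(n_iter):
        gy, gx = np.gradient(u)                    # image gradients
        grad_mag = np.sqrt(gx ** 2 + gy ** 2)
        lap = cv2.Laplacian(u, cv2.CV_32F)         # sign of Laplacian acts as edge detector
        # shock filter update: push intensities away from edges, sharpening them
        u = (u - dt * np.sign(lap) * grad_mag).astype(np.float32)
    return u
```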

Blur parameter update
The blur parameters μ are updated by minimizing an objective of the form
‖ ∂g − Σ_r ( Σ_θ μ_θ b_θ^(r) ) ∗ ( w^(r) · (m_S · ∂f) ) ‖² + λ ‖μ‖² + η ‖∇μ‖²
where ∂g = [1, −1]^T ∗ g and m_S is a weighting mask which selects only those edges that are informative and facilitate kernel estimation

Blur parameter update contd.
– The first term is proportional to the log-likelihood, if we assume additive Gaussian noise n
– Shan et al. have shown that terms with image derivatives help to reduce ringing artifacts and lower the condition number of the optimization problem
– The second summand penalizes the L2 norm of μ and helps to avoid the trivial solution by suppressing high intensity values in μ
– The third term enforces smoothness of μ, and thus favors connectedness in camera motion space
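Treating the μ-update as a regularized non-negative least-squares problem, a simplified sketch could use SciPy's bounded solver as below; the matrix A (each column the derivative-domain response of one homography), the masked observation b, the 1-D smoothness operator D and the weights lam/eta are all assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of the mu-update as regularized non-negative least squares:
#   minimize ||A mu - b||^2 + lam ||mu||^2 + eta ||D mu||^2  subject to mu >= 0
import numpy as np
from scipy.optimize import lsq_linear
from scipy.sparse import csr_matrix, diags, identity, vstack

def estimate_mu(A, b, lam=1e-3, eta=1e-2):
    n = A.shape[1]
    # simple 1-D first-difference operator as a stand-in for the smoothness term
    D = diags([np.ones(n - 1), -np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
    A_aug = vstack([csr_matrix(A),
                    np.sqrt(lam) * identity(n),
                    np.sqrt(eta) * D]).tocsr()
    b_aug = np.concatenate([b, np.zeros(n), np.zeros(n - 1)])
    return lsq_linear(A_aug, b_aug, bounds=(0.0, np.inf)).x
```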

Overview of the blur estimation phase

Sharp image update
– The sharp image estimate f that is updated during the blur estimation phase does not need to recover the true sharp image; however, it should guide the PSF estimation
– Since most of the computational time is spent in this first phase, the sharp image update step should be fast
– Cho and Lee gained large speed-ups for this step by replacing the iterative optimization in f by a pixel-wise division in Fourier space

Sharp image update contd.
M is the matrix of the forward model g = Mf, where
– B^(r) is the matrix with column vectors b_θ^(r) for varying θ
– the matrices C_r and E_r are appropriately chosen cropping matrices
– F is the discrete Fourier transform matrix
– Z_a is a zero-padding matrix

Sharp image update contd.
– The sharp image is updated by an expression that approximately "inverts" the forward model g = Mf; here Diag(v) denotes the diagonal matrix with vector v along its diagonal, and an additional corrective weighting (next slide) is applied
– This update approximates the true sharp image f given the blurry photograph g and the blur parameters μ, and can be implemented efficiently by reading the expression from right to left
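The idea of a pixel-wise division in Fourier space can be sketched patch-wise as below, in the spirit of Cho and Lee's direct update; the Tikhonov-style weight eps stands in for the additional corrective weighting and is an assumption, not the paper's formula.

```python
# Patch-wise regularized inverse filtering: every windowed patch of the blurry
# image is deconvolved by pixel-wise division in Fourier space and re-assembled.
import numpy as np
from numpy.fft import fft2, ifft2

def direct_deconv(g, kernels, windows, eps=1e-2):
    f_est = np.zeros_like(g, dtype=float)
    weight = np.zeros_like(g, dtype=float)
    for a, w in zip(kernels, windows):
        a_pad = np.zeros_like(g, dtype=float)
        a_pad[:a.shape[0], :a.shape[1]] = a
        a_pad = np.roll(a_pad, (-(a.shape[0] // 2), -(a.shape[1] // 2)), axis=(0, 1))
        A = fft2(a_pad)
        # regularized inverse filter: pixel-wise division in Fourier space
        F_r = np.conj(A) * fft2(w * g) / (np.abs(A) ** 2 + eps)
        f_est += np.real(ifft2(F_r))
        weight += w
    return f_est / np.maximum(weight, 1e-8)
```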

Corrective Weighting

Sharp Image Recovery Phase
– Introducing an auxiliary variable v, we minimize the resulting objective alternately in f and v
– The weight 2^t increases from 1 to 256 during nine alternating updates in f and v, for t = 0, 1, ..., 8
– Choosing α = 2/3 allows an analytical formula for the update in v
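A schematic sketch of this alternating scheme with the 2^t continuation is shown below; for brevity it assumes a single uniform blur kernel and replaces the analytic α = 2/3 update by α = 1 (soft thresholding), so it illustrates the half-quadratic splitting structure rather than the paper's exact solver.

```python
# Alternating f/v updates with continuation: the coupling weight beta = 2^t
# grows from 1 to 256 over nine rounds. Simplifications: single uniform
# kernel, alpha = 1 (soft thresholding) instead of alpha = 2/3.
import numpy as np
from numpy.fft import fft2, ifft2

def soft_threshold(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def otf(kernel, shape):
    """Embed a small kernel into a zero image and center it (circular shift)."""
    K = np.zeros(shape, dtype=float)
    K[:kernel.shape[0], :kernel.shape[1]] = kernel
    K = np.roll(K, (-(kernel.shape[0] // 2), -(kernel.shape[1] // 2)), axis=(0, 1))
    return fft2(K)

def deconv_hqs(g, kernel, lam=2e-3):
    dx = np.array([[1.0, -1.0]])           # horizontal derivative filter
    dy = np.array([[1.0], [-1.0]])         # vertical derivative filter
    A, Dx, Dy = otf(kernel, g.shape), otf(dx, g.shape), otf(dy, g.shape)
    G = fft2(g)
    f = g.astype(float)
    for t in range(9):                      # continuation: beta = 1, 2, ..., 256
        beta = 2.0 ** t
        # v-update: shrink the gradients of the current estimate
        fx = np.real(ifft2(Dx * fft2(f)))
        fy = np.real(ifft2(Dy * fft2(f)))
        vx = soft_threshold(fx, lam / (2.0 * beta))
        vy = soft_threshold(fy, lam / (2.0 * beta))
        # f-update: quadratic in f, solved by pixel-wise division in Fourier space
        num = np.conj(A) * G + beta * (np.conj(Dx) * fft2(vx) + np.conj(Dy) * fft2(vy))
        den = np.abs(A) ** 2 + beta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
        f = np.real(ifft2(num / den))
    return f
```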

GPU Implementation
A: kernel estimation, B: final deconvolution, C: total processing time

Results

Results contd.

References
– S. Cho and S. Lee. Fast motion deblurring. ACM Trans. Graph., 28(5), 2009.
– S. Cho, Y. Matsushita, and S. Lee. Removing non-uniform motion blur from images. In Proc. Int. Conf. Comput. Vision, 2007.
– R. Fergus, B. Singh, A. Hertzmann, S. Roweis, and W. Freeman. Removing camera shake from a single photograph. ACM Trans. Graph., 2006.
– A. Gupta, N. Joshi, L. Zitnick, M. Cohen, and B. Curless. Single image deblurring using motion density functions. In Proc. European Conf. Comput. Vision, 2010.
– M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling. Efficient filter flow for space-variant multiframe blind deconvolution. In Proc. Conf. Comput. Vision and Pattern Recognition, 2010.
– D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In Advances in Neural Inform. Processing Syst., 2009.

References
– Y.-W. Tai, P. Tan, L. Gao, and M. S. Brown. Richardson-Lucy deblurring for scenes under projective motion path. Technical report, KAIST, 2009.
– C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In Proc. Int. Conf. Comput. Vision, pages 839–846, 2002.
– O. Whyte, J. Sivic, A. Zisserman, and J. Ponce. Non-uniform deblurring for shaken images. In Proc. Conf. Comput. Vision and Pattern Recognition, 2010.
– L. Xu and J. Jia. Two-phase kernel estimation for robust motion deblurring. In Proc. European Conf. Comput. Vision, 2010.

Thank You