
Deconvolution 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 17 http://graphics.cs.cmu.edu/courses/15-463

Course announcements
Homework 4 is out.
- Due October 26th.
- There was another typo in HW4; download the new version.
- Drop by Yannis’ office to pick up cameras any time.
Homework 5 will be out on Thursday.
- You will need cameras for that one as well, so keep the ones you picked up for HW4.
Project ideas were due on Piazza on Friday the 20th.
- Responded to most of you.
- Some still need to post their ideas.
Project proposals are due on Monday the 30th.

Overview of today’s lecture Telecentric lenses. Sources of blur. Deconvolution. Blind deconvolution.

Slide credits Most of these slides were adapted from: Fredo Durand (MIT). Gordon Wetzstein (Stanford).

Why are our images blurry?

Why are our images blurry? Lens imperfections. Camera shake. Scene motion. Depth defocus.

Lens imperfections
Ideal lens: a point maps to a point at a certain plane.
(figure: object distance D, focus distance D’)

Lens imperfections
Ideal lens: a point maps to a point at a certain plane.
Real lens: a point maps to a circle that has non-zero minimum radius among all planes.
(figure: object distance D, focus distance D’)
What is the effect of this on the images we capture?

Lens imperfections
Ideal lens: a point maps to a point at a certain plane.
Real lens: a point maps to a circle that has non-zero minimum radius among all planes.
(figure: blur kernel; object distance D, focus distance D’)
This is a shift-invariant blur.

Lens imperfections What causes lens imperfections?

Lens imperfections
What causes lens imperfections? Aberrations. Diffraction.
(figures: small aperture, large aperture)

Lens as an optical low-pass filter
Point spread function (PSF): the blur kernel of a lens.
“Diffraction-limited” PSF: no aberrations, only diffraction. Determined by the aperture shape.
(figures: blur kernel; diffraction-limited PSF of a circular aperture; object distance D, focus distance D’)

Lens as an optical low-pass filter
Point spread function (PSF): the blur kernel of a lens.
“Diffraction-limited” PSF: no aberrations, only diffraction. Determined by the aperture shape.
Optical transfer function (OTF): the Fourier transform of the PSF. Determined by the aperture shape.
(figures: blur kernel; diffraction-limited OTF and PSF of a circular aperture; object distance D, focus distance D’)

Lens as an optical low-pass filter
(image from a perfect lens) ∗ (imperfect lens PSF) = (image from imperfect lens)
x ∗ c = b
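The blur model x ∗ c = b can be sketched in a few lines of numpy, using circular convolution via the FFT (the names fft_convolve, x, c, b are illustrative, and the 2×2 box kernel is just a toy PSF):

```python
import numpy as np

def fft_convolve(x, c):
    """Circular convolution of image x with kernel c (same shape,
    kernel centered at pixel (0, 0)), via the convolution theorem."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(c)))

# a point source, and a small blur kernel normalized to sum to 1
x = np.zeros((8, 8)); x[4, 4] = 1.0
c = np.zeros((8, 8))
c[0, 0] = c[0, 1] = c[1, 0] = c[1, 1] = 0.25   # 2x2 box kernel
b = fft_convolve(x, c)                          # blurred observation
```

Because the kernel integrates to 1, blurring preserves the total brightness of the image; the point source is simply spread over the kernel's support.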

Lens as an optical low-pass filter
If we know c and b, can we recover x?
(image from a perfect lens) ∗ (imperfect lens PSF) = (image from imperfect lens)
x ∗ c = b

Deconvolution
x ∗ c = b
If we know c and b, can we recover x?

Deconvolution
x ∗ c = b
Reminder: convolution is multiplication in the Fourier domain: F(x) · F(c) = F(b)
If we know c and b, can we recover x?

Deconvolution
x ∗ c = b
Reminder: convolution is multiplication in the Fourier domain: F(x) · F(c) = F(b)
Deconvolution is division in the Fourier domain: F(x_est) = F(b) / F(c)
After the division, just take the inverse Fourier transform: x_est = F⁻¹( F(b) / F(c) )
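Fourier-domain division can be sketched as follows (a toy example assuming numpy; the small eps only guards against a literal divide-by-zero and, as the next slide shows, does nothing about noise amplification). The separable kernel here is chosen so its DFT has no zeros, so noiseless recovery is exact:

```python
import numpy as np

def naive_deconvolve(b, c, eps=1e-12):
    """Deconvolution as division in the Fourier domain."""
    C = np.fft.fft2(c)
    return np.real(np.fft.ifft2(np.fft.fft2(b) / (C + eps)))

# separable kernel [0.2, 0.6, 0.2] in each direction: DFT stays >= 0.04
h = np.zeros(16); h[0], h[1], h[-1] = 0.6, 0.2, 0.2
c = np.outer(h, h)
rng = np.random.default_rng(0)
x = rng.random((16, 16))                                   # "sharp" image
b = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(c)))  # blur it
x_est = naive_deconvolve(b, c)                              # exact recovery
```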

Deconvolution Any problems with this approach?

Deconvolution
Any problems with this approach?
The OTF (Fourier transform of the PSF) is a low-pass filter, with zeros at high frequencies.
The measured signal includes noise: b = c ∗ x + n, where n is the noise term.

Deconvolution
The OTF (Fourier transform of the PSF) is a low-pass filter, with zeros at high frequencies.
The measured signal includes noise: b = c ∗ x + n, where n is the noise term.
When we divide by zero, we amplify the high-frequency noise.

Naïve deconvolution
b ∗ c⁻¹ = x_est
Even tiny noise can make the results awful. Example for Gaussian noise with σ = 0.05.

Wiener deconvolution
Apply the inverse kernel, but do not divide by zero:
x_est = F⁻¹( |F(c)|² / (|F(c)|² + 1/SNR(ω)) · F(b) / F(c) )
The first factor is an amplitude-dependent damping factor.
Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption.
Requires knowledge of the signal-to-noise ratio at each frequency: SNR(ω) = (mean signal at ω) / (noise std at ω).
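The Wiener filter above simplifies algebraically to conj(C)/(|C|² + 1/SNR), which is what a direct implementation computes. A sketch (assuming numpy; the kernel, noise level σ = 0.05, and the scalar snr = 100 are illustrative choices):

```python
import numpy as np

def wiener_deconvolve(b, c, snr):
    """Wiener deconvolution; snr is SNR(w), a scalar or a per-frequency
    array. |C|^2/(|C|^2 + 1/SNR) * 1/C = conj(C)/(|C|^2 + 1/SNR)."""
    C = np.fft.fft2(c)
    H = np.conj(C) / (np.abs(C) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(H * np.fft.fft2(b)))

# blur a random image and add white Gaussian noise
rng = np.random.default_rng(1)
h = np.zeros(16); h[0], h[1], h[-1] = 0.6, 0.2, 0.2
c = np.outer(h, h)
x = rng.random((16, 16))
b = (np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(c)))
     + 0.05 * rng.standard_normal((16, 16)))
x_wiener = wiener_deconvolve(b, c, snr=100.0)
x_naive = np.real(np.fft.ifft2(np.fft.fft2(b) / np.fft.fft2(c)))
```

On this noisy input the damped (Wiener) estimate lands much closer to the original than the naïve division, because the damping factor suppresses exactly the frequencies where |F(c)| is small.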

Deconvolution comparisons naïve deconvolution Wiener deconvolution

Deconvolution comparisons σ = 0.01 σ = 0.05 σ = 0.01

Wiener deconvolution
Apply the inverse kernel, but do not divide by zero:
x_est = F⁻¹( |F(c)|² / (|F(c)|² + 1/SNR(ω)) · F(b) / F(c) )
The first factor is an amplitude-dependent damping factor.
Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption.
Requires knowledge of the signal-to-noise ratio at each frequency: SNR(ω) = (mean signal at ω) / (noise std at ω).

Natural image and noise spectra
Natural images tend to have a spectrum that scales as 1/ω². This is a natural image statistic.

Natural image and noise spectra
Natural images tend to have a spectrum that scales as 1/ω². This is a natural image statistic.
Noise tends to have a flat spectrum, σ(ω) = constant. We call this white noise.
What is the SNR?

Natural image and noise spectra
Natural images tend to have a spectrum that scales as 1/ω². This is a natural image statistic.
Noise tends to have a flat spectrum, σ(ω) = constant. We call this white noise.
Therefore, we have that: SNR(ω) = 1/ω².
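A per-frequency SNR map following this 1/ω² statistic can be built on the DFT frequency grid like this (a sketch assuming numpy; the scale constant and the DC handling are arbitrary choices, and the function name natural_snr is made up):

```python
import numpy as np

def natural_snr(shape, scale=1.0):
    """SNR(w) ~ scale / w^2 on the DFT frequency grid. The DC term is
    assigned the lowest nonzero frequency so we never divide by zero."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    w2 = fx ** 2 + fy ** 2
    w2[0, 0] = np.min(w2[w2 > 0])    # treat DC like the lowest frequency
    return scale / w2

snr = natural_snr((16, 16))          # decays with frequency, as expected
```

Such an array can be passed as the per-frequency snr argument of a Wiener filter, giving stronger damping at high frequencies where natural images carry little energy.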

Wiener deconvolution
Apply the inverse kernel, but do not divide by zero:
x_est = F⁻¹( |F(c)|² / (|F(c)|² + 1/SNR(ω)) · F(b) / F(c) )
The first factor is an amplitude-dependent damping factor.
Derived as the solution to a maximum-likelihood problem under a Gaussian noise assumption.
Requires the signal-to-noise ratio at each frequency; for natural images: SNR(ω) = 1/ω².

Wiener deconvolution
For natural images and white noise, it can be re-written as the minimization problem
min_x ‖b − c ∗ x‖² + ‖∇x‖²
where the second term is a gradient regularization.
What does this look like? How can it be solved?
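Because every term is quadratic, this problem has a closed-form Fourier solution: X = conj(C)·B / (|C|² + λ(|Gx|² + |Gy|²)), where Gx, Gy are the DFTs of finite-difference filters. A sketch assuming numpy (the weight λ and the forward-difference filters are illustrative choices):

```python
import numpy as np

def deconvolve_l2(b, c, lam=1e-3):
    """Minimize ||b - c*x||^2 + lam*||grad x||^2 in closed form."""
    C = np.fft.fft2(c)
    gx = np.zeros_like(b); gx[0, 0], gx[0, -1] = -1.0, 1.0   # horizontal diff
    gy = np.zeros_like(b); gy[0, 0], gy[-1, 0] = -1.0, 1.0   # vertical diff
    denom = (np.abs(C) ** 2
             + lam * (np.abs(np.fft.fft2(gx)) ** 2
                      + np.abs(np.fft.fft2(gy)) ** 2))
    return np.real(np.fft.ifft2(np.conj(C) * np.fft.fft2(b) / denom))

# demo: deblur a noiseless blurred image
rng = np.random.default_rng(2)
h = np.zeros(16); h[0], h[1], h[-1] = 0.6, 0.2, 0.2
c = np.outer(h, h)
x = rng.random((16, 16))
b = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(c)))
x_est = deconvolve_l2(b, c)
```

Note that |∇| grows with frequency, so the λ|G|² term plays exactly the role of the 1/SNR(ω) damping in the Wiener filter.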

Deconvolution comparisons blurry input gradient regularization naive deconvolution original

Deconvolution comparisons blurry input gradient regularization naive deconvolution original

… and a proof-of-concept demonstration noisy input gradient regularization naive deconvolution

Can we do better than that?

Can we do better than that? Use different gradient regularizations:
L2 gradient regularization (Tikhonov regularization, same as Wiener deconvolution): min_x ‖b − c ∗ x‖² + ‖∇x‖²
L1 gradient regularization (sparsity regularization, same as total variation): min_x ‖b − c ∗ x‖² + ‖∇x‖₁
L_p (p < 1) gradient regularization (fractional regularization): min_x ‖b − c ∗ x‖² + ‖∇x‖₀.₈
All of these are motivated by natural image statistics. Active research area.
How do we solve these?
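Unlike the L2 case, these have no closed form. One minimal way to get a feel for them is plain gradient descent on a smoothed version of the fractional prior, |g|^p ≈ (g² + ε)^{p/2} (a sketch assuming numpy; the step size, weights, and smoothing ε are illustrative, and practical solvers instead use schemes like the half-quadratic splitting of Krishnan and Fergus in the reading list):

```python
import numpy as np

def deconvolve_lp(b, c, p=0.8, lam=2e-3, iters=200, lr=0.2, eps=1e-2):
    """Gradient descent on ||b - c*x||^2 + lam * sum |grad x|^p."""
    C = np.fft.fft2(c)
    x = b.copy()
    for _ in range(iters):
        # gradient of the data term: conj(C)(C X - B), back in image space
        r = np.real(np.fft.ifft2(C * np.fft.fft2(x))) - b
        g_data = np.real(np.fft.ifft2(np.conj(C) * np.fft.fft2(r)))
        # gradient of the smoothed Lp prior on forward differences
        dx = np.roll(x, -1, axis=1) - x
        dy = np.roll(x, -1, axis=0) - x
        wx = p * (dx * dx + eps) ** (p / 2 - 1) * dx
        wy = p * (dy * dy + eps) ** (p / 2 - 1) * dy
        g_prior = ((np.roll(wx, 1, axis=1) - wx)
                   + (np.roll(wy, 1, axis=0) - wy))
        x = x - lr * (g_data + lam * g_prior)
    return x

# demo: a blurred step edge, where a heavy-tail prior should help
h = np.zeros(16); h[0], h[1], h[-1] = 0.6, 0.2, 0.2
c = np.outer(h, h)
x_true = np.zeros((16, 16)); x_true[:, 8:] = 1.0
b = np.real(np.fft.ifft2(np.fft.fft2(x_true) * np.fft.fft2(c)))
x_est = deconvolve_lp(b, c)
```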

Comparison of gradient regularizations squared gradient regularization fractional gradient regularization input

High quality images using cheap lenses [Heide et al., “High-Quality Computational Imaging Through Simple Lenses,” TOG 2013]

Deconvolution
x ∗ c = b
If we know b and c, can we recover x? And how do we measure c?

PSF calibration
Take a photo of a point source.
(figures: image of PSF; image with sharp lens; image with cheap lens)

Deconvolution
x ∗ c = b
If we know b and c, can we recover x?

Blind deconvolution
x ∗ c = b
If we know only b, can we recover both x and c?

Camera shake

Camera shake as a filter
If we know b, can we recover x and c?
(image from static camera) ∗ (PSF from camera motion) = (image from shaky camera)
x ∗ c = b

Multiple possible solutions. How do we pick the correct one?

Use prior information Among all the possible pairs of images and blur kernels, select the ones where: The image “looks like” a natural image. The kernel “looks like” a motion PSF.

Use prior information Among all the possible pairs of images and blur kernels, select the ones where: The image “looks like” a natural image. The kernel “looks like” a motion PSF.

Shake kernel statistics Gradients in natural images follow a characteristic “heavy-tail” distribution. sharp natural image blurry natural image

Shake kernel statistics Gradients in natural images follow a characteristic “heavy-tail” distribution. sharp natural image blurry natural image Can be approximated by ‖∇x‖0.8

Use prior information
Among all the possible pairs of images and blur kernels, select the ones where:
The image “looks like” a natural image: gradients in natural images follow a characteristic “heavy-tail” distribution.
The kernel “looks like” a motion PSF: shake kernels are very sparse, have continuous contours, and are always positive.
How do we use this information for blind deconvolution?

Regularized blind deconvolution
Solve the regularized least-squares optimization
min_{x,c} ‖b − c ∗ x‖² + ‖∇x‖₀.₈ + ‖c‖₁
What does each term in this summation correspond to?

Regularized blind deconvolution
Solve the regularized least-squares optimization
min_{x,c} ‖b − c ∗ x‖² + ‖∇x‖₀.₈ + ‖c‖₁
The terms are the data term, the natural image prior, and the shake kernel prior, respectively.
Note: Solving such optimization problems is complicated (no longer linear least squares).
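To make the structure of such a solver concrete, here is a heavily simplified toy sketch (assuming numpy) that alternates a regularized image step with a least-squares kernel step, projecting the kernel onto the shake-kernel constraints (nonnegative, sums to 1). All weights are illustrative; this is the naive MAP-style alternation, not the weighted-average estimator discussed on the following slides, and it inherits the failure modes the lecture warns about:

```python
import numpy as np

def blind_deconvolve(b, iters=8, lam_x=0.05, lam_c=1.0):
    """Toy alternating minimization for x and c given only b."""
    c = np.zeros_like(b); c[0, 0] = 1.0      # start from a delta kernel
    x = b.copy()
    for _ in range(iters):
        # x-step: Tikhonov-regularized deconvolution with current kernel
        C = np.fft.fft2(c)
        x = np.real(np.fft.ifft2(np.conj(C) * np.fft.fft2(b)
                                 / (np.abs(C) ** 2 + lam_x)))
        # c-step: least-squares kernel given x, then project
        X = np.fft.fft2(x)
        c = np.real(np.fft.ifft2(np.conj(X) * np.fft.fft2(b)
                                 / (np.abs(X) ** 2 + lam_c)))
        c = np.maximum(c, 0.0)               # shake kernel prior: c >= 0
        c = c / c.sum()                      # ... and c sums to 1
    return x, c

b_demo = np.random.default_rng(4).random((16, 16))
x_est, c_est = blind_deconvolve(b_demo)
```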

A demonstration
input; deconvolved image and kernel

A demonstration
input; deconvolved image and kernel
This image looks worse than the original… This doesn’t look like a plausible shake kernel…

Regularized blind deconvolution
Solve the regularized least-squares optimization
min_{x,c} ‖b − c ∗ x‖² + ‖∇x‖₀.₈ + ‖c‖₁
(figure: the loss function)

Regularized blind deconvolution
Solve the regularized least-squares optimization
min_{x,c} ‖b − c ∗ x‖² + ‖∇x‖₀.₈ + ‖c‖₁
(figure: loss function and inverse loss vs. pixel intensity)
Where in this graph is the solution we find?

Regularized blind deconvolution
Solve the regularized least-squares optimization
min_{x,c} ‖b − c ∗ x‖² + ‖∇x‖₀.₈ + ‖c‖₁
(figure: loss function and inverse loss vs. pixel intensity; many plausible solutions near the optimal one)
Rather than keeping just the maximum, do a weighted average of all solutions.

A demonstration
input; maximum-only; average
The maximum-only image looks worse than the original…

More examples

Results on real shaky images

Results on real shaky images

Results on real shaky images

Results on real shaky images

More advanced motion deblurring [Shan et al., “High-quality Motion Deblurring from a Single Image,” SIGGRAPH 2008]

Why are our images blurry? Lens imperfections. Camera shake. Scene motion. Depth defocus. Can we solve all of these problems in the same way?

Why are our images blurry? Lens imperfections. Camera shake. Scene motion. Depth defocus. Can we solve all of these problems in the same way? No, because blur is not always shift invariant. See next lecture.

References
Basic reading:
- Szeliski textbook, Sections 3.4.3, 3.4.4, 10.1.4, 10.3.
- Fergus et al., “Removing camera shake from a single image,” SIGGRAPH 2006. The main motion deblurring and blind deconvolution paper we covered in this lecture.
Additional reading:
- Heide et al., “High-Quality Computational Imaging Through Simple Lenses,” TOG 2013. The paper on high-quality imaging using cheap lenses, which also has a great discussion of all matters relating to blurring from lens aberrations and modern deconvolution algorithms.
- Levin, “Blind Motion Deblurring Using Image Statistics,” NIPS 2006.
- Levin et al., “Image and depth from a conventional camera with a coded aperture,” SIGGRAPH 2007.
- Levin et al., “Understanding and evaluating blind deconvolution algorithms,” CVPR 2009 and PAMI 2011.
- Krishnan and Fergus, “Fast Image Deconvolution using Hyper-Laplacian Priors,” NIPS 2009.
- Levin et al., “Efficient Marginal Likelihood Optimization in Blind Deconvolution,” CVPR 2011. Together, a sequence of papers developing the state of the art in blind deconvolution of natural images, including the use of Laplacian (sparsity) and hyper-Laplacian priors on gradients, analysis of different loss functions and maximum a-posteriori versus Bayesian estimates, the use of variational inference, and efficient optimization algorithms.
- Miskin and MacKay, “Ensemble Learning for Blind Image Separation and Deconvolution,” in Advances in Independent Component Analysis, 2000. The paper explaining the mathematics of how to compute Bayesian estimators using variational inference.
- Shan et al., “High-quality Motion Deblurring from a Single Image,” SIGGRAPH 2008. A more recent paper on motion deblurring.