Single Image Blind Deconvolution Presented By: Tomer Peled & Eitan Shterenbaum
Agenda
1. Problem Statement
2. Introduction to Non-Blind Deconvolution
3. Solutions & Approaches
   A. Image Deblurring: PSF Estimation Using Sharp Edge Prediction (Neel Joshi et al.)
   B. MAP_x,k Solution Analysis: Understanding and Evaluating Blind Deconvolution Algorithms (Anat Levin et al.)
   C. Variational MAP_k Method: Removing Camera Shake from a Single Photograph (Rob Fergus et al.)
4. Summary
Problem Statement
Blur is a degradation of the sharpness and contrast of an image, causing a loss of high frequencies. Technically, it is a convolution with some kernel during the imaging process.
Camera Motion Blur
Defocus Blur
Blur – Generative Model
Spatial domain: blurred image = sharp image * PSF (point spread function).
Frequency domain: fft(blurred image) = fft(sharp image) × OTF (optical transfer function).
Object Motion Blur
Local Camera Motion
Depth of Field – Local Defocus
Evolution of Algorithms
Non-blind deconvolution with simple kernels: Wiener (1949), Lucy–Richardson (1972).
Camera motion blur: Fergus (2006), Joshi (2008), Shan (2008).
What comes next? Volunteers?
Introduction to Non-Blind Deconvolution
Model: blurred image = blur kernel * sharp image + noise.
Deconvolution evolution: the simple no-noise case, the effect of noise on the simple solution, Wiener deconvolution, and RL deconvolution.
Simple No-Noise Case (figure: blurred image and recovered image). With no noise, the sharp image can be recovered exactly by dividing the blurred image's spectrum by the kernel's spectrum.
Noisy Case (figure): original x, blurred + noise y, recovered x.
Noisy Case, 1D Example
Figure panels: the original signal and its FT, the convolved signal with and without noise and their FTs, and the reconstructed signal and its FT. With noise, the naive inverse filter amplifies high-frequency noise.
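The noise amplification is easy to reproduce. Below is a minimal 1D sketch (assuming NumPy; the signal, box kernel, and noise level are made up for illustration): blur by multiplying spectra, then deconvolve by dividing them, which is the naive inverse filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sharp 1D signal: a step edge plus an impulse.
x = np.zeros(256)
x[100:] = 1.0
x[40] = 2.0

# Blur kernel: a small box filter, zero-padded to the signal length.
k = np.zeros(256)
k[:9] = 1.0 / 9.0

X, K = np.fft.fft(x), np.fft.fft(k)
y_clean = np.real(np.fft.ifft(X * K))                # blurred, no noise (circular blur)
y_noisy = y_clean + 0.01 * rng.standard_normal(256)  # blurred + noise

# Naive inverse filter: divide the observed spectrum by the kernel spectrum.
x_rec_clean = np.real(np.fft.ifft(np.fft.fft(y_clean) / K))
x_rec_noisy = np.real(np.fft.ifft(np.fft.fft(y_noisy) / K))

print("no-noise error:", np.max(np.abs(x_rec_clean - x)))  # ~0: exact up to roundoff
print("noisy error   :", np.max(np.abs(x_rec_noisy - x)))  # much larger: noise is
# amplified by 1/|K(f)| wherever the kernel spectrum is small
```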
Regularizing by Window
Figure: reconstructions for window sizes 51, 151, 191.
Wiener Deconvolution
Figure: blurred noisy image and the recovered image.
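In the frequency domain the Wiener filter replaces the naive division by X_hat(f) = Y(f) · conj(K(f)) / (|K(f)|² + 1/SNR), which damps the frequencies where the kernel carries little energy relative to the noise. A minimal NumPy sketch (the SNR value in the usage comment is an arbitrary illustration):

```python
import numpy as np

def wiener_deconvolve(y, k, snr):
    """Frequency-domain Wiener filter: X_hat = Y * conj(K) / (|K|^2 + 1/snr).

    y   : blurred, noisy 1D signal
    k   : blur kernel, zero-padded to len(y)
    snr : assumed signal-to-noise power ratio (scalar or per-frequency array)
    """
    Y = np.fft.fft(y)
    K = np.fft.fft(k)
    H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)   # Wiener restoration filter
    return np.real(np.fft.ifft(H * Y))

# Reusing y_noisy, k, x from the sketch above:
# x_wiener = wiener_deconvolve(y_noisy, k, snr=100.0)
# print(np.max(np.abs(x_wiener - x)))   # bounded error, unlike the naive inverse
```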
Non-Blind Iterative Method: the Richardson–Lucy Algorithm
Assumptions: blurred image y_i ~ P(y_i), sharp image x_j ~ P(x_j), where i indexes points in y and j indexes points in x.
Target: recover P(x) given P(y) and P(y|x).
From Bayes' theorem, the object distribution can be expressed iteratively as x^(t+1) = x^(t) · [ k_flip ⊗ ( y / (k ⊗ x^(t)) ) ], where ⊗ denotes convolution and k_flip is the kernel flipped in both axes.
Richardson, W.H., "Bayesian-Based Iterative Method of Image Restoration", J. Opt. Soc. Am., 62, 55 (1972). Lucy, L.B., "An iterative technique for the rectification of observed distributions", Astron. J., 79, 745 (1974).
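A minimal NumPy/SciPy sketch of these iterations (the iteration count, initialization, and epsilon are illustrative choices, not taken from the slides):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, k, num_iters=50, eps=1e-12):
    """Richardson-Lucy deconvolution for a non-negative image y and PSF k."""
    k = k / k.sum()                      # PSF must sum to 1
    k_flip = k[::-1, ::-1]               # adjoint of convolution with k
    x = np.full_like(y, y.mean())        # flat initial estimate
    for _ in range(num_iters):
        blurred = fftconvolve(x, k, mode="same")           # re-blur current estimate
        ratio = y / (blurred + eps)                        # observed / predicted
        x = x * fftconvolve(ratio, k_flip, mode="same")    # multiplicative update
    return x

# Usage (y: blurred photo as a float array, k: estimated PSF):
# x_hat = richardson_lucy(y, k, num_iters=100)
```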
Richardson–Lucy Application
A simulated multiple-star measurement: PSF identification and reconstruction of the 4th element (figure).
Richardson–Lucy Iterative Approach (examples)
Figure panels: the blurred noisy image, and reconstructions after 10, 50, and 100 iterations.
Solution Approaches
A. Image Deblurring: PSF Estimation Using Sharp Edge Prediction (Neel Joshi, Richard Szeliski, David J. Kriegman)
B. MAP_x,k Solution Analysis: Understanding and Evaluating Blind Deconvolution Algorithms (Anat Levin, Yair Weiss, Fredo Durand, William T. Freeman)
C. Variational MAP_k Method: Removing Camera Shake from a Single Photograph (Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T. Roweis, William T. Freeman)
PSF Estimation by Sharp Edge Prediction
Given edge steps, deblurring can be reduced to kernel optimization, as suggested in "PSF Estimation Using Sharp Edge Prediction" by Neel Joshi et al.
Pipeline: select edge steps (masking) → estimate the blur kernel → recover the latent image.
PSF Estimation by Sharp Edge Prediction – Masking
Figure: original image, edge prediction, masking (min/max valid region).
Masking, Cont.
Which signal is best for kernel estimation? Edge vs. impulse, original vs. blurred (figure).
PSF Estimation by Sharp Edge Prediction – PSF Estimation
Blur model: y = x * k + n, with n ~ N(0, σ²).
Bayesian framework: P(k|y) = P(y|k) P(k) / P(y).
MAP model: argmax_k P(k|y) = argmin_k L(y|k) + L(k).
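Given a predicted sharp image near the selected edges, the kernel estimate reduces to linear least squares: every pixel of y in the valid region is a dot product between a window of the predicted sharp image and the kernel. A minimal sketch of that reduction (not the authors' solver: it uses a plain Tikhonov penalty in place of their priors, and the function and parameter names are made up):

```python
import numpy as np

def estimate_kernel(x_sharp, y_blurred, ksize=15, reg=1e-3):
    """Least-squares PSF estimate: argmin_k ||x_sharp (*) k - y_blurred||^2 + reg*||k||^2.

    x_sharp, y_blurred : 2D float arrays of the same shape (use a small valid region)
    ksize              : odd kernel width
    reg                : Tikhonov regularization weight (stands in for a kernel prior)
    """
    r = ksize // 2
    H, W = y_blurred.shape
    rows, targets = [], []
    for i in range(r, H - r):
        for j in range(r, W - r):
            # Each output pixel is a dot product of a flipped x-patch with the kernel.
            patch = x_sharp[i - r:i + r + 1, j - r:j + r + 1][::-1, ::-1]
            rows.append(patch.ravel())
            targets.append(y_blurred[i, j])
    A = np.asarray(rows)
    b = np.asarray(targets)
    # Normal equations with Tikhonov regularization.
    k = np.linalg.solve(A.T @ A + reg * np.eye(ksize * ksize), A.T @ b)
    k = np.clip(k, 0.0, None)            # enforce non-negativity
    k /= k.sum() + 1e-12                 # PSF sums to 1
    return k.reshape(ksize, ksize)
```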
PSF Estimation by Sharp Edge Prediction – Recovery
The latent image is recovered with Richardson–Lucy iterations, given the estimated PSF kernel (figure: blurred vs. recovered).
PSF Estimation by Sharp Edge Prediction – Summary & Improvements
1. Handle RGB images: process the color channels in parallel.
2. Local kernel variations: subdivide the image into sub-image units.
Limitations:
- Highly dependent on the quality of the edge detection.
- Requires strong edges in multiple orientations for proper kernel estimation.
- Assumes the noise level is known.
MAP_x,k – Blind Deconvolution Definition
blurred image (input, known) = blur kernel * sharp image + noise, where the kernel and the sharp image are unknown and need to be estimated.
Courtesy of Anat Levin, CVPR 09 slides.
MAP_x,k Cont. – Natural Image Priors
Derivative distributions in natural images are sparse (figure: derivative histogram from a natural image).
Parametric models for the log-probability of a derivative x: Gaussian -x², Laplacian -|x|, and the heavier-tailed -|x|^0.5 and -|x|^0.25.
Courtesy of Anat Levin, CVPR 09 slides.
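A small sketch of the observation above, assuming NumPy and a grayscale image scaled to [0, 1] that you supply yourself: build the empirical log-histogram of image derivatives and compare it with the parametric models listed on the slide.

```python
import numpy as np

def derivative_log_histogram(img, bins=101):
    """Empirical log-probability of horizontal derivatives of a grayscale image."""
    dx = np.diff(img, axis=1).ravel()
    hist, edges = np.histogram(dx, bins=bins, range=(-1, 1), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.log(hist + 1e-12)

def parametric_log_prob(x, alpha):
    """Unnormalized log-probability -|x|^alpha (alpha=2: Gaussian, 1: Laplacian, <1: sparse)."""
    return -np.abs(x) ** alpha

# Usage with a hypothetical image `img` in [0, 1]:
# centers, log_p = derivative_log_histogram(img)
# for alpha in (2.0, 1.0, 0.5, 0.25):
#     print(alpha, np.corrcoef(log_p, parametric_log_prob(centers, alpha))[0, 1])
```

For natural images the heavy-tailed exponents (0.5, 0.25) typically track the empirical log-histogram far better than the Gaussian model, which is the point of the slide.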
Naïve MAP_x,k Estimation
Given a blurred image y, find a kernel k and a latent image x minimizing λ ||k * x - y||² + Σ_i |∇x_i|^α: the first term is the convolution constraint, the second is the sparse prior, which should favor sharper x explanations.
Courtesy of Anat Levin, CVPR 09 slides.
The MAP_x,k Paradox
Let x be an arbitrarily large image sampled from the sparse prior, and let y = k * x be its blurred version. Then P(x = y, k = delta) > P(x = sharp image, k = true kernel): the delta (no-blur) explanation is favored.
Courtesy of Anat Levin, CVPR 09 slides.
The MAP_x,k Failure
Which explanation does the prior prefer, the sharp image or the blurred one? (figure)
Courtesy of Anat Levin, CVPR 09 slides.
The MAP_x,k Failure
Red windows mark regions where p(sharp x) > p(blurred x), shown for 15x15, 25x25, and 45x45 windows, using simple derivative filters [-1,1], [-1;1] and FoE filters (Roth & Black).
The MAP_x,k Failure – Intuition (k = [0.5, 0.5])
P(blurred step edge) < P(step edge): for a step edge, the sharp version has the cheaper sum of derivatives.
P(blurred impulse) > P(impulse): for an impulse, the blurred version has the cheaper sum of derivatives.
Courtesy of Anat Levin, CVPR 09 slides.
MAP_x,k Cont. – Blur Reduces Derivative Contrast
Noise and texture behave as impulses, so the total derivative contrast is reduced by blur: P(blurred real image) > P(sharp real image), i.e. the blurred explanation is cheaper.
Courtesy of Anat Levin, CVPR 09 slides.
MAP_x,k Reweighting Solution (High-Quality Motion Deblurring from a Single Image, Shan et al.)
Alternating optimization between x and k of the MAP_x,k minimization term, with iteratively reweighted penalty terms.
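A toy 1D sketch of the alternating scheme, not Shan et al.'s actual algorithm: the sparse, reweighted priors are replaced by a simple quadratic gradient penalty so that both sub-problems have closed-form Fourier-domain solutions, and every constant below is an illustrative choice.

```python
import numpy as np

def alternate_deblur_1d(y, ksize=9, iters=30, lam=10.0):
    """Toy alternating minimization of lam*||k*x - y||^2 + ||grad x||^2 (1D, circular).

    x-step: closed-form quadratic update in the Fourier domain.
    k-step: regularized least squares, then projection to a valid PSF.
    lam grows every iteration, a crude version of a 'cooling' schedule.
    """
    n = len(y)
    grad = np.zeros(n)
    grad[0], grad[-1] = 1.0, -1.0                       # circular forward difference
    G2 = np.abs(np.fft.fft(grad)) ** 2
    Y = np.fft.fft(y)

    k = np.zeros(n)
    k[0] = 1.0                                          # start from the delta kernel
    for _ in range(iters):
        K = np.fft.fft(k)
        # x-step: argmin_x  lam*||k*x - y||^2 + ||grad x||^2
        X = lam * np.conj(K) * Y / (lam * np.abs(K) ** 2 + G2)
        # k-step: argmin_k  ||x*k - y||^2 + eps*||k||^2
        K = np.conj(X) * Y / (np.abs(X) ** 2 + 1e-3)
        k = np.real(np.fft.ifft(K))
        k[ksize:] = 0.0                                 # restrict kernel support
        k = np.clip(k, 0.0, None)
        k /= k.sum() + 1e-12                            # non-negative, sums to 1
        lam *= 1.2                                      # weight the data term more and more
    K = np.fft.fft(k)
    X = lam * np.conj(K) * Y / (lam * np.abs(K) ** 2 + G2)
    return np.real(np.fft.ifft(X)), k
```

Left like this, the toy version tends to drift back toward the delta kernel, which is exactly the MAP_x,k failure discussed above; the spatial reweighting w_i and the λ schedule described on the next slides are what push the optimization away from that solution.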
MAP_x,k Reweighting – Blurred (figure)
MAP_x,k Reweighting – Recovered (figure)
MAP_x,k Reweighting – Cont.: Coarse-to-Fine Recovery (figure)
MAP_x,k Reweighting – Intuition
- w_i: a high penalty on smooth areas provides momentum for escaping the delta kernel.
- λ increases through the iterations, a "cooling effect" that increases the probability of halting: at λ = 0 a sharper image is still on the horizon, while at λ = ∞ further iterations are not possible.
Example 2
Output 2
Solution Approaches
A. Image Deblurring: PSF Estimation Using Sharp Edge Prediction (Neel Joshi, Richard Szeliski, David J. Kriegman)
B. MAP_x,k Solution Analysis: Understanding and Evaluating Blind Deconvolution Algorithms (Anat Levin, Yair Weiss, Fredo Durand, William T. Freeman)
C. Variational MAP_k Method: Removing Camera Shake from a Single Photograph (Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T. Roweis, William T. Freeman)
MAP_k Estimation
Given a blurred image y, find the kernel k maximizing p(k|y) ∝ p(k) ∫ p(y|x,k) p(x) dx, i.e. marginalize over the latent image instead of committing to a single x. The terms are, again, the convolution constraint, the sparse image prior, and a kernel prior; the sparse prior should favor sharper x explanations.
Superiority of MAP_k over MAP_x,k
Toy problem: y = k·x + n (scalar x and k).
- The joint distribution p(x, k|y) attains its maximum as x → 0, k → ∞.
- The marginal p(k|y) produces an optimum closer to the true k*.
- The uncertainty of p(k|y) shrinks when multiple observations y_j = k·x_j + n_j are available (see the sketch below).
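This is easy to reproduce numerically. The sketch below (assuming NumPy; the grid ranges, noise level, and Laplacian prior scale are illustrative) evaluates the joint posterior on a grid and compares its maximizer with that of the marginal p(k|y): the joint maximum runs off toward large k with x near zero, while the marginal peaks at a moderate k, much closer to the true kernel (still biased for a single observation, as the last point above notes).

```python
import numpy as np

sigma, b = 0.1, 1.0                  # noise std and Laplacian prior scale (illustrative)
k_true, x_true = 2.0, 0.7
y = k_true * x_true                  # a single (noise-free) observation

ks = np.linspace(0.1, 20.0, 400)
xs = np.linspace(-3.0, 3.0, 4001)
dx = xs[1] - xs[0]

K, X = np.meshgrid(ks, xs, indexing="ij")
log_joint = -(y - K * X) ** 2 / (2 * sigma ** 2) - np.abs(X) / b   # log p(y|x,k) + log p(x)

# MAP_x,k: maximize the joint density -> drifts to the largest k on the grid, x near 0
i, j = np.unravel_index(np.argmax(log_joint), log_joint.shape)
print("MAP_x,k :  k =", round(ks[i], 2), " x =", round(xs[j], 3))

# MAP_k: maximize the marginal p(k|y), i.e. integrate the joint over x
marginal = np.exp(log_joint).sum(axis=1) * dx
print("MAP_k   :  k =", round(ks[np.argmax(marginal)], 2), " (true k =", k_true, ")")
```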
Evaluation on 1D Signals
Estimators compared: exact MAP_k, the MAP_k variational approximation (Fergus et al.), MAP_k with a Gaussian prior, and MAP_x,k. MAP_x,k favors the delta solution; the MAP_k variants favor the correct solution, even under the wrong (Gaussian) prior.
Courtesy of Anat Levin, CVPR 09 slides.
Intuition: Dimensionality Asymmetry
- MAP_x,k estimation is unreliable: the number of measurements is always lower than the number of unknowns, #y < #x + #k (the blurred image y provides ~10^5 measurements, the sharp image x has ~10^5 unknowns, the kernel k only ~10^2).
- MAP_k estimation is reliable: for large images there are many measurements per unknown, #y >> #k.
Courtesy of Anat Levin, CVPR 09 slides.
Three Sources of Information
The model combines three ingredients: the likelihood p(y|b,x), the image prior p(x), and the blur prior p(b).
Courtesy of Rob Fergus slides.
Likelihood p(y|b,x): the blurred image y is the sharp image x convolved with the blur b, plus Gaussian noise.
Courtesy of Rob Fergus slides.
Image prior p(x): the heavy-tailed distribution of natural-image gradients, modeled as a mixture of Gaussians.
Courtesy of Rob Fergus slides.
Blur prior p(b): kernel entries are non-negative and sparse, modeled with a mixture of exponentials.
Courtesy of Rob Fergus slides.
The obvious thing to do: jointly maximize the posterior over the sharp image and the blur (MAP), which, as the previous analysis showed, favors the no-blur solution.
Courtesy of Rob Fergus slides.
Variational Bayesian approach: instead of a single point estimate, approximate the full posterior over the unknowns and take the blur kernel from that distribution.
Courtesy of Rob Fergus slides.
Variational Bayesian Methods
Variational Bayes (also known as ensemble learning) is a family of techniques for approximating the intractable integrals that arise in Bayesian inference and machine learning. It lower-bounds the marginal likelihood (the "evidence") of several models, with a view to performing model selection.
Setup of the Variational Approach
Approximate the posterior p(x, b|y) by a factorized distribution q(x) q(b) chosen to minimize the KL divergence to the true posterior; the kernel estimate is taken from q(b), and the latent image is then recovered by non-blind deconvolution.
Ensemble Learning for Blind Source Separation / J.W. Miskin, D.J.C. MacKay: originally applied to small synthetic blurs and cartoon images, extended here to large real-world blurs and to gradients of natural images.
Independent Factor Analysis / H. Attias.
An Introduction to Variational Methods for Graphical Models / M. Jordan et al.
Courtesy of Rob Fergus slides:
Example 1
Output 1
Example 2
Output 2
Example 3
Output 3
Achievements
- Works on real-world images.
- Deals with large camera motions (up to 60 pixels).
- Gets close to a practical, generic solution of an old problem.
Limitations
- Targeted at camera motion blur: no in-plane rotation, no motion within the picture, no out-of-focus blur.
- Manual input required: region of interest, kernel size and orientation, and other parameters (e.g. scale offset, kernel threshold, and 9 other semi-fixed parameters).
- Sensitive to image compression, noise (dark images), and saturation.
- Still contains artifacts (solvable by upgrading from Richardson–Lucy).
Evaluation
Cumulative histogram of deconvolution successes: bin_r = #{deconvolution error < r}, plotted as a success percentage. Methods compared: MAP_k with a Gaussian prior, Shan et al. SIGGRAPH08, Fergus variational MAP_k, and MAP_x,k with a sparse prior.
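A minimal sketch of how such a cumulative success curve is computed (assuming NumPy; the file names and threshold grid are hypothetical, and the error here stands for Levin et al.'s ratio between the deconvolution error with the estimated kernel and with the ground-truth kernel):

```python
import numpy as np

def cumulative_success(errors, thresholds):
    """bin_r = percentage of test images whose deconvolution error is below r."""
    errors = np.asarray(errors, dtype=float)
    return [100.0 * np.mean(errors < r) for r in thresholds]

# Hypothetical per-image error ratios for two methods over the 32 test images:
# errs_fergus = np.loadtxt("fergus_error_ratios.txt")
# errs_shan   = np.loadtxt("shan_error_ratios.txt")
# thresholds  = np.arange(1.0, 5.5, 0.5)
# print(cumulative_success(errs_fergus, thresholds))
# print(cumulative_success(errs_shan, thresholds))
```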
Ground-Truth Data Acquisition
4 images × 8 kernels = 32 test images. Data available online: http://www.wisdom.weizmann.ac.il/~levina/
Comparison
Figure panels: ground truth; Fergus et al. SIGGRAPH06 (MAP_k, variational approximation); Shan et al. SIGGRAPH08 (adjusted MAP_x,k); MAP_x,k; MAP_k with a Gaussian prior.
Summary: Quasi-MAP_k (Joshi) vs. Reweighted MAP_x,k (Shan) vs. Variational MAP_k (Fergus)
Distortion model: defocus blur with a simple PSF (Joshi); camera motion blur with a complex sparse PSF (Shan, Fergus).
User input: user-selected edge region (Joshi); region of interest (Fergus).
Optimization model: quasi-MAP_k (Joshi); MAP_x,k (Shan); variational Bayes for kernel estimation, equivalent to MAP_k (Fergus).
Degrees of freedom: O(k) (Joshi); O(k + x) (Shan); O(k + x + priors) (Fergus).
Scheme: gradient-based least squares (Joshi); alternating iterative (Shan); multiscale iterative with internal alternation (Fergus).
Conclusion
- Deblurring a single image is an under-constrained problem.
- Reduce the number of estimated parameters using priors and kernel marginalization.
- Separate the problem into kernel recovery followed by non-blind deconvolution.
- Existing challenges and potential research: solutions to spatially varying kernels; robustness to the user's parameters and initial priors.
Deblurring Is Under-Constrained
Deblurring a single image is an under-constrained problem: from the blurred image alone, both the recovered image and the recovered kernel must be estimated (figure).
Priors Do the Trick
An image prior constrains the solution, making it possible to recover the kernel from the blurred image (figure).
Kernel Marginalization
Marginalizing over the latent image under the image prior yields a reliable kernel estimate from the blurred image (figure).
Back to Non-Blind Deconvolution
Given the recovered kernel, the blurred image is deconvolved to obtain the recovered image (figure).
Existing Challenges and Potential Research
- Robustness to the user's parameters and initial priors.
- Solutions to spatially varying kernels.
Thank You! Eitan & Tomer
The End