1
Fast edge-directed single-image super-resolution
Mushfiqur Rouf¹, Dikpal Reddy², Kari Pulli², Rabab K. Ward¹. ¹University of British Columbia, ²Light Co.
Outline: an improved EDI; design an image prior; develop a primal-dual (PD) algorithm for SISR. (Describe the terms.)
2
Single image super-resolution
2×2 SISR, as we know it…
3
Fast edge-directed SISR
Combines a sparse-gradient prior with a "smooth contour" prior, at small overhead; the new prior is more general purpose than SISR.
SISR methods (can't begin to cover all the techniques…):
- Filtering / forward methods: bilinear, bicubic, …; anisotropic methods; deep learning methods.
- Inverse / reconstruction methods: total variation (TV) optimization; Markov random fields; bilateral filtering + optimization.
Forward vs. inverse: inverse methods use a global optimization; they are slower, but expected to distribute error.
4
Image formation model: first, formalize how the LR image is produced (the forward model). The observed image is the latent image passed through an anti-aliasing filter (convolution) and then downsampled, assuming Gaussian noise. Priors: sparse gradient, smooth contour.
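Written out, the forward model sketched on the slide takes the usual form; the symbols below are our labeling of the slide's terms, not necessarily the paper's notation:

$$y = S\,(h \ast x) + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I)$$

with latent image $x$, anti-aliasing filter $h$ (convolution), downsampling operator $S$, observed LR image $y$, and Gaussian noise $n$.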
5
Inverse problem formulation
The corresponding inverse problem becomes an L2 minimization: a data-fitting term relates the observed image to the latent image through the anti-aliasing convolution and downsampling. The problem is underdetermined, so we need priors; we use two. The sparse-gradient prior only sharpens the image across edges, so edges can come out jaggy. The smooth-contour prior instead smooths edge structure along the edge contours: no jaggy edges. The two priors are complementary, and that complementarity is the key. Questions we ask: at what cost? Is it worth the cost? Is the method useful?
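Under the Gaussian-noise forward model, data fitting is least squares; a sketch of the resulting objective, with the prior terms still abstract (the weights $\lambda_i$ are our placeholders):

$$\hat{x} = \arg\min_x \tfrac{1}{2}\,\|S(h \ast x) - y\|_2^2 + \lambda_1\, R_{\mathrm{sg}}(x) + \lambda_2\, R_{\mathrm{sc}}(x)$$

where $R_{\mathrm{sg}}$ is the sparse-gradient prior and $R_{\mathrm{sc}}$ the smooth-contour prior.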
6
Inverse problem formulation
With total variation on the gradient as the only prior (data fitting as before: observed image = downsampled, anti-aliasing-filtered latent image), this is the standard TV-only SR. As the second prior, why not a bilateral filter?
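The TV-only baseline, in the notation above:

$$\hat{x}_{\mathrm{TV}} = \arg\min_x \tfrac{1}{2}\,\|S(h \ast x) - y\|_2^2 + \lambda\,\|\nabla x\|_1$$

i.e. $R_{\mathrm{sg}}(x) = \|\nabla x\|_1$ and no smooth-contour term.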
7
Inverse problem formulation
Proposed: keep the data-fitting and sparse-gradient (total variation) terms, and add the smooth-contour prior, built from downsampling followed by anisotropic interpolation. Measuring this "anisotropic interpolatedness" needs to be fast. It combats jagginess and improves reconstruction with little computation overhead. That's all in theory…
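One plausible reading of the slide's "downsampling + anisotropic interpolation" construction, hedged because the slide does not spell the term out; $S_a$ is a downsampling operator and $A$ an anisotropic (edge-directed) upsampler:

$$R_{\mathrm{sc}}(x) = \big\| x - A(S_a x) \big\|^2$$

i.e. penalize the part of $x$ that edge-directed upsampling of its own downsampled copy cannot reproduce. The exact form used in the paper may differ.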
8
Smooth contour prior – at what cost?
Our improvement: the smooth-contour prior, built on edge-directed interpolation [Li and Orchard 2001], compared against the sparse-gradient prior alone (TV optimization): little computation overhead, improved reconstruction. Let's see what happens in practice… and here's some intuition as to why that happens…
9
Smooth contour prior: star-chart comparison of the LR input against bicubic, EDI, TV optimization, ours (sparse gradient and smooth contour), and the ground truth.
10
Inverse problem formulation
Back to the formulation: the data-fitting term (observed image vs. anti-aliasing convolution and downsampling of the latent image), the sparse-gradient prior (total variation on the gradient), and the smooth-contour prior (downsampling followed by anisotropic interpolation). Little computation overhead, improved reconstruction.
11
Choice of anisotropic interpolation
Goals: local calculations; direct estimation of the interpolation weights; a fast implementation. We choose [Li and Orchard 2001]; newer versions of the method are overkill for this purpose.
12
Intro to EDI [Li and Orchard 2001]
Edge-directed interpolation (EDI) detects the local "edge orientation" with sliding-window linear least-squares regressions, then interpolates along the edge [Li and Orchard 2001].
13
Intro to EDI [Li and Orchard 2001]
Two-step process for 2×2 upsampling: first interpolate the missing pixels that sit diagonally between four LR pixels, then fill the remaining missing pixels with the same scheme rotated 45°.
14
Intro to EDI [Li and Orchard 2001]
Sliding-window process: each output pixel gets its own least-squares fit over a small window of LR pixels.
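A minimal sketch of that sliding-window regression, assuming the classic [Li and Orchard 2001] setup: each LR pixel in a local window is regressed on its four diagonal neighbors, and by geometric duality the fitted weights interpolate the HR pixel sitting between four LR pixels. The function names, the 4×4 window, and the tiny ridge term are our choices, not the paper's:

```python
import numpy as np

def edi_weights(lr, i, j, win=4):
    """Fit 4 diagonal interpolation weights around LR pixel (i, j) by
    sliding-window linear least squares (interior pixels only; border
    handling omitted in this sketch)."""
    rows, rhs = [], []
    for di in range(-win // 2, win // 2):
        for dj in range(-win // 2, win // 2):
            y, x = i + di, j + dj
            # Regress each window pixel on its 4 diagonal neighbors.
            rows.append([lr[y - 1, x - 1], lr[y - 1, x + 1],
                         lr[y + 1, x - 1], lr[y + 1, x + 1]])
            rhs.append(lr[y, x])
    C = np.asarray(rows, dtype=np.float64)   # (win*win) x 4 design matrix
    d = np.asarray(rhs, dtype=np.float64)
    G = C.T @ C + 1e-6 * np.eye(4)           # tiny ridge keeps G invertible
    return np.linalg.solve(G, C.T @ d)

def edi_pixel(lr, i, j):
    """Estimate the HR pixel that falls diagonally between 4 LR pixels."""
    w = edi_weights(lr, i, j)
    quad = np.array([lr[i, j], lr[i, j + 1], lr[i + 1, j], lr[i + 1, j + 1]])
    return float(w @ quad)
```

Run naively, every call to edi_weights redoes sums that neighboring windows already computed; that redundancy is exactly what the speedup later in the talk targets.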
15
Intro to EDI [Li and Orchard 2001]
Downsides: edge-misestimation artifacts; fixed 2×2 upsampling (larger factors such as 4×4 require repeated application); the upsampled image is not sharp. Still, it performs very well where the edge estimates are accurate.
16
Our improvements to EDI
We wrap EDI up in a prior and pair it with a data-fitting term and a complementary prior, which removes the misestimation artifacts. We regularize the regression, and a speedup makes iterative application possible.
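"Regularized regression" most plausibly means a ridge penalty on the window fit, which tames the ill-conditioned windows behind the misestimation artifacts; the slide does not give the exact regularizer, so take this as the standard form:

$$\hat{a} = \arg\min_a \|C a - d\|_2^2 + \epsilon\,\|a\|_2^2 = (C^{\top} C + \epsilon I)^{-1} C^{\top} d$$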
17
EDI Speedup
The original EDI is too slow to use iteratively. We propose a speedup based on dynamic programming: remove the costly recalculation of sums shared by overlapping windows.
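A minimal sketch of one way that idea can look in practice, assuming the shared work is in the window sums behind $C^{\top}C$ and $C^{\top}d$: compute the neighbor products once and turn every window's sums into box sums via cumulative sums. This is our reading of "dynamic programming", not the paper's code:

```python
import numpy as np

def sliding_gram(lr, win=4):
    """Compute C^T C and C^T d for every window position at once.

    Each interior LR pixel contributes the 4-vector of its diagonal
    neighbors; per-window Gram matrices are then box sums of outer
    products, done with cumulative sums instead of per-window loops."""
    # 4 diagonal-neighbor planes for the interior pixels (borders omitted).
    n = np.stack([lr[:-2, :-2], lr[:-2, 2:],
                  lr[2:, :-2], lr[2:, 2:]])      # (4, H-2, W-2)
    d = lr[1:-1, 1:-1]                           # regression targets

    def box(a):
        # Box sum over win x win windows of the last two axes.
        s = a.cumsum(axis=-2).cumsum(axis=-1)
        s = np.pad(s, [(0, 0)] * (a.ndim - 2) + [(1, 0), (1, 0)])
        return (s[..., win:, win:] - s[..., :-win, win:]
                - s[..., win:, :-win] + s[..., :-win, :-win])

    G = box(n[:, None] * n[None, :])             # (4, 4, h, w): C^T C per window
    b = box(n * d)                               # (4, h, w):    C^T d per window
    return G, b
```

Each window's normal equations then cost a constant-size 4×4 solve, with no re-summation of the overlapping terms.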
18
Inverse problem formulation
Observed image = downsampling of the anti-aliasing-filtered (convolved) latent image: the data-fitting term. Sparse-gradient prior: total variation on the gradient. Smooth-contour prior: downsampling followed by edge-directed upsampling. Little computation overhead, improved reconstruction.
19
Primal dual optimization
The objective is convex, a mixture of L1 and L2 priors, so we use a primal-dual method.
20
Primal dual optimization
21
Primal dual optimization
Primal dual form
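The generic saddle-point form that primal-dual methods operate on; the slide's equation did not survive the transcript, so this is the standard template rather than the paper's exact instantiation:

$$\min_x \max_p \; \langle K x, p \rangle + G(x) - F^{*}(p)$$

with $K$ stacking the linear operators (gradient, blur-then-downsample), $G$ collecting the L2 terms, and $F^{*}$ the convex conjugate of the L1 terms.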
22
Primal dual optimization
Standard TV optimization vs. our prior; we build on the algorithm of [Chambolle and Pock 2010].
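For concreteness, a minimal sketch of the Chambolle-Pock iteration on the TV part alone (the ROF denoising form), just to show the primal-dual mechanics; the step sizes and the resolvent of the full SR data term would differ:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:] = px[:, 1:] - px[:, :-1]
    dy[0, :] = py[0, :]; dy[1:, :] = py[1:, :] - py[:-1, :]
    return dx + dy

def tv_primal_dual(f, lam=0.1, n_iter=200, tau=0.25, sigma=0.25):
    """min_u 0.5 * ||u - f||^2 + lam * TV(u) by primal-dual iteration."""
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # Dual ascent, then project p onto the radius-lam ball (prox of F*).
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)
        px /= scale; py /= scale
        # Primal descent plus the closed-form prox of the quadratic term.
        u_old = u
        u = (u + tau * (div(px, py) + f)) / (1.0 + tau)
        u_bar = 2 * u - u_old   # extrapolation step
    return u
```

For the full objective, the data term's prox would involve the blur and downsampling operators, and the smooth-contour term would add another dual variable.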
23
Results - dyadic 2x2 Ground truth
24
Results - dyadic 2x2, [He-Siu 2011]: PSNR 32.25, SSIM:
25
Results - dyadic 2x2, [Kwon et al. 2014]: PSNR 34.97, SSIM:
26
Results - dyadic 2x2, our method: PSNR 35.13, SSIM:
27
Results - nondyadic 3x3 Ground truth
28
Results - nondyadic 3x3, [Yang 2010]: PSNR 23.28, SSIM:
29
Results - nondyadic 3x3, [Kwon et al. 2014]: PSNR 24.17, SSIM:
30
Results - nondyadic 3x3, our method: PSNR 23.93, SSIM:
31
Comparisons with deep learning
4x4 Ground truth
32
Comparisons with deep learning
4x4, our method: PSNR 32.95, SSIM:
33
Comparisons with deep learning
4x4, [Dong et al. 2014]: PSNR 33.12, SSIM:
34
Comparisons with deep learning
4x4, [Timofte et al. 2014]: PSNR 33.28, SSIM:
35
Conclusions
Proposed a novel natural-image prior: fast, and complementary to the sparse-gradient prior. Any anisotropic upsampling method can be used, potentially even deep-learning methods (future work!).
Applications: SISR, and similar image-reconstruction problems (future work).
36
Fast edge-directed single-image super-resolution
Thanks!
Fast edge-directed single-image super-resolution
Mushfiqur Rouf¹, Dikpal Reddy², Kari Pulli², Rabab K. Ward¹. ¹University of British Columbia, ²Light Co.