Fast edge-directed single-image super-resolution
Mushfiqur Rouf¹, Dikpal Reddy², Kari Pulli², Rabab K. Ward¹
¹University of British Columbia  ²Light Co.

Outline:
- An improved EDI (edge-directed interpolation)
- Design of an image prior
- A primal-dual (PD) algorithm for SISR
Single-image super-resolution
- 2x2 SISR as we know it: recover a plausible high-resolution image from a single low-resolution input
Fast edge-directed SISR
- Combines two priors: sparse gradient and "smooth contour"
- Small computational overhead
- The prior is more general-purpose than SISR

SISR methods
- Filtering / forward methods: bilinear, bicubic, …; anisotropic methods; deep learning methods
- Inverse / reconstruction methods: total variation (TV) optimization; Markov random fields; bilateral filtering + optimization
- (Can't begin to cover all the techniques…)
- Forward vs. inverse: inverse problems use a global optimization; slower, but expected to distribute error
Image formation model
- Latent image → anti-aliasing filter (convolution) → downsampling → observed image
- First, formalize how the LR image is produced; assuming Gaussian noise, this gives the forward model
- Priors: sparse gradient, smooth contour
[http://www.imaging-resource.com/PRODS/pentax-k3/pentax-k3SELECTIVE_LPF.HTM]
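A minimal formalization of this slide's forward model, with symbols named here for illustration (y: observed LR image, x: latent HR image, H: anti-aliasing convolution, D: downsampling, n: noise):

\[
y = DHx + n, \qquad n \sim \mathcal{N}(0, \sigma^2 I).
\]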
Inverse problem formulation
- The corresponding inverse problem is an L2 minimization (data fitting); it is underdetermined, so we need priors
- Two complementary priors:
  - Sparse gradient: only sharpens the image across edges; edges can come out jaggy
  - Smooth contour: also smooths edge structure along the edge contours; no jaggy edges
- Complementarity is the key
- Questions we ask: at what cost? Is it worth the cost? Is the method useful?
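Making the Gaussian-noise assumption explicit, the MAP estimate takes the L2 data-fitting form sketched below; the symbols continue from the forward-model sketch above, and the prior terms are specified on the following slides:

\[
\hat{x} = \arg\min_x \; \tfrac{1}{2}\,\|DHx - y\|_2^2 + \text{priors}(x).
\]

Because downsampling discards samples, DH has a nontrivial null space, which is why the problem is underdetermined without priors.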
Inverse problem formulation: TV-only SR
- Data fitting + sparse gradient prior (total variation on the gradient): with TV alone, this is the standard TV-only SR
- Why not a bilateral filter as the second prior?
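With only the sparse gradient prior, the objective takes the standard TV-only SR shape; λ below is a regularization weight named here for illustration:

\[
\hat{x} = \arg\min_x \; \tfrac{1}{2}\,\|DHx - y\|_2^2 + \lambda\,\|\nabla x\|_1.
\]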
Inverse problem formulation: proposed
- Data fitting + sparse gradient (TV) + smooth contour ("anisotropic interpolatedness": gradient, downsampling, anisotropic interpolation)
- The anisotropic interpolation needs to be fast
- Combats jagginess; improved reconstruction with little computational overhead
- That's all in theory…
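As a sketch only, one plausible assembly of the terms this slide lists (gradient, downsampling, anisotropic interpolation) is the following; U (an anisotropic upsampling operator) and the weight μ are named here for illustration, and the paper's exact form may differ:

\[
\hat{x} = \arg\min_x \; \tfrac{1}{2}\,\|DHx - y\|_2^2 + \lambda\,\|\nabla x\|_1 + \mu\,\big\|\nabla\big(x - U(Dx)\big)\big\|_2^2.
\]

The smooth contour term is small when the image resembles an anisotropic interpolation of its own downsampled version, i.e., when edge contours are smooth and free of jaggies.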
Smooth contour prior – at what cost?
- Compared: edge-directed interpolation [Li and Orchard 2001], sparse gradient prior only (TV optimization), and our improvement (smooth contour)
- Little computational overhead; improved reconstruction
- Let's see what happens in practice, and some intuition as to why
Smooth contour prior
[Figure: star chart comparison; panels show the LR input, ground truth, bicubic, EDI, TV optimization, and ours (sparse gradient and smooth contour)]
Inverse problem formulation (recap)
- Back to the formulation: data fitting + sparse gradient (TV) + smooth contour (gradient, downsampling, anisotropic interpolation)
- Little computational overhead; improved reconstruction
Choice of anisotropic interpolation
- Goals: local calculations; direct estimation of interpolation weights; fast implementation
- We choose [Li and Orchard 2001]; newer versions of the method are overkill for our purpose
Intro to EDI [Li and Orchard 2001]
- Edge-directed interpolation: detects local "edge orientation" via sliding-window linear least-squares regressions, then interpolates along the edge
Intro to EDI [Li and Orchard 2001]
- A two-step process for 2x2 upsampling
Intro to EDI [Li and Orchard 2001]
- The sliding-window process [figure]
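To make the sliding-window regression concrete, here is a minimal NumPy sketch of NEDI's first step, following the description in [Li and Orchard 2001]; the 3x3 window size, the small ridge term eps (the original uses plain least squares), and the boundary handling are simplifying assumptions:

```python
import numpy as np

def nedi_2x_step1(lr, eps=1e-4):
    """Sketch of step 1 of NEDI [Li and Orchard 2001] 2x upsampling:
    fill the HR pixels with both coordinates odd from their four
    diagonal LR neighbors, using locally regressed weights."""
    h, w = lr.shape
    hr = np.zeros((2 * h, 2 * w))
    hr[0::2, 0::2] = lr          # LR samples sit on the even HR grid
    for i in range(2, h - 2):
        for j in range(2, w - 2):
            # 3x3 local window; each pixel in it is predicted from its
            # 4 diagonal neighbors, assuming the same edge orientation
            # holds across scales.
            ys, xs = np.mgrid[i - 1:i + 2, j - 1:j + 2]
            ys, xs = ys.ravel(), xs.ravel()
            C = np.stack([lr[ys - 1, xs - 1], lr[ys - 1, xs + 1],
                          lr[ys + 1, xs - 1], lr[ys + 1, xs + 1]], axis=1)
            r = lr[ys, xs]
            # Ridge-regularized normal equations (eps is our addition;
            # the original uses plain least squares).
            a = np.linalg.solve(C.T @ C + eps * np.eye(4), C.T @ r)
            # Interpolate the HR diagonal pixel from the 4 surrounding
            # LR samples, in the same geometric order as C's columns.
            neigh = np.array([lr[i, j], lr[i, j + 1],
                              lr[i + 1, j], lr[i + 1, j + 1]])
            hr[2 * i + 1, 2 * j + 1] = a @ neigh
    # Step 2 (omitted): the remaining HR pixels are filled the same way
    # on the 45-degree rotated lattice.
    return hr
```

The key assumption is scale invariance of the local covariance: the weights fitted at the LR scale are reused to interpolate the HR pixel between the same diagonal neighbors.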
Intro to EDI [Li and Orchard 2001]: downsides
- Edge misestimation artifacts
- Fixed 2x2 upsampling (larger factors such as 4x4 require repeated application)
- The upsampled image is not sharp
- Performs very well where the edge estimates are accurate
[4x4 example image: http://chiranjivi.tripod.com/EDITut.html]
Our improvements to EDI
- Wrapped EDI up in a prior, used with a data-fitting term and a complementary prior: removes misestimation artifacts
- Regularized regression
- Speedup, which makes iterative application possible
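The regularized regression can be read as a ridge-stabilized version of EDI's local least-squares fit; a sketch, with C the matrix of diagonal-neighbor samples in a window, r the window's pixel values, and λ a small ridge weight assumed here:

\[
a = (C^\top C + \lambda I)^{-1} C^\top r
\quad\text{instead of}\quad
a = (C^\top C)^{-1} C^\top r.
\]

In flat or noisy windows CᵀC is near-singular and the unregularized weights can blow up; damping them is a plausible mechanism for suppressing misestimation artifacts.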
EDI speedup
- The original EDI is too slow to use iteratively
- We propose a speedup via dynamic programming: remove costly recalculations over overlapping windows
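One standard way to realize this dynamic-programming reuse is a 2D cumulative-sum (summed-area) table, sketched below in NumPy; the function and its parameters are an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def box_sums(img, win):
    """Sum of `img` over every win x win window in O(1) per window,
    after one O(N) pass to build a cumulative-sum table."""
    s = np.cumsum(np.cumsum(img, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))   # zero border for clean differences
    return (s[win:, win:] - s[:-win, win:]
            - s[win:, :-win] + s[:-win, :-win])
```

Each entry of a window's normal equations is a sum, over the window, of a product of two shifted copies of the image; precomputing those product images once and calling box_sums per pair makes every window's regression setup O(1), removing the work shared by overlapping windows.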
Inverse problem formulation (recap)
- Data fitting + sparse gradient (TV on the gradient) + smooth contour (gradient, downsampling, edge-directed upsampling)
- Little computational overhead; improved reconstruction
Primal-dual optimization
- The objective is convex, a mixture of L1 and L2 terms → primal-dual method
Primal-dual optimization
- Rewrite the objective in primal-dual form
- The resulting updates extend standard TV optimization with our prior [Chambolle and Pock 2010]
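For concreteness, a minimal NumPy sketch of a Chambolle-Pock loop for the TV-only special case, min_x ½‖DHx − y‖² + λ‖∇x‖₁, with both the TV term and the data term dualized. The Gaussian blur width, step sizes, initialization, and iteration count are illustrative assumptions, and the paper's version additionally carries the smooth contour prior:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def grad(x):
    """Forward differences with Neumann boundary (last row/col zero)."""
    gx = np.zeros_like(x); gy = np.zeros_like(x)
    gx[:, :-1] = x[:, 1:] - x[:, :-1]
    gy[:-1, :] = x[1:, :] - x[:-1, :]
    return gx, gy

def div(gx, gy):
    """Negative adjoint of grad (backward differences)."""
    dx = np.zeros_like(gx); dy = np.zeros_like(gy)
    dx[:, 0] = gx[:, 0]; dx[:, 1:] = gx[:, 1:] - gx[:, :-1]
    dy[0, :] = gy[0, :]; dy[1:, :] = gy[1:, :] - gy[:-1, :]
    return dx + dy

def A(x, s, blur):
    """Forward model: anti-aliasing blur, then s x s downsampling."""
    return gaussian_filter(x, blur)[::s, ::s]

def At(z, s, blur, shape):
    """Adjoint: zero-fill upsampling, then the symmetric blur
    (self-adjoint up to boundary handling)."""
    up = np.zeros(shape)
    up[::s, ::s] = z
    return gaussian_filter(up, blur)

def tv_sr_primal_dual(y, s=2, blur=1.0, lam=0.05, n_iter=300):
    shape = (y.shape[0] * s, y.shape[1] * s)
    x = At(y, s, blur, shape) * s * s        # crude initialization
    xbar = x.copy()
    px = np.zeros(shape); py = np.zeros(shape)   # dual of the TV term
    q = np.zeros_like(y)                         # dual of the data term
    L = np.sqrt(8.0 + 1.0)    # bound on ||K|| for K = (grad, A)
    tau = sigma = 1.0 / L
    for _ in range(n_iter):
        # Dual ascent: project the TV dual onto the lam-ball ...
        gx, gy = grad(xbar)
        px += sigma * gx; py += sigma * gy
        scale = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2) / lam)
        px /= scale; py /= scale
        # ... and a closed-form prox for the quadratic data term.
        q = (q + sigma * (A(xbar, s, blur) - y)) / (1.0 + sigma)
        # Primal descent plus over-relaxation (theta = 1).
        x_prev = x
        x = x - tau * (-div(px, py) + At(q, s, blur, shape))
        xbar = 2.0 * x - x_prev
    return x
```

Dualizing the data term avoids the linear solve that a prox of ½‖Ax − y‖² would otherwise require; with τσ‖K‖² ≤ 1 for K = (∇, A), the iterates converge to a saddle point [Chambolle and Pock 2010].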
Results - dyadic 2x2 (PSNR / SSIM)
- [He-Siu 2011]:       32.25 / 0.9858
- [Kwon et al. 2014]:  34.97 / 0.9961
- Our method:          35.13 / 0.9928
(Ground truth shown for visual reference)
Results - nondyadic 3x3 (PSNR / SSIM)
- [Yang 2010]:         23.28 / 0.9041
- [Kwon et al. 2014]:  24.17 / 0.9218
- Our method:          23.93 / 0.9122
(Ground truth shown for visual reference)
Comparisons with deep learning - 4x4 (PSNR / SSIM)
- Our method:            32.95 / 0.9442
- [Dong et al. 2014]:    33.12 / 0.9504
- [Timofte et al. 2014]: 33.28 / 0.9513
(Ground truth shown for visual reference)
Conclusions
- Proposed a novel natural-image prior: fast, and complementary to the sparse gradient prior
- Any anisotropic upsampling method can be used; potentially deep learning methods? (future work)
- Applications: SISR; similar image reconstruction problems (future work)
Fast edge-directed single-image super-resolution
Thanks!
Mushfiqur Rouf¹, Dikpal Reddy², Kari Pulli², Rabab K. Ward¹
¹University of British Columbia  ²Light Co.