Optimizing and Learning for Super-resolution
Lyndsey C. Pickup, Stephen J. Roberts & Andrew Zisserman
Robotics Research Group, University of Oxford
The Super-resolution Problem
Given a number of low-resolution images differing in:
- geometric transformations
- lighting (photometric) transformations
- camera blur (point-spread function)
- image quantization and noise
estimate a high-resolution image.
Low-resolution images 1–10
[Ten slides, each showing one low-resolution input frame.]
Super-Resolution Image
Generative Model
A high-resolution image, x, generates the low-resolution images y1, y2, y3, y4 through the mappings W1, ..., W4, which encode the registrations, lighting and blur.
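In standard notation this model can be written as follows (a sketch; the per-image photometric parameters λ_k, β_k and the Gaussian noise model are the usual assumptions for this kind of model, not details quoted verbatim from the talk):

\[
\mathbf{y}_k \;=\; \lambda_k W_k \mathbf{x} \;+\; \beta_k \mathbf{1} \;+\; \boldsymbol{\epsilon}_k,
\qquad \boldsymbol{\epsilon}_k \sim \mathcal{N}(\mathbf{0},\, \sigma^2 I), \qquad k = 1, \dots, K,
\]

where each W_k combines the geometric warp, the point-spread-function blur and the decimation down to the low-resolution grid.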
Generative Model
We have: the set of low-resolution input images, y.
We don't have: the geometric registrations, the point-spread function, or the photometric registrations.
Maximum a Posteriori (MAP) Solution
Standard method: compute the registrations from the low-res images, then solve for the SR image, x, using gradient descent. [Irani & Peleg '90, Capel '01, Baker & Kanade '02, Borman '04]
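In equations, this two-stage estimate is roughly the following (a sketch with the W_k held fixed after the registration stage and the photometric terms omitted for brevity; notation as in the generative model above):

\[
\hat{\mathbf{x}} \;=\; \arg\max_{\mathbf{x}} \; p(\mathbf{x}) \prod_{k=1}^{K} p(\mathbf{y}_k \mid \mathbf{x}, W_k)
\;=\; \arg\min_{\mathbf{x}} \; \frac{1}{2\sigma^2} \sum_{k=1}^{K} \lVert \mathbf{y}_k - W_k \mathbf{x} \rVert^2 \;-\; \log p(\mathbf{x}),
\]

which is differentiable in x and so can be minimized by gradient descent.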
What's new
We solve for the registrations and the SR image jointly: given the low-res images, y, we solve for the SR image, x, and the mappings, W, simultaneously. We also find appropriate values for the parameters in the prior term at the same time.
Hardie et al. '97 adjust the registration while optimizing the super-resolution estimate, but their exhaustive search limits them to translation only, and their simple smoothness prior softens image edges.
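Written out, the joint estimate is roughly (a sketch; α and ν are the Huber prior parameters introduced later in the talk):

\[
(\hat{\mathbf{x}}, \hat{W}_1, \dots, \hat{W}_K) \;=\; \arg\min_{\mathbf{x},\, \{W_k\}} \;
\frac{1}{2\sigma^2} \sum_{k=1}^{K} \lVert \mathbf{y}_k - W_k \mathbf{x} \rVert^2 \;-\; \log p(\mathbf{x} \mid \alpha, \nu),
\]

with α and ν themselves updated during the optimization.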
Overview of rest of talk
- Simultaneous approach: model details, initialisation technique, optimization loop
- Learning values for the prior's parameters
- Results on real images
Maximum a Posteriori (MAP) Solution
The forward model: take the high-resolution image x; warp it with parameters Φ; blur it by the point-spread function; decimate by the zoom factor; and corrupt with additive Gaussian noise to give the low-resolution image y (sketched in code below).
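A minimal NumPy/SciPy sketch of this forward model; the affine warp parameterization, Gaussian PSF width, zoom factor and noise level below are illustrative assumptions rather than the exact settings from the talk:

```python
import numpy as np
from scipy import ndimage

def forward_model(x, warp_matrix, warp_offset, psf_sigma=1.0, zoom=4, noise_std=0.01,
                  rng=np.random.default_rng(0)):
    """Generate one low-resolution image y from the high-resolution image x.

    x           : 2-D array, the high-resolution image.
    warp_matrix : 2x2 matrix, warp_offset : length-2 vector; an affine warp
                  standing in for the geometric registration parameters (Phi).
    psf_sigma   : std. dev. of an isotropic Gaussian point-spread function.
    zoom        : decimation factor between high- and low-resolution grids.
    noise_std   : std. dev. of the additive Gaussian observation noise.
    """
    # 1. Warp the high-resolution image with the registration parameters.
    warped = ndimage.affine_transform(x, warp_matrix, offset=warp_offset, order=1)
    # 2. Blur by the point-spread function.
    blurred = ndimage.gaussian_filter(warped, sigma=psf_sigma)
    # 3. Decimate by the zoom factor.
    decimated = blurred[::zoom, ::zoom]
    # 4. Corrupt with additive Gaussian noise.
    return decimated + noise_std * rng.standard_normal(decimated.shape)

# Example: simulate four low-res frames of a toy scene with small shifts.
x = np.zeros((200, 200))
x[80:120, 80:120] = 1.0                                     # toy high-resolution image
shifts = [(0.0, 0.0), (0.5, 1.0), (1.5, -0.5), (-1.0, 0.5)]
frames = [forward_model(x, np.eye(2), np.array(s)) for s in shifts]
```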
Details of Huber Prior
The Huber function ρ(z, α) is quadratic in the middle and linear in the tails, so the corresponding distribution p(z | α, ν) is like a heavy-tailed Gaussian.
[Figure: ρ(z, α) and p(z | α, ν); red curves for large α, blue for small α.]
This prior is applied to image gradients in the SR image estimate.
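For reference, the Huber function in its usual form, and the corresponding gradient prior (the exact normalization and the precise role of ν in the talk may differ slightly from this sketch):

\[
\rho(z, \alpha) \;=\;
\begin{cases}
z^2, & \lvert z \rvert \le \alpha, \\
2\alpha \lvert z \rvert - \alpha^2, & \lvert z \rvert > \alpha,
\end{cases}
\qquad
p(z \mid \alpha, \nu) \;\propto\; \exp\big(-\nu\, \rho(z, \alpha)\big),
\]

so small gradients are penalized quadratically (Gaussian-like) while large gradients, i.e. edges, are penalized only linearly.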
Details of Huber Prior
Advantages: simple, edge-preserving, and leads to a convex form for the MAP equations.
[Figure: solutions compared to the ground truth as α and ν vary, from (α=0.1, ν=0.4) through (α=0.05, ν=0.05) and (α=0.01, ν=0.01) to (α=0.01, ν=0.005), ranging from too much smoothing to too little, with sharper edges at intermediate settings.]
Advantages of Simultaneous Approach
- Learn from the lessons of bundle adjustment: improve results by optimizing the scene estimate and the registration together.
- Registration can be guided by the super-resolution model, not by errors measured between warped, noisy low-resolution images.
- Use a non-Gaussian prior which helps to preserve edges in the super-resolution image.
Remember, the classical approach is fix 'n' solve.
Overview of Simultaneous Approach
- Start from a feature-based, RANSAC-like registration between the low-res frames.
- Select a blur kernel, then use the average-image method to initialise the registrations and the SR image.
- Iterative loop: update prior values; update the SR estimate; update the registration estimate.
Initialisation
Use the average image as an estimate of the super-resolution image (see paper), minimizing the error between the average image and the low-resolution image set. Then use an early-stopped, iterative ML estimate of the SR image to sharpen up this initial estimate.
[Figure: average image vs. ML-sharpened estimate.]
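A minimal sketch of an average-image initialisation under simplifying assumptions (purely translational registrations and bilinear interpolation; the paper's exact construction differs):

```python
import numpy as np
from scipy import ndimage

def average_image_init(frames, shifts, zoom=4):
    """Rough SR initialisation: upsample each low-res frame to the SR grid,
    undo its (translational) registration, and average the aligned frames.

    frames : list of 2-D low-resolution images, all the same size.
    shifts : list of (dy, dx) translations on the SR grid, one per frame.
    zoom   : resolution ratio between the SR grid and the low-res grid.
    """
    accum = None
    for y, (dy, dx) in zip(frames, shifts):
        up = ndimage.zoom(y, zoom, order=1)                # bilinear upsample to the SR grid
        aligned = ndimage.shift(up, (-dy, -dx), order=1)   # undo the estimated shift
        accum = aligned if accum is None else accum + aligned
    return accum / len(frames)
```

An early-stopped gradient descent on the data term (the ML estimate) then sharpens this average image, as described above.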
Optimization Loop
1. Update the prior's parameter values (next section).
2. Update the estimate of the SR image.
3. Update the estimate of the registration and lighting values, which parameterize W.
Repeat until converged. A sketch of this loop follows.
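In outline, the loop looks something like this; the four callables are hypothetical placeholders for the sub-optimizations described above, not implementations from the paper:

```python
def joint_map_loop(frames, x, registrations, prior_params,
                   update_prior, update_sr, update_registration, objective,
                   max_iters=50, tol=1e-4):
    """Alternate the three updates of the simultaneous approach until the
    MAP objective stops improving.

    frames        : the low-resolution input images y.
    x             : current SR image estimate (from the average-image initialisation).
    registrations : current geometric/photometric parameters of the mappings W.
    prior_params  : current Huber prior parameters (alpha, nu).
    The remaining arguments are callables implementing the individual updates.
    """
    prev = float("inf")
    for _ in range(max_iters):
        prior_params = update_prior(frames, x, registrations, prior_params)   # cross-validation step
        x = update_sr(frames, x, registrations, prior_params)                 # descend MAP objective in x
        registrations = update_registration(frames, x, registrations)         # refine W parameters
        current = objective(frames, x, registrations, prior_params)
        if prev - current < tol:                                              # converged
            break
        prev = current
    return x, registrations, prior_params
```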
Joint MAP Results
[Figure: fixed-registration MAP vs. joint MAP reconstructions, shown for decreasing prior strength.]
Learning Prior Parameters α, ν
A first idea: split the low-res images into two sets, use the first set to obtain an SR image, and measure the error on the validation set. But what if one of the validation images is mis-registered?
Instead, we select pixels from across all of the images, choosing differently at each iteration. We compute an SR estimate using the unmarked pixels, then use the forward model to compare that estimate to the held-out (red) validation pixels.
To update the prior parameters:
1. Re-select a cross-validation pixel set.
2. Run the super-resolution image MAP solver for a small number of iterations, starting from the current SR estimate.
3. Predict the low-resolution pixels of the validation set, and measure the error.
4. Use gradient descent to minimise this error with respect to the prior parameters.
A sketch of this update follows.
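A rough sketch of such an update step; the two callables, the hold-out fraction, the finite-difference gradients and the step sizes are illustrative assumptions, not the exact procedure from the paper:

```python
import numpy as np

def update_prior_params(frames, x, regs, alpha, nu, solve_sr, predict_pixels,
                        holdout_frac=0.1, step=0.1, n_steps=5,
                        rng=np.random.default_rng(0)):
    """One cross-validation update of the Huber prior parameters (alpha, nu).

    solve_sr(frames, use_mask, regs, alpha, nu, x_init) -> SR image estimated
        from the un-held-out pixels only (a few MAP iterations from x_init).
    predict_pixels(x, regs) -> forward-model predictions of every low-res pixel,
        flattened into one vector in the same order as the stacked frames.
    """
    y = np.concatenate([f.ravel() for f in frames])
    holdout = rng.random(y.size) < holdout_frac          # the "red" validation pixels

    def cv_error(a, v):
        # Re-estimate the SR image without the held-out pixels, then measure the
        # prediction error on exactly those pixels via the forward model.
        x_hat = solve_sr(frames, ~holdout, regs, a, v, x)
        return np.mean((predict_pixels(x_hat, regs)[holdout] - y[holdout]) ** 2)

    # Crude gradient descent on log-parameters, with finite-difference gradients.
    log_a, log_v = np.log(alpha), np.log(nu)
    for _ in range(n_steps):
        e0 = cv_error(np.exp(log_a), np.exp(log_v))
        grad_a = (cv_error(np.exp(log_a + 1e-3), np.exp(log_v)) - e0) / 1e-3
        grad_v = (cv_error(np.exp(log_a), np.exp(log_v + 1e-3)) - e0) / 1e-3
        log_a -= step * grad_a
        log_v -= step * grad_v
    return np.exp(log_a), np.exp(log_v)
```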
Results: Eye Chart
[Figure: the MAP version (fixing the registrations, then super-resolving) vs. the joint MAP version with adaptation of the prior's parameter values.]
Results: Groundhog Day
The blur estimate can still be altered to change the SR result. More ringing and artefacts appear in the regular MAP version.
[Figure: regular MAP vs. simultaneous results for blur radius 1, 1.4 and 1.8.]
Lola Rennt
Real Data: Lola Rennt
[Four result slides on this sequence.]
Conclusions
- Joint MAP solution: better results by optimizing the SR image and the registration parameters simultaneously.
- Learning prior values: preserve image edges without having to estimate image statistics in advance.
- DVDs: automatically zoom in on regions, with registrations up to a projective transform and an affine lighting model.
- Further work: marginalize over the registration (see NIPS 2006).