A Sampled Texture Prior for Image Super-Resolution
Lyndsey C. Pickup, Stephen J. Roberts and Andrew Zisserman
Robotics Research Group, University of Oxford

Goal

Super-resolution aims to produce a high-resolution image from a set of low-resolution images by recovering or inventing plausible high-frequency information. With more than one low-resolution image, high-frequency information can be recovered by exploiting the sub-pixel displacements between the inputs. Given some basic knowledge about the image subject, the super-resolution estimate can be improved further by incorporating information from similar images.

Introduction

There are K low-resolution images $y^{(k)}$, generated by

$$y^{(k)} = W^{(k)} x + \epsilon_G^{(k)},$$

where $\epsilon_G$ is a vector of i.i.d. Gaussian variables, $\epsilon_G \sim \mathcal{N}(0, \beta_G^{-1} I)$, and $\beta_G$ is the noise precision. If $x$ has N pixels and each $y^{(k)}$ has M pixels, $W^{(k)}$ is an $M \times N$ matrix which encompasses the warping, blurring and decimation of $x$ to produce $y^{(k)}$. The registration and blurring parameters of $y^{(k)}$ are denoted $\theta^{(k)}$.

[Figure: the low-resolution input images.] The goal is to reconstruct the high-resolution image from which these were generated. It is assumed that the low-resolution images were produced from this image by a process that blurs and subsamples it, and adds Gaussian noise.

Maximum Likelihood Solution

The maximum likelihood image maximises the data likelihood $\prod_k p(y^{(k)} \mid x, \theta^{(k)})$, which amounts to minimising $\sum_k \lVert y^{(k)} - W^{(k)} x \rVert^2$. The resulting images are very noisy because the problem can be ill-conditioned.

Maximum A Posteriori Solution

A standard remedy is a MAP estimate under a generic image prior such as the Huber prior,

$$p(x) \propto \exp\Big\{ -\nu \sum_{i,d} \rho(g_{d,i}, \alpha) \Big\},$$

where $g$ is an approximation to the image gradient at each pixel in four different directions (horizontal, vertical, and both diagonals), and $\rho$ is the Huber function, defined

$$\rho(z, \alpha) = \begin{cases} z^2, & |z| \le \alpha \\ 2\alpha |z| - \alpha^2, & \text{otherwise.} \end{cases}$$

The Sample-Based Prior

Since this is an image of a fairly specific type, it should be possible to do better. Rather than using a generic Huber-style prior based on image statistics, samples from similar images can be used to build a more appropriate prior. The prior is built using an approach that has been shown to work well in texture synthesis: rather than developing a parametric model, samples are taken from other images. Small square patches, with edge lengths of 5 to 15 pixels, are sampled from a training image containing textures similar to the images being super-resolved. For each patch, the neighbourhood information is stored along with the central pixel value. The patches are weighted by a 2D Gaussian so that pixels near the centre carry more weight. [Figure: a sample patch, showing the pixel neighbourhood and the central pixel.]

To evaluate this texture-based prior:

- Take each pixel $x_i$ in the super-resolution image $x$, and find its neighbourhood $R(x_i)$.
- Find the closest matching neighbourhood among the patches sampled from the training image.
- Look up the central pixel value of that neighbourhood, $L_{R(x_i)}$.
- Assume $x_i$ is Gaussian distributed about $L_{R(x_i)}$ with precision $\beta_T$.

This gives us:

$$p(x) \propto \prod_i \exp\Big\{ -\frac{\beta_T}{2} \big( x_i - L_{R(x_i)} \big)^2 \Big\}.$$

Solving for the super-resolution image

- Start with an initial estimate of $x$, for instance the average image. It is assumed that the image registration has already been estimated accurately.
- Sample patches from an image in the same class as the super-resolution image being recovered; store the neighbourhood and patch-centre data for use with the texture-based prior.
- Use scaled conjugate gradients (SCG) to optimise the MAP version of the super-resolution model incorporating the new $p(x)$ (see the sketch after the results below):

$$\hat{x} = \arg\min_x \Big\{ \sum_k \lVert y^{(k)} - W^{(k)} x \rVert^2 + \nu \sum_i \big( x_i - L_{R(x_i)} \big)^2 \Big\},$$

where $\nu$ encompasses the $\beta_T$ and $\beta_G$ terms from the prior and the data likelihood.

A few results

[Figure: super-resolved images for $\nu$ = 0.01, 0.04, 0.16 and 0.64, alongside the original image and the sample texture.] With a weak prior, the input data contains enough evidence for the estimate to look like the ground truth, but as $\nu$ increases, the prior term dominates.
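To make the procedure above concrete, here is a minimal Python sketch of the texture-prior lookup and the MAP objective. Everything in it is an illustrative assumption rather than the authors' code: the function names, the brute-force nearest-neighbour search, and the plain gradient-descent loop standing in for scaled conjugate gradients.

```python
import numpy as np

PATCH = 7          # edge length of the square sample patches (5-15 in the poster)
HALF = PATCH // 2

def gaussian_weights(size, sigma=2.0):
    """2D Gaussian weighting so pixels near the patch centre carry more weight."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def sample_patches(train_img):
    """Store (Gaussian-weighted neighbourhood, central pixel) pairs from a training image."""
    w = gaussian_weights(PATCH)
    neighbourhoods, centres = [], []
    for r in range(HALF, train_img.shape[0] - HALF):
        for c in range(HALF, train_img.shape[1] - HALF):
            patch = train_img[r - HALF:r + HALF + 1, c - HALF:c + HALF + 1]
            neighbourhoods.append((w * patch).ravel())
            centres.append(train_img[r, c])
    return np.array(neighbourhoods), np.array(centres)

def texture_lookup(x, neighbourhoods, centres):
    """For each pixel x_i, return L_{R(x_i)}: the central pixel of the
    closest-matching training neighbourhood (brute-force search)."""
    w = gaussian_weights(PATCH)
    L = x.copy()   # border pixels keep their value, so they incur zero penalty
    for r in range(HALF, x.shape[0] - HALF):
        for c in range(HALF, x.shape[1] - HALF):
            q = (w * x[r - HALF:r + HALF + 1, c - HALF:c + HALF + 1]).ravel()
            best = np.argmin(np.sum((neighbourhoods - q) ** 2, axis=1))
            L[r, c] = centres[best]
    return L

def super_resolve(ys, Ws, train_img, shape, nu=0.04, steps=50, lr=1e-3):
    """MAP estimate: data term + nu * texture-prior term, minimised by plain
    gradient descent (a stand-in for the SCG optimiser used in the poster)."""
    neighbourhoods, centres = sample_patches(train_img)
    # Crude stand-in for the average image used as the initial estimate.
    x = np.mean([(W.T @ y).reshape(shape) for y, W in zip(ys, Ws)], axis=0)
    for _ in range(steps):
        L = texture_lookup(x, neighbourhoods, centres)
        grad = 2.0 * nu * (x - L)                                     # prior term
        for y, W in zip(ys, Ws):
            grad += (2.0 * W.T @ (W @ x.ravel() - y)).reshape(shape)  # data term
        x = x - lr * grad
    return x
```

Each iteration re-evaluates the prior's target values $L_{R(x_i)}$ against the current estimate, so the prior adapts as the image sharpens; with a small $\nu$ the data term dominates, mirroring the $\nu$ sweep shown above.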
Using the wrong texture

[Figure: ground truth, low-resolution input, and training image for the text, brick and beads datasets.]

A plot of root-mean-square (RMS) errors obtained with the texture-based prior against those from the Huber-MAP images, where the error is calculated with respect to the ground-truth images, shows that in all cases the texture-based prior's image has a lower RMS error. For each image type, nine datasets were generated, using varying numbers of inputs and different noise levels. For each dataset, the best of several runs (while varying the free model parameters) is shown. Note that the training images do not overlap with the image sections used as ground truth. [Plots: RMS error over the nine datasets per image type; axes are noise (grey levels) and number of images.]

The model

Image $x$ → warp with parameters $\theta$ → blur by the point-spread function → decimate by the zoom factor → corrupt with additive Gaussian noise.
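For completeness, here is a matching sketch of the generative model above, under stated assumptions: a Gaussian point-spread function, an integer-pixel warp standing in for the sub-pixel registration (a real implementation needs genuine sub-pixel warps, since super-resolution relies on sub-pixel displacements), and hypothetical parameter names.

```python
import numpy as np

def generate_low_res(x, shift=(0, 0), psf_sigma=1.0, zoom=4, beta_G=400.0,
                     rng=np.random.default_rng(0)):
    """Simulate one low-resolution observation y^(k) from a high-resolution image x."""
    # Warp with parameters theta: an integer translation keeps the sketch short.
    warped = np.roll(x, shift, axis=(0, 1))

    # Blur by the point-spread function (separable Gaussian convolution).
    ax = np.arange(-3, 4)
    k = np.exp(-ax ** 2 / (2.0 * psf_sigma ** 2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, warped)
    blurred = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, blurred)

    # Decimate by the zoom factor.
    decimated = blurred[::zoom, ::zoom]

    # Corrupt with additive Gaussian noise of precision beta_G (variance 1/beta_G).
    return decimated + rng.normal(0.0, beta_G ** -0.5, size=decimated.shape)

# Example: a stack of K = 4 observations with different displacements.
# ys = [generate_low_res(x, shift=s) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

In the matrix form used earlier, these warp, blur and decimation steps compose into the single $M \times N$ matrix $W^{(k)}$, which the optimisation sketch above takes as given.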