Image Super-resolution via Sparse Representation

Image Super-resolution via Sparse Representation Jianchao Yang, John Wright, Yi Ma Coordinated Science Laboratory Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign SIAM Imaging Science Symposium, July 7, 2008

OUTLINE
- Super-resolution as sparse representation in a dictionary of raw image patches
- Solution via ℓ1-norm minimization
- Global consistency and feature selection
- Experiments: qualitative and quantitative comparisons to previous methods
- Conclusions and discussion

LEARNING-BASED SUPER-RESOLUTION – Problem formulation
Problem: given a single low-resolution input and a set of high-/low-resolution pairs of training patches sampled from similar images, reconstruct a high-resolution version of the input.
[Figure: training patches; low-resolution input; reconstructed output; original]
Advantage: more widely applicable than reconstructive (multi-image) approaches.
Difficulty: single-image super-resolution is an extremely ill-posed problem.

LEARNING-BASED SUPER-RESOLUTION – Prior work
How should we regularize the super-resolution problem?
- Markov random field [Freeman et al., IJCV '00]
- Primal sketch prior [Sun et al., CVPR '03]
- Neighbor embedding [Chang et al., CVPR '04]
- Soft edge prior [Dai et al., ICCV '07]
Our approach: high-resolution patches have a sparse linear representation with respect to an overcomplete dictionary of patches randomly sampled from similar images. The output high-resolution patch x satisfies x = D_h α for some coefficient vector α with very few nonzero entries, where D_h is the high-resolution dictionary.

LINEAR SPARSE REPRESENTATION – SR as compressed sensing
We do not directly observe the high-resolution patch x, but rather (features of) its low-resolution version:

    y = S x = S D_h α = D_l α,

where S is the downsampling/blurring operator and D_l = S D_h is the dictionary of low-resolution patches. The input low-resolution patch y thus provides linear measurements of the sparse coefficient vector α.
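
As a concrete illustration (not from the slides), here is a minimal numpy sketch of forming D_l by applying a blur-and-decimate operator S to every column of D_h; the Gaussian blur width, the 9 × 9 high-resolution patch size, and the 3× scale are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(patch_hr, scale=3, sigma=1.0):
    """Blur-and-downsample operator S applied to one high-res patch."""
    return gaussian_filter(patch_hr, sigma=sigma)[::scale, ::scale]

def build_low_res_dictionary(D_h, hr_size=9, scale=3):
    """Form D_l = S D_h by degrading each column of D_h.

    D_h: (hr_size*hr_size, K) array whose columns are vectorized
    high-resolution patches; the result's columns are the matching
    vectorized low-resolution patches.
    """
    cols = [degrade(D_h[:, k].reshape(hr_size, hr_size), scale).ravel()
            for k in range(D_h.shape[1])]
    return np.stack(cols, axis=1)
```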

LINEAR SPARSE REPRESENTATION – SR as compressed sensing
If we can recover the sparse solution α to the underdetermined system of linear equations D_l α = y, we can reconstruct the high-resolution patch as x = D_h α. Formally, we seek the sparsest solution:

    min ‖α‖₀ subject to D_l α = y,

with the convex relaxation

    min ‖α‖₁ subject to D_l α = y.

The relaxed problem can be efficiently solved by linear programming, and in many circumstances it recovers the sparsest solution [Donoho 2006, CPAM].
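
A minimal sketch of the ℓ1 relaxation as a linear program, using the standard split α = u − v with u, v ≥ 0; `basis_pursuit` is an illustrative name, and scipy's general-purpose LP solver stands in for whatever solver the authors used. A noise-tolerant variant would replace the equality constraint with ‖D_l α − y‖₂ ≤ ε:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||a||_1 subject to A a = y as a linear program.

    Split a = u - v with u, v >= 0, so that ||a||_1 = 1^T (u + v)
    and the equality constraint becomes [A, -A] [u; v] = y.
    """
    m, n = A.shape
    c = np.ones(2 * n)                # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:n], res.x[n:]
    return u - v
```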

ALGORITHM DETAILS – Enforcing patch consistency
Combining local (patch) estimates:
- Sample 3 × 3 low-resolution patches on a regular grid, allowing a 1-pixel overlap between adjacent patches.
- Enforce agreement between overlapping high-resolution reconstructions.
Solving simultaneously for the coefficients of all patches yields a large but sparse convex program, which is still too slow in practice.
Fast approximation: compute α for each patch in raster-scan order, enforcing consistency with the previously computed patch solutions:

    min ‖α‖₁ subject to ‖F D_l α − F y‖₂² ≤ ε₁ and ‖T D_h α − T′ x_prev‖₂² ≤ ε₂,

where T selects the overlap region of the candidate high-resolution patch, T′ selects the same region from the previously reconstructed patches x_prev, and F is a linear feature extraction operator (next slide). A sketch of folding the two constraints into a single system follows below.
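
One common way to realize this step, sketched under the assumption that the two quadratic constraints are folded into a single ℓ1-regularized least-squares objective (as in the companion CVPR '08 paper); `beta` and the function name are illustrative:

```python
import numpy as np

def stack_patch_system(F, D_l, D_h, y, T, w, beta=1.0):
    """Stack the fidelity and overlap constraints into one system,
    so a single l1-regularized least-squares solve handles both:

        min ||A a - b||_2^2 + lam * ||a||_1,
        A = [F @ D_l;  beta * T @ D_h],   b = [F @ y;  beta * w].

    T selects the pixels of the candidate high-resolution patch that
    overlap previously reconstructed patches; w = T' x_prev holds the
    already-computed values there; beta trades off the two terms.
    """
    A = np.vstack([F @ D_l, beta * (T @ D_h)])
    b = np.concatenate([F @ y, beta * w])
    return A, b
```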

ALGORITHM DETAILS – Feature extraction
F is a linear feature extraction operator. Here, F concatenates first and second image partial derivatives, computed from a bicubic interpolation of the low-resolution input. This choice:
- emphasizes the part of the signal that is most relevant for human perception and for predicting the high-resolution output;
- transforms the usual fidelity criterion into a more perceptually meaningful Mahalanobis distance.
The complete feature vector for each low-resolution patch is 384-dimensional.
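
A sketch of one plausible implementation of F; the derivative filter taps follow the companion CVPR '08 paper, but treating exactly these filters as the operator used in the talk is an assumption:

```python
import numpy as np
from scipy.ndimage import correlate1d

def extract_features(patch_up):
    """Derivative features of a bicubic-upsampled low-res patch.

    Applies first- and second-order derivative filters along both
    axes and concatenates the responses into one feature vector.
    """
    d1 = np.array([-1.0, 0.0, 1.0])             # first derivative
    d2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])   # second derivative
    responses = [
        correlate1d(patch_up, d1, axis=1),  # horizontal gradient
        correlate1d(patch_up, d1, axis=0),  # vertical gradient
        correlate1d(patch_up, d2, axis=1),  # horizontal curvature
        correlate1d(patch_up, d2, axis=0),  # vertical curvature
    ]
    return np.concatenate([r.ravel() for r in responses])
```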

SUPER-RESOLUTION VIA SPARSITY – Algorithm pseudocode
[Figure: algorithm pseudocode; reconstructed in the sketch below]
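
Since the pseudocode figure is unavailable, the following Python sketch reconstructs the per-patch loop from the preceding slides. `solve_l1` is a placeholder for any sparse coder (e.g., the `basis_pursuit` sketch above), and averaging overlapping pixels is a simplification of the raster-order consistency constraint:

```python
import numpy as np

def super_resolve(Y, D_l, D_h, solve_l1, scale=3, lr_patch=3, overlap=1):
    """Raster-order patch loop sketched from the preceding slides.

    Y        : low-resolution input image (2-D array)
    D_l, D_h : paired low-/high-resolution dictionaries
    solve_l1 : any sparse coder, solve_l1(D, y) -> coefficients
    """
    hr_patch = lr_patch * scale
    step = lr_patch - overlap
    H, W = Y.shape
    X = np.zeros((H * scale, W * scale))   # high-resolution estimate
    counts = np.zeros_like(X)              # overlap normalization
    for i in range(0, H - lr_patch + 1, step):
        for j in range(0, W - lr_patch + 1, step):
            y = Y[i:i + lr_patch, j:j + lr_patch].ravel()
            alpha = solve_l1(D_l, y)       # sparse code for this patch
            x = (D_h @ alpha).reshape(hr_patch, hr_patch)
            si, sj = i * scale, j * scale
            X[si:si + hr_patch, sj:sj + hr_patch] += x
            counts[si:si + hr_patch, sj:sj + hr_patch] += 1
    return X / np.maximum(counts, 1)       # average overlapping pixels
```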

RELATIONSHIP TO PREVIOUS WORK – Adaptivity, simplicity
Adaptivity of representation: ℓ1-minimization automatically selects the smallest number of training samples (the number of nonzero coefficients, ‖α‖₀) that can represent the input. This rectifies the overfitting and underfitting issues inherent in fixed-neighbor methods (e.g., Neighbor Embedding [Chang, CVPR '04]).
Simplicity of dictionary: sparsity in fixed bases (wavelets, curvelets) or learned bases (K-SVD, alternating minimization) has been applied extensively to image compression, denoising, and inpainting, and more recently to classification and categorization. For super-resolution, sparse representation in simple bases of randomly sampled patches already performs competitively.

EXPERIMENTAL SETUP: Dictionary preparation
Two training sets:
- Flower images: smooth textures, sharp edges
- Animal images: high-frequency textures
Randomly sample 100,000 high-resolution/low-resolution patch pairs from each set of training images.
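
A sketch of the sampling step; only the figure of 100,000 pairs per training set comes from the slide, while the patch size, scale, and blur-plus-decimation degradation model are assumptions carried over from the earlier sketches:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sample_patch_pairs(images, n_pairs=100_000, hr_patch=9, scale=3, seed=0):
    """Randomly sample co-located high-/low-resolution patch pairs."""
    rng = np.random.default_rng(seed)
    hi, lo = [], []
    for _ in range(n_pairs):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - hr_patch + 1)
        c = rng.integers(img.shape[1] - hr_patch + 1)
        p_hr = img[r:r + hr_patch, c:c + hr_patch]
        # degrade the high-res patch to get its low-res counterpart
        p_lr = gaussian_filter(p_hr, sigma=1.0)[::scale, ::scale]
        hi.append(p_hr.ravel())
        lo.append(p_lr.ravel())
    return np.stack(hi, axis=1), np.stack(lo, axis=1)  # columns of D_h, D_l
```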

QUALITATIVE COMPARISON: Flower, zoom by 3× (flower dictionary)
[Figure: low-resolution input; bicubic; neighbor embedding [Chang, CVPR '04]; our method; original]

QUALITATIVE COMPARISON: Girl, zoom by 3× (flower dictionary)
[Figure: low-resolution input; bicubic; neighbor embedding [Chang, CVPR '04]; our method; original]

QUALITATIVE COMPARISON: Parthenon, zoom by 3× (flower dictionary)
[Figure: input image; bicubic; neighbor embedding; our method]

QUALITATIVE COMPARISON: Raccoon, zoom by 3× (animal dictionary)
[Figure: low-resolution input; bicubic; neighbor embedding [Chang, CVPR '04]; our method]

FURTHER EXAMPLES: zoom by 3×
[Figure: input and output image pairs]

QUALITATIVE COMPARISON: Girl, zoom by 4× (flower dictionary)
[Figure: upsampled input; bicubic; MRF/BP [Freeman, IJCV '00]; soft edge prior [Dai, ICCV '07]; our method; original]

QUANTITATIVE COMPARISON: RMS error

    Image       Bicubic   Neighbor embedding   Our method
    Flower       3.51           4.20              3.23
    Girl         5.90           6.66              5.61
    Parthenon   12.74          13.56             12.25
    Raccoon      9.74           9.85              9.19

Our approach achieves lower RMS error than bicubic interpolation and neighbor embedding on every example tested.
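
For reference, a minimal sketch of the RMS error metric reported above; details such as border cropping or the color channel used for evaluation are not specified on the slide:

```python
import numpy as np

def rms_error(x_est, x_true):
    """Root-mean-square error between reconstruction and ground truth."""
    diff = x_est.astype(np.float64) - x_true.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```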

OTHER APPLICATIONS: Face hallucination
Jianchao Yang, Hao Tang, Thomas Huang, Yi Ma; appeared in ICIP '08.

CONCLUSIONS
Assumption: high-resolution image patches have a sparse representation in a dictionary of patches randomly sampled from similar images.
Super-resolution as sparse representation:
- Observe a small number of linear measurements (here, the low-resolution image).
- Recover the sparse representation via ℓ1 minimization.
- The framework can incorporate overlap constraints, etc.
Implications:
- Surprisingly good results (competitive with the state of the art) from a very simple algorithm.
- Randomly sampled patches provide an effective dictionary.

FUTURE PROBLEMS
- Connections to compressed sensing: what is the minimum patch size or feature-space dimension needed to recover the sparse representation?
- How much data: how many training samples are required to sparsify natural image categories? How restricted does the category have to be for the sparse representation to be recoverable by ℓ1-minimization?
- Combining dictionaries from multiple classes: simultaneous supervised image segmentation and super-resolution.

REFERENCES & ACKNOWLEDGMENTS
"Image super-resolution as sparse representation of raw image patches," CVPR 2008.
People:
- Prof. Thomas Huang, ECE, University of Illinois
- Prof. Yi Ma, ECE, University of Illinois
- Jianchao Yang, PhD student, ECE, University of Illinois
Funding: NSF EHS-0509151, NSF CCF-0514955, ONR YIP N00014-05-1-0633, NSF IIS-0703756

THANK YOU
Questions, please?
Image super-resolution via sparse representation, John Wright, 2008