Jointly Optimized Regressors for Image Super-resolution
Dengxin Dai, Radu Timofte, and Luc Van Gool
Computer Vision Lab, ETH Zurich

The Super-resolution Problem
The LR image is modeled as an HR image that has been blurred, decimated, and corrupted by noise. The goal is to recover the HR image from the LR one. Interpolation merely aligns the coordinates; super-resolution must recover the high-frequency content.
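A minimal sketch of this image formation model in Python (assuming a Gaussian blur kernel; the kernel, scale factor, and noise level below are illustrative choices, not the paper's exact settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr, scale=3, blur_sigma=1.2, noise_sigma=0.01):
    """Simulate an LR observation from an HR image with values in [0, 1]."""
    blurred = gaussian_filter(hr, sigma=blur_sigma)            # blur
    lr = blurred[::scale, ::scale]                             # decimation
    lr = lr + np.random.normal(0.0, noise_sigma, lr.shape)     # additive noise
    return np.clip(lr, 0.0, 1.0)
```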

Why Image Super-resolution?
(1) For good visual quality.
[Example images of a kitten at low and high resolution, with captions: "This kitten made out of legos? They aren't cuddly at all!" vs. "This kitten is adorable! I want to adopt her and give her a good home!"]

Why Image Super-resolution?
(2) As a pre-processing component for other computer vision systems, such as recognition: features and models are often trained on images of normal resolution.
[Example images: low-resolution input vs. super-resolution result]

Example-based approaches
Learn the mapping from input to output from training examples. The ground truth is not available during testing, and the problem is highly ill-posed, so learning is the core part.

Core idea – patch enhancement
Learn the transformation function for small patches:
1. Less complex, tractable.
2. Better chance to find similar patterns among the exemplars.
Pipeline: interpolate the input, enhance each patch, then average the overlapping patches into the output.

Training data
LR/HR image training pairs are easy to create. Feature extraction on the LR and HR images yields matching patch pairs for learning.

Feature extraction (HR)
Down-sample the HR image, interpolate it back to full size, and subtract to obtain the high-frequency component. Patch size: 6x6, 9x9, or 12x12.

Feature extraction (LR)
Down-sample the HR image and interpolate to obtain the LR training input; the features are its gradient and Laplacian filter responses. Patch size: 6x6, 9x9, or 12x12.
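A minimal sketch of both feature extractors, assuming cubic-spline interpolation (scipy.ndimage.zoom) and simple derivative filters; the exact interpolation and filters in the paper may differ:

```python
import numpy as np
from scipy.ndimage import convolve, zoom

def hr_features(hr, scale=3):
    """High-frequency target: HR image minus its down-sampled-then-interpolated version."""
    lr = zoom(hr, 1.0 / scale, order=3)    # down-sample
    up = zoom(lr, scale, order=3)          # interpolate back to full size
    h, w = min(hr.shape[0], up.shape[0]), min(hr.shape[1], up.shape[1])
    return hr[:h, :w] - up[:h, :w]         # crop to a common size and subtract

def lr_features(lr, scale=3):
    """LR features: gradient and Laplacian responses of the interpolated image."""
    up = zoom(lr, scale, order=3)
    gx = convolve(up, np.array([[-1.0, 0.0, 1.0]]))        # horizontal gradient
    gy = convolve(up, np.array([[-1.0], [0.0], [1.0]]))    # vertical gradient
    lap = convolve(up, np.array([[0.0, 1.0, 0.0],
                                 [1.0, -4.0, 1.0],
                                 [0.0, 1.0, 0.0]]))        # Laplacian
    return np.stack([gx, gy, lap], axis=-1)                # per-pixel feature channels
```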

Learning methods
The transformation from LR patches to HR ones is a highly non-linear function. Related work:
kNN + Markov random field [Freeman et al. 00] and neighbor embedding [Chang et al. 04]: non-parametric.
Support vector regression [Ni et al. 07]: complex optimization.
Deep neural network [Dong et al. 14]: computationally heavy.
Simple functions [Yang & Yang 13] and anchored neighborhood regression [Timofte et al. 13]: a set of local (linear) functions; efficient, but the regressors are learned separately.

Differences to related approaches (simple functions [Yang & Yang 13] and ANR [Timofte et al. 13] vs. ours):
Goal: a set of local regressors (both).
Partition space: LR patches vs. regression functions.
Regressors: learned separately vs. learned jointly.
Number of regressors: 1024 (typical) vs. 32 (typical).

Our approach – Jointly Optimized Regressors
Learning: a set of local regressors that collectively yield the smallest error over all training pairs. The regressors are individually precise and mutually complementary.
Testing: each patch is super-resolved by its most suitable regressor, voted for by its nearest neighbors.
[diagram: each input patch is routed through one of regressors 1..O]

Our approach – learning
Initialization: separate the matching pairs into O clusters.
Two iterative steps (similar to k-means):
Update step: learn regressors that minimize the SR error over all pairs in each cluster.
Assignment step: assign each pair to the regressor yielding the least SR error.

Our approach – learning
Update step: learn one regressor per cluster by minimizing the SR error over its pairs, using ridge regression: R_o = argmin_R sum_{i in cluster o} ||y_i - R x_i||^2 + lambda ||R||^2, which has the closed-form solution R_o = Y_o X_o^T (X_o X_o^T + lambda I)^{-1}.

Our approach – learning
Assignment step: assign each pair to the regressor yielding the least SR error.
Alternate the assignment and update steps until convergence (about 10 iterations).
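A minimal sketch of this alternating optimization, with LR features X (d_lr x N) and HR targets Y (d_hr x N) stored as columns; the random initialization and the regularizer lam are illustrative stand-ins (the talk initializes by clustering the pairs):

```python
import numpy as np

def learn_jor(X, Y, num_regressors=32, iters=10, lam=0.1):
    """Jointly learn O linear regressors by alternating update and assignment."""
    d_lr, N = X.shape
    assign = np.random.randint(num_regressors, size=N)   # stand-in initialization
    R = [np.zeros((Y.shape[0], d_lr)) for _ in range(num_regressors)]
    for _ in range(iters):
        # Update step: one ridge regressor per cluster (closed form).
        for o in range(num_regressors):
            Xo, Yo = X[:, assign == o], Y[:, assign == o]
            if Xo.shape[1] == 0:
                continue                                  # keep the previous regressor
            A = Xo @ Xo.T + lam * np.eye(d_lr)
            R[o] = Yo @ Xo.T @ np.linalg.inv(A)
        # Assignment step: each pair goes to the regressor with the least SR error.
        errs = np.stack([((Y - Ro @ X) ** 2).sum(axis=0) for Ro in R])  # (O, N)
        assign = errs.argmin(axis=0)
    return R, errs.T   # errs.T: per-pair SR-error vectors, reused for the kd-tree vote
```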

Our approach – learning
After the iterations, each of the roughly 5 million LR training patches is associated with a vector giving the SR error of each of the O regressors on that patch. The LR patches are indexed with a kd-tree [Vedaldi and Fulkerson 08].
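A minimal sketch of the index, using SciPy's cKDTree in place of the VLFeat kd-tree used in the talk; feats and err_table are assumed names for quantities produced during training:

```python
from scipy.spatial import cKDTree

# feats: (N, d_lr) LR patch features of the training pairs
# err_table: (N, O) SR error of each of the O regressors on each training pair
tree = cKDTree(feats)   # supports fast k-nearest-neighbor queries at test time
```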

Our approach – testing
Interpolate the LR input, extract filtered patch features, and search the kd-tree for the k nearest training patches. The neighbors vote for the most suitable regressor via their SR-error vectors: similar patches share regressors.

Our approach – testing
Apply the selected ridge regressor to each patch to predict its high-frequency content, then average the overlapping patches and add the result to the interpolated image to form the output.
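A minimal sketch of this test-time step, reusing tree, err_table, and the regressors from the sketches above; patch extraction and the final overlap-averaging are left out, and k is an illustrative choice:

```python
import numpy as np

def enhance_patches(patches_lr, tree, err_table, regressors, k=5):
    """patches_lr: (M, d_lr) features of the test patches; returns (M, d_hr) HF patches."""
    _, nn = tree.query(patches_lr, k=k)       # indices of k nearest training patches
    votes = err_table[nn].sum(axis=1)         # (M, O): summed SR-error vectors of neighbors
    chosen = votes.argmin(axis=1)             # least-error regressor per test patch
    out = np.empty((patches_lr.shape[0], regressors[0].shape[0]))
    for i, o in enumerate(chosen):
        out[i] = regressors[o] @ patches_lr[i]   # predict the high-frequency patch
    return out
```

The predicted high-frequency patches are then averaged over their overlaps and added to the interpolated image to produce the final result.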

Results
Compared with 7 competing methods on 4 datasets (1 newly collected). Our method, though simple, consistently outperforms the others.
[chart: average PSNR (dB) on Set5, Set14, BD100, and SuperTex136]

Results
Better results with more iterations; better results with more regressors.
[plots: PSNR (dB) vs. the number of iterations and vs. the number of regressors]

Results
Better results with more training patch pairs.
[plot: PSNR (dB) vs. the number of training patches]

Results: factor x3
[Visual comparison: Ground truth / PSNR; Bicubic / 27.9 dB; Zeyde et al. / 28.7 dB; SRCNN / 29.0 dB; JOR / 29.3 dB]

Results: factor x4
[Visual comparison: Ground truth / PSNR; SRCNN / 31.4 dB; JOR / 32.3 dB]

Results: factor x4
[Visual comparison: Ground truth / PSNR; Bicubic / 32.8 dB; JOR / 33.7 dB]

Results: factor x4
[Visual comparison: Ground truth / PSNR; Bicubic / 25.5 dB; Zeyde et al. / 26.7 dB; ANR / 26.9 dB; SRCNN / 27.1 dB; JOR / 27.7 dB]

Conclusion
A new method that jointly optimizes regressors with the ultimate goal of image super-resolution.
The method, though simple, outperforms competing methods.
The code is available online.
A new dataset, SuperTex136: 136 textures for evaluating texture-recovery ability.

Thanks for your attention! Questions?
[Example: Ground truth / PSNR; Bicubic / 31.2 dB; SRCNN / 33.3 dB; JOR / 34.0 dB]

Reference
Dai, D., Timofte, R., and Van Gool, L. "Jointly Optimized Regressors for Image Super-Resolution." In Eurographics, 2015.