Sparsity-based Image Interpolation With Nonlocal Autoregressive Modeling

Weisheng Dong (董伟生)
School of Electronic Engineering, Xidian University, Xi'an, China
Sample codes available: http://see.xidian.edu.cn/faculty/wsdong

W. Dong, L. Zhang, R. Lukac, and G. Shi, "Sparse representation based image interpolation with nonlocal autoregressive modeling," IEEE Trans. Image Process., vol. 22, no. 4, Apr. 2013.

Outline
- Background: image interpolation
  - Edge-directed methods
  - Sparsity-based methods
- Sparse image interpolation with nonlocal autoregressive modeling
  - The nonlocal autoregressive modeling
  - The clustering-based sparsity regularization
- Experimental results
- Conclusions

Notes: First, I will briefly introduce the classic image interpolation methods. Then I will introduce the two key components of the proposed method, i.e., the nonlocal autoregressive modeling and the clustering-based sparsity regularization, and finally report the experimental comparison results.

Image Upconversion is Highly Needed
- Display small images / SD videos on HD devices
- Related applications: deinterlacing

Challenge
[Figure: original HR image; LR image; 2X upscaling result with jagged artifacts]
Challenge: how can we upconvert the image without annoying jagged artifacts?

Previous Work: Edge-based Methods
- Linear interpolators (bilinear, bicubic): blurred edges, annoying jagged artifacts
- Edge-directed interpolators (EDI) [TIP'95]: interpolate along edge directions; edge directions are difficult to estimate from the LR image
- New EDI (NEDI) [TIP'01] and soft-decision adaptive interpolation (SAI) [TIP'08]: local autoregressive (AR) model of the LR image; ghost artifacts due to wrong AR models in smoothly varying regions

Notes: As introduced in Yaniv's talk, image interpolation is very challenging. When there is no low-pass filtering before downsampling, the super-resolution problem becomes harder; this setting is usually called image interpolation: the low-resolution image is obtained by direct downsampling, with no smoothing filter. The problem has been extensively studied in the past decade. Classic methods include linear interpolation, e.g., bilinear and bicubic. For better results, edge-directed methods interpolate along edges, e.g., the well-known NEDI method and the SAI method; SAI was the previous state-of-the-art method. Sparsity-based methods, such as the sparse mixing estimators of Stéphane Mallat, have also been proposed, and we have previously proposed an effective image interpolation method using the CSR model.

Artifacts of Edge-based Methods
[Figure: Original; Bicubic; NEDI [TIP'01]; SAI [TIP'08]]

Previous Works: Sparsity-based Methods
- Dictionary learning based method [TIP'10]; nonlocal method [ICCV'09]
- Both fail when the LR image is directly downsampled
[Figure: Bicubic (32.07 dB) vs. ICCV09 (29.87 dB); Bicubic (23.48 dB) vs. ICCV09 (21.88 dB)]

Previous Works: Sparsity-based Methods
- Upscaling via solving a sparse coding problem of the form α̂ = argmin_α ‖y − DΦ∘α‖₂² + λ‖α‖₁, with x̂ = Φ∘α̂
- This fails too, producing ringing and zipper artifacts
[Figure: Original; DCT (32.49 dB); PCA (32.50 dB)]

Notes: This figure shows interpolated images produced by the L1-norm sparsity regularization methods. Sub-figure (a) is the original image; (b) is the result recovered using the DCT basis; (c) is the HR image recovered using local PCA bases. The L1-norm sparsity methods fail to reconstruct a satisfactory HR image; there are many jaggy artifacts in the interpolated images.

Coherence Issue of the Sparsity Method
Two fundamental premises of sparse recovery:
- Incoherent sampling: the sampling operator D and the dictionary Φ should be incoherent. The coherence value is μ(D, Φ) = √n · max_{k,j} |⟨d_k, φ_j⟩|
- Sparsity: the original image should have a sparse representation over Φ
Problem: the downsampling matrix D is often coherent with common dictionaries, e.g., wavelets and the K-SVD dictionary.

Notes: According to compressive sensing theory, for a faithful reconstruction of the original image x from the linear measurements y, the following conditions should be satisfied. First, the degradation matrix, here the downsampling matrix D, should be incoherent with the dictionary; the coherence value between them can be computed by the formula above. Second, the original image should have a sparse representation under the dictionary Φ. But in image interpolation, the downsampling matrix is usually coherent with common bases or dictionaries, e.g., wavelets and the K-SVD dictionary. Therefore, we cannot directly use the sparse / CSR model for image interpolation. A small numerical sketch follows.

D. Donoho and M. Elad, PNAS, 2003. E. Candès et al., "An introduction to compressive sampling," IEEE SPM, 2008.
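To make the coherence issue concrete, here is a minimal numerical sketch of μ(D, Φ) = √n · max_{k,j} |⟨d_k, φ_j⟩| for a direct 2X downsampler and an orthonormal 2D DCT dictionary on 8x8 patches. The construction (DCT dictionary, patch size, sampling pattern) is an illustrative assumption, not the paper's exact setup.

```python
import numpy as np
from scipy.fft import dct

def mutual_coherence(D, Phi):
    """mu(D, Phi) = sqrt(n) * max_{k,j} |<d_k, phi_j>|, with rows of D and
    columns of Phi normalized to unit length; mu lies in [1, sqrt(n)]."""
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
    Pn = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
    n = Phi.shape[0]
    return np.sqrt(n) * np.abs(Dn @ Pn).max()

# 2D separable orthonormal DCT dictionary for 8x8 patches (n = 64 pixels).
n1 = 8
C = dct(np.eye(n1), norm='ortho', axis=0)   # 1D orthonormal DCT matrix
Phi = np.kron(C, C)                          # 64 x 64 orthonormal dictionary

# Direct 2X downsampling: keep every other pixel of the 8x8 patch.
keep = [i * n1 + j for i in range(0, n1, 2) for j in range(0, n1, 2)]
D = np.eye(n1 * n1)[keep]                    # 16 x 64 sampling matrix

print(mutual_coherence(D, Phi))  # well above 1: D is coherent with Phi
```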

Contributions
- A new image upscaling framework combining edge-based and sparsity-based methods
  - Suppresses the artifacts of edge-based methods using sparse regularization
  - Overcomes the coherence issue via nonlocal autoregressive modeling
- A structured sparse regularization model
  - Improves the sparse regularization performance

Local Autoregressive (AR) Modeling
- Local AR image modeling: each pixel is predicted as a linear combination of its local neighbors, x_i = Σ_{j∈N(i)} a_j x_j + e_i
- The AR coefficients a are computed by least squares
- Exploits the local image structure
- AR-based image interpolation methods:
  - NEDI [TIP 2001] (over 1300 citations)
  - SAI [TIP 2008] (previous state-of-the-art method)

Notes: The autoregressive model is a classic image model, widely used in image processing, e.g., image compression and image interpolation. In AR modeling, we assume that each pixel can be well predicted as a linear combination of its local neighbors. The regression coefficients can be computed by the least-squares method. The AR model is effective in exploiting local image structure and has been successfully used in image interpolation, e.g., the NEDI method and the soft-decision adaptive interpolation method. A least-squares sketch of the coefficient estimation follows.
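The following is a minimal sketch of NEDI-style AR coefficient estimation for one pixel, assuming a 4-tap diagonal AR model fitted over a local training window of the LR image; the window size and neighbor layout are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def estimate_ar_coeffs(lr, i, j, win=3):
    """Fit a 4-tap diagonal AR model around LR pixel (i, j) by least squares.

    Each pixel y in a (2*win+1)^2 training window is regressed on its four
    diagonal neighbors: y ~ a1*NW + a2*NE + a3*SW + a4*SE.
    Assumes (i, j) is far enough from the image border.
    """
    rows, targets = [], []
    for p in range(i - win, i + win + 1):
        for q in range(j - win, j + win + 1):
            # four diagonal neighbors of (p, q)
            rows.append([lr[p - 1, q - 1], lr[p - 1, q + 1],
                         lr[p + 1, q - 1], lr[p + 1, q + 1]])
            targets.append(lr[p, q])
    C = np.asarray(rows, dtype=float)   # training matrix
    y = np.asarray(targets, dtype=float)
    # least-squares AR coefficients: a = argmin ||C a - y||^2
    a, *_ = np.linalg.lstsq(C, y, rcond=None)
    return a

# Usage: an HR pixel sitting between four LR pixels can be interpolated by
# applying the same coefficients to its diagonal LR neighbors (NEDI's
# geometric-duality assumption).
```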

Nonlocal Self-similarity

Notes: The local AR model is very effective in exploiting local image structures. On the other hand, it is well known that natural images contain rich self-similar structures.

Nonlocal AR Modeling (NARM)
- Nonlocal regression: each pixel is approximated by a weighted combination of its nonlocal similar neighbors, x_i ≈ Σ_j w_i^(j) x_i^(j)
- Patch matching: nonlocal similar neighbor selection
- The weights are computed by regularized least squares
- Nonlocal AR modeling of the whole image: x = Sx + e_x

Notes: To exploit both local and nonlocal redundancies, we extend the local AR model to the nonlocal autoregressive model. In natural images, for each patch we can find many local and nonlocal patches similar to it; each pixel can then be well approximated by its local and nonlocal neighbors, i.e., x_i equals a weighted sum of its nonlocal neighbors, and the overall image can be expressed as x = Sx + e_x. The weights are computed by the regularized least-squares method, and the resulting systems can be solved efficiently with a conjugate gradient (CG) method. The nonlocal AR model is used to improve the observation model, as formulated on the next slide. A sketch of the weight computation follows.
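A minimal sketch of the NARM weight computation for one pixel, assuming the K most similar patches are found by Euclidean patch matching and the weights solve a Tikhonov-regularized least-squares fit; the patch size, K, search radius, and regularization constant are illustrative assumptions.

```python
import numpy as np

def narm_weights(img, i, j, psize=7, K=10, search=20, eps=1e-3):
    """Nonlocal AR weights for pixel (i, j): find the K patches most
    similar to the patch centered at (i, j), then solve a regularized
    least-squares fit of the center patch by its matched neighbors."""
    h = psize // 2
    ref = img[i - h:i + h + 1, j - h:j + h + 1].ravel()

    # patch matching inside a search window (excluding the pixel itself)
    cands = []
    for p in range(max(h, i - search), min(img.shape[0] - h, i + search + 1)):
        for q in range(max(h, j - search), min(img.shape[1] - h, j + search + 1)):
            if (p, q) == (i, j):
                continue
            patch = img[p - h:p + h + 1, q - h:q + h + 1].ravel()
            cands.append((np.sum((patch - ref) ** 2), p, q, patch))
    cands.sort(key=lambda t: t[0])
    top = cands[:K]

    # regularized least squares: w = argmin ||ref - X w||^2 + eps*||w||^2
    X = np.stack([t[3] for t in top], axis=1)    # (psize^2, K)
    A = X.T @ X + eps * np.eye(K)
    w = np.linalg.solve(A, X.T @ ref)
    coords = [(t[1], t[2]) for t in top]         # row of S: weights w at coords
    return coords, w
```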

NARM Based Image Interpolation
- Improved objective function: embed the NARM into the observation model and recover x = Φ∘α by solving an equality-constrained sparse coding problem of the form min_α ‖α‖₁ s.t. y = DSΦ∘α (with the ℓ₁ term later replaced by the CSR penalty)
- S imposes a structural constraint on Φ
- The benefit of NARM: the coherence value between the new sampling matrix DS and Φ is significantly decreased
- Coherence values (8x8 patch): wavelets: 1.20~4.77; local PCA: 1.05~4.08

Notes: By introducing the nonlocal AR model into the observation model, the proposed objective function for image interpolation becomes an equality-constrained minimization problem. Here, the matrix S acts like a conventional blurring or compressive sensing kernel, but it has a different physical meaning: it depends on the image content and is iteratively updated using the recovered images. With the nonlocal AR model, the coherence values between the new sampling matrix DS and the dictionary are significantly decreased, so the sparse estimation becomes much more stable. The regularization term used here is the CSR penalty, and the dictionaries are local PCA dictionaries.

NARM Based Image Interpolation
Local PCA bases:
[Figure: five test cases (a)-(e) with their local PCA bases]
(a) μ₁=3.78 vs. μ₂=1.60; (b) μ₁=3.28 vs. μ₂=1.80; (c) μ₁=3.22 vs. μ₂=2.08; (d) μ₁=6.41 vs. μ₂=1.47; (e) μ₁=7.50 vs. μ₂=1.69
μ₁: coherence values between D and Φ; μ₂: coherence values between DS and Φ

Structural Sparse Regularization
- Conventional sparsity regularization: min_α ‖y − DSΦ∘α‖₂² + λ‖α‖₁
- Cannot exploit the structural dependencies between nonzero coefficients
- Clustering-based sparse representation (CSR) [Dong, Zhang, et al., TIP 2013]: unifies structural clustering and sparse representation in a variational framework

Notes: In addition to the coherence issue, designing an appropriate sparsity regularizer is also very important. Using the nonlocal AR model, the L1-norm sparsity-based image interpolation problem can be formulated as above. However, it is well known that the L1-norm regularizer cannot exploit the structural dependencies between nonzero coefficients. In this work, we propose to use the clustering-based sparse representation to better regularize the solution space.

Clustering-based Sparse Representation
Motivation: nonzero sparse coefficients are NOT randomly distributed
[Figure: original natural images (first row); distributions of the sparse coefficients associated with the 3rd atom of the K-SVD dictionary (second row)]

Notes: The motivation behind our work lies in the fact that the nonzero sparse coefficients are not randomly distributed; instead, they exhibit repetitive structures in the sparse domain. The first row shows the original natural images; the second row plots the distributions of the sparse coefficients of the K-SVD dictionary (only the coefficients associated with the 3rd atom are plotted). From this figure, we can see that the nonzero coefficients are locally structured and also exhibit the nonlocal self-similarity property. Unfortunately, most existing sparse models do not exploit the structured correlation between the nonzero coefficients. Since the nonzero coefficients exhibit nonlocal repetitive patterns, clustering provides a convenient tool for exploiting such nonlinear correlation.

Clustering-based Sparse Representation
- The clustering-based regularization exploits self-similarity: learn centroids μ_k from the image patches and enforce similarity between each patch and its cluster centroid, Σ_k Σ_{i∈C_k} ‖x_i − μ_k‖₂²
- Unifying the clustering-based sparsity and the learned local sparsity: the centroids are also represented over the dictionary, μ_k = Φβ_k, so the objective combines a local sparsity term with the clustering-based regularization

Notes: To exploit the structural information of the nonzero coefficients, we introduce a clustering-based regularization. We use a clustering algorithm to learn the centroids μ_k from the image patches and enforce the similarity between each centroid and the corresponding patches; the centroids can also be represented with the dictionary Φ. To see how the clustering promotes sparsity, we unify the clustering-based regularization term with the sparse representation model. An intuitive interpretation of the resulting objective function is that the sparse coefficients are encoded with respect to the centroids.

Clustering-based Sparse Representation
- Final CSR objective function: min_{α,β} ‖y − DSΦ∘α‖₂² + λ‖α‖₁ + η Σ_k Σ_{i∈C_k} ‖α_i − β_k‖₁
- Uses the unitary property of the dictionary
- What does the CSR model mean?
  - Encodes the sparse codes with respect to the learned exemplars
  - Unifies dictionary learning and structural clustering in a variational framework

Notes: We arrive at the final objective function in two steps. First, we assume the dictionary is unitary, so Φ can be removed from the clustering-based regularization term. Second, inspired by the success of L1 regularization in compressive sensing, we replace the L2 norm in the clustering-based regularization term with the L1 norm. This yields a double-headed L1-minimization problem, which is the final objective function of the proposed CSR model. From it we can see that dictionary learning and structural clustering are unified in a variational framework, and the sparse codes are encoded with respect to the learned exemplars.

Bayesian Interpretation of CSR
- The connection between sparse representation and Bayesian denoising, e.g., wavelet soft-thresholding
- The connection between CSR and the MAP estimator: a Gaussian likelihood term plus a Laplacian prior term

Notes: To better understand the proposed CSR model, we give a Bayesian interpretation. The connection between sparse representation and Bayesian denoising has been well established over the past decades: the sparse representation model is in fact equivalent to a MAP estimator whose first term is the Gaussian likelihood and whose second term is the Laplacian prior. The basic idea behind the CSR model is to treat the centroids of the K clusters as hidden variables of the sparse coefficients; we can then formulate a MAP estimator whose first term is the Gaussian likelihood and whose second term is the joint prior distribution of the sparse codes and the learned centroids.

Bayesian Interpretation of CSR
- Factorization of the joint prior term: P(α, β) = P(γ)P(β), where γ = α − β is the prediction residual / noise
- Assumes that γ and β are independent

Notes: In general, the joint prior probability density function (PDF) of α and β is very hard to estimate. In this work, we use a structural constraint to approximate it. The basic idea is to introduce γ, the residual of predicting α from β; we can then rewrite the joint PDF as P(γ)P(β). Since this prediction can be viewed as another level of sparse coding, γ is approximately independent of the other variable. We choose to model γ and α with i.i.d. Laplacian distributions, which lets us write the joint prior model explicitly.

Iterative Reweighted CSR
- The final MAP estimator yields the exact objective function of the CSR model
- The iterative reweighted CSR:
  - Adaptively estimates λ and η for each local sparse coefficient
  - Updates λ and η iteratively

Notes: By substituting the likelihood term and the prior term into the MAP estimator, we obtain the exact objective function of the CSR model. From the MAP estimator, the two regularization parameters λ and η can be written in terms of the estimated standard deviations of the sparse coefficients and of the prediction residuals. In the traditional sparse representation framework, we usually have to tune the sparsity regularization parameters manually; in contrast, with this Bayesian interpretation we can adaptively compute the two parameters in an iterative process, as sketched below.
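A minimal sketch of the adaptive parameter update. It assumes the standard MAP-derived reweighting rule for Gaussian noise with Laplacian priors, λ_i = 2√2·σ_n²/σ_{α,i} (used in the authors' related work); the constant, the per-coefficient variance estimation across similar patches, and the array layout are assumptions for illustration.

```python
import numpy as np

def update_params(alpha, beta, sigma_n, eps=1e-8):
    """Iteratively reweighted regularization parameters from the MAP view.

    With Gaussian noise (std sigma_n) and i.i.d. Laplacian priors on the
    coefficients alpha and on the residuals gamma = alpha - beta, the MAP
    estimator gives per-coefficient weights
        lambda_i = 2*sqrt(2)*sigma_n**2 / sigma_alpha_i
        eta_i    = 2*sqrt(2)*sigma_n**2 / sigma_gamma_i,
    with the standard deviations estimated from the current iterate.
    alpha, beta: (num_similar_patches, num_coeffs) arrays for one cluster.
    """
    gamma = alpha - beta
    sigma_alpha = np.std(alpha, axis=0) + eps   # per-coefficient estimate
    sigma_gamma = np.std(gamma, axis=0) + eps
    lam = 2 * np.sqrt(2) * sigma_n ** 2 / sigma_alpha
    eta = 2 * np.sqrt(2) * sigma_n ** 2 / sigma_gamma
    return lam, eta
```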

The Proposed Objective Function
- Adaptive selection of the dictionary: local PCA
- Variable splitting: introduce the auxiliary whole-image variable x and alternate between the x-subproblem and the α-subproblem
- β_k update: recompute the cluster exemplars from the current sparse codes

Notes: Since we are using a patch-based sparse model, we introduce the auxiliary variable x into the objective function. Here x denotes the desired high-resolution image, and α_i denotes the sparse code of patch x_i with respect to the dictionary Φ_k. With the auxiliary variable, the objective splits into two subproblems: the x-subproblem, an equality-constrained quadratic optimization problem, and the α-subproblem, a CSR sparse coding problem. Both subproblems can be solved easily.

Alternating Optimization Algorithm
- α-subproblem: for each i, minimize the data term plus λ‖α_i‖₁ + η‖α_i − β_k‖₁
- Closed-form solution: a bi-variate shrinkage operator, sketched below
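A minimal sketch of a bi-variate shrinkage step, assuming the per-coefficient subproblem min_a ½(a − u)² + λ|a| + η|a − b|. Since that objective is convex and piecewise quadratic, its minimizer is either a stationary point of one smooth piece or one of the two kinks, so evaluating a six-point candidate set is exact. This is an illustrative solver, not necessarily the paper's exact operator.

```python
import numpy as np

def bivariate_shrink(u, b, lam, eta):
    """Minimize f(a) = 0.5*(a - u)**2 + lam*|a| + eta*|a - b| elementwise.

    f is convex, piecewise quadratic, with kinks at 0 and b. On each smooth
    piece the stationary point is u - lam*s1 - eta*s2, where s1 = sign(a)
    and s2 = sign(a - b); evaluating f at the four stationary candidates
    plus the two kinks and keeping the best gives the exact minimizer.
    u, b: 1-D arrays of equal length (lam, eta scalar or same length).
    """
    u = np.asarray(u, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    cands = np.stack([np.zeros_like(u), b,
                      u - lam - eta, u - lam + eta,
                      u + lam - eta, u + lam + eta])
    f = (0.5 * (cands - u) ** 2 + lam * np.abs(cands)
         + eta * np.abs(cands - b))
    return cands[np.argmin(f, axis=0), np.arange(u.size)]
```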

Alternating Optimization Algorithm
- x-subproblem: an equality-constrained quadratic problem, solved with the alternating direction method of multipliers (ADMM)

Alternating Optimization Algorithm
- The inner linear systems are solved by a conjugate gradient method, as sketched below
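A minimal sketch of a CG solve for the x-update inside ADMM, assuming a quadratic of the form ‖y − DSx‖₂² + τ‖x − x̃‖₂², whose normal equations are ((DS)ᵀ(DS) + τI)x = (DS)ᵀy + τx̃. The operator A = DS is applied matrix-free; the exact subproblem in the paper differs in its constraint handling, so this illustrates the CG step only.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_x_cg(A_mul, At_mul, y, x_tilde, tau, n):
    """Solve ((DS)^T (DS) + tau*I) x = (DS)^T y + tau * x_tilde by CG.

    A_mul(x) applies A = D @ S (NARM followed by downsampling) to a
    length-n vector; At_mul(r) applies A^T. Both are matrix-free callables,
    so the large NARM matrix S never has to be formed densely.
    """
    normal = LinearOperator(
        (n, n), matvec=lambda x: At_mul(A_mul(x)) + tau * x, dtype=float)
    rhs = At_mul(y) + tau * x_tilde
    x, info = cg(normal, rhs)
    if info != 0:
        raise RuntimeError("CG did not converge")
    return x
```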

Overall Interpolation Algorithm
[Algorithm flowchart]
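The algorithm figure did not survive the transcript. The following high-level skeleton of the loop is pieced together from the preceding slides (NARM update, clustering with local PCA dictionaries, alternating x/α updates, iterative reweighting). All building blocks are placeholders supplied via `ops`, and the iteration scheduling is an assumption.

```python
def narm_interpolate(y, scale, ops, outer_iters=5, inner_iters=30):
    """High-level NARM-based interpolation loop (illustrative skeleton).

    `ops` bundles placeholder implementations of the building blocks
    sketched on earlier slides; none of the names below are the paper's.
    """
    x = ops.bicubic_upscale(y, scale)              # initial HR estimate
    for _ in range(outer_iters):
        S = ops.build_narm(x)                      # nonlocal AR model: x ~ Sx
        clusters, dicts = ops.cluster_patches(x)   # local PCA dictionaries
        beta = ops.init_exemplars(x, clusters, dicts)
        for _ in range(inner_iters):
            alpha = ops.sparse_code(x, dicts)              # patch codes
            lam, eta = ops.update_params(alpha, beta)      # reweighting
            alpha = ops.bivariate_shrink(alpha, beta, lam, eta)
            beta = ops.update_exemplars(alpha, clusters)
            x = ops.solve_x(y, S, alpha, dicts)            # ADMM + CG
    return x
```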

Exp. Results (scaling factor = 2)
[Figure: Original; NEDI (29.36 dB); SAI (30.76 dB); Proposed (31.72 dB)]
[Figure: Original; NEDI (22.97 dB); SAI (23.78 dB); Proposed (24.79 dB)]

Notes: This figure shows image interpolation results of the proposed method and some competing methods; the scaling factor in these two examples is 2. SAI is the previous state-of-the-art edge-guided image interpolation method. The proposed method outperforms the previous edge-guided methods; the PSNR gain over SAI can be up to 1 dB.

Exp. Results (scaling factor = 2)
[Figure: Original; NEDI (27.36 dB); SAI (29.17 dB); Proposed (30.30 dB)]
[Figure: Original; NEDI (33.85 dB); SAI (34.13 dB); Proposed (34.46 dB)]

Exp. Results (scaling factor = 3)
[Figure: Original; Bicubic (23.48 dB); ScSR (23.84 dB); Proposed (25.57 dB)]
[Figure: Original; Bicubic (30.14 dB); ScSR (30.00 dB); Proposed (31.16 dB)]

Notes: This figure shows interpolation results for a scaling factor of 3. Previous edge-guided methods are generally designed for a scaling factor of 2 and are therefore not compared here. The proposed method still produces much sharper edges than the other methods when the scaling factor increases.

Exp. Results (scaling factor = 3)
[Figure: Original; Bicubic (21.85 dB); ScSR (21.93 dB); Proposed (23.33 dB)]
[Figure: Original; Bicubic (32.07 dB); ScSR (32.29 dB); Proposed (34.80 dB)]

Conclusions
- A new image upconversion framework combining an edge-based interpolator with sparse regularization
- A nonlocal AR model is proposed for edge-based interpolation; it increases the stability of the sparse reconstruction
- Clustering-based sparsity regularization is adopted to exploit the structural dependencies between sparse coefficients

Notes: To conclude: in this work, locally learned sparse coding and structural clustering are unified in a variational framework. A Bayesian interpretation is given for the proposed CSR model, and this interpretation extends CSR into an iterative reweighted CSR model. To efficiently solve the resulting double-headed L1-minimization problem, we extended the conventional univariate shrinkage operator and developed a bi-variate shrinkage operator. The experimental results show that the proposed method outperforms previous edge-guided and sparsity-based interpolation methods, with PSNR gains over SAI of up to 1 dB.

References
W. Dong, L. Zhang, R. Lukac, and G. Shi, "Sparse representation based image interpolation with nonlocal autoregressive modeling," IEEE Trans. Image Process., vol. 22, no. 4, Apr. 2013.
Y. Romano, M. Protter, and M. Elad, "Single image interpolation via adaptive nonlocal sparsity-based modeling," IEEE Trans. Image Process., vol. 23, July 2014.
X. Zhang and X. Wu, "Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation," IEEE Trans. Image Process., vol. 17, no. 6, 2008.
X. Li and M. Orchard, "New edge-directed interpolation," IEEE Trans. Image Process., vol. 10, no. 10, 2001.
J. Yang, J. Wright, et al., "Image super-resolution via sparse representation," IEEE Trans. Image Process., 2010.
W. Dong, X. Li, et al., "Sparsity-based image denoising via dictionary learning and structural clustering," IEEE CVPR, 2011.
G. Shi, W. Dong, X. Wu, and L. Zhang, "Context-based adaptive image resolution upconversion," Journal of Electronic Imaging, vol. 19, 2010.

Thanks for your attention! Questions?