
A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding
Wangmeng Zuo, Deyu Meng, Lei Zhang, Xiangchu Feng, David Zhang
ICCV 2013
wmzuo@hit.edu.cn, Harbin Institute of Technology

Overview
- From L1-norm sparse coding to Lp-norm sparse coding
- Existing solvers for Lp-minimization
- Generalized shrinkage / thresholding (GST) function
- Algorithm and analysis
- Connections with soft/hard-thresholding functions
- Generalized Iterated Shrinkage Algorithms
- Experimental results

Overcomplete Representation
Overcomplete representations arise throughout compressed sensing, image restoration, image classification, machine learning, …
With an overcomplete dictionary (more atoms than signal dimensions), the linear system y = Ax has infinitely many solutions x. Which one is optimal?

L0-Sparse Coding
Impose a sparsity prior (constraint) on x: the sparser, the better.
Problems: Is the sparsest solution unique? How can we obtain the optimal solution?
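In standard notation (a restatement of the usual formulation; the slide's own equations did not survive the transcript), L0-sparse coding takes one of two forms:

```latex
\min_{\mathbf{x}} \|\mathbf{x}\|_0 \;\;\text{s.t.}\;\; \mathbf{y} = \mathbf{A}\mathbf{x}
\qquad\text{or}\qquad
\min_{\mathbf{x}} \tfrac{1}{2}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda\|\mathbf{x}\|_0,
```

where ||x||_0 counts the nonzero entries of x.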

Theory: Uniqueness of Sparse Solution (L0)
L0-minimization is nonconvex and NP-hard in general. Practical greedy algorithms: matching pursuit (MP), orthogonal matching pursuit (OMP).

Convex Relaxation: L1-Sparse Coding
Problems: When do L1- and L0-sparse coding have the same solution? What algorithms are available for L1-sparse coding?

Theory: Uniqueness of Sparse Solution (L1)
Under the Restricted Isometry Property (RIP), L1-minimization recovers the sparse solution. The relaxed problem is convex, and various algorithms have been proposed.
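For reference, the standard RIP definition (not spelled out on the slide): A satisfies the RIP of order k with constant δ_k ∈ (0, 1) if, for every k-sparse vector x,

```latex
(1 - \delta_k)\,\|\mathbf{x}\|_2^2 \;\le\; \|\mathbf{A}\mathbf{x}\|_2^2 \;\le\; (1 + \delta_k)\,\|\mathbf{x}\|_2^2.
```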

Algorithms for L1-Sparse Coding
- Iterative shrinkage/thresholding algorithm (ISTA)
- Augmented Lagrangian method
- Accelerated proximal gradient
- Homotopy
- Primal-dual interior-point method
- …
Allen Y. Yang, Zihan Zhou, Arvind Ganesh, Shankar Sastry, and Yi Ma. Fast l1-minimization algorithms for robust face recognition. IEEE Transactions on Image Processing, 2013.

Lp-norm Approximation
L0-norm: the number of non-zero values. L1-norm: the convex envelope of the L0-norm. The Lp-norm (0 < p < 1) sits between the two and approximates the L0-norm increasingly well as p → 0.
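In formulas, the family in question is (standard definitions):

```latex
\|\mathbf{x}\|_0 = \#\{i : x_i \neq 0\}, \qquad
\|\mathbf{x}\|_p^p = \sum_i |x_i|^p \;\;(0 < p < 1), \qquad
\|\mathbf{x}\|_1 = \sum_i |x_i|,
```

with |x_i|^p → 1{x_i ≠ 0} entrywise as p → 0.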

Theory: Uniqueness of Sparse Solution (Lp)
A weaker restricted isometry property is sufficient to guarantee perfect recovery in the Lp case.
R. Chartrand and V. Staneva, "Restricted isometry properties and nonconvex compressive sensing", Inverse Problems, vol. 24, no. 035020, pp. 1–14, 2008.

Existing Lp-sparse coding algorithms
- Analytic solutions: only available for special cases, e.g., p = 1/2 or p = 2/3.
- IRLS, IRL1, ITM_Lp: may fail to converge to the globally optimal solution even for the simplest (scalar) problem.
- Lookup table (LUT): efficient, but requires pre-computing and storing the tables.

IRLS for Lp-sparse Coding
Alternate until convergence: (1) solve the weighted least-squares problem x^(t+1) = argmin_x ||y − Ax||² + λ Σ_i w_i^(t) x_i²; (2) update the weights, w_i^(t+1) = ((x_i^(t+1))² + ε)^(p/2 − 1).
M. Lai, J. Wang. An unconstrained lq minimization with 0 < q < 1 for sparse solution of under-determined linear systems. SIAM Journal on Optimization, 21(1):82–101, 2011.

IRL1 for Lp-Sparse Coding
Alternate until convergence: (1) solve the weighted L1 problem x^(t+1) = argmin_x ||y − Ax||² + λ Σ_i w_i^(t) |x_i|; (2) update the weights, w_i^(t+1) = p(|x_i^(t+1)| + ε)^(p − 1).
E. J. Candes, M. Wakin, S. Boyd. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications, 14(5):877–905, 2008.

ITM_Lp for Lp-Sparse Coding
A thresholding-based iterative selection procedure: below a threshold the operator outputs zero; above it, the output is the root of the stationarity equation x + λp x^(p−1) = |y|.
Y. She. Thresholding-based iterative selection procedures for model selection and shrinkage. Electronic Journal of Statistics, 3:384–415, 2009.

[Figure: scalar example with p = 0.5, λ = 1, and y = 1.3, illustrating the behavior of the above solvers.]

Generalized Shrinkage / Thresholding
Keys of soft-thresholding: a thresholding rule (entries below the threshold are set to zero) and a shrinkage rule (surviving entries are shrunk toward zero).
To generalize soft-thresholding: What is the thresholding value for Lp? How should the shrinkage rule be modified?
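As a concrete reference point, the soft-thresholding operator being generalized can be written in a few lines (a minimal NumPy sketch; the function name is illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(y, lam):
    """Soft-thresholding: proximal operator of lam * |x|.

    Thresholding rule: entries with |y| <= lam map to 0.
    Shrinkage rule: surviving entries move toward 0 by lam.
    """
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```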

Motivation
[Figure: the scalar Lp objective plotted for (a) y = 1, (b) y = 1.19, (c) y = 1.3, (d) y = 1.5, and (e) y = 1.6.]

Determining the threshold
Three conditions characterize the threshold: the first derivative of the objective at the nonzero extreme point is zero; the second derivative there is positive; and the objective value at the nonzero extreme point equals the objective value at zero.
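Working these conditions out for the scalar problem min_x ½(x − y)² + λ|x|^p yields the threshold below (as derived in the ICCV 2013 paper; restated here as a sketch):

```latex
\tau_p^{GST}(\lambda) \;=\; \bigl(2\lambda(1-p)\bigr)^{\frac{1}{2-p}} \;+\; \lambda p\,\bigl(2\lambda(1-p)\bigr)^{\frac{p-1}{2-p}},
```

which reduces to λ at p = 1 and to √(2λ) at p = 0.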

Determining the shrinkage operator
For |y| above the threshold, compute the shrinkage by a fixed-point iteration: initialize x^(0) = |y|; for k = 0, 1, …, J, update x^(k+1) = |y| − λp (x^(k))^(p−1); output sign(y) · x^(J+1). A sketch follows.
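Combining the threshold with this fixed-point iteration, the GST operator can be sketched as follows (a minimal scalar NumPy version; the function name and the default number of inner iterations J are illustrative choices):

```python
import numpy as np

def gst(y, lam, p, J=3):
    """Generalized shrinkage/thresholding for min_x 0.5*(x - y)**2 + lam*|x|**p."""
    # Threshold below which the minimizer is exactly zero;
    # reduces to lam at p = 1 and sqrt(2*lam) at p = 0.
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if abs(y) <= tau:
        return 0.0
    # Fixed-point iteration x <- |y| - lam*p*x^(p-1), started at x = |y|.
    x = abs(y)
    for _ in range(J):
        x = abs(y) - lam * p * x ** (p - 1.0)
    return float(np.sign(y)) * x
```

With p = 1 this returns exactly the soft-thresholding result, and with p = 0 the hard-thresholding result.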

Generalized Shrinkage / Thresholding Function

GST: Theoretical Analysis

Connections with soft / hard-thresholding functions
p = 1: GST is equivalent to soft-thresholding. p = 0: GST is equivalent to hard-thresholding.
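In formulas, the two limiting operators are (standard definitions, with the hard threshold √(2λ) corresponding to the penalty λ·1{x ≠ 0}):

```latex
T^{\mathrm{soft}}_{\lambda}(y) = \operatorname{sign}(y)\,\max(|y| - \lambda,\, 0),
\qquad
T^{\mathrm{hard}}_{\lambda}(y) =
\begin{cases}
y, & |y| > \sqrt{2\lambda},\\
0, & \text{otherwise}.
\end{cases}
```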

Generalized Iterated Shrinkage Algorithms (GISA)
For Lp-sparse coding, GISA alternates a gradient-descent step on the data-fidelity term with the generalized shrinkage / thresholding (GST) operator applied elementwise; see the sketch below.
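A minimal sketch for min_x ½||y − Ax||² + λ||x||_p^p, assuming the gst function above (the zero initialization, iteration count, and spectral-norm step size are standard illustrative choices, not prescribed by the slides):

```python
import numpy as np

def gisa(A, y, lam, p, n_iter=100, J=3):
    """Generalized Iterated Shrinkage Algorithm (sketch)."""
    # Step size 1/rho with rho = ||A||_2^2, the Lipschitz
    # constant of the gradient of the data term.
    rho = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient-descent step on 0.5*||y - A x||^2.
        v = x - A.T @ (A @ x - y) / rho
        # Elementwise generalized shrinkage / thresholding.
        x = np.array([gst(vi, lam / rho, p, J) for vi in v])
    return x
```

Setting p = 1 recovers the classical ISTA iteration.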

Comparison with Iterated Shrinkage Algorithms
The classical iterative shrinkage / thresholding algorithm (ISTA) likewise alternates a gradient-descent step with soft-thresholding; GISA keeps the same structure and simply replaces soft-thresholding with GST.

GISA

Sparse gradient based image deconvolution
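The underlying model has the familiar hyper-Laplacian form (a sketch of the standard formulation, with k the blur kernel, ⊗ convolution, and ∇ the image gradient operator; the splitting strategy named below is the usual one for such models, not a claim about the paper's exact scheme):

```latex
\min_{\mathbf{x}} \;\tfrac{1}{2}\|\mathbf{k} \otimes \mathbf{x} - \mathbf{y}\|_2^2 \;+\; \lambda \|\nabla \mathbf{x}\|_p^p,
```

typically solved by variable splitting, with GST applied elementwise in the gradient subproblem.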

Application I: Deconvolution


Application II: Face Recognition (Extended YaleB dataset)

Conclusion
- Compared with state-of-the-art methods, GISA is theoretically solid, easy to understand, efficient to implement, and converges to a more accurate solution.
- Compared with LUT, GISA is more general and does not need to compute and store look-up tables.
- GISA can readily be used to solve the many Lp-norm minimization problems that arise in vision and learning applications.

Looking forward
- Applications to other vision problems.
- Incorporation of primal-dual algorithms for better solutions.
- Extension of GISA to constrained Lp-minimization, e.g., min_x ||x||_p^p s.t. y = Ax.