An ALPS’ view of Sparse Recovery
Volkan Cevher, Laboratory for Information and Inference Systems (LIONS)


Linear Dimensionality Reduction
Compressive sensing <> non-adaptive measurements
Sparse Bayesian learning <> dictionary of features
Theoretical computer science <> sketching matrix / expander

Linear Dimensionality Reduction
y = Φx, with Φ an M × N matrix and M ≪ N
Challenge: the nullspace of Φ is nontrivial, so infinitely many x explain the same y

A Deterministic View of Compressive Sensing

Compressive Sensing Insights
1. Sparse / compressible: not sufficient alone
2. Projection: information preserving / special nullspace
3. Decoding algorithms: tractable

Basic Signal Priors
Sparse signal: only K out of N coordinates nonzero
– model: union of K-dimensional subspaces aligned with coordinate axes
Compressible signal: sorted coordinates decay rapidly to zero
– well-approximated by a K-sparse signal (simply by thresholding)
[Figure: sorted coefficient magnitudes vs. sorted index]
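As a concrete illustration of the thresholding remark above, here is a minimal sketch (Python, written for this transcript; the signal length, sparsity level, and power-law decay rate are illustrative assumptions) of the best K-term approximation of a compressible signal.

```python
import numpy as np

def best_k_term(x, k):
    """Best K-term approximation: keep the K largest-magnitude coordinates."""
    x_k = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the K largest entries
    x_k[idx] = x[idx]
    return x_k

# Compressible signal: sorted coefficients decay like a power law (illustrative choice).
rng = np.random.default_rng(0)
N, K = 1000, 50
x = rng.standard_normal(N) * (np.arange(1, N + 1, dtype=float) ** -1.5)

x_K = best_k_term(x, K)
err = np.linalg.norm(x - x_K) / np.linalg.norm(x)
print(f"relative K-term approximation error: {err:.3e}")
```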

Restricted Isometry Property (RIP)
Model: K-sparse signals <> union of K-planes
RIP: stable embedding of the model,
(1 − δ_2K) ||x1 − x2||_2^2 ≤ ||Φ(x1 − x2)||_2^2 ≤ (1 + δ_2K) ||x1 − x2||_2^2 for all K-sparse x1, x2
A random sub-Gaussian (i.i.d. Gaussian, Bernoulli) matrix satisfies the RIP w.h.p.
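To make the RIP statement concrete, here is a small numerical sketch (Python; the matrix sizes and number of trials are arbitrary illustrative choices) that draws a random Gaussian Φ and checks how well it preserves the squared norms of randomly drawn sparse differences. It only samples random sparse vectors, so it is an empirical sanity check rather than a certification of the RIP.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K, trials = 1000, 300, 20, 200

# i.i.d. Gaussian measurement matrix, scaled so that E||Phi x||^2 = ||x||^2.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(trials):
    # The difference of two K-sparse vectors is (at most) 2K-sparse.
    x = np.zeros(N)
    support = rng.choice(N, size=2 * K, replace=False)
    x[support] = rng.standard_normal(2 * K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

# Empirical proxies for (1 - delta) and (1 + delta) over the sampled sparse vectors.
print(f"min ratio: {min(ratios):.3f}, max ratio: {max(ratios):.3f}")
```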

Sparse Recovery Algorithms
Goal: given y = Φx, recover x
ℓ0-minimization: NP-hard
ℓ1-minimization formulations
– basis pursuit, Lasso, scalarization, …
– iterative re-weighted algorithms
Greedy algorithms: IHT, CoSaMP, SP, OMP, …

ℓ1-Norm Minimization
Properties (sparse signals)
– Complexity: polynomial time
  e.g., interior-point methods; first-order methods <> faster but less accurate
– Theoretical guarantees: CS recovery error ≲ signal K-term approximation error + noise
– Number of measurements: M = O(K log(N/K)) in general (dashed line)
[Figure: phase-transition threshold; Donoho and Tanner]
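For reference, a minimal basis-pursuit sketch (Python with NumPy/SciPy; the problem sizes and random data are illustrative) that solves min ||x||_1 s.t. Φx = y as a linear program via the standard x = u − v splitting.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, M, K = 200, 80, 10

# Synthetic K-sparse ground truth and Gaussian measurements.
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true

# Basis pursuit as an LP: x = u - v with u, v >= 0, minimize sum(u + v) s.t. Phi(u - v) = y.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print(f"relative recovery error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.2e}")
```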

Greedy Approaches
Properties (sparse signals; CoSaMP, IHT, SP, …)
– Complexity: polynomial time; first-order like <> only need forward and adjoint operators, hence fast
– Theoretical guarantees: CS recovery error ≲ signal K-term approximation error + noise (typically worse than the linear program)
– Number of measurements (after tuning), cf. the figure above
Empirical ranking [Maleki and Donoho]: LP > LARS > TST (SP > CoSaMP) > IHT > IST
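As one representative greedy routine from the list above, here is a compact OMP sketch (Python; written for this transcript rather than taken from the talk).

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most correlated
    with the residual, then re-fit by least squares on the selected support."""
    residual = y.copy()
    support = []
    for _ in range(k):
        correlations = np.abs(Phi.T @ residual)
        correlations[support] = 0.0                      # do not re-pick selected columns
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coeffs
    return x_hat

# usage (with Phi, y, K as in the basis-pursuit example above): x_hat = omp(Phi, y, K)
```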

The Need for First-Order & Greedy Approaches
Complexity <> low complexity
– images with millions of pixels (MRI, interferometry, hyperspectral, etc.)
– communication signals hidden in high bandwidths
Performance (simple sparse)
– ℓ1-minimization <> best performance
– first-order, greedy <> performance/complexity trade-off

The Need for First-Order & Greedy Approaches
Complexity <> low complexity
Performance (simple sparse)
– ℓ1-minimization <> best performance
– first-order, greedy <> performance trade-off
Flexibility (union-of-subspaces)
– ℓ1-minimization <> restricted models (block-sparse, all positive, …)
– greedy <> union-of-subspace models with tractable approximation algorithms

The Need for First-Order & Greedy Approaches
Complexity <> low complexity
Performance (simple sparse)
– ℓ1-minimization <> best performance
– first-order, greedy <> performance trade-off
Flexibility (union-of-subspaces)
– ℓ1-minimization <> restricted models (block-sparse, all positive, …)
– greedy <> union-of-subspace models with tractable approximation algorithms
  <> faster, more robust recovery from fewer samples

The Need for First-Order & Greedy Approaches
Complexity <> low complexity
Performance (simple sparse)
– ℓ1-minimization <> best performance
– first-order, greedy <> performance trade-off
Flexibility (union-of-subspaces)
– ℓ1-minimization <> restricted models
– greedy <> union-of-subspace models (model-based iterative recovery)
Can we have all three in a first-order algorithm?

ENTER Algebraic Pursuits—ALPS

Two Algorithms
Algebraic pursuits (ALPS)
– Lipschitz iterative hard thresholding <> LIHT
– fast Lipschitz iterative hard thresholding <> FLIHT
Objective (canonical sparsity for simplicity):
minimize f(x) = (1/2)||y − Φx||_2^2 subject to ||x||_0 ≤ K

Bregman Distance & RIP
Recall the RIP: (1 − δ_2K)||x1 − x2||_2^2 ≤ ||Φ(x1 − x2)||_2^2 ≤ (1 + δ_2K)||x1 − x2||_2^2 for K-sparse x1, x2
Bregman distance of the quadratic objective f (spelled out below)
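Spelling out the relation this slide points to (a reconstruction from the standard quadratic objective above; the 2K subscript reflects that differences of K-sparse vectors are 2K-sparse):

```latex
f(x) = \tfrac{1}{2}\|y - \Phi x\|_2^2, \qquad
\nabla f(x) = -\Phi^\top (y - \Phi x)

D_f(x_1, x_2)
  = f(x_1) - f(x_2) - \langle \nabla f(x_2),\, x_1 - x_2 \rangle
  = \tfrac{1}{2}\,\|\Phi (x_1 - x_2)\|_2^2

% For K-sparse x_1, x_2, the RIP sandwiches the Bregman distance:
\tfrac{1-\delta_{2K}}{2}\,\|x_1 - x_2\|_2^2
  \;\le\; D_f(x_1, x_2) \;\le\;
\tfrac{1+\delta_{2K}}{2}\,\|x_1 - x_2\|_2^2
```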

Majorization-Minimization
At each iteration, minimize a quadratic majorizer of f around the current estimate,
f(x) ≤ f(x^i) + ⟨∇f(x^i), x − x^i⟩ + (L/2)||x − x^i||_2^2,
then apply a model-based combinatorial projection: e.g., tree-sparse projection
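The tree-sparse projection itself requires a dedicated dynamic program, so the sketch below (Python; the block-sparse model is an assumption made here purely for illustration) uses a simpler structured model, block sparsity, to show what a model-based combinatorial projection looks like: instead of keeping the K largest coordinates, keep the best-scoring blocks.

```python
import numpy as np

def block_sparse_project(x, block_size, num_blocks):
    """Project x onto the union of subspaces spanned by `num_blocks` blocks.

    Blocks are contiguous groups of `block_size` coordinates (illustrative model);
    the kept blocks are those with the largest energy.
    """
    x = np.asarray(x, dtype=float)
    n_blocks = len(x) // block_size
    energies = (x[: n_blocks * block_size].reshape(n_blocks, block_size) ** 2).sum(axis=1)
    keep = np.argsort(energies)[-num_blocks:]

    x_proj = np.zeros_like(x)
    for b in keep:
        sl = slice(b * block_size, (b + 1) * block_size)
        x_proj[sl] = x[sl]
    return x_proj

# Example: keep the 3 most energetic blocks of length 4 out of a length-40 vector.
rng = np.random.default_rng(3)
print(block_sparse_project(rng.standard_normal(40), block_size=4, num_blocks=3))
```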

What could be wrong with this naïve approach?
[Figure: percolations]

Majorization-Minimization
How can we avoid the void?
Note: LP requires …

LIHT vs. IHT & ISTA + GraDes
Iterative hard thresholding [Blumensath and Davies] and its Nesterov / Beck & Teboulle variants
– IHT: x^{i+1} = H_K( x^i + Φ^T (y − Φ x^i) )
– LIHT: the same iteration with the step size set from the (restricted) Lipschitz constant
IHT <> quick initial descent, wasteful iterations afterwards
LIHT <> linear convergence
LIHT extends GraDes to overcomplete representations
[Figure: Gaussian, Fourier, and sparse matrices; Ex: K=100, M=300, N=1000, L=10.5]
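A minimal sketch of the hard-thresholding iteration (Python; the conservative step 1/||Φ||² below is only a stand-in for the Lipschitz-based step sizes used by LIHT, and the problem sizes are smaller than the slide's example so that the plain iteration converges):

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the K largest-magnitude entries and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(Phi, y, k, iters=500):
    """Iterative hard thresholding: x <- H_K(x + step * Phi^T (y - Phi x))."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative Lipschitz-based step (assumption)
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * Phi.T @ (y - Phi @ x), k)
    return x

# Small synthetic test (sizes are illustrative, not those on the slide).
rng = np.random.default_rng(4)
N, M, K = 500, 150, 20
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true
x_hat = iht(Phi, y, K)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.2e}")
```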

FLIHT: Fast Lipschitz Iterative Hard Thresholding
Accelerates LIHT with momentum over previous estimates, in the spirit of [Nesterov '83]
FLIHT <> linear convergence, but more restrictive isometry constants
[Figure: Gaussian, Fourier, and sparse matrices]
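For the momentum idea, a rough sketch (Python; the Nesterov 1983 momentum schedule below is used as a stand-in, since the exact FLIHT update is not reproduced on the slide):

```python
import numpy as np

def accelerated_iht(Phi, y, k, iters=500):
    """Hard thresholding with Nesterov-style momentum over previous estimates
    (a FLIHT-flavoured sketch, not the exact FLIHT update)."""
    def hard_threshold(v, k):
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
        return out

    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative Lipschitz-based step (assumption)
    x_prev = np.zeros(Phi.shape[1])
    x = x_prev.copy()
    t_prev = 1.0
    for _ in range(iters):
        t = (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
        z = x + ((t_prev - 1.0) / t) * (x - x_prev)   # momentum on the history of estimates
        x_prev, t_prev = x, t
        x = hard_threshold(z + step * Phi.T @ (y - Phi @ z), k)
    return x

# Usage with the synthetic data from the IHT sketch above:
# x_hat = accelerated_iht(Phi, y, K)
```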

The Intuition behind ALPS
ALPS <> exploit structure of the optimization objective
LIHT <> majorization-minimization
FLIHT <> capture a history of previous estimates
FLIHT > LIHT
[Figures: convergence speed example; robustness vs. noise level]

Redundant Dictionaries
CS theory <> orthonormal basis
ALPS <> orthonormal basis + redundant dictionaries
Key ingredient <> D-RIP [Rauhut, Schnass, Vandergheynst; Candès, Eldar, Needell]
ALPS analysis formulation <> strong guarantees (tight frame)

A2D Conversion
Analog-to-digital conversion example
– 43× overcomplete Gabor dictionary; recovery in under a few seconds
– N = 8192, M = 80; target: 50-sparse in the DCT
– FLIHT: 25.4 dB (compared against l1-magic recovery with the DCT)
[Figure: recovered signals]

Conclusions
Better, stronger, faster CS <> exploit structure in sparse coefficients and the objective function <> first-order methods
ALPS algorithms
– automated selection
– RIP analysis <> strong convexity parameter + Lipschitz constant
"Greed is good" in moderation <> tuning of IHT, etc.
Potential gains <> analysis / cosparse models
Further work: game-theoretic sparse recovery (this afternoon)