Model-based Sparsity
Richard Baraniuk, Rice University

Marco Duarte, Chinmay Hegde, Volkan Cevher

The Digital Universe
Size: 281 billion gigabytes generated in 2007
–digital bits > stars in the universe; growing by a factor of 10 every 5 years
–> Avogadro's number (6.02×10^23) in 15 years
Growth fueled by multimedia data: audio, images, video, surveillance cameras, sensor nets, … (2B photos on Flickr, 4.2M security cameras in the UK)
In 2007, digital data generated > total storage; by 2011, half of the digital universe will have no home
[Source: IDC whitepaper "The Diverse and Exploding Digital Universe," March 2008]

Three Challenges
1. Data acquisition
2. Data compression
3. Data processing

One Solution
1. Data acquisition
2. Data compression
3. Data processing
One answer to all three: sparsity

Sparsity / Compressibility
[Figure: image pixels and their large wavelet coefficients (blue = 0); wideband signal samples and their large Gabor (time-frequency) coefficients]

Concise Signal Structure
Sparse signal: only K out of N coordinates nonzero
–model: union of K-dimensional subspaces aligned with the coordinate axes
Compressible signal: sorted coordinates decay rapidly to zero
–well-approximated by a K-sparse signal (simply by thresholding)
–model: ℓp ball with power-law decay of the sorted coefficients (see the sketch below)
[Figure: coefficient magnitude vs. sorted index, power-law decay]
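A hedged reconstruction of the ℓp-ball model, using the standard weak-ℓp convention from the CS literature (R and C_p are generic constants, not values from the talk):

```latex
% Compressible signal model: power-law decay of sorted coefficient
% magnitudes (standard weak-\ell_p convention, 0 < p \le 1):
|x|_{(i)} \le R \, i^{-1/p}, \qquad i = 1, \dots, N,
% which implies the best K-term approximation error decays as
\| x - x_K \|_2 \le C_p \, R \, K^{1/2 - 1/p}.
```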

Compressive Sensing
Sensing via randomized dimensionality reduction: sub-Nyquist sampling without aliasing
[Figure: y = Φx, with Φ an M×N random measurement matrix and x a length-N sparse signal with K nonzero entries]
Recovery: solve this ill-posed inverse problem via sparse approximation, exploiting the geometric structure of sparse/compressible signals

Restricted Isometry Property (RIP)
Preserve the structure of sparse/compressible signals (a union of K-dim subspaces)
"Stable embedding": the RIP of order 2K implies that Φ approximately preserves the distance between all pairs of K-sparse signals x1 and x2, as stated below
[Figure: union of K-dim subspaces and its embedding under Φ]
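In standard notation (with δ_2K the RIP constant of order 2K), the implied inequality is:

```latex
(1 - \delta_{2K}) \, \| x_1 - x_2 \|_2^2
  \;\le\; \| \Phi x_1 - \Phi x_2 \|_2^2
  \;\le\; (1 + \delta_{2K}) \, \| x_1 - x_2 \|_2^2
\qquad \text{for all } K\text{-sparse } x_1, x_2.
```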

Compressive Sensing
A random subGaussian matrix Φ has the RIP with high probability if M = O(K log(N/K))
[Figure: y = Φx; M random measurements, K-sparse signal]
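A minimal numerical sketch of the sensing step, assuming a Gaussian (hence subGaussian) Φ; the constant 4 in the measurement count is an illustrative choice, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1024, 20                     # signal length and sparsity
M = int(4 * K * np.log(N / K))      # M = O(K log(N/K)), constant chosen ad hoc

# K-sparse signal: K random coordinates with Gaussian amplitudes
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Random Gaussian measurement matrix, scaled so columns have unit expected norm
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x                         # M sub-Nyquist measurements
```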

Compressive Sensing
Signal recovery via optimization: minimize ||x||_1 subject to y = Φx [Candès, Romberg, Tao; Donoho]
[Figure: y = Φx; M random measurements, K-sparse signal]
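The ℓ1 program can be solved as a linear program; a hedged sketch using the standard x = u - v reformulation (a textbook device, not code from the talk):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1 subject to Phi @ x == y, via the split x = u - v, u, v >= 0."""
    M, N = Phi.shape
    c = np.ones(2 * N)              # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])   # equality constraint: Phi @ (u - v) == y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:N], res.x[N:]
    return u - v
```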

Compressive Sensing
Signal recovery via iterative greedy algorithms (an OMP sketch follows):
–(orthogonal) matching pursuit [Gilbert, Tropp]
–iterated thresholding [Nowak, Figueiredo; Kingsbury, Reeves; Daubechies, Defrise, De Mol; Blumensath, Davies; …]
–CoSaMP [Needell, Tropp]
[Figure: y = Φx; M random measurements, K-sparse signal]
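A compact orthogonal matching pursuit sketch, assuming Φ has roughly unit-norm columns; a textbook rendition of the cited algorithm, not code from the talk:

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal matching pursuit: greedily grow the support,
    re-fitting by least squares at every step."""
    residual, support = y.copy(), []
    for _ in range(K):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat
```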

Iterated Thresholding
1. update signal estimate
2. prune signal estimate (best K-term approximation)
3. update residual
(a runnable sketch follows)
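A minimal iterative hard thresholding (IHT) sketch matching the three steps above; the unit step size assumes ||Φ|| ≤ 1, a common normalization not stated on the slide:

```python
import numpy as np

def hard_threshold(x, K):
    """Best K-term approximation: keep the K largest-magnitude entries."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-K:]
    out[keep] = x[keep]
    return out

def iht(Phi, y, K, n_iter=200):
    """Iterate: gradient step on ||y - Phi x||^2, then prune to K terms."""
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + Phi.T @ (y - Phi @ x), K)
    return x
```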

Performance
Using ℓ1 methods, CoSaMP, or IHT:
Sparse signals
–noise-free measurements: exact recovery
–noisy measurements: stable recovery
Compressible signals
–recovery as good as the best K-sparse approximation
CS recovery error ≤ signal K-term approximation error + noise

Single-Pixel Camera
[Figure: target image, N = 65536 pixels; reconstructions from M randomized measurements, M = 1300 (2%) and M = 11000 (16%)]

From Sparsity to Structured Sparsity

Beyond Sparse Models
Sparse signal model captures simplistic primary structure:
–wavelets: natural images
–Gabor atoms: chirps/tones
–pixels: background-subtracted images
Modern compression/processing algorithms capture richer secondary coefficient structure

Sparse Signals
K-sparse signals comprise a particular set of K-dim subspaces

Structured-Sparse Signals
A K-sparse signal model comprises a particular (reduced) set of K-dim subspaces [Blumensath and Davies]
Fewer subspaces ⇒ relaxed RIP ⇒ stable recovery using fewer measurements M

Model-based CS
Running example: tree-sparse signals

Wavelet Sparse
Typical of wavelet transforms of natural signals and images (piecewise smooth)

Tree-Sparse
Model: K-sparse coefficients + significant coefficients lie on a rooted subtree
Typical of wavelet transforms of natural signals and images (piecewise smooth)

Tree-Sparse
Model: K-sparse coefficients + significant coefficients lie on a rooted subtree
Sparse approximation: find the best set of K coefficients
–sorting / hard thresholding
Tree-sparse approximation: find the best rooted subtree of coefficients
–CSSA [B]
–dynamic programming [Donoho]
(a greedy sketch follows this slide)
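A simple greedy stand-in for tree-sparse approximation (not the exact CSSA or dynamic-programming solvers cited above), assuming a binary-tree coefficient layout with the children of node i at 2i+1 and 2i+2:

```python
import numpy as np

def greedy_rooted_subtree(x, K):
    """Approximate x by K coefficients forming a rooted subtree.

    Grows the subtree from the root, always adding the frontier node of
    largest magnitude, so the kept support is a valid rooted subtree."""
    N = len(x)
    support = {0}                                  # the root is always kept
    frontier = {c for c in (1, 2) if c < N}
    while len(support) < K and frontier:
        best = max(frontier, key=lambda i: abs(x[i]))
        frontier.remove(best)
        support.add(best)
        frontier.update(c for c in (2 * best + 1, 2 * best + 2) if c < N)
    approx = np.zeros_like(x)
    idx = sorted(support)
    approx[idx] = x[idx]
    return approx
```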

Tree-Sparse
Model: K-sparse coefficients + significant coefficients lie on a rooted subtree
Standard RIP: stable embedding of all K-planes
Tree-RIP: stable embedding of the reduced set of tree-structured K-planes [Blumensath and Davies]
Recovery: inject the tree-sparse approximation into IHT/CoSaMP

Recall: Iterated Thresholding
1. update signal estimate
2. prune signal estimate (best K-term approximation)
3. update residual

Iterated Model Thresholding
1. update signal estimate
2. prune signal estimate (best K-term model approximation)
3. update residual
(a sketch follows)
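The only change from standard IHT is the pruning step; a sketch where hard thresholding is swapped for a model-approximation routine such as greedy_rooted_subtree above (the function names here are this document's, not the paper's):

```python
import numpy as np

def model_iht(Phi, y, model_approx, n_iter=200):
    """Model-based IHT: IHT with the best K-term hard threshold replaced
    by a model-based approximation step.

    model_approx: callable mapping a coefficient vector to its (approximate)
    best model-sparse representation, e.g. lambda z: greedy_rooted_subtree(z, K).
    """
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = model_approx(x + Phi.T @ (y - Phi @ x))  # update, then model prune
    return x
```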

Tree-Sparse Signal Recovery
[Figure: target signal, N = 1024, M = 80 random measurements; CoSaMP (RMSE = 1.12), L1-minimization (RMSE = 0.751), tree-sparse CoSaMP (RMSE = 0.037)]

Compressible Signals
Real-world signals are compressible, not sparse
Recall: compressible ⇔ approximable by sparse
–compressible signals lie close to a union of subspaces
–i.e., the K-term approximation error decays rapidly as K grows
–nested, in that the best K-term approximation is contained in the best 2K-term approximation
If Φ has the RIP, then both sparse and compressible signals are stably recoverable via LP or greedy algorithms
[Figure: coefficient magnitude vs. sorted index]

Model-Compressible Signals
Model-compressible ⇔ approximable by model-sparse
–model-compressible signals lie close to a reduced union of subspaces
–i.e., the model-approximation error decays rapidly as K grows
Nested approximation property (NAP): model approximations are nested, in that the best K-term model approximation is contained in the best 2K-term model approximation
New result: while the model-RIP enables stable model-sparse recovery, it is not sufficient for stable model-compressible recovery!

Model-Compressible Signals
Example: if Φ has the Tree-RIP, then tree-sparse signals can be stably recovered with M = O(K) measurements; however, tree-compressible signals cannot be stably recovered under the Tree-RIP alone

Stable Recovery
Result: stable model-compressible signal recovery requires that Φ have both:
–the model-RIP
–the Restricted Amplification Property (RAmP)
The RAmP controls the anisometry of Φ on the approximation's residual subspaces:
–optimal K-term model recovery: error controlled by the RIP
–optimal 2K-term model recovery: error controlled by the RIP
–residual subspace: error not controlled by the RIP

Tree-RIP, Tree-RAmP
Theorem: an M×N i.i.d. subgaussian random matrix has the Tree(K)-RIP if M = O(K) (no log N factor; constants omitted)
Theorem: an M×N i.i.d. subgaussian random matrix has the Tree(K)-RAmP if M = O(K)

Performance
Using model-based IHT or CoSaMP with the RIP and RAmP:
Model-sparse signals
–noise-free measurements: exact recovery
–noisy measurements: stable recovery
Model-compressible signals
–recovery as good as the best K-term model approximation
CS recovery error ≤ signal model K-term approximation error + noise

Simulation
Recovery performance (MSE) vs. number of measurements
Piecewise-cubic signals + wavelets
Models/algorithms:
–sparse (CoSaMP)
–tree-sparse (tree-CoSaMP)

Simulation
Number of samples required for correct recovery
Piecewise-cubic signals + wavelets
Models/algorithms:
–sparse (CoSaMP)
–tree-sparse (tree-CoSaMP)

Other Useful Models
Clustered coefficients [C, Duarte, Hegde, B], [C, Indyk, Hegde, Duarte, B]
Dispersed coefficients [Tropp, Gilbert, Strauss], [Stojnic, Parvaresh, Hassibi], [Eldar, Mishali], [Baron, Duarte et al.], [B, C, Duarte, Hegde]

Clustered Signals
Probabilistic approach via a graphical model: model the clustering of significant pixels in the space domain using an Ising Markov random field
Ising-model approximation performed efficiently using graph cuts [Cevher, Duarte, Hegde, B '08]
[Figure: target; Ising-model recovery; CoSaMP recovery; LP (FPC) recovery]

Block-Sparse Model
N = 4096, K = 6 active blocks, J = block length = 64, M = 2.5·J·K = 960 measurements
[Stojnic, Parvaresh, Hassibi], [Eldar, Mishali], [B, Cevher, Duarte, Hegde]
[Figure: target; CoSaMP (MSE = 0.723); block-sparse model recovery (MSE = 0.015)]
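Block-sparse approximation is simple enough to state exactly: keep the K blocks with the largest ℓ2 energy. A sketch assuming the signal length is a multiple of the block length J:

```python
import numpy as np

def best_block_approx(x, K, J):
    """Best K-block approximation: keep the K length-J blocks of largest energy."""
    blocks = x.reshape(-1, J)               # assumes len(x) % J == 0
    energy = (blocks ** 2).sum(axis=1)
    out = np.zeros_like(blocks)
    keep = np.argsort(energy)[-K:]          # the K most energetic blocks
    out[keep] = blocks[keep]
    return out.reshape(-1)
```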

Block-Compressible Model
[Figure: target; CoSaMP (MSE = 0.711); block-sparse recovery (MSE = 0.195); best 5-block approximation (MSE = 0.116)]

Sparse Spike Trains
Sequence of pulses
Simple model:
–sequence of Dirac pulses
–refractory period between each pulse
Model-based RIP holds with fewer measurements M than generic K-sparsity requires
Stable recovery via an iterative algorithm that exploits total unimodularity [Hegde, Duarte, Cevher '09]
(a greedy sketch follows)
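The cited algorithm enforces the refractory constraint exactly via total unimodularity; a hedged greedy stand-in that keeps up to K large spikes subject to a minimum spacing delta:

```python
import numpy as np

def spike_train_approx(x, K, delta):
    """Greedy K-spike approximation with refractory period delta:
    accept spikes in order of magnitude, skipping any within delta
    of an already-accepted spike."""
    chosen = []
    for i in np.argsort(-np.abs(x)):
        if len(chosen) == K:
            break
        if all(abs(int(i) - j) >= delta for j in chosen):
            chosen.append(int(i))
    approx = np.zeros_like(x)
    approx[chosen] = x[chosen]
    return approx
```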

Sparse Spike Trains
[Figure: N = 1024, K = 50, refractory period = 10, M = 150; original signal, model-based recovery error, CoSaMP recovery error]

Sparse Pulse Trains
More realistic model:
–sequence of Dirac pulses convolved with a pulse shape of a given length
–refractory period of a given length between pulses
Model-based RIP again holds with fewer measurements M than generic sparsity requires
[Figure: N = 4076, K = 7, pulse length = 25, refractory period = 10, M = 290; original, CoSaMP, model-based algorithm]

Summary
Why CS works: stable embedding for signals with concise geometric structure
Sparse signals generalize to model-sparse signals
Compressible signals generalize to model-compressible signals
Greedy model-based signal recovery algorithms
–upshot: provably fewer measurements, more stable recovery
–new concept: the RIP generalizes to the RAmP

dsp.rice.edu/cs

Manifold Models
Why CS works: stable embedding for signals with concise geometric structure
Manifolds: a K-dimensional smooth manifold in N-dimensional space is stably embedded by a subGaussian random projection when M grows linearly in K and only logarithmically in N (the precise bound also depends on the manifold's volume and curvature)
Compressive inference:
–many inference problems have a manifold interpretation
–can be solved directly on compressive measurements (obviating recovery)
[B, Wakin, FOCM, 2007]

Matched Filtering
Object classification with K unknown camera articulation parameters
Each set of articulated images lives on a K-dim manifold

Smashed Filtering
Object classification with K unknown camera articulation parameters
Each set of articulated images lives on a K-dim manifold; here classification operates directly on random measurements of those images

Open Positions
Open postdoc positions in sparsity / compressive sensing at Rice University
dsp.rice.edu

Connexions (cnx.org)
Non-profit open publishing project
Goal: make high-quality educational content available to anyone, anywhere, anytime, for free on the web and at very low cost in print
Open-licensed repository of Lego-block modules for authors, instructors, and learners to create, rip, mix, burn
Global reach: >1M users monthly from 200 countries
Collaborators: IEEE (IEEEcnx.org), Government of Vietnam, TI, NI, …

Why We Want RIP
[Figure: Φ maps signals x1, x2 to measurements y1, y2; the distance d between x1 and x2 is approximately preserved between y1 and y2]