Model-Based Compressive Sensing
Presenter: Jason David Bonior, ECE / CMR, Tennessee Technological University
November 5, 2010 Reading Group
(Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, Chinmay Hegde)

Outline
■ Introduction
■ Compressive Sensing
■ Beyond Sparse and Compressible Signals
■ Model-Based Signal Recovery Algorithms
■ Example: Wavelet Tree Model
■ Example: Block-Sparse Signals and Signal Ensembles
■ Conclusions

Introduction
■ Shannon/Nyquist sampling
  □ The sampling rate must be twice the Fourier bandwidth
  □ Not always feasible
■ Dimensionality can be reduced by representing the signal as a sparse set of coefficients in a basis expansion
  □ Sparse means that only K << N coefficients are nonzero and need to be transmitted/stored/etc.
■ Compressive sensing can be used instead of Nyquist sampling when the signal is known to be sparse or compressible

Background on Compressive Sensing
Sparse Signals
■ We can represent any signal in terms of the coefficients of a basis set: x = Ψα, where Ψ is an N × N basis matrix and α is the coefficient vector
■ A signal is K-sparse iff only K << N entries are nonzero
■ The support of x, supp(x), is the list of indices of the nonzero entries
■ The set of all K-sparse signals is the union of the (N choose K) K-dimensional subspaces aligned with the coordinate axes in R^N
  □ Denote this union of subspaces by Σ_K
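
As a quick illustration (not from the slides), here is a minimal numpy sketch of a K-sparse coefficient vector, its support, and the basis expansion x = Ψα; the identity basis and all sizes are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch: a K-sparse signal in an orthonormal basis (identity here).
N, K = 32, 4
rng = np.random.default_rng(0)

alpha = np.zeros(N)
support = rng.choice(N, size=K, replace=False)   # indices of the nonzero coefficients
alpha[support] = rng.standard_normal(K)

Psi = np.eye(N)          # basis matrix (identity, purely for illustration)
x = Psi @ alpha          # signal synthesized from its sparse coefficients

print("supp(alpha):", np.sort(np.flatnonzero(alpha)))   # matches `support`
```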

Background on Compressive Sensing
Compressible Signals
■ Many signals are not exactly sparse but can be well approximated as such
  □ Called "compressible signals"
■ Consider a signal whose coefficients, when sorted in order of decreasing magnitude, decay according to a power law: |α|_(i) ≤ C · i^(−1/r)
  □ Because of the rapid decay of the coefficients, such signals can be approximated as K-sparse
    ▪ The error of such an approximation is the best K-term approximation error σ_K(x) = ||x − x_K||_2, which decays on the order of K^(1/2 − 1/r)
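
A small sketch of this idea, assuming an explicit power-law decay i^(−1/r): it computes the best K-term approximation error σ_K(x) = ||x − x_K||₂ and shows how quickly it falls with K. The decay exponent and sizes are illustrative.

```python
import numpy as np

# Sketch: power-law coefficient decay and the best K-term approximation error
# sigma_K(x) = ||x - x_K||_2, where x_K keeps the K largest-magnitude entries.
N = 1024
i = np.arange(1, N + 1)
r = 0.5                                  # decay exponent (illustrative value)
x = i ** (-1.0 / r)                      # sorted coefficients obeying a power law

def best_k_term_error(x, K):
    idx = np.argsort(np.abs(x))[::-1][:K]    # keep the K largest-magnitude entries
    xK = np.zeros_like(x)
    xK[idx] = x[idx]
    return np.linalg.norm(x - xK)

for K in (10, 50, 250):
    print(K, best_k_term_error(x, K))    # the error decays rapidly with K
```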

Background on Compressive Sensing
Compressible Signals
■ Approximating a compressible signal by a K-sparse one in a transform basis is known as transform coding
  □ Record the signal's full N samples
  □ Express the signal in terms of the basis functions
  □ Discard all but the K largest coefficients
  □ Encode the retained coefficients and their locations
■ Transform coding has drawbacks
  □ We must start with all N samples
  □ We must compute all N coefficients
  □ We must encode the locations of the coefficients we keep
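
The transform-coding steps above can be sketched in a few lines. This assumes scipy's DCT as a stand-in for whichever orthonormal basis is used; the test signal and K are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

# Sketch of transform coding: sample all N values, transform, keep only the
# K largest coefficients and their locations, then reconstruct from those.
N, K = 256, 16
t = np.linspace(0, 1, N)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)   # smooth test signal

coeffs = dct(x, norm='ortho')                 # compute all N transform coefficients
keep = np.argsort(np.abs(coeffs))[::-1][:K]   # locations of the K largest coefficients
compressed = np.zeros(N)
compressed[keep] = coeffs[keep]               # values + locations must both be encoded

x_hat = idct(compressed, norm='ortho')        # decoder reconstructs from K coefficients
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```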

Background on Compressive Sensing
Restricted Isometry Property (RIP)
■ Compressive sensing combines signal acquisition and compression by using an M × N measurement matrix Φ with M < N: y = Φx
■ In order to recover a good estimate of the signal x from the M compressive measurements y, the measurement matrix must satisfy the Restricted Isometry Property: (1 − δ_K)||x||₂² ≤ ||Φx||₂² ≤ (1 + δ_K)||x||₂² for all K-sparse x
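
A hedged sketch of compressive acquisition with a random Gaussian Φ, which satisfies the RIP with high probability when M is on the order of K log(N/K). The constant in the choice of M is illustrative, and the final print is only an empirical sanity check on one sparse signal, not a verification of the RIP.

```python
import numpy as np

# Sketch: compressive acquisition y = Phi @ x with a random Gaussian matrix.
rng = np.random.default_rng(1)
N, K = 256, 8
M = 4 * K * int(np.log(N / K))            # illustrative measurement count, M << N

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian, unit-norm columns in expectation
y = Phi @ x                                      # M measurements acquired directly

# Empirical near-isometry check on this particular sparse signal (ratio close to 1).
print(M, np.linalg.norm(y) / np.linalg.norm(x))
```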

Background on Compressive Sensing
Recovery Algorithms
■ Infinitely many coefficient vectors produce the same set of compressive measurements. If we seek the sparsest x consistent with the measurements, x̂ = argmin ||x||₀ subject to Φx = y, we can recover a K-sparse signal from just M = 2K compressive measurements. However, this is a combinatorial, NP-complete problem and is not stable in the presence of noise.
  □ We need another way to solve this problem
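
To see why the ℓ0 problem is combinatorial, here is a brute-force sketch that enumerates all (N choose K) candidate supports and solves a least-squares problem for each; the sizes are kept tiny because the search space explodes with N.

```python
import numpy as np
from itertools import combinations

# Brute-force l0 recovery: try every K-element support and keep the best fit.
rng = np.random.default_rng(9)
N, M, K = 12, 6, 2                           # deliberately tiny sizes

x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))
y = Phi @ x_true

best_err, best_x = np.inf, None
for supp in combinations(range(N), K):       # all candidate supports
    cols = Phi[:, supp]
    coef = np.linalg.lstsq(cols, y, rcond=None)[0]
    err = np.linalg.norm(y - cols @ coef)
    if err < best_err:
        best_err = err
        best_x = np.zeros(N)
        best_x[list(supp)] = coef

print("supports searched:", sum(1 for _ in combinations(range(N), K)))
print("recovery error:", np.linalg.norm(best_x - x_true))
```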

Background on Compressive Sensing
Recovery Algorithms
■ Convex optimization
  □ A linear program, solvable in polynomial time
  □ Adaptations exist to handle noise
    ▪ Basis Pursuit Denoising (BPDN), complexity-based regularization, and the Dantzig selector
■ Greedy search
  □ Matching Pursuit, Orthogonal Matching Pursuit, StOMP, Iterative Hard Thresholding (IHT), CoSaMP, Subspace Pursuit (SP)
    ▪ All use a best K-term approximation for the estimated signal
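
As one concrete greedy example, here is a minimal sketch of Iterative Hard Thresholding (IHT), one of the algorithms listed above; the step-size rule, iteration count, and problem sizes are illustrative choices rather than tuned values.

```python
import numpy as np

# Minimal IHT sketch: repeatedly take a gradient step toward the measurements,
# then keep only the K largest-magnitude coefficients.
def hard_threshold(v, K):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[::-1][:K]
    out[idx] = v[idx]
    return out

def iht(y, Phi, K, n_iter=200, step=None):
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * Phi.T @ (y - Phi @ x), K)
    return x

rng = np.random.default_rng(2)
N, M, K = 256, 80, 8
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

x_hat = iht(Phi @ x_true, Phi, K)
print("recovery error:", np.linalg.norm(x_hat - x_true))   # typically small when M is large enough
```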

Background on Compressive Sensing
Performance Bounds on Signal Recovery
■ For compressive measurements satisfying the RIP:
  □ All ℓ1 techniques and the CoSaMP, SP, and IHT iterative techniques offer stable recovery with performance close to the optimal K-term approximation
  □ With a random Φ, all results hold with high probability
    ▪ In a noise-free setting these offer perfect recovery
    ▪ In the presence of noise, the recovery error is bounded by a constant multiple of the noise norm
    ▪ For an s-compressible signal with noise of bounded norm, the error bound additionally includes the K-term approximation error

Beyond Sparse and Compressible Signals
■ The coefficients of both natural and manmade signals often exhibit interdependency
  □ We can model this structure in order to:
    ▪ Reduce the degrees of freedom
    ▪ Reduce the number of compressive measurements needed to reconstruct the signal

Beyond Sparse and Compressible Signals
Model-Sparse Signals

Beyond Sparse and Compressible Signals
Model-Based RIP
■ If x is known to lie in a structured sparsity model M_K (rather than being an arbitrary K-sparse signal), the RIP constraint on Φ can be relaxed: it only needs to hold over the subspaces of the model (the M_K-RIP)

Beyond Sparse and Compressible Signals
Model-Compressible Signals

Beyond Sparse and Compressible Signals
■ Nested model approximations and residual subspaces
■ Restricted Amplification Property (RAmP)
  □ The number of compressive measurements M required for a random matrix to have the M_K-RIP is determined by the number of canonical subspaces m_K. This result does not extend to model-compressible signals.
  □ We can analyze robustness by treating the part of the signal outside its K-term model approximation as noise

Beyond Sparse and Compressible Signals
■ Restricted Amplification Property (RAmP)
  □ A matrix Φ has the (ε_K, r)-RAmP for the residual subspaces R_{j,K} of model M if ||Φu||₂² ≤ (1 + ε_K) j^{2r} ||u||₂² for all u in R_{j,K}
■ From this we can determine the number of measurements M required for a random measurement matrix Φ to have the RAmP with high probability

Model-Based Signal Recovery Algorithms
■ For greedy algorithms, simply replace the best K-term approximation step with the corresponding K-term model-based approximation
■ These algorithms have fewer subspaces to search, so fewer measurements are required to obtain the same accuracy as conventional CS

Model-Based Signal Recovery Algorithms
Model-Based CoSaMP
■ CoSaMP was chosen because:
  □ It offers robust recovery on par with the best convex-optimization approaches
  □ It has a simple iterative greedy structure that can easily be modified for the model-based case
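
A hedged sketch of a model-based CoSaMP iteration in the spirit described on the last two slides: the support-identification and pruning steps call a pluggable model_approx function, and with plain hard thresholding the code reduces to ordinary CoSaMP. Names, iteration count, and the exact pruning rule are illustrative, not the paper's verbatim pseudocode.

```python
import numpy as np

def hard_threshold(v, K):
    # Plain best K-term approximation; a model-based approximation (tree, block, ...)
    # can be substituted here without changing the rest of the algorithm.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[::-1][:K]
    out[idx] = v[idx]
    return out

def model_cosamp(y, Phi, K, model_approx=hard_threshold, n_iter=30):
    N = Phi.shape[1]
    x = np.zeros(N)
    for _ in range(n_iter):
        r = y - Phi @ x
        proxy = Phi.T @ r
        omega = np.flatnonzero(model_approx(proxy, 2 * K))    # support from a 2K-term (model) approximation
        T = np.union1d(omega, np.flatnonzero(x))              # merge with the current support
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]   # least squares on the merged support
        x = model_approx(b, K)                                # prune back onto the model
    return x

rng = np.random.default_rng(4)
N, M, K = 256, 80, 8
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
print(np.linalg.norm(model_cosamp(Phi @ x_true, Phi, K) - x_true))
```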

Model-Based Signal Recovery Algorithms
Performance of Model-Sparse Signal Recovery

Model-Based Signal Recovery Algorithms
Performance of Model-Compressible Signal Recovery
■ We use the RAmP as a condition on the measurement matrix Φ to obtain a robustness guarantee for signal recovery from noisy measurements

Model-Based Signal Recovery Algorithms
Robustness to Model Mismatch
■ A model mismatch occurs when the chosen model does not exactly match the signal we are trying to recover
■ Best case: the recovery error of model-based CoSaMP degrades gracefully under both sparsity mismatch and compressibility mismatch
■ Worst case: we end up requiring the same number of measurements as conventional CS

Model-Based Signal Recovery Algorithms
Computational Complexity of Model-Based Recovery
■ Model-based algorithms differ from the standard forms of the algorithms in two ways:
  □ They require fewer measurements, which reduces the computational complexity
  □ The K-term approximation can be implemented using a simple sorting algorithm (a low-cost implementation)

Example: Wavelet Tree Model
■ Wavelet coefficients can be naturally organized into a tree structure, with the largest coefficients clustering together along the branches of the tree
  □ This motivated the authors to adopt a connected tree model for the wavelet coefficients
    ▪ Previous work exploiting this structure did not provide bounds on the number of compressive measurements

Example: Wavelet Tree Model
Tree-Sparse Signals
■ The wavelet representation of a signal x is x = v₀ν + Σ_{i,j} w_{i,j} ψ_{i,j}: a scaling coefficient plus wavelet coefficients w_{i,j} at scale i and offset j
■ Nested supports create a parent/child relationship between wavelet coefficients at different scales
■ Discontinuities in the signal create chains of large coefficients running from the root to a leaf of the tree
  □ This relationship has been exploited in many wavelet processing and compression algorithms
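
A small sketch of the parent/child indexing that underlies the tree model, assuming the standard dyadic layout of 1-D wavelet coefficients (index 0 is the scaling coefficient, index 1 is the tree root, and the children of node i are 2i and 2i+1).

```python
import numpy as np

# Parent/child structure of 1-D wavelet coefficients arranged as a binary tree.
N = 16                                 # dyadic signal length (illustrative)

def children(i, N):
    # Children of node i in the dyadic layout, clipped to the coefficient range.
    return [j for j in (2 * i, 2 * i + 1) if j < N]

def parent(i):
    # Parent of node i; the root (index 1) has no parent.
    return i // 2 if i > 1 else None

for i in range(1, 8):
    print(f"node {i}: parent={parent(i)}, children={children(i, N)}")
```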

Example: Wavelet Tree Model
Tree-Sparse Signals

Example: Wavelet Tree Model
Tree-Based Approximation
■ The optimal tree-based approximation is the best K-term approximation whose support forms a rooted, connected subtree
  □ An efficient algorithm exists: the Condensing Sort and Select Algorithm (CSSA)
    ▪ The CSSA solves the problem by condensing nonmonotonic segments of the branches using an iterative sort-and-average procedure
  □ Subtree approximations coincide with the standard K-term approximations when the wavelet coefficients are monotonically nonincreasing along the tree branches out from the root

Example: Wavelet Tree Model
Tree-Based Approximation
■ The CSSA solves the problem by condensing nonmonotonic segments of the branches using an iterative sort-and-average procedure
  □ The condensed nodes are called supernodes
  □ The approximation can also be computed as a greedy search among the nodes:
    ▪ For each node, calculate the average wavelet coefficient over the subtrees rooted at that node and record the largest such average as the node's energy
    ▪ Repeatedly select the unselected node with the largest energy and add the subtree corresponding to that energy to the estimated support as a supernode
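
For intuition, here is a simplified greedy connected-subtree selection. It is a stand-in for the tree-approximation step, not the CSSA itself (it omits the supernode condensing of nonmonotonic branches), and the coefficient model and sizes are illustrative.

```python
import numpy as np

# Greedily grow a rooted, connected subtree of K nodes by repeatedly adding the
# frontier node (child of the current tree) with the largest coefficient magnitude.
def greedy_tree_approx(w, K):
    """w: wavelet coefficients in dyadic order (w[1] is the tree root)."""
    N = len(w)
    selected = {1}                                   # always keep the root
    frontier = {c for c in (2, 3) if c < N}
    while len(selected) < K and frontier:
        best = max(frontier, key=lambda i: abs(w[i]))
        selected.add(best)
        frontier.remove(best)
        frontier |= {c for c in (2 * best, 2 * best + 1) if c < N}
    out = np.zeros_like(w)
    idx = sorted(selected)
    out[idx] = w[idx]
    return out

rng = np.random.default_rng(5)
w = rng.standard_normal(32) * np.exp(-0.3 * np.arange(32))   # loose coarse-to-fine decay
print(np.flatnonzero(greedy_tree_approx(w, 6)))              # indices form a connected subtree
```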

Example: Wavelet Tree Model
Tree-Based Approximation

Example: Wavelet Tree Model
Tree-Compressible Signals
■ Tree-approximation classes contain signals whose wavelet coefficients exhibit a loose decay from coarse to fine scales

Example: Wavelet Tree Model
Stable Tree-Based Recovery from Compressive Measurements

Example: Wavelet Tree Model
Experiments

Example: Wavelet Tree Model
Experiments
■ A Monte Carlo simulation study of the impact of the number of measurements M on model-based and conventional recovery for a class of tree-sparse piecewise polynomials
■ Each data point is the normalized recovery error measured over 500 sample trials
■ For each trial:
  □ Generate a new piecewise-polynomial signal with five cubic polynomial pieces and randomly placed discontinuities
  □ Compute its K-term tree approximation using the CSSA
  □ Measure the resulting signal using a matrix with i.i.d. Gaussian entries
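
A sketch of this experimental setup with illustrative parameter values: a piecewise-cubic signal with randomly placed discontinuities, measured with an i.i.d. Gaussian matrix.

```python
import numpy as np

# Generate one piecewise-cubic test signal and its compressive measurements.
rng = np.random.default_rng(6)
N, n_pieces, M = 1024, 5, 200

# Random breakpoints split the domain into five pieces, each a random cubic polynomial.
breaks = np.sort(rng.choice(np.arange(1, N), size=n_pieces - 1, replace=False))
edges = np.concatenate(([0], breaks, [N]))
t = np.linspace(0, 1, N)
x = np.zeros(N)
for a, b in zip(edges[:-1], edges[1:]):
    coeffs = rng.standard_normal(4)                      # cubic polynomial coefficients
    x[a:b] = np.polyval(coeffs, t[a:b] - t[a])

Phi = rng.standard_normal((M, N)) / np.sqrt(M)           # i.i.d. Gaussian measurement matrix
y = Phi @ x
print(y.shape)
```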

Example: Wavelet Tree Model
Experiments

Example: Wavelet Tree Model
Experiments
■ Generated sample piecewise-polynomial signals as before
■ Computed the K-term tree approximation of each signal
■ Computed M measurements of each approximation
■ Added Gaussian noise with a prescribed expected norm
■ Recovered the signal using both CoSaMP and model-based recovery
■ Measured the recovery error for each case

Example: Wavelet Tree Model
Experiments

Example: Wavelet Tree Model
Experiments

Example: Block-Sparse Signals and Signal Ensembles
■ The locations of the significant coefficients cluster in blocks under a specific sorting order
■ This has been investigated in CS applications such as:
  □ DNA microarrays
  □ Magnetoencephalography
■ A similar problem arises in CS for signal ensembles, such as sensor networks and MIMO communication
  □ Several signals share a common coefficient support set
  □ The signals can be reshaped into a single vector by concatenation, and the coefficients rearranged so that the vector exhibits block sparsity

Example: Block-Sparse Signals and Signal Ensembles
■ Block-Sparse Signals
■ Block-Based Approximation
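
A minimal sketch of block-based K-term approximation, assuming the coefficients are grouped into consecutive length-J blocks: keep the K blocks with the largest ℓ2 energy and zero out the rest. The block size and test signal are illustrative.

```python
import numpy as np

def block_approx(x, J, K):
    blocks = x.reshape(-1, J)                        # assumes len(x) is a multiple of J
    energy = np.linalg.norm(blocks, axis=1)          # l2 energy of each block
    keep = np.argsort(energy)[::-1][:K]              # K blocks with the largest energy
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)

rng = np.random.default_rng(7)
N, J, K = 64, 8, 2
x = rng.standard_normal(N) * 0.05                    # small background coefficients
x[8:16] += 3.0                                       # two strong blocks
x[40:48] -= 2.5

approx = block_approx(x, J, K)
kept_blocks = np.flatnonzero(np.linalg.norm(approx.reshape(-1, J), axis=1))
print("kept blocks:", kept_blocks)                   # should be blocks 1 and 5
```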

Example: Block-Sparse Signals and Signal Ensembles
■ Block-Compressible Signals

Example: Block-Sparse Signals and Signal Ensembles
Double Block-Based Recovery from Compressive Measurements
■ The same number of measurements is required for block-sparse and block-compressible signals
■ The bound on the number of measurements required is M = O(KJ + K log(N/K))
  □ One term of this bound matches the order of the bound for conventional CS
  □ The other term represents a linear dependence on the block size J
  □ This is an improvement over conventional CS

Example: Block-Sparse Signals and Signal Ensembles
Double Block-Based Recovery from Compressive Measurements
■ In a distributed setting, an M x JN dense measurement matrix can be broken into J pieces of size M x N; each sensor computes its own compressive measurements, and the results are summed to obtain the complete measurement vector
■ According to the bound above, for large values of J the number of measurements required is lower than that required to recover each signal independently
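
A small sketch of the distributed measurement idea: applying one M x JN matrix to the concatenated ensemble is exactly the sum of J local M x N measurements, so each sensor can measure its own signal and a fusion center simply adds the results. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
M, N, J = 40, 64, 4

signals = [rng.standard_normal(N) for _ in range(J)]        # one signal per sensor
Phis = [rng.standard_normal((M, N)) for _ in range(J)]      # one local matrix per sensor

# Centralized view: one dense M x (J*N) matrix applied to the concatenated vector.
Phi_big = np.hstack(Phis)
y_central = Phi_big @ np.concatenate(signals)

# Distributed view: each sensor computes Phi_j @ x_j, and the results are summed.
y_distributed = sum(Phi @ x for Phi, x in zip(Phis, signals))

print(np.allclose(y_central, y_distributed))                 # True: the two views agree
```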

Example: Block-Sparse Signals and Signal Ensembles
Experiments
■ Comparison of model-based recovery to CoSaMP for block-sparse signals
■ The model-based procedures are several times faster than convex-optimization-based procedures

Example: Block-Sparse Signals and Signal Ensembles

Example: Block-Sparse Signals and Signal Ensembles

Conclusions
■ Signal models can produce significant performance gains over conventional CS
■ The wavelet-tree procedure offers a considerable speed-up
■ The block-sparse procedure can recover signals with fewer measurements than each sensor recovering its signal independently
■ Future work:
  □ The authors have only considered models that can be described geometrically as unions of subspaces; there may be potential to extend the framework to more complex geometries
  □ It may be possible to integrate these models into other iterative algorithms

Thank you!