ECE Department, Rice University, dsp.rice.edu/cs
Measurements and Bits: Compressed Sensing meets Information Theory
Shriram Sarvotham, Dror Baron, Richard Baraniuk


CS encoding: replace samples by a more general encoder based on a few linear projections (inner products). The measurement is a matrix-vector multiplication: M measurements of a sparse signal with K non-zeros.

The CS revelation: of the infinitely many solutions, seek the one with the smallest ℓ1 norm.

The CS revelation (continued): if M = O(K log(N/K)), then perfect reconstruction w/ high probability [Candès et al.; Donoho], computed via linear programming.
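The ℓ1 recovery above can be prototyped as a linear program by splitting x into nonnegative parts u and v with x = u − v, so the objective becomes linear. A minimal sketch with illustrative dimensions (the matrix, sizes, and tolerances here are assumptions, not the talk's setup):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 50, 25, 3                          # illustrative sizes

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

phi = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian measurement matrix
y = phi @ x_true

# min ||x||_1  s.t.  phi x = y,  via  x = u - v,  u, v >= 0:
# minimize sum(u) + sum(v)  s.t.  phi (u - v) = y
c = np.ones(2 * n)
A_eq = np.hstack([phi, -phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_hat - x_true))        # small when recovery succeeds
```

With M well above K log(N/K), the LP solution typically matches x_true exactly (up to solver tolerance).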

Compressible signals: polynomial decay of signal components. Recovery algorithms (BPDN, polynomial complexity) [Candès et al.] achieve reconstruction error within a constant of the squared error of the best K-term approximation, and also require M = O(K log(N/K)) measurements; this order of M cannot be reduced [Kashin, Gluskin].
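The recovery guarantee alluded to above is usually stated as follows (a standard form from the CS literature; the slide's exact constants are not in the transcript), where $\sigma_K(x)_1$ denotes the error of the best $K$-term approximation in the $\ell_1$ norm:

```latex
\|x - \hat{x}\|_2 \;\le\; C \,\frac{\sigma_K(x)_1}{\sqrt{K}},
\qquad M = O\!\left(K \log \tfrac{N}{K}\right)
```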

Fundamental goal: minimize the number of measurements M. Compressed sensing aims to minimize resource consumption due to measurements. Donoho: “Why go to so much effort to acquire all the data when most of what we get will be thrown away?”

Measurement reduction for sparse signals: ideal CS reconstruction of a K-sparse signal, where K is the number of nonzero entries. Of the infinitely many solutions, seek the sparsest one. If M ≤ K, then w/ high probability this can't be done; if M ≥ K+1, then perfect reconstruction w/ high probability [Bresler et al.; Wakin et al.]. But this is not robust and has combinatorial complexity.
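The combinatorial complexity of ideal ℓ0 reconstruction shows up directly in a brute-force sketch: search every size-K support, a cost of C(N, K) least-squares solves. Toy sizes below are illustrative assumptions, not the talk's:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, m, k = 12, 4, 2                     # toy sizes, M > K

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

phi = rng.normal(size=(m, n))
y = phi @ x_true

def l0_reconstruct(phi, y, k):
    """Try every size-k support; return the first exact fit.
    Cost grows as C(n, k) least-squares solves -- combinatorial."""
    n = phi.shape[1]
    for support in combinations(range(n), k):
        sub = phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        if np.allclose(sub @ coef, y, atol=1e-9):
            x = np.zeros(n)
            x[list(support)] = coef
            return x
    return None

x_hat = l0_reconstruct(phi, y, k)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

For a generic Gaussian matrix, only the true support fits the measurements exactly (w.h.p.), so the search recovers x_true; the point is the C(N, K) blow-up for realistic N and K.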

Why is this a complicated problem?

Rich design space. What performance metric to use?
–Wainwright: determine the support set of nonzero entries → this is a distortion metric
–but why let tiny nonzero entries spoil the fun?
–which distortion metric??
What complexity class of reconstruction algorithms?
–any algorithms?
–polynomial complexity?
–near-linear or better?
How to account for imprecisions?
–noise in measurements?
–compressible signal model?

How many measurements do we need?

Measurement noise: the measurement process is analog, and analog systems add noise, non-linearities, etc. Assume Gaussian noise for ease of analysis.

Setup: the signal is i.i.d.; the noise is additive white Gaussian; the measurement process is noisy.

Measurement and reconstruction quality: measurement signal-to-noise ratio; reconstruction via a decoder mapping; a reconstruction distortion metric. Goal: minimize the CS measurement rate.

Measurement channel: model the measurement process as a communication channel, with a capacity for the measurement channel. Measurements are bits!
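Under the Gaussian-noise assumption above, each measurement behaves like one use of an AWGN channel, whose capacity follows the standard formula (a sketch; the talk's exact normalization is not preserved in the transcript):

```latex
C \;=\; \frac{1}{2}\log_2\!\left(1 + \mathrm{SNR}\right)
\quad \text{bits per measurement}
```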

Main result. Theorem: for a sparse signal with rate-distortion function R(D), the lower bound on the measurement rate, subject to measurement quality SNR and reconstruction distortion D, satisfies a direct relationship to the rate-distortion content of the source.

Main result, proof sketch:
–each measurement provides at most C bits (the capacity of the measurement channel)
–the information content of the source is N·R(D) bits
–source-channel separation for continuous-amplitude sources
–this yields the minimal number of measurements
–obtain the measurement rate via normalization by the signal length N
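The proof sketch above can be written out in one line; assuming R(D) is measured in bits per sample and C in bits per measurement:

```latex
M \cdot C \;\ge\; N \cdot R(D)
\quad\Longrightarrow\quad
\frac{M}{N} \;\ge\; \frac{R(D)}{C}
\;=\; \frac{R(D)}{\tfrac{1}{2}\log_2(1+\mathrm{SNR})}
```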

Example: spike process, i.e., spikes of uniform amplitude, with its rate-distortion function and the corresponding lower bound. Numbers: signal of length 10^7 with 10^3 spikes, evaluated at SNR = 10 dB and SNR = −20 dB.
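The slide's numerical values were lost with the figures, but the bound can be evaluated under stated assumptions. A sketch using the crude proxy R ≈ K log2(N/K) bits for the information content of K spike locations (an assumption, not the talk's exact spike-process rate-distortion function):

```python
import math

def awgn_capacity_bits(snr_db):
    """Bits per measurement for a Gaussian measurement channel."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * math.log2(1 + snr)

def min_measurements(n, k, snr_db):
    """Lower bound M >= N*R(D)/C, with the proxy
    N*R(D) ~ K log2(N/K) bits (illustrative assumption)."""
    info_bits = k * math.log2(n / k)
    return info_bits / awgn_capacity_bits(snr_db)

n, k = 10**7, 10**3
print(round(min_measurements(n, k, 10)))    # SNR = 10 dB
print(round(min_measurements(n, k, -20)))   # SNR = -20 dB
```

Under these assumptions the bound is a few thousand measurements at 10 dB, but balloons by orders of magnitude at −20 dB, since the per-measurement capacity collapses.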

Upper bound (achievable) in progress…

CS reconstruction meets channel coding

Why is reconstruction expensive? M measurements of a sparse signal with K nonzero entries. Culprit: the measurement matrix is dense and unstructured.

Fast CS reconstruction: use an LDPC-style measurement matrix (sparse), with only 0/1 entries; each row contains randomly placed 1's. Fast matrix multiplication → fast encoding and reconstruction.
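The speedup from a sparse 0/1 matrix can be sketched with `scipy.sparse`: the measurement cost scales with the number of nonzeros rather than M·N. Row weight, sizes, and the construction here are illustrative assumptions (the talk does not specify the row weight):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

def sparse_binary_matrix(m, n, ones_per_row, rng):
    """0/1 measurement matrix with `ones_per_row` randomly
    placed 1's in each row (LDPC-style sparsity)."""
    rows = np.repeat(np.arange(m), ones_per_row)
    cols = np.concatenate(
        [rng.choice(n, size=ones_per_row, replace=False) for _ in range(m)])
    data = np.ones(len(rows))
    return sparse.csr_matrix((data, (rows, cols)), shape=(m, n))

n, m, k = 10_000, 400, 10
phi = sparse_binary_matrix(m, n, ones_per_row=20, rng=rng)

x = np.zeros(n)                        # K-sparse signal
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

y = phi @ x                            # cost O(nnz), not O(m*n)
print(phi.nnz, "nonzeros vs", m * n, "entries in a dense matrix")
```

Here each measurement touches only 20 coordinates, so encoding is roughly 500× cheaper than with a dense Gaussian matrix of the same size.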

Ongoing work: CS using BP [Sarvotham et al.]. Considering noisy CS signals; application of belief propagation (BP over the real number field, with sparsity modeled as a prior in the graph); low complexity; provable reconstruction with noisy measurements. The success of LDPC+BP in channel coding carries over to CS!

Summary: determination of measurement rates in CS. Measurements are bits: each measurement provides a finite number of bits, yielding a lower bound on the measurement rate with a direct relationship to rate-distortion content. Compressed sensing meets information theory. Additional research directions: promising results with LDPC measurement matrices; an upper bound (achievable) on the number of measurements. dsp.rice.edu/cs