Modulated Unit Norm Tight Frames for Compressed Sensing


Modulated Unit Norm Tight Frames for Compressed Sensing
Peng Zhang1, Lu Gan2, Sumei Sun3 and Cong Ling1
1 Department of Electrical and Electronic Engineering, Imperial College London
2 College of Engineering, Design and Physical Sciences, Brunel University, United Kingdom
3 Institute for Infocomm Research, A*STAR, Singapore

Outline
Background: basics of compressed sensing; structured random operators for compressed sensing.
Proposed system: performance bounds; connection with existing systems.
Applications in convolutional compressed sensing: compressive imaging; sparse channel estimation in OFDM.
Conclusions

Principles of Compressed Sensing (CS)
Sampling: linear, non-adaptive random projections [Candes-Romberg-Tao-2006]: y = Φx, where x is the N×1 signal vector, y is the M×1 sampling vector (M << N), and Φ is the M×N measurement matrix.
Sparsity: x has a sparse representation under a certain transform Ψ (DCT, wavelet): x = Ψf, where the coefficient vector f can be well approximated with only K (K < M) non-zero coefficients.
Reconstruction: nonlinear optimization; l1 optimization; iterative methods: OMP, subspace pursuit (SP), etc.
In the most general case, recovery of x from y is ill-posed, as the system is under-determined.
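
To make the pipeline above concrete, here is a minimal numpy sketch (not the authors' code; all sizes are illustrative) that senses a K-sparse signal with a Gaussian Φ and recovers it with a small OMP routine:

```python
# Minimal sketch: compressed sensing of a K-sparse vector with a Gaussian
# measurement matrix and a basic OMP reconstruction.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                      # signal length, measurements, sparsity

# K-sparse signal x (sparse directly in the canonical basis, i.e. Psi = I)
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian measurement matrix
y = Phi @ x                                      # y = Phi x, M << N

def omp(A, y, K):
    """Orthogonal matching pursuit: greedily pick K columns of A."""
    resid, idx = y.copy(), []
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        resid = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```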

Principles of Compressed Sensing (CS)
Restricted isometry property (RIP) [Candes-Romberg-Tao-2006]: an M×N matrix A = ΦΨ is said to satisfy the RIP with parameters (K, δ) if
(1 − δ)||x||^2 ≤ ||Ax||^2 ≤ (1 + δ)||x||^2 for all x ∈ Σ_K,
where Σ_K represents the set of all length-N vectors with K non-zero coefficients.
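
A quick way to build intuition for the RIP constant is to sample the ratio ||Ax||^2 / ||x||^2 over random K-sparse vectors; the sketch below (illustrative only, since verifying the RIP exactly is combinatorial) does this for a Gaussian A:

```python
# Minimal sketch: empirically probe the RIP ratio ||A x||^2 / ||x||^2
# over random K-sparse vectors for a Gaussian A.
import numpy as np

rng = np.random.default_rng(1)
N, M, K, trials = 512, 128, 10, 2000

A = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(trials):
    x = np.zeros(N)
    S = rng.choice(N, K, replace=False)      # random support in Sigma_K
    x[S] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

print("min ratio:", min(ratios), "max ratio:", max(ratios))
# Ratios clustered around 1 suggest a small RIP constant delta on Sigma_K.
```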

Fully random sampling operators
Φ: fully random matrix with independent sub-Gaussian elements; Φi,j follows the Gaussian or Bernoulli distribution.
Optimal bound: M = O(K log(N/K)).
Universal: applicable for any Ψ.
Limitations: high computational cost in matrix multiplication; huge buffer requirement; difficult or even impossible to implement.
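
As a rough illustration of the storage and computation issue (sizes are assumptions, not from the paper):

```python
# Minimal sketch: the two classical fully random ensembles and the storage /
# multiplication cost that motivates structured alternatives.
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 4096, 1024, 40

Phi_gauss = rng.standard_normal((M, N)) / np.sqrt(M)           # i.i.d. Gaussian
Phi_bern = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # i.i.d. Bernoulli (+-1)

x = rng.standard_normal(N)
y_g = Phi_gauss @ x        # dense multiply: O(M N) operations per signal
y_b = Phi_bern @ x

print("matrix storage (floats):", M * N)                           # buffer requirement
print("rule-of-thumb order of optimal M, K log(N/K):", int(K * np.log(N / K)))
```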

A wish-list for the sampling operator 

Randomly subsampled orthonormal system [Candes et al.-2006]: Φ = SQ
S: random sampling operator (choose M samples uniformly at random);
Q: N×N bounded unitary matrix (entries of magnitude O(1/√N)).
Examples of Q: DCT, DFT, Walsh-Hadamard matrices.
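
A minimal sketch of such a system, assuming an orthonormal DCT for Q and uniform random row selection for S:

```python
# Minimal sketch: a randomly subsampled orthonormal system Phi = S Q, with Q an
# orthonormal DCT matrix and S keeping M rows chosen uniformly at random.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(3)
N, M = 1024, 256

Q = dct(np.eye(N), norm='ortho', axis=0)       # N x N orthonormal DCT matrix
rows = rng.choice(N, M, replace=False)         # S: M rows chosen uniformly at random
Phi = Q[rows, :]

x = rng.standard_normal(N)
y = Phi @ x                                    # equals M random DCT coefficients of x

print(np.allclose(y, dct(x, norm='ortho')[rows]))   # fast O(N log N) implementation
```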

Structurally random matrices [Do and Gan et al., 2008 (ICASSP), 2012 (TSP)]: Φ = S F D
D: random sign flipping (±1); F: fast transform (FFT, WHT, DCT); S: random sampling (choose M out of N).
Fast implementation; universal; but random sampling could be difficult!
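
A minimal sketch of applying Φ = S F D with fast operations only, using the DCT as the fast transform F (an assumption; FFT or WHT would work the same way):

```python
# Minimal sketch: apply a structurally random matrix Phi = S F D to x using
# only fast operations (no M x N matrix is ever stored).
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(4)
N, M = 1024, 256

d = rng.choice([-1.0, 1.0], size=N)            # D: random sign flipping (+-1 diagonal)
rows = rng.choice(N, M, replace=False)         # S: random sampling, M out of N

def sense(x):
    return dct(d * x, norm='ortho')[rows]      # F: fast transform (orthonormal DCT here)

x = rng.standard_normal(N)
y = sense(x)                                   # O(N log N) time, O(N) extra memory
print(y.shape)
```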

Random convolution (filtering) [J. Tropp, J. Romberg, H. Rauhut, F. Krahmer et al.]: y = R(c * x)
c: random sequence; R: deterministic sampling operator.
Lacks universality: suited to Ψ = I (identity matrix).
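
A minimal sketch, assuming a random ±1 sequence for c and regular decimation for the deterministic sampler R:

```python
# Minimal sketch: random circular convolution followed by a deterministic
# sampler R (regular decimation), implemented with FFTs.
import numpy as np

rng = np.random.default_rng(5)
N, M = 1024, 128

c = rng.choice([-1.0, 1.0], size=N)            # c: random (+-1) sequence
x = rng.standard_normal(N)

conv = np.fft.ifft(np.fft.fft(x) * np.fft.fft(c)).real   # circular convolution c * x
y = conv[::N // M]                             # R: keep every (N/M)-th sample

print(y.shape)                                 # M compressive measurements
```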

Outline
Background: basics of compressed sensing; structured random operators for compressed sensing.
Proposed system: performance bounds; connection with existing systems.
Examples of potential applications: compressive imaging; sparse channel estimation in OFDM.
Conclusions

Unit-norm tight frame
An M×N matrix U corresponds to a unit norm tight frame if:
each column vector has a unit norm;
the rows are orthogonal with equal norms, i.e., U U^H = (N/M) I (so the suitably scaled rows form an orthonormal family).

Examples of unit norm tight frames
Partial FFT or WHT: U = RF or U = RW;
Partial summation operator (each row sums a disjoint block of consecutive entries);
Cascading of identity or Fourier matrices: U = [I I … I] or U = [F F … F].
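
The sketch below checks the two defining properties for a cascade of identity matrices and for a partial FFT; the √(N/M) scaling is added here so that the partial-FFT columns have exactly unit norm (a normalization assumption, not stated on the slide):

```python
# Minimal sketch: verify the unit-norm tight frame properties for two of the
# examples above -- a cascade of identity matrices and a scaled partial FFT.
import numpy as np

rng = np.random.default_rng(6)
M, L = 64, 4
N = M * L

U1 = np.hstack([np.eye(M)] * L)                          # U = [I I ... I]

F = np.fft.fft(np.eye(N)) / np.sqrt(N)                   # unitary N-point DFT
rows = rng.choice(N, M, replace=False)
U2 = np.sqrt(N / M) * F[rows, :]                         # scaled partial FFT, U = R F

for U in (U1, U2):
    col_norms = np.linalg.norm(U, axis=0)                # unit-norm columns
    gram = U @ U.conj().T                                # tight: U U^H = (N/M) I
    print(np.allclose(col_norms, 1.0),
          np.allclose(gram, (N / M) * np.eye(M)))
```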

Proposed system
The product A = ΦΨ can be written as A = U D B, where U is a unit norm tight frame, D is a random sign-flipping (±1) diagonal matrix, and B is a bounded orthonormal matrix.
Many existing systems can be characterized by this form: random convolution, random demodulation, compressive multiplexing, random probing, …
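
A minimal sketch that assembles one instance of A = U D B (the particular choices of U and B are illustrative) and applies it to a K-sparse coefficient vector:

```python
# Minimal sketch: assemble A = U D B with U = [I I ... I] (unit norm tight
# frame), D = random +-1 diagonal, and B = unitary DFT (a bounded orthonormal
# matrix), then sense a K-sparse vector.
import numpy as np

rng = np.random.default_rng(7)
M, L, K = 64, 4, 6
N = M * L

U = np.hstack([np.eye(M)] * L)                 # unit norm tight frame
D = np.diag(rng.choice([-1.0, 1.0], size=N))   # random sign flipping
B = np.fft.fft(np.eye(N)) / np.sqrt(N)         # bounded orthonormal matrix (unitary DFT)

A = U @ D @ B                                  # A = Phi Psi = U D B

f = np.zeros(N)
f[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse coefficients
y = A @ f                                      # M compressive measurements
print(A.shape, y.shape)
```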

Performance bound
For A = UDB as above, M ≥ O(K log^2 K log^2 N) measurements suffice (RIP).
Main tool in the proof: suprema of chaos processes [Krahmer et al.-2014].
Variations and extensions: the diagonal matrix D could be constructed from any sub-Gaussian variables; B could be a near-orthogonal (rectangular) matrix.

Example: random demodulation [Tropp et al.-2010]

Random demodulation
Sampling operator: Φ = UD, with U a partial summation operator.
Sparsifying transform Ψ: FFT matrix with column permutation.
Previous work [Tropp et al.-2010]: M ≥ O(K log^6 N). Our bound: M ≥ O(K log^2 K log^2 N).
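
A minimal sketch of the random demodulator written in this U D B form, with a check that the explicit matrix matches the fast chip-and-integrate implementation (sizes and the exact frequency-domain convention are assumptions made for illustration):

```python
# Minimal sketch of the random demodulator as U D B: D is the +-1 chipping
# sequence, U sums consecutive blocks of length N/M ("partial summation"),
# and the signal is sparse in frequency (B built from the DFT).
import numpy as np

rng = np.random.default_rng(8)
N, M, K = 512, 64, 5
block = N // M

d = rng.choice([-1.0, 1.0], size=N)                       # chipping sequence (D)
U = np.kron(np.eye(M), np.ones((1, block)))               # partial summation operator
Finv = np.conj(np.fft.fft(np.eye(N))) / N                 # x = Finv @ s: frequency-sparse model

A = U @ np.diag(d) @ Finv                                  # A = U D B acting on the sparse spectrum s

# frequency-sparse test signal
s = np.zeros(N, dtype=complex)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x = Finv @ s                                               # time-domain signal
y_fast = np.add.reduceat(d * x, np.arange(0, N, block))    # chip-and-integrate, no matrix
print(np.allclose(A @ s, y_fast))
```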

Outline
Background: basics of compressed sensing; structured random operators for compressed sensing.
Proposed system: performance bounds; connection with existing systems.
Applications in convolutional compressed sensing: compressive imaging; sparse channel estimation in OFDM.
Conclusions

Coded aperture imaging (existing system) [Romberg et al.-2008, Marcia et al.-2009]
The scene x passes through a lens (F), a random mask (D) and a second lens (F^H), and is measured by a low-resolution detector array.
Assumes Ψ = I; works poorly for natural images.

Coded aperture imaging (existing system)
Works for a spatially sparse scene (Ψ = I), but fails for a spectrally sparse one.

Proposed system
Sampling operator: Φ = R F^H D F, with D a diagonal matrix made from the Golay sequence; the mask is based on D.
Implementation: double phase encoding [Rivenson et al. 2010].

Golay sequence
Let a = [a0, a1, ..., aN−1]^T (an ∈ {1, −1}) and define A(z) = Σn an z^n. If a is a Golay sequence, then |A(z)|^2 ≤ 2N for all z on the unit circle.
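
The standard doubling recursion produces Golay complementary pairs; the sketch below builds one and numerically checks the spectral bound stated above (sampled on a dense FFT grid):

```python
# Minimal sketch: build a Golay complementary pair by the doubling recursion
# and verify |A(z)|^2 <= 2N on the unit circle (evaluated on a dense FFT grid).
import numpy as np

def golay_pair(num_doublings):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(num_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(8)                             # length N = 256, entries in {+1, -1}
N = len(a)

spec_a = np.abs(np.fft.fft(a, 8 * N)) ** 2       # |A(z)|^2 sampled on the unit circle
spec_b = np.abs(np.fft.fft(b, 8 * N)) ** 2

print(spec_a.max() <= 2 * N + 1e-9)              # |A(z)|^2 <= 2N
print(np.allclose(spec_a + spec_b, 2 * N))       # complementarity: |A|^2 + |B|^2 = 2N
```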

Proposed system
Sampling operator: Φ = R F^H D F, i.e., U = R F^H, B = F, and D a diagonal matrix made from the Golay sequence.
Sparsifying transform Ψ: Haar wavelet, Fourier, (block) DCT (popular for natural images).
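
A minimal sketch of applying Φ = R F^H D F with FFTs; placing the Golay entries directly on the diagonal and using random subsampling for R are simplifying assumptions made here for illustration:

```python
# Minimal sketch: convolutional sensing operator Phi = R F^H D F applied with
# FFTs -- multiply the spectrum by a Golay-based diagonal, return to the
# spatial domain, then keep M samples (R).
import numpy as np

rng = np.random.default_rng(9)

def golay(num_doublings):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(num_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a

N, M = 1024, 256
d = golay(10)                                   # length-1024 Golay sequence on the diagonal of D
keep = rng.choice(N, M, replace=False)          # R: subsampling at the detector

def sense(x):
    spec = np.fft.fft(x, norm='ortho')          # F
    out = np.fft.ifft(d * spec, norm='ortho')   # F^H D
    return out[keep]                            # R

x = rng.standard_normal(N)
print(sense(x).shape)
```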

Simulation results: compressive imaging
Experimental setup: test images: 256×256 Lena and Hall; reconstruction: re-weighted BPDN [Carrillo et al.-2013]; sampling ratio M/N = 0.25.

Simulation results: convolutional CS
Reconstructed images of Lena: (a) conventional compressive coded aperture imaging, SNR = 11.02 dB; (b) our proposed algorithm, SNR = 29.56 dB.

Simulation results
Reconstructed images of Hall: (a) conventional compressive coded aperture imaging, SNR = 9.21 dB; (b) our proposed system, SNR = 24.62 dB.

Outline
Background: basics of compressed sensing; structured random operators for compressed sensing.
Proposed system: performance bounds; connection with existing systems.
Applications in convolutional compressed sensing: compressive imaging; sparse channel estimation in OFDM.
Conclusions

Sparse channel estimation for OFDM systems
Existing solutions (the received signal y is acquired through a low-rate ADC):
Meng et al.-2012: R: deterministic sampler; modulating sequence with random phases; drawback: high peak-to-average power ratio (PAPR) after the IDFT.
Li et al.-2014: R: random sampler; modulating sequence: Golay sequence; drawback: random sampling is difficult to implement.

Proposed system : Golay sequence  Low PAPR p(t): random signal Random demodulation : Golay sequence  Low PAPR p(t): random signal Overall Sampling Operator: =UDFFH B=FFH U=

Experimental setup (OFDM)
N = 1024, M = 64.
Channel model: ATTC (Advanced Television Technology Center) and the Grande Alliance DTV laboratory ensemble E model (channel impulse response shown on the slide).
Input signal-to-noise ratio (SNR): 0 dB to 30 dB.
Reconstruction: subspace pursuit [Dai et al.-2009].
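
For reference, a minimal subspace-pursuit-style routine (a simplified sketch in the spirit of Dai et al.-2009, not the exact algorithm used in the experiments):

```python
# Minimal sketch of subspace pursuit: keep a size-K support estimate, merge it
# with the K largest residual correlations, solve least squares, prune to K.
import numpy as np

def subspace_pursuit(A, y, K, n_iter=20):
    support = np.argsort(np.abs(A.conj().T @ y))[-K:]
    for _ in range(n_iter):
        resid = y - A[:, support] @ np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        cand = np.union1d(support, np.argsort(np.abs(A.conj().T @ resid))[-K:])
        coef = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        support = cand[np.argsort(np.abs(coef))[-K:]]      # prune back to K entries
    x_hat = np.zeros(A.shape[1], dtype=A.dtype)
    x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x_hat

# toy usage with a Gaussian A and a K-sparse x
rng = np.random.default_rng(11)
N, M, K = 256, 80, 8
A = rng.standard_normal((M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = subspace_pursuit(A, A @ x, K)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```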

Simulation results for low-rate OFDM channel estimation

Conclusions
Proposed framework: A = ΦΨ = U D B, with U a unit norm tight frame, D a random diagonal matrix, and B a bounded orthonormal matrix; M ≥ O(K log^2 K log^2 N).
Improved performance bounds for existing systems: random demodulation, compressive multiplexing, random probing, etc.
Novel compressive sensing frameworks: compressive imaging; sparse channel estimation for OFDM systems.