Compressed Sensing / Compressive Sampling

Daniel Weller, June 30, 2015

Introduction to Sampling
Sampling is composed of two operations:
Discretization: continuous-time to discrete-time.
Quantization: discrete-time to digital.
Discretization can be written as y[m] = ⟨x(t), s(t − t_m)⟩, i.e., filter x(t) with s(−t) and sample the output at t = t_m. Usually, we assume shift invariance: t_m = mT.
Question: What is s(−t) in classical DSP?
[Block diagram: x(t) → filter s(−t) → sample at t = t_m → y[m]]

Sampling and Reconstruction
Given a set of samples {y[m]}, how do we reconstruct x(t)?
Classical approach: with sampling period T, interpolate with a filter h(t): x_r(t) = Σ_n y[n] h(t − nT).
Another interpretation: {h(t − nT)} is a basis for x_r(t). In inverse problem form with coefficients {α_n}, x_r(t) = Σ_n α_n h(t − nT), and we solve for the α_n from the samples.
We will mainly consider the finite case: M samples and N coefficients.
[Block diagram: y[m] × impulse train → filter h(t) → x_r(t)]
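As a concrete illustration of the classical approach, the sketch below interpolates samples of a bandlimited signal with shifted sinc kernels. The test signal, sampling period, and evaluation grid are made-up choices for this example, not from the slides.

```python
import numpy as np

# Classical reconstruction by interpolation: x_r(t) = sum_n y[n] * h(t - n*T).
# The test signal, sampling period, and kernel are illustrative choices.
T = 0.5                                       # sampling period
n = np.arange(-30, 31)                        # sample indices
x = lambda t: np.cos(2 * np.pi * 0.4 * t)     # bandlimited test signal (0.4 Hz < 1/(2T) = 1 Hz)
y = x(n * T)                                  # samples y[n] = x(nT)

t = np.linspace(-5, 5, 1001)                  # fine grid for the reconstruction
H = np.sinc((t[:, None] - n[None, :] * T) / T)   # shifted kernels h(t - nT), with h = sinc
x_r = H @ y                                   # weighted sum of shifted kernels

print("max interpolation error:", np.abs(x_r - x(t)).max())
```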

Classical Approximation Theory
Least squares problem: minimize ||y − Aα||_2^2; the normal equations are A'Aα = A'y.
For A'A to be positive definite, A must have at least N linearly independent rows: we need M ≥ N.
For infinite-length signals, Shannon sampling theory covers bandlimited signals; the solution minimizes the error power.
How about signals with structure? How many samples do we need to reconstruct a sine wave?
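A minimal sketch of the finite least-squares fit, assuming a generic M×N system with M ≥ N and random data (placeholders, not from the slides): form the normal equations and solve.

```python
import numpy as np

# Solve the least-squares problem min ||y - A alpha||_2 via the normal equations A'A alpha = A'y.
rng = np.random.default_rng(0)
M, N = 100, 20                               # M >= N so A'A can be positive definite
A = rng.standard_normal((M, N))
alpha_true = rng.standard_normal(N)
y = A @ alpha_true

gram = A.T @ A                               # A'A, positive definite when rank(A) = N
alpha = np.linalg.solve(gram, A.T @ y)       # normal equations
print("coefficient error:", np.linalg.norm(alpha - alpha_true))
```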

Compression and Approximation
The more we know about a signal, the fewer samples we need to approximate it!
This basic idea underlies signal compression methods like MP3, JPEG2000, and others.
[Images: Peppers (uncompressed) vs. Peppers with JPEG compression]

Compressible Signals
What kinds of structure are useful to us?
Low dimensionality: x = Φα, where Φ is N×K with K << N.
Union of subspaces: Φ1, Φ2, … each define a subspace, and x lies in one of them.
Sparsity: the set X_K = {x : ||x||_0 ≤ K} of all K-sparse signals in the N-dimensional space (not the same as a Grassmannian…).
Others, like finite rate of innovation, are also possible…

What signals are compressible?
A signal is K-sparse if ||x||_0 ≤ K, i.e., at most K of its coefficients are nonzero.
A signal is approximately K-sparse if its best K-term approximation error σ_K(x) = min_{x̂ ∈ X_K} ||x − x̂|| is small enough.
[Images: Peppers (grayscale) vs. Peppers from 10% of its coefficients; 10% of coefficients → 2% error]
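To make approximate sparsity concrete, the sketch below keeps the largest 10% of DCT coefficients of a piecewise-smooth test signal, mirroring the Peppers example; the signal and the choice of the DCT basis are illustrative assumptions, not from the slides.

```python
import numpy as np
from scipy.fft import dct, idct

# Best K-term approximation in the DCT basis: keep the 10% largest-magnitude coefficients.
N = 1024
t = np.linspace(0, 1, N)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * (t > 0.6)   # smooth oscillation plus a jump

c = dct(x, norm='ortho')                 # transform coefficients
K = N // 10                              # keep 10% of them
drop = np.argsort(np.abs(c))[:-K]        # indices of the smallest coefficients
c_K = c.copy()
c_K[drop] = 0                            # zero out everything but the top K
x_K = idct(c_K, norm='ortho')

print("relative error:", np.linalg.norm(x - x_K) / np.linalg.norm(x))
```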

Sparsity
The set of signals with p-norm at most K is called the K-ball: {x : ||x||_p ≤ K}.
Suppose we have a single measurement y1 = a1'x. Where on the hyperplane {x : a1'x = y1} is the x that minimizes the p-norm?
[Figure: p-norm balls for p = 2, 1, 1/2, 1/4; as p decreases, the minimizer touches the hyperplane at increasingly sparse points]
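A hedged sketch of the same idea in code: with a single measurement, compare the minimum 2-norm and minimum 1-norm solutions, solving the latter as a small linear program. The dimensions and random a1 are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# With one measurement y1 = a1'x, the min-l2 solution spreads energy across all coordinates,
# while the min-l1 solution typically concentrates on a single coordinate (a sparse point).
rng = np.random.default_rng(1)
n = 10
a1 = rng.standard_normal(n)
y1 = 3.0

x_l2 = a1 * y1 / np.dot(a1, a1)              # closed-form minimum-l2 point on the hyperplane

# Minimum-l1 point via an LP: write x = u - v with u, v >= 0 and minimize sum(u + v).
c = np.ones(2 * n)
A_eq = np.concatenate([a1, -a1])[None, :]
res = linprog(c, A_eq=A_eq, b_eq=[y1], bounds=[(0, None)] * (2 * n))
x_l1 = res.x[:n] - res.x[n:]

print("nonzeros, min-l2:", int(np.sum(np.abs(x_l2) > 1e-8)))   # typically n
print("nonzeros, min-l1:", int(np.sum(np.abs(x_l1) > 1e-8)))   # typically 1
```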

The Compressed Sensing Problem
Finite measurements y are generated by y = Ax with an M×N sensing matrix A, where M < N.
If we know x is K-sparse (K << N), when is x determined uniquely by y?
Null space condition: this holds when the null space of A contains no nonzero 2K-sparse vectors. Why? If two distinct K-sparse signals x1 and x2 gave Ax1 = Ax2, then x1 − x2 would be a nonzero 2K-sparse vector in the null space.
More formal conditions follow.

A Spark for Compressed Sensing
Spark: the minimum number of columns of A that are linearly dependent.
Theorem: every K-sparse x is uniquely determined by y = Ax iff spark(A) > 2K.
What if x is compressible instead? We need to modify our condition to ensure the null space of A is not too compressible (the null space property): no nonzero h in the null space may concentrate its energy on K coordinates, e.g., ||h_Λ||_2 ≤ C ||h_{Λᶜ}||_1 / √K for all index sets with |Λ| ≤ K.
This condition is related to the recovery guarantee for compressible signals.
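Spark is combinatorial to compute, but for a tiny matrix a brute-force check is easy to write; the sketch below (with a made-up 2×4 example) simply tests column subsets of increasing size.

```python
import numpy as np
from itertools import combinations

# spark(A): the smallest number of columns of A that are linearly dependent.
def spark(A, tol=1e-10):
    M, N = A.shape
    for k in range(1, N + 1):
        for cols in combinations(range(N), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k                     # found k linearly dependent columns
    return N + 1                             # all columns independent (requires M >= N)

A = np.array([[1., 0., 1., 2.],
              [0., 1., 1., 1.]])
print("spark(A) =", spark(A))                # any 3 columns in R^2 are dependent, so spark = 3
print("unique recovery guaranteed for K <", spark(A) / 2)
```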

R.I.P., Compressed Sensing
We assumed y = Ax exactly, with no noise. The restricted isometry property (RIP) extends the idea to noisy recovery.
An isometry preserves the norm, so the RIP of order K states that A approximately preserves the norm of every K-sparse x: (1 − δ_K) ||x||_2^2 ≤ ||Ax||_2^2 ≤ (1 + δ_K) ||x||_2^2 for all x ∈ X_K.
A lower bound of this form is necessary for stability with noise e: if recovery from y = Ax + e has error at most C||e||_2, then ||Ax||_2 ≥ (1/C)||x||_2 for all 2K-sparse x. Here, the lower RIP-2K bound holds with δ_2K = 1 − 1/C².

RIP Measurement Bounds
If A satisfies the RIP of order 2K with constant δ_2K ≤ 1/2, then M ≥ C K log(N/K) for a fixed constant C.
Proof sketch: first, construct a finite subset X of X_K whose points are well separated. Since A satisfies RIP-2K, their images under A remain well separated. A sphere-packing argument then bounds how many such points can fit in the range of A, which yields the bound on M.
Details are in M. A. Davenport et al., Compressed Sensing: Theory and Applications, Y. C. Eldar and G. Kutyniok, eds., Cambridge, 2015, pp. 45-47.

Mutual (In-)Coherence
The coherence μ(A) of a matrix A is the largest normalized inner product between two different columns: μ(A) = max_{i ≠ j} |⟨a_i, a_j⟩| / (||a_i||_2 ||a_j||_2).
It is possible to show spark(A) ≥ 1 + 1/μ(A). Thus, we have a coherence bound for exact recovery of K-sparse signals: recovery is guaranteed whenever K < (1 + 1/μ(A)) / 2.
Also, A with unit-norm columns satisfies RIP-K with δ_K = (K − 1) μ(A) for all K < 1/μ(A). Thus, the less coherent A is, the better its RIP constant.
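Coherence, unlike spark and RIP, is cheap to compute. A small sketch (random Gaussian A with arbitrary dimensions, chosen only for illustration) evaluates μ(A) and the resulting spark and sparsity bounds.

```python
import numpy as np

# Mutual coherence: largest normalized inner product between two distinct columns of A.
def coherence(A):
    U = A / np.linalg.norm(A, axis=0)     # unit-norm columns
    G = np.abs(U.T @ U)                   # |<a_i, a_j>| for all pairs
    np.fill_diagonal(G, 0)                # ignore i = j
    return G.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
mu = coherence(A)
print("mu(A)                 =", mu)
print("spark(A)             >=", 1 + 1 / mu)
print("exact recovery for K <", 0.5 * (1 + 1 / mu))
```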

Matrices for CS
Some deterministic matrices have properties like minimum coherence or maximum spark: equiangular tight frames (ETFs) and Vandermonde matrices.
Random matrices can satisfy these properties without the limitations of deterministic construction: iid Gaussian and Bernoulli matrices satisfy RIP with high probability when M is on the order of K log(N/K).
Such constructions are universal, in that RIP is satisfied irrespective of the signal basis.
[Figure: the Mercedes-Benz ETF]
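Generating the random constructions mentioned above is a one-liner each; the sketch below draws iid Gaussian and Bernoulli (±1) matrices scaled so their columns have (near) unit norm. The dimensions are arbitrary placeholders.

```python
import numpy as np

# Two standard random sensing matrices that satisfy RIP with high probability
# when M is on the order of K log(N/K).
rng = np.random.default_rng(0)
M, N = 64, 256

A_gauss = rng.standard_normal((M, N)) / np.sqrt(M)            # iid N(0, 1/M) entries
A_bern = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)    # iid +/- 1/sqrt(M) entries

print("mean column norm (Gaussian): ", np.linalg.norm(A_gauss, axis=0).mean())
print("mean column norm (Bernoulli):", np.linalg.norm(A_bern, axis=0).mean())
```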

Matrices for CS
For image processing, an iid random A may be extremely large. Instead, we can randomly subsample a structured, deterministic sensing matrix; the Fourier transform is used for MRI and some optical systems.
The coherence and RIP bounds are not quite as good, and such matrices are not universal.
Some work (e.g., SparseMRI) empirically verifies the incoherence of a randomly subsampled sensing matrix.
We can also construct dictionaries from data.
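For large signals it is common to apply the subsampled transform implicitly rather than storing A. The sketch below builds a randomly subsampled, orthonormal DFT operator as a forward/adjoint function pair; the sizes and sampling pattern are illustrative assumptions.

```python
import numpy as np

# Randomly subsampled Fourier sensing operator, applied via the FFT instead of a dense matrix.
rng = np.random.default_rng(0)
N, M = 4096, 512
rows = np.sort(rng.choice(N, size=M, replace=False))   # random subset of DFT rows

def A(x):                         # forward operator: orthonormal DFT, then keep M rows
    return np.fft.fft(x, norm='ortho')[rows]

def AT(y):                        # adjoint: zero-fill the missing rows, inverse DFT
    z = np.zeros(N, dtype=complex)
    z[rows] = y
    return np.fft.ifft(z, norm='ortho')

x = np.zeros(N)
x[rng.choice(N, size=10, replace=False)] = 1.0          # a 10-sparse test signal
y = A(x)
print("measurements:", y.shape, "| adjoint output:", AT(y).shape)
```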

CS Reconstruction Formulation
Consider the sparse recovery problem:
Exact: minimize ||x||_0 subject to y = Ax.
Noisy: minimize ||x||_0 subject to ||y − Ax||_2 ≤ ε.
The convex relaxation also yields a sparse solution: minimize ||x||_1 subject to ||y − Ax||_2 ≤ ε.
An unconstrained version also is popular: minimize (1/2)||y − Ax||_2^2 + λ||x||_1.
The matrix A may include a dictionary Φ. We will describe several standard approaches.

(Orthogonal) Matching Pursuit
At each step, include the next column of A that best correlates with the residual. The columns of A must have unit norm.
The ith step chooses an atom λ_i according to λ_i = argmax_λ |⟨a_λ, r_{i−1}⟩|.
If A_i is the collection of atoms λ_1, …, λ_i, the new signal estimate is x_i = argmin_x ||y − A_i x||_2, a least-squares fit on the selected atoms.
The new residual is r_i = y − A_i x_i. This residual is orthogonal to the columns of A_i.
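A compact OMP sketch following the steps above (correlate, grow the support, least-squares re-fit, update the residual). The random problem instance is illustrative and the implementation is unoptimized.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal matching pursuit for K-sparse recovery; assumes unit-norm columns."""
    N = A.shape[1]
    support, r = [], y.copy()
    x = np.zeros(N)
    for _ in range(K):
        lam = int(np.argmax(np.abs(A.T @ r)))          # atom best correlated with the residual
        if lam not in support:
            support.append(lam)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)   # re-fit on the support
        x = np.zeros(N)
        x[support] = coef
        r = y - A @ x                                  # residual, orthogonal to chosen atoms
    return x

rng = np.random.default_rng(0)
M, N, K = 64, 256, 8
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)                         # unit-norm columns
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x_hat = omp(A, A @ x_true, K)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```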

Iterative Hard Thresholding
Another approach repeatedly thresholds the non-sparse coefficients.
At each step, the normal residual A'(y − Ax_{i−1}) is added to x_{i−1}, and the result is hard-thresholded (keeping the K largest-magnitude entries) to form x_i: x_i = H_K(x_{i−1} + A'(y − Ax_{i−1})).
We can also view IHT as thresholding the separable quadratic approximation to (1/2)||y − Ax||_2^2 around x_{i−1}.
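A minimal IHT sketch matching the update above. For simplicity it uses a fixed, conservative step size 1/||A||_2^2 rather than the unit step or the adaptive step of normalized IHT, and the problem instance is an illustrative assumption.

```python
import numpy as np

def iht(A, y, K, iters=300):
    """Iterative hard thresholding: gradient step, then keep the K largest-magnitude entries."""
    N = A.shape[1]
    mu = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative fixed step size
    x = np.zeros(N)
    for _ in range(iters):
        z = x + mu * A.T @ (y - A @ x)         # add the (scaled) normal residual
        x = np.zeros(N)
        keep = np.argsort(np.abs(z))[-K:]      # indices of the K largest magnitudes
        x[keep] = z[keep]                      # hard thresholding H_K(z)
    return x

rng = np.random.default_rng(0)
M, N, K = 256, 512, 8
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x_hat = iht(A, A @ x_true, K)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```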

Convex Optimization
The unconstrained version is minimize (1/2)||y − Ax||_2^2 + λ||x||_1.
Iterative soft thresholding (ISTA) is similar to IHT: x_i = S_{λt}(x_{i−1} + t A'(y − Ax_{i−1})), where S_τ is the soft-thresholding operator and t is a step size.
Split Bregman iteration also is popular: it splits the ℓ1 term onto an auxiliary variable and alternates a quadratic update with a thresholding update. This method is the same as ADMM (with one split).
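An ISTA sketch for the unconstrained problem; λ, the noise level, and the problem dimensions are hand-picked for illustration, not taken from the slides.

```python
import numpy as np

def soft(z, thresh):
    """Soft-thresholding operator S_thresh(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def ista(A, y, lam, iters=500):
    """Iterative soft thresholding for min_x 0.5*||y - Ax||_2^2 + lam*||x||_1."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + t * A.T @ (y - A @ x), lam * t)   # gradient step, then soft threshold
    return x

rng = np.random.default_rng(0)
M, N, K = 128, 256, 8
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
y = A @ x_true + 0.01 * rng.standard_normal(M)         # noisy measurements
x_hat = ista(A, y, lam=0.01)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```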

Convex Optimization
The constrained version (BPDN) is minimize ||x||_1 subject to ||y − Ax||_2 ≤ ε.
When ε = 0 and x is real, the problem is an LP.
SPGL1 solves the related LASSO problem, minimize ||y − Ax||_2 subject to ||x||_1 ≤ τ, and by finding the τ that maps to ε, solves BPDN.
SPGL1 uses a linesearch-based projected gradient approach to solve the LASSO subproblems.
Details are in E. van den Berg and M. P. Friedlander, SIAM J. Sci. Comput., 31(2), pp. 890-912, 2008.