Short-course: Compressive Sensing of Videos. Venue: CVPR 2012, Providence, RI, USA, June 16, 2012. Organizers: Richard G. Baraniuk, Mohit Gupta, Aswin C. Sankaranarayanan, Ashok Veeraraghavan

Part 2: Compressive sensing. Motivation, theory, recovery
– Linear inverse problems
– Sensing visual signals
– Compressive sensing: theory, hallmarks, recovery algorithms
– Model-based compressive sensing: models specific to visual signals

Linear inverse problems

Many classic problems in computer vision can be posed as linear inverse problems. Notation: signal of interest x in R^N; observations y in R^M; measurement model y = Φx + e, where Φ is the M×N measurement matrix and e is measurement noise. Problem definition: given y, recover x.

Linear inverse problems. Problem definition: given y, recover x. Scenario 1: M ≥ N. We can invert the system of equations; the focus is on robustness to noise via signal priors.

Linear inverse problems. Problem definition: given y, recover x. Scenario 2: M < N. The measurement matrix Φ has an (N−M)-dimensional null space, so the solution is no longer unique. Many interesting vision problems fall under this scenario. Key quantity of concern: the under-sampling ratio M/N.
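Not part of the original slides: a minimal numpy sketch of this non-uniqueness, with illustrative dimensions. Adding any null-space component to x yields exactly the same observations y.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
M, N = 4, 8                          # under-determined: M < N
Phi = rng.standard_normal((M, N))    # measurement matrix
x = rng.standard_normal(N)
y = Phi @ x

Z = null_space(Phi)                  # basis of the (N - M)-dim null space
x_alt = x + Z @ rng.standard_normal(Z.shape[1])

print(np.allclose(Phi @ x_alt, y))   # True: a different signal, same y
```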

Image super-resolution. Low-resolution input/observation: 128×128 pixels.

Image super-resolution: 2x super-resolution.

Image super-resolution: 4x super-resolution.

Image super-resolution. Super-resolution factor D; under-sampling ratio M/N = 1/D^2. General rule: the smaller the under-sampling ratio, the more the unknowns and hence the harder the super-resolution problem (e.g., 4x super-resolution means only 1/16 of the pixels are observed).

Many other vision problems…
– Affine rank minimization, matrix completion, deconvolution, robust PCA
– Image synthesis: infilling, denoising, etc.
– Light transport: reflectance fields, BRDFs, direct/global separation, light transport matrices
– Sensing

Sensing visual signals

High-dimensional visual signals. Examples: human skin (sub-surface scattering), electron microscopy (tomography), reflection, refraction, fog (volumetric scattering).

The plenoptic function: the collection of all variations of light in a scene [Adelson and Bergen, 1991]. Its dimensions: space (3D), angle (2D), spectrum (1D), time (1D). Different slices reveal different scene properties.

The plenoptic function: different devices sample different slices. Angle (2D): Lytro light-field camera; time (1D): high-speed cameras; spectrum (1D): hyper-spectral imaging.

Sensing the plenoptic function: it is high-dimensional. At 1000 samples per dimension, the 7-dimensional plenoptic function has 10^21 samples, greater than all the storage in the world. Traditional theories of sensing fail us!

Resolution trade-off. Key enabling factor: spatial resolution is cheap! Commercial cameras have tens of megapixels. One idea is that we trade off spatial resolution for resolution along some other axis.

Spatio-angular tradeoff [Ng, 2005]

Spatio-angular tradeoff [Levoy et al. 2006]

Spatio-temporal tradeoff [Bub et al., 2010]: stagger pixel shutters within each exposure.

Spatio-temporal tradeoff [Bub et al., 2010]: rearrange to get high-temporal-resolution video at lower spatial resolution.

Resolution trade-off: a very powerful and simple idea. Drawbacks:
– Does not extend to the non-visible spectrum (a 1-megapixel SWIR camera costs tens of thousands of dollars)
– Trade-offs are linear and global
– With today's technology, cannot obtain more than a 10x gain for video without sacrificing spatial resolution completely

Compressive sensing

Sense by Sampling: sample.

Sense by Sampling: sample… too much data!

Sense then Compress: sample, then compress (JPEG, JPEG2000, …), then decompress.

Sparsity: image pixels and their large wavelet coefficients (blue = 0).

Sparsity: image pixels and their large wavelet coefficients (blue = 0); wideband signal samples and their large Gabor (time-frequency) coefficients.

Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero (plot: magnitude vs. sorted index).

Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero; model: union of K-dimensional subspaces aligned with the coordinate axes.

Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero; model: union of K-dimensional subspaces. Compressible signal: sorted coordinates decay rapidly with a power law.

Concise Signal Structure. Sparse signal: only K out of N coordinates nonzero; model: union of K-dimensional subspaces. Compressible signal: sorted coordinates decay rapidly with a power law; model: ℓp ball with p ≤ 1.

What's wrong with this picture? Why go to all the work to acquire N samples only to discard all but K pieces of data? (Pipeline: sample, compress, decompress.)

What's wrong with this picture? Sampling uses linear processing and a linear signal model (bandlimited subspace), while compression uses nonlinear processing and a nonlinear signal model (union of subspaces).

Compressive Sensing: directly acquire "compressed" data via dimensionality reduction; replace samples by more general "measurements", then recover.

Sampling. Signal x is K-sparse in a basis/dictionary; WLOG assume it is sparse in the space domain, so x itself has K nonzero entries.

Sampling. Signal x is K-sparse in a basis/dictionary; WLOG assume it is sparse in the space domain. Take measurements y = Φx, where Φ is an M×N matrix.

Compressive Sampling: when data is sparse/compressible, we can directly acquire a condensed representation with no/little information loss through the linear dimensionality reduction y = Φx, mapping the K-sparse, length-N signal x to M measurements, with K < M ≪ N.
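A minimal sketch of such an acquisition (illustrative dimensions and a Gaussian Φ; not from the slides). The recovery sketches further below reuse this Phi and y.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 512, 64, 8                 # ambient dim, measurements, sparsity

# K-sparse signal: K nonzero entries at random locations
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# random Gaussian measurement matrix, scaled so columns are near unit norm
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x                          # M linear measurements, M << N
```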

How can it work? The projection Φ is not full rank, and so loses information in general: infinitely many x's map to the same y (they differ by null-space vectors).

How can it work? Φ is not full rank and loses information in general, but we are only interested in sparse vectors.

How can it work? Φ is not full rank, but we are only interested in sparse vectors: for a fixed support of size K, Φ is effectively an M×K matrix (only K of its N columns are active).

How can it work? Design Φ so that each of its M×K submatrices is full rank, ideally close to an orthobasis: the Restricted Isometry Property (RIP).

Restricted Isometry Property (RIP): preserve the structure of sparse/compressible signals, i.e., of the union of K-dimensional subspaces.

Restricted Isometry Property (RIP): a "stable embedding". RIP of order 2K implies that for all K-sparse x1 and x2, (1 − δ) ||x1 − x2||_2^2 ≤ ||Φx1 − Φx2||_2^2 ≤ (1 + δ) ||x1 − x2||_2^2.

RIP = Stable Embedding: an information-preserving projection preserves the geometry of the set of sparse signals. RIP ensures that distances between sparse signals are approximately preserved under Φ.

How can it work? Design Φ so that each of its M×K submatrices is full rank (RIP). Unfortunately, this is a combinatorial, NP-complete design problem.

Insight from the 70s [Kashin, Gluskin]: draw Φ at random, with iid Gaussian or iid Bernoulli (±1) entries. Then Φ has the RIP with high probability provided M = O(K log(N/K)).
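This concentration is easy to observe numerically. The following sketch (an illustration, not a proof; dimensions are arbitrary) draws an iid Gaussian Φ and checks how tightly ||Φx||^2 / ||x||^2 clusters around 1 over random K-sparse signals:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K, trials = 512, 128, 8, 2000

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # iid Gaussian, scaled

ratios = []
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)

print(min(ratios), max(ratios))   # clustered near 1: a stable embedding
```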

Randomized Sensing: measurements y = Φx are random linear combinations of the entries of the K-sparse signal x. No information is lost for sparse vectors, with high probability.

CS Signal Recovery. Goal: recover the signal x from the measurements y. Problem: the random projection Φ is not full rank (an ill-posed inverse problem). Solution: exploit the sparse/compressible geometry of the acquired signal.

CS Signal Recovery. The random projection is not full rank. Recovery problem: given y, find x. The feasible set {x' : Φx' = y} is an (N−M)-dimensional hyperplane at a random angle (a translated null space); search it for the "best" x according to some criterion, e.g., least squares.

Signal Recovery. Recovery: given y (ill-posed inverse problem), find x (sparse). Optimization: minimize ||x||_2 subject to Φx = y. Closed-form solution: the pseudo-inverse, x̂ = Φ^T (ΦΦ^T)^(−1) y.

Signal Recovery. Recovery: given y (ill-posed inverse problem), find x (sparse). Optimization: minimize ||x||_2 subject to Φx = y; closed-form solution x̂ = Φ^T (ΦΦ^T)^(−1) y. Wrong answer! The minimum-ℓ2 solution is essentially never sparse.
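A two-line check of this failure, reusing Phi and y from the sampling sketch above (not from the slides):

```python
import numpy as np

x_l2 = np.linalg.pinv(Phi) @ y         # minimum-l2 solution, consistent with y
print(np.sum(np.abs(x_l2) > 1e-6))     # roughly N nonzeros: dense, not sparse
```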

Signal Recovery. Recovery: given y (ill-posed inverse problem), find x (sparse). Optimization: minimize ||x||_0 subject to Φx = y: "find the sparsest vector in the translated null space".

Signal Recovery. Recovery: given y (ill-posed inverse problem), find x (sparse). Optimization: minimize ||x||_0 subject to Φx = y: "find the sparsest vector in the translated null space". Correct! But an NP-complete algorithm: it requires a combinatorial search over all possible supports.

Signal Recovery. Recovery: given y (ill-posed inverse problem), find x (sparse). Optimization: convexify, replacing the ℓ0 norm with the ℓ1 norm: minimize ||x||_1 subject to Φx = y [Candès, Romberg, Tao; Donoho].

Signal Recovery. Recovery: given y (ill-posed inverse problem), find x (sparse). Optimization: convexify, minimizing ||x||_1 subject to Φx = y. Correct! And a polynomial-time algorithm (linear programming).
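A minimal basis-pursuit sketch via an off-the-shelf LP solver (scipy's linprog), using the standard split x = u − v with u, v ≥ 0 so that ||x||_1 = sum(u + v); a sketch, not the course's own code:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1  s.t.  Phi @ x == y, posed as a linear program."""
    M, N = Phi.shape
    c = np.ones(2 * N)                   # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])        # Phi @ (u - v) == y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:N], res.x[N:]
    return u - v

# with Phi, y from the sampling sketch above, basis_pursuit(Phi, y)
# recovers the K-sparse x exactly (up to solver tolerance)
```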

Compressive Sensing: signal recovery via ℓ1 optimization [Candès, Romberg, Tao; Donoho] from random measurements y = Φx of the K-sparse signal x.

Compressive Sensing: signal recovery from random measurements via iterative greedy algorithms
– (orthogonal) matching pursuit [Gilbert, Tropp]
– iterated thresholding [Nowak, Figueiredo; Kingsbury, Reeves; Daubechies, Defrise, De Mol; Blumensath, Davies; …]
– CoSaMP [Needell and Tropp]

Greedy recovery algorithm #1. Consider the problem: minimize ||y − Φx||_2^2 subject to x being K-sparse. If we wanted to minimize just the cost, steepest gradient descent works as x ← x + Φ^T (y − Φx). But the new estimate is no longer K-sparse.

Iterated Hard Thresholding: repeat
– update residual: r = y − Φx̂
– update signal estimate: b = x̂ + Φ^T r
– prune signal estimate: x̂ = best K-term approximation of b
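A compact numpy implementation of this loop (a sketch, not the course's code; the explicit step size mu is one common way to keep the iteration stable when Φ is not normalized):

```python
import numpy as np

def iht(Phi, y, K, iters=200):
    """Iterated Hard Thresholding: gradient step, then keep K largest entries."""
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2   # step size for a stable iteration
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        r = y - Phi @ x                      # update residual
        b = x + mu * Phi.T @ r               # gradient step on ||y - Phi x||^2
        x = np.zeros_like(b)                 # prune: best K-term approximation
        idx = np.argsort(np.abs(b))[-K:]
        x[idx] = b[idx]
    return x
```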

Greedy recovery algorithm #2. Consider the problem y = Φx where x is 1-sparse. Can we recover the support (the location of the single nonzero entry)?

Greedy recovery algorithm #2. If x is 1-sparse, then the largest entry of |Φ^T y| (the column of Φ most correlated with y) gives the support of x. How to extend this to K-sparse signals?

Greedy recovery algorithm #2, for a K-sparse signal: compute the residual r = y − Φx̂; find the atom maximizing |⟨φ_i, r⟩|; add that atom to the support; compute the signal estimate by least squares over the support; repeat.

Orthogonal Matching Pursuit: update the residual, find the atom most correlated with the residual, add it to the support, and update the signal estimate by least squares over the support.
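A direct numpy transcription of these steps (a sketch under the same assumptions as the IHT code above):

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal Matching Pursuit: greedily build a K-atom support."""
    support, r = [], y.copy()
    for _ in range(K):
        i = int(np.argmax(np.abs(Phi.T @ r)))    # atom most correlated with r
        support.append(i)
        # least-squares signal estimate over the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef           # update residual
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```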

Specialized solvers:
– CoSaMP [Needell and Tropp, 2009]
– SPGL1 [van den Berg and Friedlander, 2008]
– FPC [Hale, Yin, and Zhang, 2007]
– AMP [Donoho, Maleki, and Montanari, 2010]
– many, many others; see dsp.rice.edu/cs

CS Hallmarks
– Stable: the acquisition/recovery process is numerically stable
– Asymmetrical (most processing at the decoder): conventional codecs use a smart encoder and dumb decoder; CS uses a dumb encoder and smart decoder
– Democratic: each measurement carries the same amount of information; robust to measurement loss and quantization ("digital fountain" property)
– Encrypted: random measurements are essentially encrypted
– Universal: the same random projections / hardware can be used for any sparse signal class (generic)

Universality: random measurements can be used for signals sparse in any basis.

Universality: random measurements can be used for signals sparse in any basis Ψ. With x = Ψα, y = Φx = ΦΨα, where α is the sparse coefficient vector with K nonzero entries.

Summary: CS. Compressive sensing:
– randomized dimensionality reduction
– exploits signal sparsity information
– integrates sensing, compression, processing
Why it works: with high probability, random projections preserve information in signals with concise geometric structures (sparse signals, compressible signals).

Summary: CS. Encoding: y = Φx, random linear combinations of the entries of the sparse signal x. Decoding: recover x from the measurements y via sparse optimization.

Image/video-specific signal models and recovery algorithms

Transform basis. Recall universality: random measurements can be used for signals sparse in any basis, e.g., DCT/FFT/wavelets. These have fast transforms, which is very useful in large-scale problems.

Dictionary learning. For many signal classes (e.g., videos, light fields), there is no obvious sparsifying transform basis. Can we learn a sparsifying transform instead? GOAL: given training data x1, …, xn, learn a "dictionary" D such that each xk = D sk with sk sparse.

Dictionary learning. GOAL: given training data, learn a dictionary D such that the coefficient vectors sk are sparse: minimize over D and S = [s1, …, sn] the error Σk ||xk − D sk||_2^2 subject to ||sk||_0 ≤ T.

Dictionary learning. The sparsity constraint ||sk||_0 ≤ T is non-convex, and the objective is bilinear in D and S.

Dictionary learning. The problem is biconvex in D and S:
– given D, the optimization over each sk is a standard sparse-coding problem (convex under an ℓ1 relaxation)
– given S, the optimization over D is a least-squares problem
K-SVD: solve using alternating-minimization techniques
– start with D = wavelet or DCT bases
– additional pruning steps control the size of the dictionary
[Aharon et al., TSP 2006]
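A minimal alternating-minimization sketch in the MOD style (a simpler cousin of K-SVD: the dictionary update is a single least-squares solve rather than per-atom SVDs). It reuses the omp function from the sketch above for the sparse-coding step; dimensions and iteration counts are illustrative:

```python
import numpy as np

def learn_dictionary(X, n_atoms, T, iters=20, seed=0):
    """MOD-style dictionary learning: alternate sparse coding and LS update.

    X: (d, n) training data, one signal per column.
    T: target sparsity of each coefficient vector.
    """
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    for _ in range(iters):
        # fix D: sparse-code every training signal (OMP sketch from above)
        S = np.column_stack([omp(D, x, T) for x in X.T])
        # fix S: least-squares dictionary update, then re-normalize atoms
        D = X @ np.linalg.pinv(S)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D
```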

Dictionary learning.
Pros:
– ability to handle arbitrary domains
Cons:
– learning dictionaries can be computationally intensive for high-dimensional problems and needs a very large amount of data
– recovery algorithms may suffer from the lack of fast transforms

Models on image gradients. Piecewise-constant images have sparse image gradients; natural image statistics exhibit heavy-tailed gradient distributions.

Total variation prior. The TV norm, ||x||_TV = Σ_i |∇x(i)| (the sum of gradient magnitudes), is a sparse-gradient-promoting norm. Formulation of the recovery problem: minimize ||x||_TV subject to y = Φx (or ||y − Φx||_2 ≤ ε under noise).

Total variation prior. The optimization problem:
– convex
– often works "better" than transform-basis methods
Variants:
– 3D (video)
– anisotropic TV
Code:
– TVAL3
– many, many others (see dsp.rice.edu/cs)
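For concreteness, the anisotropic TV norm of an image is just the sum of absolute horizontal and vertical differences; a tiny sketch (not from the course):

```python
import numpy as np

def tv_norm(img):
    """Anisotropic total variation: sum of absolute forward differences."""
    dx = np.abs(np.diff(img, axis=1)).sum()   # horizontal gradients
    dy = np.abs(np.diff(img, axis=0)).sum()   # vertical gradients
    return dx + dy

flat = np.zeros((64, 64)); flat[:, 32:] = 1.0         # piecewise constant
noisy = np.random.default_rng(0).standard_normal((64, 64))
print(tv_norm(flat), tv_norm(noisy))   # TV strongly favors the flat image
```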

Beyond sparsity: model-based CS

Beyond Sparse Models. The sparse signal model captures simplistic primary structure: wavelets for natural images, Gabor atoms for chirps/tones, pixels for background-subtracted images.

Beyond Sparse Models. The sparse signal model captures simplistic primary structure; modern compression/processing algorithms capture richer secondary coefficient structure (wavelets: natural images; Gabor atoms: chirps/tones; pixels: background-subtracted images).

Sparse Signals. K-sparse signals comprise a particular set of K-dimensional subspaces.

Structured-Sparse Signals. A structured K-sparse signal model comprises a particular (reduced) set of K-dimensional subspaces [Blumensath and Davies]. Fewer subspaces imply a relaxed RIP, which implies stable recovery using fewer measurements M.

Wavelet Sparse: typical of wavelet transforms of natural signals and images (piecewise smooth).

Tree-Sparse. Model: K-sparse coefficients, plus the significant coefficients lie on a rooted subtree. Typical of wavelet transforms of natural signals and images (piecewise smooth).

Wavelet Sparse. Model: K-sparse coefficients (a union of K-planes). RIP: stable embedding requires M = O(K log(N/K)) measurements.

Tree-Sparse. Model: K-sparse coefficients, plus the significant coefficients lie on a rooted subtree. Tree-RIP: stable embedding with M = O(K) measurements [Blumensath and Davies].

Tree-Sparse. Model: K-sparse coefficients, plus the significant coefficients lie on a rooted subtree. Tree-RIP: stable embedding [Blumensath and Davies]. Recovery: inject the tree-sparse approximation into IHT/CoSaMP in place of the best K-term approximation.

Recall Iterated Hard Thresholding: update the signal estimate, prune the signal estimate (best K-term approximation), update the residual.

Iterated Model Thresholding: update the signal estimate, prune the signal estimate (best K-term model approximation), update the residual.
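The change relative to plain IHT is one line: the pruning step projects onto the structured model instead of keeping the K largest entries. A hedged sketch with a pluggable model-approximation step (a real tree projection, e.g. condensing sort-and-select, would slot in for tree-sparse recovery):

```python
import numpy as np

def model_iht(Phi, y, model_approx, iters=200):
    """IHT with best-K-term pruning replaced by a model approximation step."""
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2   # step size, as in plain IHT
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        b = x + mu * Phi.T @ (y - Phi @ x)   # gradient step
        x = model_approx(b)                  # prune onto the signal model
    return x

def k_sparse_approx(b, K=8):
    """Plain K-sparse pruning: with this choice, model_iht is standard IHT."""
    out = np.zeros_like(b)
    idx = np.argsort(np.abs(b))[-K:]
    out[idx] = b[idx]
    return out
```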

Tree-Sparse Signal Recovery (signal length N = 1024, M = 80 random measurements): target signal; CoSaMP (RMSE = 1.12); ℓ1 minimization (RMSE = 0.751); tree-sparse CoSaMP (RMSE = 0.037).

Clustered Signals (recoveries shown: target, Ising-model, CoSaMP, LP/FPC). A probabilistic approach via a graphical model: model the clustering of significant pixels in the space domain using an Ising Markov random field; the Ising-model approximation is performed efficiently using graph cuts [Cevher, Duarte, Hegde, Baraniuk '08].

Part 2: Compressive sensing. Motivation, theory, recovery
– Linear inverse problems
– Sensing visual signals
– Compressive sensing: theory, hallmarks, recovery algorithms
– Model-based compressive sensing: models specific to visual signals