1
Short Course: Compressive Sensing of Videos
Venue: CVPR 2012, Providence, RI, USA, June 16, 2012
Organizers: Richard G. Baraniuk, Mohit Gupta, Aswin C. Sankaranarayanan, Ashok Veeraraghavan
2
Part 2: Compressive sensing: motivation, theory, recovery
Linear inverse problems
Sensing visual signals
Compressive sensing
– Theory
– Hallmarks
– Recovery algorithms
Model-based compressive sensing
– Models specific to visual signals
3
Linear inverse problems
4
Many classic problems in computer vision can be posed as linear inverse problems
Notation
– Signal of interest: x, a vector of length N
– Observations: y, a vector of length M
– Measurement model: y = Φx + e, with measurement noise e and M×N measurement matrix Φ
Problem definition: given y, recover x
5
Linear inverse problems
Problem definition: given y, recover x
Scenario 1: M ≥ N and Φ is full rank
– We can invert the system of equations
– Focus is more on robustness to noise, via signal priors
6
Linear inverse problems
Problem definition: given y, recover x
Scenario 2: M < N
– Measurement matrix Φ has an (N − M)-dimensional null space
– Solution is no longer unique
– Many interesting vision problems fall under this scenario
– Key quantity of concern: under-sampling ratio M/N
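A minimal numerical sketch of Scenario 2 (sizes are illustrative): with M < N, the measurement matrix has a nontrivial null space, so distinct signals produce identical measurements.

import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 64                       # signal length, number of measurements (M < N)
Phi = rng.standard_normal((M, N))    # measurement matrix
x = rng.standard_normal(N)           # some ground-truth signal
y = Phi @ x                          # noise-free observations

# Add any null-space component: the measurements do not change.
_, _, Vt = np.linalg.svd(Phi)
null_vec = Vt[-1]                    # a direction in the (N - M)-dim null space of Phi
x_alt = x + 5.0 * null_vec
print(np.allclose(Phi @ x_alt, y))   # True: x and x_alt are indistinguishable from y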
7
Image super-resolution Low resolution input/observation 128x128 pixels
8
Image super-resolution 2x super-resolution
9
Image super-resolution 4x super-resolution
10
Image super-resolution
Super-resolution factor D gives under-sampling ratio M/N = 1/D² (e.g., 4× super-resolution means M/N = 1/16)
General rule: the smaller the under-sampling ratio, the more unknowns per measurement and hence the harder the super-resolution problem
11
Many other vision problems…
– Affine rank minimization, matrix completion, deconvolution, robust PCA
– Image synthesis: inpainting, denoising, etc.
– Light transport: reflectance fields, BRDFs, direct/global separation, light transport matrices
– Sensing
12
Sensing visual signals
13
High-dimensional visual signals
Examples: human skin (sub-surface scattering), electron microscopy (tomography), reflection and refraction, fog (volumetric scattering)
14
The plenoptic function
Collection of all variations of light in a scene [Adelson and Bergen, 1991]
Dimensions: space (3D), angle (2D), spectrum (1D), time (1D)
Different slices reveal different scene properties
15
The plenoptic function
Dimensions: space (3D), angle (2D), spectrum (1D), time (1D)
Example devices: Lytro light-field camera (angle), hyper-spectral imaging (spectrum), high-speed cameras (time)
16
Sensing the plenoptic function
High-dimensional
– 1000 samples per dimension over 7 dimensions gives a 10^21-dimensional signal
– Greater than all the storage in the world
Traditional theories of sensing fail us!
17
Resolution trade-off
Key enabling factor: spatial resolution is cheap! Commercial cameras have tens of megapixels
One idea is that we trade off spatial resolution for resolution along some other axis
18
Spatio-angular tradeoff [Ng, 2005]
19
Spatio-angular tradeoff [Levoy et al. 2006]
20
Spatio-temporal tradeoff [Bub et al., 2010]
Stagger per-pixel shutters within each exposure
21
Spatio-temporal tradeoff [Bub et al., 2010]
Rearrange to get a high-temporal-resolution video at lower spatial resolution
22
Resolution trade-off
Very powerful and simple idea
Drawbacks
– Does not extend to the non-visible spectrum: a 1-megapixel SWIR camera costs $50–100k
– Tradeoffs are linear and global
– With today’s technology, cannot obtain more than a 10× gain for video without sacrificing spatial resolution completely
23
Compressive sensing
24
Sense by Sampling
25
Sense by Sampling: too much data!
26
Sense then Compress
sample, then compress (JPEG, JPEG2000, …), then decompress
27
Sparsity
Image of N pixels: relatively few large wavelet coefficients (blue = 0)
28
Sparsity
– Image of N pixels: relatively few large wavelet coefficients (blue = 0)
– Wideband signal samples: relatively few large Gabor (time-frequency) coefficients
29
Concise signal structure
Sparse signal: only K out of N coordinates nonzero
(plot: coefficient magnitude vs. sorted index)
30
Concise signal structure
Sparse signal: only K out of N coordinates nonzero
– model: union of K-dimensional subspaces aligned with the coordinate axes
31
Concise signal structure
Sparse signal: only K out of N coordinates nonzero
– model: union of K-dimensional subspaces
Compressible signal: sorted coordinates decay rapidly according to a power law
32
Concise signal structure
Sparse signal: only K out of N coordinates nonzero
– model: union of K-dimensional subspaces
Compressible signal: sorted coordinates decay rapidly according to a power law
– model: ℓ_p ball with p ≤ 1
33
What’s Wrong with this Picture?
Why go to all the work to acquire N samples only to discard all but K pieces of data?
(sample, compress, decompress)
34
What’s Wrong with this Picture?
– Sampling: linear processing, linear signal model (bandlimited subspace)
– Compression: nonlinear processing, nonlinear signal model (union of subspaces)
35
Compressive Sensing
Directly acquire “compressed” data via dimensionality reduction
Replace samples by more general “measurements”, then recover
36
Sampling
Signal x is K-sparse in a basis/dictionary
– WLOG assume x is sparse in the space domain
Sparse signal x with K nonzero entries
37
Sampling
Signal x is K-sparse in a basis/dictionary
– WLOG assume x is sparse in the space domain
Measurements: y = Φx, with sparse signal x (K nonzero entries)
38
Compressive Sampling
When data is sparse/compressible, we can directly acquire a condensed representation with no/little information loss through linear dimensionality reduction: y = Φx, with M measurements (M < N) of a sparse signal x with K nonzero entries
39
How Can It Work?
Projection Φ is not full rank… and so loses information in general
Ex: infinitely many x’s map to the same y (they differ by elements of the null space of Φ)
40
How Can It Work?
Projection Φ is not full rank… and so loses information in general
But we are only interested in sparse vectors
41
How Can It Work?
Projection Φ is not full rank… and so loses information in general
But we are only interested in sparse vectors: for a fixed support of size K, Φ acts effectively as an M×K matrix
42
How Can It Work?
Projection Φ is not full rank… and so loses information in general
But we are only interested in sparse vectors
Design Φ so that each of its M×K submatrices is full rank (ideally close to an orthobasis)
– Restricted Isometry Property (RIP)
43
Restricted Isometry Property (RIP)
Preserve the structure of sparse/compressible signals (union of K-dim subspaces)
44
Restricted Isometry Property (RIP)
“Stable embedding”
RIP of order 2K implies: for all K-sparse x_1 and x_2,
(1 − δ) ||x_1 − x_2||_2² ≤ ||Φx_1 − Φx_2||_2² ≤ (1 + δ) ||x_1 − x_2||_2²
45
RIP = Stable Embedding
An information-preserving projection preserves the geometry of the set of sparse signals
RIP ensures that distances between K-sparse signals are approximately preserved after projection
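A small numerical illustration (not a proof) of the near-isometry that RIP formalizes: for a random Gaussian Φ scaled by 1/√M, distances between K-sparse signals are approximately preserved. The sizes and scaling below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
N, M, K = 512, 128, 8
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(1000):
    x1 = np.zeros(N); x2 = np.zeros(N)
    x1[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    x2[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    d = x1 - x2
    ratios.append(np.linalg.norm(Phi @ d) / np.linalg.norm(d))

print(min(ratios), max(ratios))      # both close to 1: distances are roughly preserved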
46
How Can It Work?
Projection Φ is not full rank… and so loses information in general
Design Φ so that each of its M×K submatrices is full rank (RIP)
Unfortunately, this is a combinatorial, NP-complete design problem
47
Insight from the 70’s [Kashin, Gluskin]
Draw Φ at random
– iid Gaussian entries
– iid Bernoulli (±1) entries
…
Then Φ has the RIP with high probability provided M = O(K log(N/K))
48
Randomized Sensing
Measurements y = Φx = random linear combinations of the entries of the signal
No information loss for sparse vectors, with high probability
(M measurements of a sparse signal with K nonzero entries)
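A minimal sketch of randomized sensing with the two ensembles mentioned above (iid Gaussian and iid ±1 Bernoulli); sizes are illustrative.

import numpy as np

rng = np.random.default_rng(2)
N, M, K = 256, 64, 8
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)     # K-sparse signal

Phi_gauss = rng.standard_normal((M, N)) / np.sqrt(M)            # iid Gaussian
Phi_bern = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)    # iid Bernoulli (+/- 1)

y_gauss = Phi_gauss @ x      # M random linear combinations of the signal entries
y_bern = Phi_bern @ x        # x is recoverable from either y with high probability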
49
CS Signal Recovery
Goal: recover signal x from measurements y
Problem: random projection Φ is not full rank (ill-posed inverse problem)
Solution: exploit the sparse/compressible geometry of the acquired signal
50
CS Signal Recovery
Random projection Φ is not full rank
Recovery problem: given y = Φx, find x
Null space: the feasible set {x : Φx = y} is an (N − M)-dim hyperplane at a random angle
Search in this set for the “best” x according to some criterion
– ex: least squares
51
Signal Recovery
Recovery: given y (ill-posed inverse problem), find x (sparse)
Optimization: minimize ||x||_2 subject to y = Φx
Closed-form solution: x = Φ^T (Φ Φ^T)^(−1) y
52
Signal Recovery
Recovery: given y (ill-posed inverse problem), find x (sparse)
Optimization: minimize ||x||_2 subject to y = Φx
Closed-form solution: x = Φ^T (Φ Φ^T)^(−1) y
Wrong answer! (the least-squares solution is not sparse)
54
Signal Recovery
Recovery: given y (ill-posed inverse problem), find x (sparse)
Optimization: minimize ||x||_0 subject to y = Φx
“find sparsest vector in translated nullspace”
55
Signal Recovery
Recovery: given y (ill-posed inverse problem), find x (sparse)
Optimization: minimize ||x||_0 subject to y = Φx
“find sparsest vector in translated nullspace”
Correct! But an NP-complete algorithm (combinatorial search over supports)
56
Signal Recovery
Recovery: given y (ill-posed inverse problem), find x (sparse)
Optimization: convexify by replacing ||x||_0 with ||x||_1: minimize ||x||_1 subject to y = Φx
[Candès, Romberg, Tao; Donoho]
57
Signal Recovery
Recovery: given y (ill-posed inverse problem), find x (sparse)
Optimization: minimize ||x||_1 subject to y = Φx
Correct! Polynomial-time algorithm (linear programming)
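A minimal sketch of ℓ_1 recovery (basis pursuit) posed as a linear program and solved with scipy.optimize.linprog; sizes are illustrative, and dedicated solvers (SPGL1, etc.) scale far better.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, M, K = 128, 48, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x_true

# min 1^T (xp + xn)  s.t.  Phi (xp - xn) = y,  xp >= 0, xn >= 0;  then x = xp - xn
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(x_hat - x_true))    # near zero when M is large enough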
58
Compressive Sensing
Signal recovery via ℓ_1 optimization [Candès, Romberg, Tao; Donoho]
(random measurements y = Φx of a sparse signal with K nonzero entries)
59
Compressive Sensing
Signal recovery via iterative greedy algorithms
– (orthogonal) matching pursuit [Gilbert, Tropp]
– iterated thresholding [Nowak, Figueiredo; Kingsbury, Reeves; Daubechies, Defrise, De Mol; Blumensath, Davies; …]
– CoSaMP [Needell and Tropp]
60
Greedy recovery algorithm #1
Consider the problem: minimize ||y − Φx||_2² subject to ||x||_0 ≤ K
If we only wanted to minimize the quadratic cost, steepest gradient descent would work: x ← x + Φ^T (y − Φx)
But the new estimate is no longer K-sparse
61
Iterated Hard Thresholding
– update signal estimate (gradient step)
– prune signal estimate (best K-term approximation)
– update residual
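A minimal sketch of iterated hard thresholding; the conservative step size is an implementation choice, not part of the slide.

import numpy as np

def iht(Phi, y, K, iters=200):
    x = np.zeros(Phi.shape[1])
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2       # conservative step size
    for _ in range(iters):
        r = y - Phi @ x                          # update residual
        b = x + mu * Phi.T @ r                   # gradient step on ||y - Phi x||^2
        keep = np.argsort(np.abs(b))[-K:]        # prune: best K-term approximation
        x = np.zeros_like(b)
        x[keep] = b[keep]
    return x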
62
Greedy recovery algorithm #2
Consider the problem: y = Φx, where x is 1-sparse
Can we recover the support of x?
63
Greedy recovery algorithm #2
Consider the problem: y = Φx, where x is 1-sparse
If the columns of Φ are (nearly) orthonormal, then the largest entry of Φ^T y gives the support of x
How to extend this to K-sparse signals?
64
Greedy recovery algorithm #2 (K-sparse signal)
– residual: r = y − Φx
– find atom: the column of Φ most correlated with r
– add atom to support
– signal estimate: least squares over the support
65
Orthogonal matching pursuit
– update residual
– find atom most correlated with the residual and add it to the support
– update signal estimate (least squares over the support)
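A minimal sketch of orthogonal matching pursuit following the steps above; a library implementation is available as sklearn.linear_model.OrthogonalMatchingPursuit.

import numpy as np

def omp(Phi, y, K):
    support = []
    r = y.copy()
    for _ in range(K):
        j = int(np.argmax(np.abs(Phi.T @ r)))            # atom most correlated with residual
        support.append(j)
        Phi_S = Phi[:, support]
        x_S, *_ = np.linalg.lstsq(Phi_S, y, rcond=None)  # least squares over the support
        r = y - Phi_S @ x_S                              # update residual
    x = np.zeros(Phi.shape[1])
    x[support] = x_S
    return x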
66
Specialized solvers
– CoSaMP [Needell and Tropp, 2009]
– SPGL1 [Friedlander, van den Berg, 2008] http://www.cs.ubc.ca/labs/scl/spgl1/
– FPC [Hale, Yin, and Zhang, 2007] http://www.caam.rice.edu/~optimization/L1/fpc/
– AMP [Donoho, Montanari, and Maleki, 2010]
– many, many others; see dsp.rice.edu/cs and https://sites.google.com/site/igorcarron2/cscodes
67
CS Hallmarks
Stable
– acquisition/recovery process is numerically stable
Asymmetrical (most processing at decoder)
– conventional: smart encoder, dumb decoder
– CS: dumb encoder, smart decoder
Democratic
– each measurement carries the same amount of information
– robust to measurement loss and quantization
– “digital fountain” property
Random measurements are effectively encrypted
Universal
– same random projections / hardware can be used for any sparse signal class (generic)
68
Universality Random measurements can be used for signals sparse in any basis
70
Universality
Random measurements can be used for signals sparse in any basis: x = Ψs, so y = Φx = (ΦΨ)s, with s a sparse coefficient vector (K nonzero entries)
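A minimal sketch of universality: the same kind of random Φ senses a signal that is sparse in the DCT basis Ψ, and recovery runs on the product ΦΨ (sklearn's OMP is used here for brevity; any sparse recovery algorithm works).

import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
N, M, K = 256, 80, 6
Psi = idct(np.eye(N), axis=0, norm='ortho')      # columns = DCT basis vectors
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x = Psi @ s                                      # signal, sparse in the DCT domain

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurements, as before
y = Phi @ x

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(Phi @ Psi, y)
x_hat = Psi @ omp.coef_
print(np.linalg.norm(x_hat - x))                 # small: x recovered via its DCT sparsity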
71
Summary: CS
Compressive sensing
– randomized dimensionality reduction
– exploits signal sparsity information
– integrates sensing, compression, processing
Why it works: with high probability, random projections preserve information in signals with concise geometric structure
– sparse signals
– compressible signals
72
Summary: CS
Encoding: y = Φx, random linear combinations of the entries of x
Decoding: recover x from the measurements y via sparse optimization
73
Image/Video specific signal models and recovery algorithms
74
Transform basis
Recall universality: random measurements can be used for signals sparse in any basis
DCT/FFT/wavelets …
– fast transforms; very useful in large-scale problems
75
Dictionary learning
For many signal classes (e.g., videos, light fields), there is no obvious sparsifying transform basis
Can we learn a sparsifying transform instead?
GOAL: given training data x_1, …, x_T, learn a “dictionary” D such that x_k ≈ D s_k with the coefficient vectors s_k sparse
76
Dictionary learning
GOAL: given training data x_1, …, x_T, learn a “dictionary” D such that x_k ≈ D s_k with the coefficient vectors s_k sparse
77
Dictionary learning
Joint optimization over D and S (with X ≈ DS)
– the sparsity constraint is non-convex
– the objective is bilinear in D and S
78
Dictionary learning
The problem is biconvex in D and S
– given D, the optimization problem in each s_k is convex (sparse coding)
– given S, the optimization problem in D is a least squares problem
K-SVD: solve using alternating minimization techniques [Aharon et al., TSP 2006]
– start with D = wavelet or DCT basis
– additional pruning steps to control the size of the dictionary
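A simplified sketch of the alternating minimization described above. Note this uses a full least-squares dictionary update (MOD-style) rather than K-SVD's per-atom SVD update; it only illustrates the biconvex structure, and all names and sizes are illustrative.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def learn_dictionary(X, n_atoms, sparsity, iters=20):
    # X: (d, T) matrix of training signals, one signal per column
    rng = np.random.default_rng(0)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                           # unit-norm atoms
    for _ in range(iters):
        # Sparse coding: given D, find sparse coefficients S (greedy, per column)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity,
                                        fit_intercept=False).fit(D, X)
        S = omp.coef_.T                                      # (n_atoms, T)
        # Dictionary update: given S, least squares for D
        D = X @ np.linalg.pinv(S)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, S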
79
Dictionary learning
Pros
– ability to handle arbitrary signal domains
Cons
– learning dictionaries can be computationally intensive for high-dimensional problems; needs very large amounts of training data
– recovery algorithms may suffer from the lack of fast transforms
80
Models on image gradients
Piecewise-constant images
– sparse image gradients
Natural image statistics
– heavy-tailed gradient distributions
81
Total variation prior
TV norm
– a sparse-gradient-promoting norm: the sum of gradient magnitudes over the image
Formulation of the recovery problem: minimize TV(x) subject to ||Φx − y||_2 ≤ ε
82
Total variation prior
Optimization problem
– convex
– often works “better” than transform-basis methods
Variants
– 3D (video)
– anisotropic TV
Code
– TVAL3
– many, many others (see dsp.rice.edu/cs)
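A minimal sketch of the anisotropic TV norm and the recovery problem it defines; dedicated solvers such as TVAL3 handle the actual optimization far more efficiently than generic methods.

import numpy as np

def tv_aniso(img):
    # Sum of absolute horizontal and vertical finite differences (sparse-gradient penalty)
    return np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()

# Recovery problem (conceptual):  minimize tv_aniso(x)  subject to  ||Phi x - y||_2 <= eps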
83
Beyond sparsity Model-based CS
84
Beyond Sparse Models
Sparse signal models capture simplistic primary structure
– wavelets: natural images
– Gabor atoms: chirps/tones
– pixels: background-subtracted images
85
Beyond Sparse Models
Sparse signal models capture simplistic primary structure
Modern compression/processing algorithms capture richer secondary coefficient structure
– wavelets: natural images
– Gabor atoms: chirps/tones
– pixels: background-subtracted images
86
Sparse Signals K-sparse signals comprise a particular set of K-dim subspaces
87
Structured-Sparse Signals
A structured K-sparse signal model comprises a particular (reduced) set of K-dim subspaces [Blumensath and Davies]
Fewer subspaces ⇔ relaxed RIP requirement ⇔ stable recovery using fewer measurements M
88
Wavelet Sparse Typical of wavelet transforms of natural signals and images (piecewise smooth)
89
Tree-Sparse
Model: K-sparse coefficients + significant coefficients lie on a rooted subtree
Typical of wavelet transforms of natural signals and images (piecewise smooth)
90
Wavelet Sparse
Model: K-sparse coefficients + significant coefficients lie on a rooted subtree
RIP: stable embedding of all K-dim subspaces requires M = O(K log(N/K))
91
Tree-Sparse
Model: K-sparse coefficients + significant coefficients lie on a rooted subtree
Tree-RIP: stable embedding of the (fewer) tree-sparse subspaces requires only M = O(K) [Blumensath and Davies]
92
Tree-Sparse
Model: K-sparse coefficients + significant coefficients lie on a rooted subtree
Tree-RIP: stable embedding [Blumensath and Davies]
Recovery: inject tree-sparse approximation into IHT/CoSaMP
93
Recall: Iterated Hard Thresholding
– update signal estimate (gradient step)
– prune signal estimate (best K-term approximation)
– update residual
94
Iterated Model Thresholding
– update signal estimate (gradient step)
– prune signal estimate (best K-term model approximation)
– update residual
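A minimal sketch of iterated model thresholding: the unstructured best K-term pruning of IHT is replaced by a projection onto the signal model. Here model_approx is a placeholder; for the tree-sparse model it would return the best K-term approximation constrained to a rooted subtree.

import numpy as np

def model_iht(Phi, y, model_approx, iters=200):
    x = np.zeros(Phi.shape[1])
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2       # conservative step size
    for _ in range(iters):
        r = y - Phi @ x                          # update residual
        x = model_approx(x + mu * Phi.T @ r)     # gradient step, then prune to the model
    return x

# Plain IHT corresponds to the unstructured best K-term approximation:
def best_k_term(v, K=10):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-K:]
    out[idx] = v[idx]
    return out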
95
Tree-Sparse Signal Recovery
Signal length N = 1024, random measurements M = 80
– CoSaMP: RMSE = 1.12
– L1 minimization: RMSE = 0.751
– Tree-sparse CoSaMP: RMSE = 0.037
96
Clustered Signals
Probabilistic approach via a graphical model
Model clustering of significant pixels in the space domain using an Ising Markov random field
Ising model approximation performed efficiently using graph cuts [Cevher, Duarte, Hegde, Baraniuk ’08]
(comparison: target, Ising-model recovery, CoSaMP recovery, LP (FPC) recovery)
97
Part 2: Compressive sensing: motivation, theory, recovery
Linear inverse problems
Sensing visual signals
Compressive sensing
– Theory
– Hallmarks
– Recovery algorithms
Model-based compressive sensing
– Models specific to visual signals