Image acquisition using sparse (pseudo)-random matrices. Piotr Indyk, MIT.

Presentation transcript:

Image acquisition using sparse (pseudo)-random matrices. Piotr Indyk, MIT

Outline
– Sparse recovery using sparse random matrices
– Using sparse matrices for image acquisition

Sparse recovery (approximation theory, statistical model selection, information-based complexity, learning Fourier coefficients, linear sketching, finite rate of innovation, compressed sensing, ...)
Setup:
– Data/signal in n-dimensional space: x
– Goal: compress x into a "sketch" or "measurement" Ax, where A is an m x n matrix, m << n
Stable recovery: want to recover an "approximation" x* of x
– Sparsity parameter k
– Informally: want to recover the k largest coordinates of x
– Formally: l_p/l_q guarantee, i.e. want x* such that ||x* - x||_p ≤ C(k) min_{x'} ||x' - x||_q over all x' that are k-sparse (at most k non-zero entries)
Want:
– Good compression (small m = m(k,n))
– Efficient algorithms for encoding and recovery
Useful for compressed sensing of signals, data stream algorithms, genetic experiment pooling, etc.
[Figure: the sketch as a matrix-vector product, y = Ax]
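To make the setup concrete, here is a minimal numpy sketch of the measurement step. The sizes n, m, k and the dense Gaussian choice of A are placeholder assumptions for illustration, not parameters from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1024, 64, 5            # signal dimension, sketch length m << n, sparsity

# Signal x = x_k + u: k large coefficients plus small noise u
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = 10 * rng.standard_normal(k)
x += 0.01 * rng.standard_normal(n)

A = rng.standard_normal((m, n))  # one "most matrices work" choice: dense Gaussian
y = A @ x                        # the sketch/measurement Ax
print(y.shape)                   # (64,)
```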

Constructing matrix A
"Most" matrices A work:
– Dense matrices: compressed sensing
– Sparse matrices: streaming algorithms, coding theory (LDPC)
"Traditional" tradeoffs:
– Sparse: computationally more efficient, explicit
– Dense: shorter sketches, i.e. m = O(k log(n/k)) [Candes-Romberg-Tao'04]
Recent results: efficient and measurement-optimal

Algorithms for sparse matrices
– Sketching/streaming: [Charikar-Chen-Colton'02, Estan-Varghese'03, Cormode-Muthukrishnan'04]
– LDPC-like: [Xu-Hassibi'07, Indyk'08, Jafarpour-Xu-Hassibi-Calderbank'08]
– L1 minimization: [Berinde-Gilbert-Indyk-Karloff-Strauss'08, Wang-Wainwright-Ramchandran'08]
– Message passing: [Sarvotham-Baron-Baraniuk'06,'08, Lu-Montanari-Prabhakar'08, Akcakaya-Tarokh'11]
– Matching pursuit*: [Indyk-Ruzic'08, Berinde-Indyk-Ruzic'08, Berinde-Indyk'09, Gilbert-Lu-Porat-Strauss'10]
Surveys:
– A. Gilbert, P. Indyk, Proceedings of the IEEE, June 2010
– V. Cevher, P. Indyk, L. Carin, R. Baraniuk, IEEE Signal Processing Magazine, 26
* Near-linear time (or better), O(k log(n/k)), stable recovery (l1/l1 guarantee)

Restricted Isometry Property (RIP) [Candes-Tao'04]: Δ is k-sparse ⇒ ||Δ||_2 ≤ ||AΔ||_2 ≤ C ||Δ||_2
Holds w.h.p. for:
– Random Gaussian/Bernoulli: m = O(k log(n/k))
– Random Fourier: m = O(k log^{O(1)} n)
Consider m x n 0-1 matrices with d ones per column. Do they satisfy RIP?
– No, unless m = Ω(k^2) [Chandar'07]
– However, they can satisfy the following RIP-1 property [Berinde-Gilbert-Indyk-Karloff-Strauss'08]:
  Δ is k-sparse ⇒ d(1-ε) ||Δ||_1 ≤ ||AΔ||_1 ≤ d ||Δ||_1
Sufficient (and necessary) condition: the underlying graph is a (k, d(1-ε/2))-expander; a random matrix with d = O(log(n/k)) and m = O(k log(n/k)/ε^2) is good w.h.p.
[Figure: dense vs. sparse matrices; bipartite expander with left set S, neighborhood N(S), left degree d]
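As an illustration of the 0-1 matrices in question, the following small numpy sketch builds a random matrix with exactly d ones per column (the adjacency matrix of a random left-d-regular bipartite graph) and spot-checks the RIP-1 inequality on one k-sparse vector. All sizes here are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d, k = 1024, 128, 8, 5     # placeholder sizes

# Random 0-1 matrix with exactly d ones per column: the adjacency matrix
# of a random bipartite graph with left degree d. Such graphs are good
# expanders w.h.p. for suitable m and d.
A = np.zeros((m, n), dtype=np.int64)
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1

# Spot-check RIP-1 on one k-sparse Δ: d(1-ε)||Δ||_1 <= ||AΔ||_1 <= d||Δ||_1
delta = np.zeros(n)
delta[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
print(np.abs(A @ delta).sum(), "vs upper bound", d * np.abs(delta).sum())
```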

Experiments [Berinde-Indyk'09]: 256x256 images. SSMP is run with S=10000, T=20; SMP is run for 100 iterations. Matrix sparsity is d=8.

Random sparse matrices for image acquisition
– Implementable using the random lens approach [R. Fergus, A. Torralba, W. T. Freeman'06]; a fully random design is tricky to deal with
– Alternative: make random matrices less random using "folding" [Gupta-Indyk-Price-Rachlin'11]

Folding
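The "Folding" slide is presumably pictorial. As a rough sketch of my reading of the idea (an assumption about [Gupta-Indyk-Price-Rachlin'11], not their exact construction): folding adds together all pixels that land on the same cell of a coarse grid, so each fold is a highly structured 0-1 measurement pattern that optics can implement by overlaying shifted copies of the scene.

```python
import numpy as np

def fold(image, p, q):
    """Fold an image onto a p x q grid: pixel (r, c) is accumulated into
    cell (r % p, c % q). Each output cell is the sum of a regular lattice
    of input pixels, i.e. one structured 0-1 measurement per cell."""
    out = np.zeros((p, q))
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            out[r % p, c % q] += image[r, c]
    return out

img = np.zeros((256, 256))
img[40, 100] = img[200, 17] = 1.0              # toy "star field": 2 bright pixels
y1, y2 = fold(img, 17, 19), fold(img, 23, 29)  # two folds with coprime grid sizes
```

Intuitively (again my reading, not a claim from the slides), using several folds with coprime grid sizes lets the residues of a bright pixel's position be combined, Chinese-remainder style, to pin down its location, which is why a few small folds can suffice for sparse images.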

Architectures
Optically multiplexed imaging:
– Replicate d times
– Split the beam
CMOS [Uttam-Goodman-Neifeld-Kim-John-Kim-Brady'09]

Application: star tracking

Star tracker:
– Find some/most stars
– Look up the database
Images are sparse ⇒ compressive star tracker

How do we recover from folded measurements?
Option I: generic approach (a basis-pursuit sketch follows below)
– Use any algorithm to recover (most of) the stars (if needed, "pretend" the matrix is random)
– Look up the database
Option II: algorithm tailored to folded measurements of "geometric objects"
– O(k log_k n) measurements and similar recovery time (under some assumptions)
– Uses O(log_k n) folds
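For Option I, here is a minimal sketch of one off-the-shelf generic recovery routine: basis pursuit (min ||x||_1 s.t. Ax = y), written as a linear program via the standard split x = u - v with u, v >= 0. This is an illustrative stand-in, not the algorithm the slides benchmark; to apply it to folded measurements, one would stack the fold patterns as the rows of A.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Basis pursuit: minimize ||x||_1 subject to Ax = y, as an LP.
    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v
```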

Results

Conclusions
– Sparse recovery using sparse matrices
– Natural architectures

APPENDIX

References
Survey: A. Gilbert, P. Indyk, "Sparse recovery using sparse matrices", Proceedings of the IEEE, June 2010
Courses:
– "Streaming, sketching, and sub-linear space algorithms", Fall'07
– "Sub-linear algorithms" (with Ronitt Rubinfeld), Fall'10
Blogs:
– Nuit Blanche: nuit-blanche.blogspot.com/

Application I: monitoring network traffic data streams
A router routes packets:
– Where do they come from? Where do they go to?
Ideally, we would like to maintain a traffic matrix x[.,.]
– Easy to update: given a (src, dst) packet, increment x_{src,dst}
– Requires way too much space! (2^32 x 2^32 entries)
– Need to compress x, and increment it easily
Using linear compression we can:
– Maintain the sketch Ax under increments to x, since A(x + Δ) = Ax + AΔ
– Recover x* from Ax
[Figure: traffic matrix x, rows indexed by source, columns by destination]
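The linearity point is worth pinning down with a toy sketch (hypothetical sizes; a dense Gaussian A stands in for whatever sketching matrix is actually used): the router never stores x, only Ax, and each packet adds one column of A.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 1 << 12, 64          # toy flattened (src, dst) universe; the real one is 2^32 x 2^32
A = rng.standard_normal((m, n))
sketch = np.zeros(m)        # maintains Ax without ever materializing x

def increment(i):
    """A packet with flattened (src, dst) index i means x <- x + e_i,
    so Ax <- Ax + A e_i: just add column i of A to the sketch."""
    sketch[:] += A[:, i]

for i in rng.integers(0, n, size=1000):   # simulated packet stream
    increment(int(i))
```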

l_∞/l_1 implies l_1/l_1
Algorithm:
– Assume we have x* s.t. ||x - x*||_∞ ≤ ε Err/k, where Err = min_{k-sparse x''} ||x - x''||_1
– Let the vector x' consist of the k largest (in magnitude) elements of x*
Analysis:
– Let S (resp. S*) be the set of the k largest-in-magnitude coordinates of x (resp. x*)
– Note that ||x*_S||_1 ≤ ||x*_{S*}||_1
– We have
  ||x - x'||_1 ≤ ||x||_1 - ||x_{S*}||_1 + ||x_{S*} - x*_{S*}||_1
              ≤ ||x||_1 - ||x*_{S*}||_1 + 2 ||x_{S*} - x*_{S*}||_1
              ≤ ||x||_1 - ||x*_S||_1 + 2 ||x_{S*} - x*_{S*}||_1
              ≤ ||x||_1 - ||x_S||_1 + ||x*_S - x_S||_1 + 2 ||x_{S*} - x*_{S*}||_1
              ≤ Err + 3 (ε Err/k) · k = (1 + 3ε) Err

Applications, ctd.
– Single-pixel camera [Wakin, Laska, Duarte, Baron, Sarvotham, Takhar, Kelly, Baraniuk'06]
– Pooling experiments [Kainkaryam, Woolf'08], [Hassibi et al.'07], [Dai-Sheikh, Milenkovic, Baraniuk], [Shental-Amir-Zuk'09], [Erlich-Shental-Amir-Zuk'09]

SSMP: running time
Algorithm:
– x* = 0
– Repeat T times:
  – For each i = 1...n, compute* z_i that achieves D_i = min_z ||A(x* + z e_i) - b||_1 and store D_i in a heap
  – Repeat S = O(k) times:
    – Pick the i, z that yield the best gain
    – Update x* = x* + z e_i
    – Recompute and store D_{i'} for all i' such that N(i) and N(i') intersect
– Sparsify x* (set all but the k largest entries of x* to 0)
Running time: T [ n(d + log n) + k · (nd/m) · d · (d + log n) ] = T [ n(d + log n) + nd(d + log n) ] = O(T nd (d + log n))
[Figure: column A_i acting on x - x*, giving Ax - Ax*]
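To pin down the notation, here is a drastically simplified reading of one SSMP sweep: no heap and no neighborhood-based recomputation, so it does not achieve the stated running time; it is an illustration, not the paper's algorithm. For a 0-1 column A_i, changing x* by z e_i only shifts the d residual entries on the column's support by z, so D_i is attained at z = -median of the residual restricted to that support:

```python
import numpy as np

def ssmp_sweep(A, b, x):
    """One simplified SSMP-style update on a 0-1 matrix A: evaluate the
    best single-coordinate step for every i, then apply the best one.
    D_i = min_z ||A(x + z e_i) - b||_1 is minimized at z = -median of
    the residual restricted to column i's support."""
    r = A @ x - b
    best_i, best_z, best_D = None, 0.0, np.abs(r).sum()
    for i in range(A.shape[1]):
        support = np.flatnonzero(A[:, i])
        z = -np.median(r[support])
        D = np.abs(r + z * A[:, i]).sum()
        if D < best_D:
            best_i, best_z, best_D = i, z, D
    if best_i is not None:
        x = x.copy()
        x[best_i] += best_z
    return x
```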

Prior and new results (table columns: Paper | Rand./Det. | Sketch length | Encode time | Column sparsity | Recovery time | Approx; the entries follow on the next slide)

Results PaperR/ D Sketch lengthEncode time Column sparsity Recovery timeApprox [CCF’02], [CM’06] Rk log nn log nlog nn log nl2 / l2 Rk log c nn log c nlog c nk log c nl2 / l2 [CM’04]Rk log nn log nlog nn log nl1 / l1 Rk log c nn log c nlog c nk log c nl1 / l1 [CRT’04] [RV’05] Dk log(n/k)nk log(n/k)k log(n/k)ncnc l2 / l1 Dk log c nn log nk log c nncnc l2 / l1 [GSTV’06] [GSTV’07] Dk log c nn log c nlog c nk log c nl1 / l1 Dk log c nn log c nk log c nk 2 log c nl2 / l1 [BGIKS’08]Dk log(n/k)n log(n/k)log(n/k)ncnc l1 / l1 [GLR’08]Dk logn logloglogn kn 1-a n 1-a ncnc l2 / l1 [NV’07], [DM’08], [NT’08], [BD’08], [GK’09], … Dk log(n/k)nk log(n/k)k log(n/k)nk log(n/k) * logl2 / l1 Dk log c nn log nk log c nn log n * logl2 / l1 [IR’08], [BIR’08],[BI’09]Dk log(n/k)n log(n/k)log(n/k)n log(n/k)* logl1 / l1 Excellent Scale: Very GoodGoodFair [GLSP’09]Rk log(n/k)n log c nlog c nk log c nl2 / l1 Caveats: (1) most “dominated” results not shown (2) only results for general vectors x are displayed