Lecture 13 Compressive sensing


Compressed Sensing
A.k.a. compressive sensing or compressive sampling [Candes-Romberg-Tao'04; Donoho'04]
Signal acquisition/processing framework:
- Want to acquire a signal x = [x1 … xn]
- Acquisition proceeds by computing Ax of dimension m << n (see the next slide for why)
- From Ax we want to recover an approximation x* of x
  - Note: x* does not have to be k-sparse
- Method: solve the following program:
    minimize ||x*||1 subject to Ax* = Ax

Signal acquisition
Measurement:
- Image x is reflected by a mirror array a (pixels randomly off and on)
- The reflected rays are aggregated using a lens
- The sensor receives the single number a·x
- The measurement process is repeated m times → the sensor receives Ax
- Now we want to recover the image from the measurements
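A minimal numpy sketch of this measurement model, assuming each mirror pattern is an i.i.d. random 0/1 row (the function name `measure` and the specific parameters are illustrative, not from the slides):

```python
import numpy as np

def measure(x, m, rng=np.random.default_rng(0)):
    """Simulate m acquisitions of the signal x.

    Each row of A is one random mirror pattern a (pixels on/off),
    and each measurement is the aggregated intensity a . x.
    """
    n = x.shape[0]
    A = rng.integers(0, 2, size=(m, n)).astype(float)  # random on/off pattern per row
    y = A @ x                                           # sensor readings, y = Ax
    return A, y

# Example: a length-100 signal with 5 nonzero entries, 40 measurements.
x = np.zeros(100)
x[[3, 17, 42, 60, 88]] = [2.0, -1.5, 3.0, 0.5, -2.0]
A, y = measure(x, m=40)
print(A.shape, y.shape)  # (40, 100) (40,)
```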

Solving the program
Recovery:
  minimize ||x*||1 subject to Ax* = Ax
This is a linear program:
  minimize ∑i ti
  subject to
    -ti ≤ x*i ≤ ti
    Ax* = Ax
Can solve in n^Θ(1) time
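A sketch of this LP in SciPy, assuming the measurement matrix A and the measurements y = Ax are given (the name `l1_recover` is illustrative, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Solve: minimize sum_i t_i  s.t.  -t_i <= x_i <= t_i  and  A x = y.

    Decision variables are z = [x (n entries), t (n entries)].
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of the t_i
    I = np.eye(n)
    A_ub = np.block([[ I, -I],                      #  x_i - t_i <= 0
                     [-I, -I]])                     # -x_i - t_i <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # equality constraint A x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n,
                  method="highs")
    return res.x[:n]
```

For example, `x_star = l1_recover(A, y)` with the A, y from the previous sketch returns a vector satisfying Ax* = y with (approximately) minimal ℓ1 norm; how well x* approximates x depends on A and m, as quantified on the following slides.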

Intuition
LP:
  minimize ||x*||1 subject to Ax* = Ax
At first sight, somewhat mysterious. But the approach has a long history in signal processing, statistics, etc.
Intuition:
- The actual goal is to minimize ||x*||0 (the number of nonzero entries)
- But this comes down to the same thing (if A is "nice")
- The choice of L1 is crucial (L2 does not work)
[Figure: geometric picture of the constraint set Ax* = Ax and the L1/L2 balls for n=2, m=1, k=1]
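A small worked example of the n = 2, m = 1, k = 1 picture (the particular numbers are illustrative, not from the slides):

```latex
Take $A = \begin{pmatrix}1 & 2\end{pmatrix}$ and the $1$-sparse signal $x = (0, 1)$, so $Ax = 2$.
The feasible set is the line $x^*_1 + 2x^*_2 = 2$.

Minimizing $\|x^*\|_2$ over this line gives the orthogonal projection of the origin,
\[
  x^*_{\ell_2} = A^\top (AA^\top)^{-1}(Ax) = \tfrac{2}{5}(1,2) = (0.4,\, 0.8),
\]
which is dense: both coordinates are nonzero.

Minimizing $\|x^*\|_1$ instead: grow the $\ell_1$ ball $\{z : \|z\|_1 \le r\}$ until it first
touches the line. Because the $\ell_1$ ball is a diamond with corners on the axes, the first
contact is at the corner $(0,1)$, so $x^*_{\ell_1} = (0,1) = x$: the sparse signal is recovered
exactly, while the $\ell_2$ minimizer is not sparse.
```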

Analysis
Theorem: If each entry of A is i.i.d. N(0,1) and m = Θ(k log(n/k)), then with probability at least 2/3 we have that, for any x, the output x* of the LP satisfies
  ||x*-x||1 ≤ C ||xtail(k)||1
where ||xtail(k)||1 = min over k-sparse x' of ||x-x'||1, also denoted Err1k(x).
Notes:
- N(0,1) is not crucial – any distribution satisfying JL will do
- Can actually prove a stronger guarantee, the so-called "L2/L1" guarantee
Comparison to "Count-Median" (like Count-Min, but for general x):

                 | m           | Recovery time       | Failure probability | Guarantee
Count-Median     | k log n     | n log n (or faster) | 1/n                 | |xi*-xi| ≤ C ||xtail(k)||1 / k
L1 minimization  | k log(n/k)  | poly(n)             | "deterministic"     | ||x*-x||1 ≤ C ||xtail(k)||1
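For reference, the stronger "L2/L1" guarantee mentioned in the notes usually takes the following form in the literature (stated here for context; the constant C and the exact conditions vary by reference):

```latex
\[
  \|x^* - x\|_2 \;\le\; \frac{C}{\sqrt{k}} \, \|x_{\mathrm{tail}(k)}\|_1
  \;=\; \frac{C}{\sqrt{k}} \, \mathrm{Err}^k_1(x).
\]
```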

Empirical comparison
- L1 minimization is more measurement-efficient
- Sketching can be more time-efficient (sublinear in n)

Restricted Isometry Property*
A matrix A satisfies (k,δ)-RIP if for any k-sparse vector x we have
  (1-δ) ||x||2 ≤ ||Ax||2 ≤ (1+δ) ||x||2
Theorem 1: If each entry of A is i.i.d. N(0,1) (scaled by 1/√m) and m = Θ(k log(n/k)), then A satisfies (k,1/3)-RIP w.h.p.
Theorem 2: (4k,1/3)-RIP implies that if we solve
  minimize ||x*||1 subject to Ax* = Ax
then ||x*-x||1 ≤ C ||xtail(k)||1.
*Introduced in Lecture 9
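A Monte Carlo spot-check of the RIP condition on random k-sparse vectors (this only samples random sparse directions, so it gives evidence rather than a certificate; the 1/√m scaling is the normalization assumed above, and all parameter values are illustrative):

```python
import numpy as np

def rip_spot_check(n=200, m=80, k=5, delta=1/3, trials=2000, seed=0):
    """Empirically test (k, delta)-RIP for A with i.i.d. N(0, 1/m) entries.

    Samples random k-sparse unit vectors x and checks
    (1 - delta) <= ||Ax||_2 <= (1 + delta).
    Note: true RIP is a statement over *all* k-sparse vectors, not a sample.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # normalized Gaussian matrix
    violations = 0
    for _ in range(trials):
        support = rng.choice(n, size=k, replace=False)
        x = np.zeros(n)
        x[support] = rng.normal(size=k)
        x /= np.linalg.norm(x)                 # unit-norm, k-sparse vector
        nrm = np.linalg.norm(A @ x)
        if not (1 - delta <= nrm <= 1 + delta):
            violations += 1
    return violations / trials

print(rip_spot_check())   # typically 0.0 for these parameters
```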

Proof of Theorem 1
- Suffices to consider ||x||2 = 1
- We will take a union bound over k-subsets T of {1..n} such that Supp(x) = T
- There are (n/k)^O(k) such sets
- For each such T, we can focus on x' = xT and A' = AT, where x' is k-dimensional and A' is m by k
- Altogether: need to show that, with probability at least 1-(k/n)^O(k), for any x' on the k-dimensional unit ball B we have
    2/3 ≤ ||A'x'||2 ≤ 4/3
[Figure: x restricted to its support T gives x'; A restricted to the columns in T gives A']
*We follow the presentation in [Baraniuk-Davenport-DeVore-Wakin'07]
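To see where m = Θ(k log(n/k)) comes from, here is the counting behind the union bound (a standard back-of-the-envelope computation filling in the step between this slide and the next):

```latex
\[
  \#\{\text{$k$-subsets } T\} = \binom{n}{k} \le \left(\frac{en}{k}\right)^k
    = \left(\frac{n}{k}\right)^{O(k)},
  \qquad
  \#\{\text{net points per subset}\} \le (1/\varepsilon)^{O(k)} = 2^{O(k)} \quad (\varepsilon = 1/7).
\]
Each net point violates the JL bound with probability $e^{-\Theta(m)}$, so the total failure
probability is at most
\[
  \left(\frac{n}{k}\right)^{O(k)} \cdot 2^{O(k)} \cdot e^{-\Theta(m)},
\]
which is small once $m = \Theta(k \log(n/k))$ with a large enough constant.
```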

Unit ball preservation
- An ε-net N of B is a subset of B such that for any x' ∈ B there is x'' ∈ N s.t. ||x'-x''||2 < ε
- Lect. 5: there exists an ε-net N for B of size (1/ε)^Θ(k)
- We set ε = 1/7
- By JL we know that for all x' ∈ N we have 7/8 ≤ ||A'x'||2 ≤ 8/7 with probability 1 - e^{-Θ(m)}
- To take care of all x' ∈ B, we write x' = b0x0 + b1x1 + …, s.t. all xj ∈ N and bj ≤ 1/7^j
- We get ||A'x'||2 ≤ ∑j ||A'xj||2/7^j ≤ 8/7 ∑j 1/7^j = 8/7 · 7/6 = 4/3
- Other direction analogous (see the sketch below)
- Altogether, this gives us (k,1/3)-RIP with high probability
[Figure: x' is approximated by a net point x0; the remainder Δ has norm < 1/7, and we recurse on Δ]
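The "other direction" can be filled in with the same decomposition (a sketch, using the constants already on the slide):

```latex
Write $x' = x_0 + \Delta$ with $x_0 \in N$ and $\|\Delta\|_2 < 1/7$. By the triangle inequality and the
upper bound already established on all of $B$ (applied to $\Delta / \|\Delta\|_2 \in B$),
\[
  \|A'x'\|_2 \;\ge\; \|A'x_0\|_2 - \|A'\Delta\|_2
          \;\ge\; \frac{7}{8} - \frac{4}{3}\cdot\frac{1}{7}
          \;=\; \frac{7}{8} - \frac{4}{21}
          \;\approx\; 0.68 \;\ge\; \frac{2}{3}.
\]
Together with the upper bound, $2/3 \le \|A'x'\|_2 \le 4/3$ for all $x'$ in the unit ball $B$, as required.
```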

Proof of Theorem 2
See notes.

Recap
A matrix A satisfies (k,δ)-RIP if for any k-sparse vector x we have
  (1-δ) ||x||2 ≤ ||Ax||2 ≤ (1+δ) ||x||2
Theorem 1: If each entry of A is i.i.d. N(0,1) (scaled by 1/√m) and m = Θ(k log(n/k)), then A satisfies (k,1/3)-RIP w.h.p.
Theorem 2: (4k,1/3)-RIP implies that if we solve
  minimize ||x*||1 subject to Ax* = Ax
then ||x*-x||1 ≤ C ||xtail(k)||1.