CIS 700: “Algorithms for Big Data”

CIS 700: “Algorithms for Big Data”
Lecture 9: Compressed Sensing
Slides at http://grigory.us/big-data-class.html
Grigory Yaroslavtsev, http://grigory.us

Compressed Sensing
- Given a sparse signal $x \in \mathbb{R}^n$, can we recover it from a small number of measurements?
- Goal: design $A \in \mathbb{R}^{d \times n}$ that allows us to recover any $s$-sparse $x \in \mathbb{R}^n$ from $Ax$.
- $A$ = matrix of i.i.d. Gaussians $N(0,1)$.
- Application: signals are usually sparse in some Fourier domain.
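
As a concrete illustration of this setup, here is a minimal sketch (not from the slides; it assumes Python with numpy, and the sizes $n$, $d$, $s$ are arbitrary choices) that draws a Gaussian measurement matrix and an $s$-sparse signal and forms the measurements $b = Ax$:

```python
# Minimal sketch of the measurement setup: Gaussian A, s-sparse x, b = A x.
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 200, 60, 5                     # signal length, number of measurements, sparsity

A = rng.standard_normal((d, n))          # i.i.d. N(0, 1) entries

x = np.zeros(n)                          # s-sparse signal
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

b = A @ x                                # d measurements, with d << n
```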

Reconstruction
- Reconstruction: $\min \|x\|_0$ subject to $Ax = b$.
- Uniqueness: if there are two $s$-sparse solutions $x_1, x_2$ with $A(x_1 - x_2) = 0$, then $A$ has $2s$ linearly dependent columns.
- If $d = \Omega(s^2)$ and $A$ is Gaussian, then it is unlikely to have $2s$ linearly dependent columns.
- $\|x\|_0$ is not convex; reconstruction is NP-hard.
- Relax $\|x\|_0 \to \|x\|_1$: $\min \|x\|_1$ subject to $Ax = b$.
- When does this give sparse solutions?
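
The $\ell_1$ relaxation is a linear program, so one practical route is the standard reformulation with auxiliary variables $t \ge |x|$. The sketch below is a hedged illustration using scipy.optimize.linprog; the function name `basis_pursuit` and the reuse of `A`, `b` from the sketch above are my own choices, not part of the slides.

```python
# Basis pursuit (min ||x||_1 s.t. Ax = b) as a linear program.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. Ax = b via the LP
    min sum(t)  s.t.  -t <= x <= t,  Ax = b,  with stacked variables [x; t]."""
    d, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])    # objective: sum of t
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])             # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((d, n))])          # A x = b (t does not appear)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# x_hat = basis_pursuit(A, b)
```

When recovery succeeds, `x_hat` matches the sparse `x` up to numerical tolerance; the RIP discussion below gives conditions under which this is guaranteed.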

Subgradient
- $\min \|x\|_1$ subject to $Ax = b$: $\|x\|_1$ is convex but not differentiable.
- Subgradient $\nabla f$: equals the gradient where $f$ is differentiable, and is any linear lower bound where $f$ is not differentiable: $\forall x_0, \Delta x:\; f(x_0 + \Delta x) \ge f(x_0) + \nabla f^T \Delta x$.
- Subgradient for $\|x\|_1$: $(\nabla \|x\|_1)_i = \mathrm{sign}(x_i)$ if $x_i \ne 0$, and $(\nabla \|x\|_1)_i \in [-1, 1]$ if $x_i = 0$.
- Optimality condition: for all $\Delta x$ such that $A \Delta x = 0$: $\nabla^T \Delta x \ge 0$.
- Sufficient: $\exists w$ such that $\nabla = A^T w$, so $\nabla^T \Delta x = w^T A \Delta x = 0$.
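
The subgradient inequality can be checked numerically. The following small sketch (illustrative, assuming numpy; not from the slides) builds the subgradient described above at a point $x_0$ and verifies $f(x_0 + \Delta x) \ge f(x_0) + \nabla f^T \Delta x$ on random perturbations:

```python
# Numerical check of the l1 subgradient inequality at a fixed point x0.
import numpy as np

rng = np.random.default_rng(1)

def l1_subgradient(x0, tol=1e-12):
    g = np.sign(x0)                      # sign(x_i) where x_i != 0
    g[np.abs(x0) < tol] = 0.0            # any value in [-1, 1] works at zeros; pick 0
    return g

x0 = np.array([1.5, 0.0, -2.0, 0.0])
g = l1_subgradient(x0)
for _ in range(1000):
    dx = rng.standard_normal(x0.shape)
    # ||x0 + dx||_1 >= ||x0||_1 + g^T dx must hold for every perturbation dx
    assert np.sum(np.abs(x0 + dx)) >= np.sum(np.abs(x0)) + g @ dx - 1e-9
```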

Exact Reconstruction Property
- Subgradient Thm. If $A x_0 = b$, there exists a subgradient $\nabla$ of $\|\cdot\|_1$ at $x_0$ with $\nabla = A^T w$, and the columns of $A$ corresponding to the support of $x_0$ are linearly independent, then $x_0$ minimizes $\|x\|_1$ and is the unique minimizer.
- (Minimum): Assume $Ay = b$. We will show $\|y\|_1 \ge \|x_0\|_1$.
- $z = y - x_0 \Rightarrow Az = Ay - A x_0 = 0$.
- $\nabla^T z = 0 \Rightarrow \|y\|_1 = \|x_0 + z\|_1 \ge \|x_0\|_1 + \nabla^T z = \|x_0\|_1$.

Exact Reconstruction Property
- (Uniqueness): assume $\tilde{x}_0$ is another minimum; then the subgradient $\nabla$ at $x_0$ is also a subgradient at $\tilde{x}_0$.
- $\forall \Delta x$ with $A \Delta x = 0$: $\|\tilde{x}_0 + \Delta x\|_1 = \|x_0 + (\tilde{x}_0 - x_0 + \Delta x)\|_1 \ge \|x_0\|_1 + \nabla^T (\tilde{x}_0 - x_0 + \Delta x) = \|x_0\|_1 + \nabla^T (\tilde{x}_0 - x_0) + \nabla^T \Delta x$.
- $\nabla^T (\tilde{x}_0 - x_0) = w^T A (\tilde{x}_0 - x_0) = w^T (b - b) = 0$.
- Since both are minima, $\|x_0\|_1 = \|\tilde{x}_0\|_1$, so $\|\tilde{x}_0 + \Delta x\|_1 \ge \|\tilde{x}_0\|_1 + \nabla^T \Delta x$.
- $\nabla_i = \mathrm{sign}(x_{0,i}) = \mathrm{sign}(\tilde{x}_{0,i})$ if either is non-zero, otherwise equal to $0$ $\Rightarrow$ $x_0$ and $\tilde{x}_0$ have the same sparsity pattern.
- By linear independence of the corresponding columns of $A$: $x_0 = \tilde{x}_0$.

Restricted Isometry Property
- Matrix $A$ satisfies the restricted isometry property (RIP) with constant $\delta_s$ if for any $s$-sparse $x$: $(1 - \delta_s)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_s)\|x\|_2^2$.
- Exact isometry: all singular values equal $1$, and for orthogonal $x, y$: $x^T A^T A y = 0$.
- Let $A_S$ be the submatrix of columns of $A$ indexed by the set $S$.
- Lem: If $A$ satisfies RIP and $\delta_{s_1 + s_2} \le \delta_{s_1} + \delta_{s_2}$, then:
  - for any $S$ of size $s$, the singular values of $A_S$ lie in $[1 - \delta_s, 1 + \delta_s]$;
  - for any orthogonal $x, y$ with supports of size $s_1, s_2$: $|x^T A^T A y| \le \frac{3}{2}\|x\|\,\|y\|\,(\delta_{s_1} + \delta_{s_2})$.
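
For very small $s$ the RIP constant can be estimated by brute force over all supports of size $s$. The sketch below (illustrative only, not from the slides; the enumeration is exponential in $s$ and the scaling of $A$ is my choice) computes the smallest $\delta_s$ consistent with the definition above:

```python
# Brute-force RIP constant for tiny instances.
import itertools
import numpy as np

def rip_constant(A, s):
    """Smallest delta_s with (1 - delta_s)||x||^2 <= ||Ax||^2 <= (1 + delta_s)||x||^2
    for all s-sparse x: the largest deviation of an eigenvalue of A_S^T A_S from 1."""
    n = A.shape[1]
    delta = 0.0
    for S in itertools.combinations(range(n), s):
        A_S = A[:, list(S)]
        eigs = np.linalg.eigvalsh(A_S.T @ A_S)
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta

rng = np.random.default_rng(2)
d, n, s = 40, 15, 2
A = rng.standard_normal((d, n)) / np.sqrt(d)   # scaling gives E[||Ax||^2] = ||x||^2
print(rip_constant(A, s))                      # typically small for Gaussian A with d >> s
```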

Restricted Isometry Property
- Lem: If $A$ satisfies RIP and $\delta_{s_1 + s_2} \le \delta_{s_1} + \delta_{s_2}$, then:
  - for any $S$ of size $s$, the singular values of $A_S$ lie in $[1 - \delta_s, 1 + \delta_s]$;
  - for any orthogonal $x, y$ with supports of size $s_1, s_2$: $|x^T A^T A y| \le \frac{3}{2}\|x\|\,\|y\|\,(\delta_{s_1} + \delta_{s_2})$.
- Proof of the second part: w.l.o.g. $\|x\| = \|y\| = 1$, so $\|x + y\|^2 = 2$ by orthogonality.
- $2(1 - \delta_{s_1 + s_2}) \le \|A(x + y)\|^2 \le 2(1 + \delta_{s_1 + s_2})$
- $2(1 - (\delta_{s_1} + \delta_{s_2})) \le \|A(x + y)\|^2 \le 2(1 + \delta_{s_1} + \delta_{s_2})$
- $1 - \delta_{s_1} \le \|Ax\|^2 \le 1 + \delta_{s_1}$ and $1 - \delta_{s_2} \le \|Ay\|^2 \le 1 + \delta_{s_2}$

Restricted Isometry Property
- $2\, x^T A^T A y = (x + y)^T A^T A (x + y) - x^T A^T A x - y^T A^T A y = \|A(x + y)\|^2 - \|Ax\|^2 - \|Ay\|^2$
- $2\, x^T A^T A y \le 2(1 + \delta_{s_1} + \delta_{s_2}) - (1 - \delta_{s_1}) - (1 - \delta_{s_2}) = 3(\delta_{s_1} + \delta_{s_2})$
- $x^T A^T A y \le \frac{3}{2}\|x\| \cdot \|y\| \cdot (\delta_{s_1} + \delta_{s_2})$ (applying the same argument to $x$ and $-y$ gives the lower bound, hence the absolute value).
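
The key algebraic step is the polarization identity in the first line; a quick numeric sanity check (an illustrative sketch, not from the slides, with arbitrary sizes):

```python
# Check: 2 x^T A^T A y = ||A(x+y)||^2 - ||Ax||^2 - ||Ay||^2 for random A, x, y.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 100))
x, y = rng.standard_normal(100), rng.standard_normal(100)

lhs = 2 * x @ A.T @ A @ y
rhs = np.linalg.norm(A @ (x + y))**2 - np.linalg.norm(A @ x)**2 - np.linalg.norm(A @ y)**2
assert np.isclose(lhs, rhs)
```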

Reconstruction from RIP
- Thm. If $A$ satisfies RIP with $\delta_{s+1} \le \frac{1}{10\sqrt{s}}$ and $x_0$ is $s$-sparse with $A x_0 = b$, then a subgradient $\nabla(\|\cdot\|_1)$ exists at $x_0$ that satisfies the conditions of the "subgradient theorem".
- This implies that $x_0$ is the unique minimum 1-norm solution to $Ax = b$.
- Let $S = \{i : x_{0,i} \ne 0\}$ and $\bar{S} = \{i : x_{0,i} = 0\}$.
- To find the subgradient $u$, search for $w$ with $u = A^T w$ such that:
  - for $i \in S$: $u_i = \mathrm{sign}(x_{0,i})$;
  - the 2-norm of the coordinates of $u$ in $\bar{S}$ is minimized.

Reconstruction from RIP
- Let $z$ be the vector with support $S$ and $z_i = \mathrm{sign}(x_{0,i})$ for $i \in S$.
- Let $w = A_S (A_S^T A_S)^{-1} z$; $A_S$ has linearly independent columns by RIP.
- For coordinates in $S$: $(A^T w)_S = A_S^T A_S (A_S^T A_S)^{-1} z = z$.
- For coordinates in $\bar{S}$: $(A^T w)_{\bar{S}} = A_{\bar{S}}^T A_S (A_S^T A_S)^{-1} z$.
- Eigenvalues of $A_S^T A_S$ lie in $[(1 - \delta_s)^2, (1 + \delta_s)^2]$, so $\|(A_S^T A_S)^{-1}\| \le \frac{1}{(1 - \delta_s)^2}$.
- Let $p = (A_S^T A_S)^{-1} z$; then $\|p\| \le \frac{\sqrt{s}}{(1 - \delta_s)^2}$ since $\|z\| = \sqrt{s}$.
- $A_S p = A q$ where $q$ agrees with $p$ on $S$ and has all coordinates in $\bar{S}$ equal to $0$.
- For $j \in \bar{S}$: $(A^T w)_j = e_j^T A^T A q$, so by the lemma $|(A^T w)_j| \le \frac{3}{2}(\delta_s + \delta_1)\frac{\sqrt{s}}{(1 - \delta_s)^2} \le \frac{3\,\delta_{s+1}\sqrt{s}}{(1 - \delta_s)^2} \le \frac{1}{2}$ (using $\delta_s, \delta_1 \le \delta_{s+1}$ and $\delta_{s+1} \le \frac{1}{10\sqrt{s}}$).
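
This construction can be carried out directly in code. The sketch below (illustrative, assuming numpy; the sizes and the $1/\sqrt{d}$ scaling of $A$ are my own choices) builds $w = A_S (A_S^T A_S)^{-1} z$ and checks that $A^T w$ equals $\mathrm{sign}(x_0)$ on $S$ and stays below $1$ in magnitude off $S$:

```python
# Dual-certificate construction: w = A_S (A_S^T A_S)^{-1} z with z = sign(x_0) on S.
import numpy as np

rng = np.random.default_rng(4)
n, d, s = 200, 80, 5
A = rng.standard_normal((d, n)) / np.sqrt(d)   # scaled Gaussian measurement matrix

x0 = np.zeros(n)
S = rng.choice(n, size=s, replace=False)
x0[S] = rng.standard_normal(s)

z = np.sign(x0[S])
A_S = A[:, S]
w = A_S @ np.linalg.solve(A_S.T @ A_S, z)      # w = A_S (A_S^T A_S)^{-1} z

u = A.T @ w                                    # candidate subgradient
print(np.allclose(u[S], z))                    # equals sign(x_0) on the support S
off_support = np.setdiff1d(np.arange(n), S)
print(np.max(np.abs(u[off_support])))          # should be well below 1 in this regime
```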