Outline
- Sparse Reconstruction
- RIP Condition
- Sparse Reconstruction Algorithms
Linear Equations with Noise
Equations and Noise
- Equations: y = Ax + n
- Size of A: typically large
Efficient Solution
- Using Gaussian elimination
- Using other numerical methods
More Knowledge of the Equation
- The vector x is sparse: very few nonzero elements of x
- Can we exploit this structure for a more efficient solution? (see the sketch below)
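As a rough illustration of this setup, here is a minimal numpy sketch that builds an underdetermined system y = Ax + n with a K-sparse x; the dimensions, sparsity level, and noise scale are arbitrary assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 20, 100, 3                       # measurements, unknowns, sparsity (assumed)
A = rng.standard_normal((m, n)) / np.sqrt(m)

x = np.zeros(n)                            # K-sparse ground truth
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

noise = 0.01 * rng.standard_normal(m)      # small measurement noise
y = A @ x + noise                          # y = Ax + n, far fewer equations than unknowns
```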
Sparsity in the Real World
Sparsity in the original domain
- Radar signals: sparse object reflections, i.e., sparsity in the time domain
- Signal detection from a small number of directions
Sparsity in a transform domain
- DCT coefficients of real images: negligible coefficients in the high-frequency components
- Narrowband interference in OFDM: sparse in the frequency domain
- ...
Sparse Reconstruction
How to Model Sparsity
- Zero norm: ||x||0
- Sparse vector x: small ||x||0
Sparse Reconstruction Problem Formulation
- Minimize the L0 norm under the measurement constraints
- Noiseless: min ||x||0 s.t. Ax = b
- Noisy: min ||x||0 s.t. ||Ax - b||2 < ε
Solve the L0-Norm Minimization Problem (a brute-force sketch follows)
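To see why solving the L0 problem directly is combinatorial, here is a brute-force sketch (an illustration, not a method from the slides) that enumerates supports of increasing size and returns the first one consistent with Ax = b; it is only feasible for very small problems.

```python
import numpy as np
from itertools import combinations

def l0_min_bruteforce(A, b, tol=1e-8):
    """Brute-force min ||x||_0 s.t. Ax = b by enumerating candidate supports."""
    m, n = A.shape
    for k in range(1, n + 1):                      # try sparsity levels 1, 2, ...
        for support in combinations(range(n), k):
            As = A[:, support]
            xs, *_ = np.linalg.lstsq(As, b, rcond=None)
            if np.linalg.norm(As @ xs - b) < tol:  # found a consistent k-sparse solution
                x = np.zeros(n)
                x[list(support)] = xs
                return x
    return None                                    # no exact solution found
```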
Sparse Reconstruction
Definitions
- Support of x: the set of indices of its nonzero elements, supp(x) = {i | xi ≠ 0}
- ||x||0 = |supp(x)|: the number of nonzero elements
- x is K-sparse if ||x||0 ≤ K
L0 Norm Optimization
- The L0 norm is highly non-convex, so L0 norm optimization is highly non-convex
- Transform it into a convex optimization problem
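A tiny numpy illustration of these definitions (the vector and the choice K = 2 are hypothetical):

```python
import numpy as np

x = np.array([0.0, 1.5, 0.0, -0.2, 0.0])   # hypothetical vector
support = np.flatnonzero(x)                 # supp(x) = {i : x_i != 0}
l0 = support.size                           # ||x||_0 = |supp(x)|
print(support, l0, l0 <= 2)                 # support indices, L0 norm, K-sparse check for K = 2
```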
L1 Norm Relaxation
- Transform the objective function from ||x||0 to ||x||1
L1 Norm Minimization Problem
- Noiseless: min ||x||1 s.t. Ax = b
- Noisy: min ||x||1 s.t. ||Ax - b||2 < ε
L1 Norm: Convex Optimization
- Numerous non-differentiable points
- High computational complexity for a brute-force L1-norm convex solution (a linear-programming sketch follows)
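The noiseless L1 problem (basis pursuit) can be recast as a linear program by splitting x into nonnegative parts, x = u - v with u, v ≥ 0. A sketch assuming scipy is available; the function name and solver choice are my own.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Noiseless L1 minimization: min ||x||_1 s.t. Ax = b, as a linear program."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # constraint: A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:n], res.x[n:]
    return u - v
```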
L2 Norm Relaxation
- Transform the objective function from ||x||0 to ||x||2
Transformed L2 Problem Formulations
- min ||Ax - b||2 s.t. ||x||1 < q
- min ||Ax - b||2 + λ||x||1: the sparsity grows with the value of λ
Advantages and Disadvantages
- Advantage: easy to optimize; differentiable at all points if there are no L1 terms
- Disadvantage: many small nonzero elements, so ||x||0 is not really minimized
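A standard way to solve the penalized form is proximal gradient descent (iterative soft-thresholding, ISTA). A minimal sketch, assuming the squared-error variant 0.5||Ax - b||2^2 + λ||x||1 and a hypothetical regularization weight lam; larger lam gives sparser solutions.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """ISTA sketch for min 0.5*||Ax - b||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth data-fit term
        z = x - grad / L                     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding (prox of L1)
    return x
```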
Restricted Isometry Property (RIP)
Conditions for L1 Equivalence to L0
- RIP condition: near-orthogonality of the different columns of matrix A
RIP Condition of Order K
- (1 - δ)||x||2^2 ≤ ||Ax||2^2 ≤ (1 + δ)||x||2^2, for all x with ||x||0 ≤ K
- Completely orthogonal columns: δ = 0
The Non-orthogonality of Matrix A
- δK = inf{δ | (1 - δ)||x||2^2 ≤ ||Ax||2^2 ≤ (1 + δ)||x||2^2, for all K-sparse x}
- Completely orthogonal columns: δK = 0
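For small matrices, δK can be computed exhaustively from the Gram matrix of every K-column submatrix (over a fixed support, the ratio ||Ax||2^2 / ||x||2^2 ranges between the smallest and largest eigenvalues of that Gram matrix). A sketch; the enumeration is exponential in K, so it is only illustrative.

```python
import numpy as np
from itertools import combinations

def rip_constant(A, K):
    """Exhaustive order-K RIP constant delta_K of A (only for small matrices)."""
    n = A.shape[1]
    delta = 0.0
    for support in combinations(range(n), K):
        G = A[:, support].T @ A[:, support]     # Gram matrix of the K selected columns
        eig = np.linalg.eigvalsh(G)             # ascending eigenvalues
        delta = max(delta, eig[-1] - 1.0, 1.0 - eig[0])
    return delta
```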
Restricted Isometry Property (RIP)
Sufficient Conditions for L1 Equivalence to L0
- Any one of the following three conditions is enough:
- Condition 1: δK + δ2K + δ3K < 1
- Condition 2: δ2K < √2 - 1
- Condition 3: δK < 0.307
Intuition of RIP
- Completely orthogonal columns give δK = 0, i.e., a unitary matrix A
- What happens for a unitary matrix A to the L0, L1, ... problems?
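Continuing the rip_constant sketch above, the three conditions can be checked numerically for a small random matrix; the dimensions and the sparsity level K below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 12)) / np.sqrt(8)   # small random measurement matrix
K = 1

d1, d2, d3 = rip_constant(A, K), rip_constant(A, 2 * K), rip_constant(A, 3 * K)
print("Condition 1:", d1 + d2 + d3 < 1.0)
print("Condition 2:", d2 < np.sqrt(2.0) - 1.0)
print("Condition 3:", d1 < 0.307)
```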
Heuristic Solution to L1 Minimization
Greedy Algorithm
- Successively find the element that maximizes the correlation between a basis column and the measurements
- What does "maximizes the correlation" mean?
Correlation Maximization
- Maximize the correlation <rn, an>, where rn is the current residue
Update the Residue
- Direct update: rn+1 = rn - <rn, an> an
- Projection-based update: rn+1 = y - An x, with x = (An^H An)^(-1) An^H y
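A sketch of the greedy loop with the direct residue update (the matching-pursuit variant described on the next slide); real-valued data and unit-norm columns of A are assumed, so <r, a> is an ordinary dot product.

```python
import numpy as np

def matching_pursuit(A, b, n_iter=20):
    """Greedy sketch: pick the column most correlated with the residue,
    then update the residue directly (no re-projection)."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    for _ in range(n_iter):
        corr = A.T @ r                       # correlations <r_n, a_i> for all columns
        i = int(np.argmax(np.abs(corr)))     # best-matching column
        x[i] += corr[i]                      # accumulate its coefficient
        r = r - corr[i] * A[:, i]            # r_{n+1} = r_n - <r_n, a_i> a_i
    return x
```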
Heuristic Solutions: Matching Pursuit and Orthogonal Matching Pursuit
Matching Pursuit
- Each iteration, the residue is rn = b - Ax
- The column an maximizing <rn, an> is selected
- Update: rn+1 = rn - <rn, an> an
Orthogonal Matching Pursuit
- An = [a1 a2 ... an], x = (An^H An)^(-1) An^H b
- Update the residue: rn+1 = b - An x
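An orthogonal-matching-pursuit sketch using the projection-based (least-squares) update, so the residue stays orthogonal to all previously selected columns; the iteration count K is an assumed stopping criterion.

```python
import numpy as np

def omp(A, b, K):
    """OMP sketch: select the column most correlated with the residue,
    then re-fit all selected columns by least squares."""
    support = []
    r = b.copy()
    for _ in range(K):
        i = int(np.argmax(np.abs(A.T @ r)))            # column maximizing <r_n, a_i>
        if i not in support:
            support.append(i)
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, b, rcond=None)  # x = (A_n^H A_n)^(-1) A_n^H b
        r = b - As @ coef                              # r_{n+1} = b - A_n x
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```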