Optimal sparse representations in general overcomplete bases


Optimal sparse representations in general overcomplete bases
Dmitry M. Malioutov, Müjdat Çetin, and Alan S. Willsky, LIDS, MIT
This work was supported by the Army Research Office under Grant DAAD19-00-1-0466 and the Office of Naval Research under Grant N00014-00-1-0089.
May 20, 2004

Outline of the presentation
- Sparse signal reconstruction problem
- l0, l1, and lp measures of sparsity
- Uniqueness and equivalence conditions for l0, l1, and lp
- Sign patterns of exact solutions
- Sparsity under a transformation
- Numerical optimization of the l1 and lp objective functions

Underdetermined linear inverse problems
Basic problem: given y = Ax, where A is an M x N matrix with M < N, find an estimate of x.
- Underdetermined: the solution is non-unique.
- Additional information or constraints are needed for a unique solution.
- A typical approach is the minimum-norm solution: min ||x||_2 subject to y = Ax.
- What if we know x is sparse (i.e., has few non-zero elements)?
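For concreteness, a minimal sketch (the sizes and the test signal are arbitrary stand-ins, not values from the talk) showing that the minimum-norm solution is typically dense even when the true x is sparse:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 10, 40                       # underdetermined: more unknowns than measurements
A = rng.standard_normal((M, N))

x_true = np.zeros(N)                # a sparse ground truth with 3 non-zero entries
x_true[[3, 17, 29]] = [1.0, -2.0, 1.5]
y = A @ x_true

# Minimum l2-norm solution: x = A^T (A A^T)^{-1} y, i.e. the pseudoinverse solution.
x_minnorm = np.linalg.pinv(A) @ y

print("non-zeros in true x:    ", np.count_nonzero(x_true))
print("non-zeros in min-norm x:", np.count_nonzero(np.abs(x_minnorm) > 1e-8))
```

The min-norm estimate satisfies y = Ax exactly but spreads energy over all coordinates, which motivates the sparsity constraints that follow.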

A motivating application: sensor array source localization
Goal: estimate the directions of arrival of acoustic sources using a microphone array.
The data collection setup maps an underlying "sparse" spatial spectrum x to the array measurements (the forward problem); source localization is the corresponding inverse problem.

Sparsity constraints
Prefer the sparsest solution: min ||x||_0 subject to y = Ax, where ||x||_0 is the number of non-zero elements in x.
This can be viewed as finding a sparse representation of the signal y in an overcomplete dictionary A.
- It is an intractable combinatorial optimization problem (a brute-force sketch of the search follows below).
- Are there tractable alternatives that might produce the same result?
- Empirical observation: l1-norm and lp-norm based techniques produce solutions that look sparse.
- The l1 cost function can be optimized by linear programming!
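A brute-force sketch of the combinatorial l0 search (the helper l0_brute_force is hypothetical, not from the talk); the C(N, k) inner loop is exactly what makes the problem intractable at realistic sizes:

```python
import numpy as np
from itertools import combinations

def l0_brute_force(A, y, k_max, tol=1e-8):
    """Search all supports of size up to k_max for the sparsest x with Ax = y."""
    M, N = A.shape
    for k in range(1, k_max + 1):
        for support in combinations(range(N), k):   # C(N, k) subsets: exponential in k
            As = A[:, list(support)]
            xs, *_ = np.linalg.lstsq(As, y, rcond=None)
            if np.linalg.norm(As @ xs - y) < tol:   # exact fit on this support?
                x = np.zeros(N)
                x[list(support)] = xs
                return x
    return None
```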

l1 and lp-norms and sparsity: an example
For two signals x1 (sparse) and x2 (non-sparse) we have Ax1 = Ax2, where A is a 16x128 DFT operator.
[Slide plots: the sparse signal, with norm values 2.000, 2.000, 2.000, and the non-sparse signal, with norm values 0.3382, 3.5549, 84.8104. See lp_norm_example.m, which produces these plots as well as plots of the lp-norm vs. p for the two signals.]
Goal: a rigorous characterization of the link between l1, lp, and sparsity.
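A sketch in the same spirit as lp_norm_example.m (whose exact contents are not in the transcript; the operator construction and the spike below are assumptions). Note that a single spike of amplitude 2 has lp-norm 2 for every p, while the minimum-l2-norm signal with the same measurements spreads out:

```python
import numpy as np

M, N = 16, 128
# One plausible 16x128 DFT-based operator: the first M rows of the N-point DFT.
F = np.fft.fft(np.eye(N))
A = F[:M, :]

x1 = np.zeros(N)
x1[10] = 2.0                         # a sparse signal: a single spike
y = A @ x1
x2 = np.linalg.pinv(A) @ y           # a non-sparse signal with A x2 = A x1

def lp_norm(x, p):
    """lp (quasi-)norm: (sum |x_i|^p)^(1/p)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

for p in (0.5, 1.0, 2.0):
    print(f"p={p}: sparse {lp_norm(x1, p):.4f}, non-sparse {lp_norm(x2, p):.4f}")
```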

l0 uniqueness conditions
Prefer the sparsest solution: x0 = arg min ||x||_0 subject to y = Ax, where ||x||_0 is the number of non-zero elements in x. When is x0 the unique l0 solution?
Definition: the index of ambiguity K(A) of A is the largest integer such that any set of K(A) columns of A is linearly independent.
Thm. 1: if y = Ax0 and ||x0||_0 ≤ K(A)/2, then x0 is the unique l0 solution.
What can we say about more tractable formulations like l1?
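K(A) can be computed by brute force for small matrices (hypothetical helper, exponential cost); in later literature K(A) + 1 is known as the "spark" of A:

```python
import numpy as np
from itertools import combinations

def index_of_ambiguity(A, tol=1e-10):
    """Brute-force K(A): the largest K such that EVERY set of K columns
    of A is linearly independent. Exponential cost; tiny matrices only."""
    M, N = A.shape
    K = 0
    for k in range(1, M + 1):
        if all(np.linalg.matrix_rank(A[:, list(c)], tol=tol) == k
               for c in combinations(range(N), k)):
            K = k
        else:
            break
    return K
```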

l0 uniqueness conditions (continued)
The measure K(A) is not continuous in the entries of A.
A new measure of the well-separatedness of an overcomplete basis:
Definition: M(A) is the maximum absolute dot product of distinct (normalized) columns of A.
Thm. 2: K(A) ≥ 1/M(A). Our proof is based on the optimality of the regular simplex for line packing.
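Unlike K(A), the coherence M(A) is cheap to compute; a minimal sketch (hypothetical helper, assuming the columns may need normalizing):

```python
import numpy as np

def mutual_coherence(A):
    """M(A): maximum absolute inner product between distinct normalized columns."""
    G = A / np.linalg.norm(A, axis=0)          # normalize columns
    gram = np.abs(G.conj().T @ G)              # |a_i^H a_j| for all pairs
    np.fill_diagonal(gram, 0.0)                # ignore i == j
    return gram.max()
```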

l1 equivalence conditions
Consider the l1 problem: min ||x||_1 subject to y = Ax. Can we ever hope to get the l1 solution to coincide with the l0 solution?
Thm. 3(*): if x0 is sparse enough, then l1 optimization recovers it exactly: the l1 solution equals the l0 solution!
We can solve a combinatorial optimization problem by convex optimization!
(*) Donoho and Huo proved this for pairs of orthogonal bases. We extended this result to general overcomplete bases. Independently, Donoho and Elad, Gribonval and Nielsen, and Fuchs made this extension.
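Written out in the standard notation of this literature (the slide's equation images are missing from the transcript, so this is a transcription of the usual statements rather than the slide itself):

```latex
\min_x \|x\|_0 \ \text{s.t.}\ y = Ax
\quad\text{has the unique solution } x_0 \ \text{if } \|x_0\|_0 \le K(A)/2;
\\[4pt]
\min_x \|x\|_1 \ \text{s.t.}\ y = Ax
\quad\text{recovers } x_0 \ \text{if } \|x_0\|_0 < \tfrac{1}{2}\!\left(1 + \tfrac{1}{M(A)}\right).
```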

lp (p ≤ 1) equivalence conditions
Consider the lp problem: min ||x||_p^p subject to y = Ax. How about equivalence with l0 for p < 1?
Thm. 4: if x0 is sparse enough (with a threshold that depends on p), then the lp solution equals the l0 solution!
Smaller p implies more non-zero elements can be tolerated. As p → 0 we recover the l0 uniqueness condition of Thm. 1.
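The limit behind the last claim is the standard fact that the lp cost counts non-zero entries as p goes to 0:

```latex
\lim_{p \to 0} \|x\|_p^p \;=\; \lim_{p \to 0} \sum_i |x_i|^p \;=\; \#\{\, i : x_i \neq 0 \,\} \;=\; \|x\|_0 .
```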

Sign patterns of exact solutions
Additional characterization is possible when the sufficient conditions for equivalence of the l0 and l1 problems are not met: the support and the sign pattern of an l0-optimal solution x determine whether that solution is also l1-optimal (an experiment illustrating this follows below).
[Slide figure: A is 10x40. Left: correct l1 solutions. Right: wrong l1 solutions.]
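A sketch of such an experiment (the matrix, support, and sign patterns are arbitrary stand-ins): fix a support, vary only the signs, and check whether l1 recovers the planted solution:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
M, N, k = 10, 40, 4
A = rng.standard_normal((M, N))
support = [2, 11, 23, 37]                      # a fixed support of size k

def l1_solve(A, y):
    """min ||x||_1 s.t. Ax = y, via the split x = u - v with u, v >= 0."""
    M, N = A.shape
    res = linprog(c=np.ones(2 * N),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

for signs in ([1, 1, 1, 1], [1, -1, 1, -1], [-1, 1, -1, 1], [1, 1, -1, -1]):
    x0 = np.zeros(N)
    x0[support] = signs
    x_hat = l1_solve(A, A @ x0)
    ok = np.allclose(x_hat, x0, atol=1e-6)
    print(f"signs {signs}: l1 recovered the l0 solution? {ok}")
```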

Sparsity under a transformation
Consider a more general problem: min ||Dx||_p^p subject to y = Ax, where 0 < p ≤ 1 and D is a given full-row-rank linear mapping.
Let N be a basis for Null(D), and let F = AN. Project y onto 1) the range space of F and 2) its orthogonal complement. Defining z = Dx, the problem reduces to a standard sparse problem in z (a sketch of the reduction follows below).
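A plausible reconstruction of the missing reduction, writing D^+ for the pseudoinverse of the full-row-rank D and N for a null-space basis (this is my transcription, not the slide's own derivation):

```latex
x = D^{+} z + N w
\;\Rightarrow\;
y = A D^{+} z + F w, \qquad F = AN .
\\[4pt]
\text{Let } P_{F^{\perp}} \text{ project onto } \mathrm{range}(F)^{\perp}.
\text{ The component of } y \text{ in } \mathrm{range}(F) \text{ fixes } w, \text{ leaving}
\\[4pt]
\min_z \|z\|_p^p \quad \text{s.t.} \quad P_{F^{\perp}}\, y = P_{F^{\perp}} A D^{+} z .
```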

Sparsity under a transformation (continued)
Example: total variation (TV) reconstruction of a piecewise-constant signal. D is a 39x40 pairwise-difference operator and A is 10x40. In the slide's plots, the l2 reconstruction blurs the edges, while the l1 (TV) reconstruction recovers the original signal.
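A minimal sketch of the TV reconstruction as a linear program, min ||Dx||_1 s.t. Ax = y, using slack variables t with -t <= Dx <= t (the sizes mirror the slide; the random A and the test signal are stand-ins):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
M, N = 10, 40
A = rng.standard_normal((M, N))
D = np.eye(N - 1, N, 1) - np.eye(N - 1, N)     # 39x40 pairwise-difference operator

x_true = np.zeros(N)
x_true[15:30] = 1.0                            # piecewise-constant stand-in signal
y = A @ x_true

# min ||D x||_1 s.t. A x = y, as an LP in (x, t): min 1^T t with -t <= Dx <= t.
P = N - 1
c = np.concatenate([np.zeros(N), np.ones(P)])
A_ub = np.block([[D, -np.eye(P)], [-D, -np.eye(P)]])   #  Dx - t <= 0, -Dx - t <= 0
b_ub = np.zeros(2 * P)
A_eq = np.hstack([A, np.zeros((M, P))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * P, method="highs")
x_tv = res.x[:N]
print("max reconstruction error:", np.abs(x_tv - x_true).max())
```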

Numerical optimization for l1
The l1 problem: min ||x||_1 subject to y = Ax.
Solution by linear programming: split x = x+ - x- with x+, x- ≥ 0. The problem becomes min 1'(x+ + x-) subject to A(x+ - x-) = y, x+ ≥ 0, x- ≥ 0, a standard-form LP.
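A sketch of this reformulation with an off-the-shelf LP solver (scipy's linprog stands in for the talk's solver):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. Ax = y via the standard-form LP in (x_plus, x_minus).

    At the optimum x_plus and x_minus have disjoint supports, so
    x_plus + x_minus = |x| and the LP objective equals ||x||_1."""
    M, N = A.shape
    c = np.ones(2 * N)                      # objective 1'(x+ + x-)
    A_eq = np.hstack([A, -A])               # A(x+ - x-) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

# Demo on a random underdetermined system with a sparse ground truth.
rng = np.random.default_rng(3)
A = rng.standard_normal((10, 40))
x0 = np.zeros(40); x0[[5, 20]] = [1.0, -0.5]
print(np.round(basis_pursuit(A, A @ x0)[[5, 20]], 4))
```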

Numerical optimization: noisy complex data
Handling noise: relax the exact constraint y = Ax to a data-fidelity constraint on the residual.
For some applications the data is complex; the moduli |x_i| make the problem nonlinear in the real and imaginary parts, so we use second-order cone programming (SOCP), with an efficient solution by an interior-point implementation.
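A minimal sketch of one such second-order cone program, using the cvxpy modeling layer rather than the talk's interior-point implementation; the constrained form and the noise level beta are assumptions:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
M, N, beta = 10, 40, 0.1
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
x0 = np.zeros(N, dtype=complex); x0[[7, 19]] = [1 + 1j, -2.0]
y = A @ x0 + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

x = cp.Variable(N, complex=True)
# min ||x||_1 s.t. ||y - Ax||_2 <= beta: an SOCP once the moduli are expanded.
prob = cp.Problem(cp.Minimize(cp.norm1(x)),
                  [cp.norm(y - A @ x, 2) <= beta])
prob.solve()
print("recovered support:", np.nonzero(np.abs(x.value) > 1e-3)[0])
```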

Numerical optimization for lp: half-quadratic regularization
Noisy lp formulation: min_x ||y - Ax||_2^2 + lambda ||x||_p^p.
Smooth approximation to lp: ||x||_p^p ≈ sum_i (|x_i|^2 + eps)^(p/2) for a small eps > 0.
Iterative half-quadratic regularization algorithm: freeze the weights at the current iterate and solve the resulting linear system, x^(k+1) = (2 A'A + p lambda W(x^(k)))^(-1) 2 A'y with W(x) = diag((|x_i|^2 + eps)^(p/2 - 1)).
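A sketch of the iteration (lam, p, eps, and the iteration count are illustrative, and any stopping rule is omitted):

```python
import numpy as np

def lp_half_quadratic(A, y, lam=0.1, p=0.5, eps=1e-8, iters=50):
    """Fixed-point / half-quadratic iteration for
    min_x ||y - Ax||_2^2 + lam * sum((|x_i|^2 + eps)^(p/2)).

    Each step freezes the diagonal weights at the current iterate
    and solves the resulting linear system."""
    AtA = A.conj().T @ A
    Aty = A.conj().T @ y
    x = Aty.copy()                                  # simple initialization
    for _ in range(iters):
        w = (np.abs(x) ** 2 + eps) ** (p / 2 - 1)   # half-quadratic weights
        x = np.linalg.solve(2 * AtA + lam * p * np.diag(w), 2 * Aty)
    return x
```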

Applications
Source localization and radar imaging (examples shown on the slides). Other applications: subset and feature selection, denoising, object recognition.