A Sparse Solution of Ax=b, x≥0, is Necessarily Unique !!
Alfred M. Bruckstein, Michael Elad & Michael Zibulevsky
The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel

A Non-Negative Sparse Solution to Ax=b is Unique 2/17 Overview

- We are given an underdetermined linear system of equations Ax=b, where A is a full-rank n×k matrix with k>n.
- There are infinitely many possible solutions in the set S = {x | Ax=b}.
- What happens when we demand positivity, x≥0? Surely we should have S⁺ = {x | Ax=b, x≥0} ⊆ S.
- Our result: for a specific type of matrices A, if a sparse enough solution is found, then S⁺ is a singleton (i.e., there is only one solution).
- In such a case, the regularized problem min f(x) s.t. Ax=b, x≥0 reaches the same solution regardless of the choice of the regularizer f(x) (e.g., L0, L1, L2, L∞, entropy, etc.); a small numerical sketch of this singleton behavior follows below.

In this talk we shall briefly explain how this result is obtained, and discuss some of its implications.
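To make the singleton claim tangible, here is a minimal numerical sketch (my own construction, not part of the talk; the matrix sizes and the use of scipy's LP solver are assumptions). If S⁺ is a singleton, then minimizing and maximizing an arbitrary linear objective over it must return the same point:

```python
# Minimal sketch (assumed setup): a strictly positive A (so S+ is bounded),
# a very sparse non-negative x0, and a random linear objective optimized in
# both directions over S+ = {x | Ax = b, x >= 0}.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, s = 20, 50, 2                            # underdetermined: k > n
A = rng.random((n, k)) + 0.1                   # strictly positive matrix
x0 = np.zeros(k)
x0[rng.choice(k, s, replace=False)] = rng.random(s) + 0.5
b = A @ x0

c = rng.standard_normal(k)                     # arbitrary objective direction
lo = linprog(c,  A_eq=A, b_eq=b, bounds=(0, None)).x
hi = linprog(-c, A_eq=A, b_eq=b, bounds=(0, None)).x
print(np.max(np.abs(lo - hi)), np.max(np.abs(lo - x0)))   # both ~0
```

When both optima coincide with x0, no regularizer can reach anything else over S⁺, which is exactly the singleton statement above.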

A Non-Negative Sparse Solution to Ax=b is Unique 3/17 Preliminaries

A Non-Negative Sparse Solution to Ax=b is Unique 4/17 Stage 1: Coherence Measures

- Consider the Gram matrix G = AᵀA.
- Prior work on L0–L1 equivalence relies on the mutual-coherence measure μ(A) = max over i≠j of |aᵢᵀaⱼ| / (‖aᵢ‖₂·‖aⱼ‖₂), where aᵢ is the i-th column of A.
- In our work we need a slightly different, one-sided coherence measure ρ(A), built from the off-diagonal entries of G.
- Note that this one-sided measure is weaker (i.e., μ(A) ≤ ρ(A)), but it is the one necessary for our analysis.
- Both behave roughly like √(2·log(k)/n) for random A with (0,1)-normal i.i.d. entries.
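The following helper (my own, not from the talk) computes the standard two-sided mutual coherence and compares it with the √(2·log(k)/n) scaling for a Gaussian matrix; the exact definition of the one-sided ρ(A) is specific to the underlying paper and is omitted here:

```python
# Sketch of the two-sided mutual coherence mu(A); the helper name is mine.
import numpy as np

def mutual_coherence(A):
    """mu(A) = max_{i != j} |a_i^T a_j| / (||a_i|| * ||a_j||)."""
    An = A / np.linalg.norm(A, axis=0)   # normalize the columns
    G = An.T @ An                        # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)             # ignore the unit diagonal
    return np.abs(G).max()

rng = np.random.default_rng(1)
n, k = 100, 200
A = rng.standard_normal((n, k))          # i.i.d. N(0,1) entries
print(mutual_coherence(A), np.sqrt(2 * np.log(k) / n))  # similar magnitudes
```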

A Non-Negative Sparse Solution to Ax=b is Unique 5/17 Stage 2: The Null-Space of A

It is relatively easy to show that if Ay=0 then, for every index i, |yᵢ| ≤ (ρ/(1+ρ))·‖y‖₁.

In words: a vector in the null-space of A cannot have arbitrarily large entries relative to its L1 length. The smaller the coherence, the stronger this limit becomes.
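A quick numerical check of this null-space bound (my own construction), stated with the classical two-sided coherence μ, for which the lemma is standard; the slide's version uses ρ instead:

```python
# Check |y_i| <= mu/(1+mu) * ||y||_1 for a random y in the null space of A.
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 60
A = rng.standard_normal((n, k))
An = A / np.linalg.norm(A, axis=0)       # the lemma assumes unit-norm columns
G = An.T @ An
np.fill_diagonal(G, 0.0)
mu = np.abs(G).max()

_, _, Vt = np.linalg.svd(An)
N = Vt[n:].T                             # basis of the (k - n)-dim null space
y = N @ rng.standard_normal(k - n)       # random null-space vector
assert np.allclose(An @ y, 0, atol=1e-10)
print(np.abs(y).max(), mu / (1 + mu) * np.abs(y).sum())  # lhs <= rhs
```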

A Non-Negative Sparse Solution to Ax=b is Unique 6/17 Stage 3: An Equivalence Theorem

Consider the following problem, with a concave, semi-monotonic increasing function f:

(Q_f)   min Σⱼ f(xⱼ)  subject to  Ax=b, x≥0.

Theorem 1: A feasible solution x (i.e., Ax=b, x≥0) to this problem is the unique global optimum if it is sparse enough, namely if ‖x‖₀ < (1/2)·(1 + 1/ρ(A)).

Note: Similar (but somewhat different!) results appear elsewhere [Donoho & Elad '03], [Gribonval & Nielsen '03, '04], [Escoda, Granai & Vandergheynst '04].

A Non-Negative Sparse Solution to Ax=b is Unique 7/17 The Main Result

A Non-Negative Sparse Solution to Ax=b is Unique 8/17 Stage 1: Limitations on A

- So far we considered a general matrix A.
- From now on, we shall assume that A belongs to the set O⁺ of matrices satisfying the condition: the span of the rows of A intersects the positive orthant, i.e., there exists h such that Aᵀh > 0 element-wise.
- O⁺ includes:
  - all the positive matrices A, and
  - all matrices having at least one strictly positive (or negative) row.
- Note: if A ∈ O⁺, then so does the product P·A·Q for any invertible matrix P and any diagonal and strictly positive matrix Q.

A Non-Negative Sparse Solution to Ax=b is Unique 9/17 Stage 2: Canonization of Ax=b

- Suppose that we found h such that Aᵀh = w > 0.
- Thus, every solution of Ax=b satisfies hᵀAx = wᵀx = hᵀb.
- Using the element-wise positive scale mapping z = Wx, with W = diag(w)/(hᵀb), we get a system of the form Dz=b with D = A·W⁻¹.
- Implication: we got a linear system of equations for which we also know that every solution must sum to a constant, 1ᵀz = wᵀx/(hᵀb) = 1.
- If z≥0, the additional requirement 1ᵀz = 1 is equivalent to ‖z‖₁ = 1. This brings us to the next result … (a short sketch of this canonization in code follows below).
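A short sketch of the canonization, under the simplifying assumption that A is strictly positive, so h = 1 (the all-ones vector) already gives w = Aᵀh > 0; the formula W = diag(w)/(hᵀb) is the reconstruction used above:

```python
# Canonization sketch: build D z = b whose every solution satisfies 1^T z = 1.
import numpy as np

rng = np.random.default_rng(3)
n, k = 20, 50
A = rng.random((n, k)) + 0.1             # strictly positive => A is in O+
x0 = np.zeros(k)
x0[[3, 17]] = [1.0, 2.0]                 # a sparse non-negative solution
b = A @ x0

h = np.ones(n)                           # h^T A > 0 element-wise
w = A.T @ h                              # w > 0
W = np.diag(w / (h @ b))                 # the positive diagonal scaling
D = A @ np.linalg.inv(W)                 # canonized system: D z = b
z0 = W @ x0                              # the corresponding solution
print(np.allclose(D @ z0, b), z0.sum())  # feasible, and the entries sum to 1
```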

A Non-Negative Sparse Solution to Ax=b is Unique 10/17 Stage 3: The Main Result

Given a system of linear equations Ax=b with A ∈ O⁺ (A of size n×k, k>n), we consider the set of non-negative solutions S⁺ = {x | Ax=b, x≥0}. Assume that a canonization Dz=b is performed by finding a suitable h and computing the diagonal positive matrix W = diag(Aᵀh)/(hᵀb), so that D = A·W⁻¹.

Theorem 2: If a sparse solution is found such that ‖x‖₀ < (1/2)·(1 + 1/ρ(D)), then S⁺ is a singleton.

A Non-Negative Sparse Solution to Ax=b is Unique 11/17 Sketch of the Proof

1. Find h such that Aᵀh = w > 0.
2. Canonize the system to become Dz=b, using W = diag(w)/(hᵀb) and D = A·W⁻¹; there is a one-to-one mapping between solutions of Dz=b and solutions of Ax=b.
3. Any solution of Dz=b satisfies 1ᵀz = 1, so non-negativity of the solutions implies that every member of the solution set has ‖z‖₁ = 1.
4. Consider Q₁, the optimization problem min ‖z‖₁ s.t. Dz=b, z≥0, and recall that we are given a sparse solution z≥0 satisfying ‖z‖₀ < (1/2)·(1 + 1/ρ(D)).
5. By Theorem 1, z is the unique global minimizer of Q₁. Since no other solution can give the same L1 length, the set S⁺ contains only z.

A Non-Negative Sparse Solution to Ax=b is Unique 12/17 Some Thoughts

A Non-Negative Sparse Solution to Ax=b is Unique 13/17 How About Pre-Coherencing?

- The above result is true for A or for any variant of it obtained as A' = P·A·Q (P invertible, Q diagonal and positive).
- Thus, we had better evaluate the coherence for a "better-conditioned" variant of A (after canonization) with the smallest possible coherence, i.e., ρ̂(A) = min over such P, Q of ρ(P·A·Q).
- One trivial option is a P that nulls the mean of the columns (illustrated in the sketch below).
- Better choices of P and Q can be found numerically [Elad '07].
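A toy illustration (my own construction) of the first option: subtracting (most of) the common mean of the columns of a positive matrix typically lowers its coherence. An exact mean-nulling projection is singular, so an invertible near-projection is used here instead, measured with the two-sided μ:

```python
# Pre-coherencing demo: coherence of a positive A before and after P.
import numpy as np

def mu(A):
    An = A / np.linalg.norm(A, axis=0)
    G = An.T @ An
    np.fill_diagonal(G, 0.0)
    return np.abs(G).max()

rng = np.random.default_rng(4)
n, k = 100, 200
A = rng.random((n, k)) + 0.1                   # positive => coherent columns
P = np.eye(n) - 0.99 * np.ones((n, n)) / n     # invertible, nearly nulls means
print(mu(A), mu(P @ A))                        # coherence drops after P
```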

A Non-Negative Sparse Solution to Ax=b is Unique 14/17 Solve min ‖x‖₀ s.t. Ax=b, x≥0

If we are interested in a sparse result (which apparently may be unique), we could:
- Trust the uniqueness and regularize with whatever method we want.
- Solve an L1-regularized problem: min ‖x‖₁ s.t. Ax=b, x≥0 (a linear program).
- Use a greedy algorithm (e.g., OMP) that finds one atom at a time by minimizing the residual, where positivity is enforced both in checking which atom to choose and in the LS step after an atom is chosen (see the sketch below).
- OMP is guaranteed to find the sparse solution of this system if it is sparse enough [Tropp '06], [Donoho, Elad & Temlyakov '06].
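Below is a minimal sketch of such a greedy pursuit (an assumed non-negative OMP variant, not necessarily the authors' exact algorithm), using scipy's non-negative least squares for the LS step:

```python
# Non-negative OMP sketch: pick the atom with the largest positive
# correlation with the residual, then re-fit the support with NNLS.
import numpy as np
from scipy.optimize import nnls

def nn_omp(A, b, n_atoms):
    support, coef = [], np.array([])
    r = b.copy()
    for _ in range(n_atoms):
        corr = A.T @ r
        if support:
            corr[support] = -np.inf        # never reselect an atom
        j = int(np.argmax(corr))
        if corr[j] <= 0:                   # no atom correlates positively
            break
        support.append(j)
        coef, _ = nnls(A[:, support], b)   # LS step with positivity enforced
        r = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# A Gaussian dictionary keeps the plain correlation test reliable here; for a
# positive dictionary one would first apply a pre-coherencing P as above.
rng = np.random.default_rng(5)
A = rng.standard_normal((100, 200))
x0 = np.zeros(200)
x0[[5, 40]] = [1.0, 0.5]
print(np.max(np.abs(nn_omp(A, A @ x0, 2) - x0)))   # ~0: exact recovery
```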

A Non-Negative Sparse Solution to Ax=b is Unique 15/17 OMP and Pre-Coherencing

- As opposed to the L1 approach, pre-coherencing helps OMP and improves its performance.
- Experiment:
  - a random non-negative matrix A of size 100×200,
  - generate 40 random positive solutions x with varying cardinalities,
  - check the average performance.
- L1 performs better, BUT it takes much longer to run.
- OMP may be further improved by better pre-coherencing.

A Non-Negative Sparse Solution to Ax=b is Unique 18/17 Relation to Compressed Sensing?

- A signal model: b belongs to a signal family that has a (very) sparse and non-negative representation over the dictionary A.
- CS measurement: instead of measuring b, we measure a projected version of it, Rb=d.
- CS reconstruction: we seek the sparsest non-negative solution of the system RAx=d – the scenario described in this work!!
- Our result: we know that if the (non-negative) representation x was sparse enough to begin with, ANY method that solves this system necessarily finds it exactly.
- A little bit of bad news: we require very strong sparsity for this claim to hold, so further work is required to strengthen this result.
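To close the loop, a sketch (my own construction, with assumed sizes) of the CS scenario: if the reduced system's non-negative solution set is a singleton, any two regularizers must return the same point, so comparing two minimizers is a cheap numerical certificate:

```python
# CS-flavoured check: measure d = Rb, then minimize two different positive
# objectives over {x | (RA)x = d, x >= 0} and compare the minimizers.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
n, k, m, s = 50, 120, 25, 2
A = rng.random((n, k)) + 0.1               # positive dictionary
x0 = np.zeros(k)
x0[rng.choice(k, s, replace=False)] = rng.random(s) + 0.5
R = rng.standard_normal((m, n))            # random measurement matrix
M, d = R @ A, R @ (A @ x0)

c1, c2 = rng.random(k) + 0.1, rng.random(k) + 0.1   # two positive objectives
x1 = linprog(c1, A_eq=M, b_eq=d, bounds=(0, None)).x
x2 = linprog(c2, A_eq=M, b_eq=d, bounds=(0, None)).x
print(np.max(np.abs(x1 - x2)), np.max(np.abs(x1 - x0)))  # ~0 if sparse enough
```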

A Non-Negative Sparse Solution to Ax=b is Unique 19/17 Conclusions

- Non-negative sparse and redundant representation models are useful in the analysis of multi-spectral imaging, astronomical imaging, and more.
- In our work we show that when a sparse representation exists, it may be the only one possible.
- This explains why various regularization methods (entropy, L2, and even L∞) were found to lead to a sparse outcome.
- Future work topics:
  - average performance (replacing the presented worst-case analysis)?
  - the influence of noise (approximation instead of exact representation)?
  - better pre-coherencing?
  - showing applications?