Sparse Solutions of Linear Systems of Equations and Sparse Modeling of Signals and Images
Alfredo Nava-Tudela
John J. Benedetto, advisor
AMSC 663

Outline
- Problem: Find "sparse solutions" of Ax = b
- Why do we care? An application in signal processing: compression
- Working definition of "sparse solution"
- Why things eventually work, for some cases
- How do we find sparse solutions? The OMP algorithm
- Case study: What happens when we combine the DCT and DWT?
- Conclusions/Recap
- Project timeline

Problem
Let A be an n by m matrix with n < m and rank(A) = n. We want to solve Ax = b, where b is a data or signal vector, and x is the solution with the fewest non-zero entries possible, that is, the "sparsest" one.
Observations:
- The system Ax = b is underdetermined and, since rank(A) = n, it has infinitely many solutions. Good!
- How do we find the "sparsest" solution? What does this mean in practice? Is there a unique sparsest solution?
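To make this concrete, here is a minimal Matlab sketch (the numbers are illustrative, not from the project): for a 2 by 3 full-rank A, the minimum-norm solution returned by the pseudoinverse is dense, while a far sparser solution also exists.

    A = [1 0 1; 0 1 1];      % 2 x 3, rank 2: the system Ax = b is underdetermined
    b = [1; 1];
    x_dense = pinv(A) * b;   % minimum l2-norm solution [1/3; 1/3; 2/3]: no zero entries
    x_sparse = [0; 0; 1];    % also solves Ax = b, with a single non-zero entry
    norm(A * x_sparse - b)   % 0, so x_sparse is an exact, and here sparsest, solution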

But why do we care?
We live in a digital media era, and bandwidth varies and matters: cellphone vs. DSL vs. Gigabit, etc.
[Image: the same photograph at 231 kB uncompressed (320 x 240 x 3 x 8 bit) and at 74 kB after 3.24:1 JPEG compression]

But why do we care?
The ever-increasing size of files to transmit demands better compression algorithms. At the core of the JPEG and JPEG2000 standards sit the DCT and the DWT, respectively: transform encoding.
[Image: 512 x 512 pixels, 24-bit RGB, 786 kB uncompressed; 10.6 kB after 75:1 JPEG2000 compression]

"Sparsity" equals compression
Both JPEG and JPEG2000 achieve their compression mainly because at their core one finds a linear transform (the DCT and the DWT, respectively) that reduces the number of non-zero entries required to represent the data within an acceptable error. We can then think of signal compression in terms of our problem: Ax = b, where x is sparse and b is dense. Store x!

Working definitions of "sparse"
It is convenient to introduce the l0 "norm": ||x||_0 = #{k : x_k ≠ 0}. We can then cast our problem as a minimization problem:
(P0): min_x ||x||_0 subject to ||Ax - b||_2 = 0
(P0_ε): min_x ||x||_0 subject to ||Ax - b||_2 ≤ ε
Observation: Solving (P0) is NP-hard (Natarajan, 1995), bummer.
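As a quick illustration (the vector is hypothetical), counting non-zero entries is one line of Matlab:

    x = [0; 3; 0; -1];
    l0 = nnz(x);        % ||x||_0 = 2, the number of non-zero entries
    l0 = sum(x ~= 0);   % an equivalent formulation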

Some theoretical results
We first address the question of uniqueness.
Definition: The spark of a matrix A is the minimum number of linearly dependent columns of A. We write spark(A) for this number.
Theorem: If there is a solution x to Ax = b with ||x||_0 < spark(A)/2, then x is the sparsest solution. That is, if y ≠ x also solves the equation, then ||x||_0 < ||y||_0.
Observation: Computing spark(A) is combinatorial, therefore hard. Alternative?
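To see why computing spark(A) is hard, here is a brute-force Matlab sketch (my own illustration, not part of the project code): it tests every subset of columns of every size, so its cost grows combinatorially in the number of columns m.

    function s = spark(A)
    % Brute-force spark: size of the smallest linearly dependent set of columns.
    [~, m] = size(A);
    for s = 1:m
        for idx = nchoosek(1:m, s)'   % each column of this matrix is one index subset
            if rank(A(:, idx)) < s    % s columns whose rank is below s are dependent
                return;
            end
        end
    end
    s = m + 1;                        % all m columns independent (possible only if m <= n)
    end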

More theoretical results
Definition: The mutual coherence of a matrix A with columns a_1, ..., a_m is the number
mu(A) = max_{1 ≤ i < j ≤ m} |a_i' a_j| / (||a_i||_2 ||a_j||_2).
Lemma: spark(A) ≥ 1 + 1/mu(A).
Theorem: If x solves Ax = b and ||x||_0 < (1 + 1/mu(A))/2, then x is the sparsest solution, as before.
Observations:
- mu(A) is a lot easier and faster to compute, but 1 + 1/mu(A) is, in general, a far worse bound than spark(A). Mutual coherence thus gives a worst-case criterion for uniqueness.
- Mutual coherence also plays a role in proving the convergence of algorithms, but we won't see this here.
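Mutual coherence, by contrast, costs a single Gram-matrix product. A minimal Matlab sketch (assuming A has no zero columns; the function name is mine):

    function mu = mutual_coherence(A)
    % mu(A): largest absolute inner product between distinct normalized columns.
    An = A ./ repmat(sqrt(sum(A.^2, 1)), size(A, 1), 1);   % unit-norm columns
    G = abs(An' * An);                                     % normalized Gram matrix
    G(1:size(G, 1) + 1:end) = 0;                           % zero the diagonal (i = j)
    mu = max(G(:));
    end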

Finding sparse solutions: OMP
Orthogonal Matching Pursuit (OMP) is one of a family of algorithms called "greedy algorithms".
Goal: implement, validate, and test it in Matlab. There are implementations available online against which to compare my implementation.
OMP is known to recover the sparsest solution when ||x||_0 < (1 + 1/mu(A))/2.
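For reference, here is a bare-bones Matlab sketch of OMP, under the usual assumption that the columns of A have unit norm (my own sketch, not the project's validated implementation): at each step it picks the column most correlated with the current residual, then re-solves a least-squares problem on the support found so far.

    function [x, support] = omp(A, b, k, tol)
    % Orthogonal Matching Pursuit: greedily build a support of at most k atoms.
    r = b;                            % current residual
    support = [];
    for i = 1:k
        [~, j] = max(abs(A' * r));    % atom most correlated with the residual
        support = [support, j];
        xs = A(:, support) \ b;       % least-squares fit on the current support
        r = b - A(:, support) * xs;   % residual is orthogonal to the chosen atoms
        if norm(r) < tol
            break;
        end
    end
    x = zeros(size(A, 2), 1);
    x(support) = xs;                  % embed the sparse coefficients
    end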

Revisiting compression
We propose to study the compression properties of the matrix A = [DCT, DWT], to compare them with those of the DCT or the DWT alone, and to study the behavior of OMP on this problem. Many wavelet options are available to try, e.g., the reversible 5/3 or the floating-point 9/7 Daubechies wavelets, as in JPEG2000. We are interested in the properties of the compression-vs-error graph. The matrix A is, typically, the concatenation of two bases.
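As a sketch of how such a two-basis dictionary can be assembled (using the Haar wavelet for simplicity instead of the 5/3 or 9/7 wavelets named above; the construction is my own illustration):

    n = 64;                                  % signal length, a power of 2
    [t, k] = meshgrid(0:n-1, 0:n-1);         % k indexes frequency (rows), t indexes time
    C = sqrt(2/n) * cos(pi * (2*t + 1) .* k / (2*n));
    C(1, :) = C(1, :) / sqrt(2);             % orthonormal DCT-II analysis matrix: C*C' = I
    H = 1;                                   % orthonormal Haar matrix, built recursively
    while size(H, 1) < n
        H = [kron(H, [1 1]); kron(eye(size(H, 1)), [1 -1])] / sqrt(2);
    end
    A = [C', H'];                            % n x 2n dictionary: DCT atoms, then Haar atoms
    mu = max(max(abs(C * H')));              % mu(A) is the cross-coherence of the two bases,
                                             % since within each orthonormal basis it is 0

A test signal b can then be handed to the OMP sketch above to obtain a joint DCT-Haar representation.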

Conclusions/Recapitulation
- Finding sparse solutions to the linear system of equations Ax = b, when A is an n by m full-rank matrix and n < m, is of interest to the signal processing community.
- There are simple criteria to assert the uniqueness of a given sparse solution.
- There are algorithms that find sparse solutions, e.g., OMP, and their convergence can be guaranteed when there are "sufficiently sparse" solutions.
- Studies of the performance of OMP are missing for the case when A is the concatenation of unitary matrices.

Project timeline
- Oct. 15, 2010: Complete project proposal.
- Oct. 18 - Nov. 5: Implement OMP.
- Nov. 8 - Nov. 26: Validate OMP.
- Nov. 29 - Dec. 3: Write mid-year report.
- Dec. 6 - Dec. 10: Prepare mid-year oral presentation (to be given some time after that).
- Jan. 24 - Feb. 11: Test OMP; reproduce results from the literature.
- Feb. 14 - Apr. 8: Test OMP on A = [DCT, DWT].
- Apr. 11 - Apr. 22: Write final report.
- Apr. 25 - Apr. 29: Prepare final oral presentation (to be given some time after that).

References
A. M. Bruckstein, D. L. Donoho, and M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images, SIAM Review, 51 (2009), pp. 34-81.
S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, 1998.
B. K. Natarajan, Sparse approximate solutions to linear systems, SIAM Journal on Computing, 24 (1995), pp. 227-234.
G. W. Stewart, Introduction to Matrix Computations, Academic Press, 1973.
D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice, Kluwer Academic Publishers, 2001.
G. K. Wallace, The JPEG still picture compression standard, Communications of the ACM, 34 (1991), pp. 30-44.
http://www.stanford.edu/class/ee398a/handouts/lectures/08-JPEG.pdf