Rank-Sparsity Incoherence for Matrix Decomposition

Reading Group: "Rank-Sparsity Incoherence for Matrix Decomposition" (Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, and Alan S. Willsky, MIT EECS, 2009)
Presenter: Zhe Chen, ECE / CMR, Tennessee Technological University, February 18, 2011

Outline
- Overview
- Introduction
- Rank-Sparsity Incoherence
- Exact Decomposition Using Semidefinite Programming
- Simulation Results
- One-Sentence Summary

Overview
- We are given a matrix formed by adding an unknown sparse matrix to an unknown low-rank matrix; the problem is to decompose the given matrix into its sparse and low-rank components.
- A notion of rank-sparsity incoherence is developed.
- Sufficient conditions for exact recovery are given.
- When the sparse and low-rank matrices are drawn from certain natural random ensembles, these sufficient conditions are satisfied with high probability.

The Problem
Given C = A* + B*, where A* is an unknown sparse matrix and B* is an unknown low-rank matrix, recover A* and B*. Indeed, there are a number of scenarios in which a unique splitting of C into "low-rank" and "sparse" parts may not exist, so identifiability conditions must be imposed.

Identifiability Problems (1)
1. The low-rank matrix may itself be very sparse; to rule this out, impose a condition on the row/column spaces of the low-rank matrix. For a matrix M, let T(M) be the tangent space at M with respect to the variety of all matrices with rank less than or equal to rank(M), and define

ξ(M) := max { ‖N‖_∞ : N ∈ T(M), ‖N‖ ≤ 1 },

where ‖·‖ is the spectral norm and ‖N‖_∞ is the largest entry magnitude of N. ξ(M) being small implies that M cannot be very sparse.
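
As a rough numerical illustration (our own sketch, not from the paper), ξ(M) can be lower-bounded by sampling random elements of T(M) rescaled to unit spectral norm; the function name and sampling scheme here are hypothetical:

```python
import numpy as np

def xi_lower_bound(M, n_samples=10_000, seed=0):
    """Monte Carlo lower bound on xi(M) = max{ ||N||_inf : N in T(M), ||N|| <= 1 }.

    Only a lower bound: we sample random elements of the tangent space T(M).
    """
    rng = np.random.default_rng(seed)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = int(np.sum(s > 1e-10))                 # numerical rank of M
    U, V = U[:, :k], Vt[:k, :].T
    best = 0.0
    for _ in range(n_samples):
        X = rng.standard_normal((M.shape[1], k))
        Y = rng.standard_normal((M.shape[0], k))
        N = U @ X.T + Y @ V.T                  # random element of T(M)
        N /= np.linalg.norm(N, 2)              # rescale to unit spectral norm
        best = max(best, np.abs(N).max())      # track the largest entry, ||N||_inf
    return best
```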

Identifiability Problems (2)
2. The sparse matrix may have all of its support concentrated in one row/column (making it low-rank as well); to rule this out, impose a condition on the sparsity pattern of the sparse matrix. For a matrix M, let Ω(M) be the tangent space at M with respect to the variety of all matrices with number of non-zero entries less than or equal to |support(M)|, and define

μ(M) := max { ‖N‖ : N ∈ Ω(M), ‖N‖_∞ ≤ 1 }.

μ(M) being small implies that the singular values of matrices supported on support(M) are not too large.
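
Because the spectral norm is convex, the maximum defining μ(M) is attained at a vertex of the feasible box, i.e., at a ±1 sign pattern on support(M). That makes μ(M) computable exactly for very small supports; a brute-force sketch (our own code, not the authors'):

```python
import itertools
import numpy as np

def mu_exact(M, tol=1e-10):
    """mu(M) = max{ ||N|| : support(N) inside support(M), ||N||_inf <= 1 }.

    Enumerates all 2^|support(M)| sign patterns, so only usable for tiny supports.
    """
    support = np.argwhere(np.abs(M) > tol)
    best = 0.0
    for signs in itertools.product([-1.0, 1.0], repeat=len(support)):
        N = np.zeros(M.shape)
        for (i, j), s in zip(support, signs):
            N[i, j] = s                         # vertex of the unit infinity-norm box
        best = max(best, np.linalg.norm(N, 2))  # spectral norm of this sign pattern
    return best
```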

Rank-Sparsity Incoherence
However, for a given matrix M ≠ 0, it is impossible for both quantities ξ(M) and μ(M) to be small simultaneously. The authors develop this into a notion of rank-sparsity incoherence: an uncertainty principle between the sparsity pattern of a matrix and its row/column spaces.

Optimization Formulation
In general, solving the decomposition problem is NP-hard, so a convex relaxation is employed. The following optimization formulation is proposed to recover A* and B* given C = A* + B*:

(Â, B̂) = argmin over (A, B) of γ‖A‖₁ + ‖B‖_*  subject to  A + B = C,   (1.3)

where γ is a trade-off parameter, ‖A‖₁ = Σᵢⱼ |Aᵢⱼ| is the ℓ1 norm of the entries, ‖B‖_* is the nuclear norm (the sum of the singular values), and (Â, B̂) is the decomposition estimate. This convex program can be solved as a semidefinite program (SDP).
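
A minimal sketch of this convex program in Python with CVXPY (a generic solver call, not the authors' implementation; the function name is ours):

```python
import cvxpy as cp

def decompose(C, gamma):
    """Solve (1.3): min_{A,B} gamma * ||A||_1 + ||B||_* subject to A + B = C."""
    A = cp.Variable(C.shape)
    B = cp.Variable(C.shape)
    objective = cp.Minimize(gamma * cp.sum(cp.abs(A)) + cp.normNuc(B))
    problem = cp.Problem(objective, [A + B == C])
    problem.solve()
    return A.value, B.value
```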

Conditions
This paper provides simple deterministic conditions for exact recovery; the conditions depend only on the row/column spaces of the low-rank matrix B* and the support of the sparse matrix A*.

Tangent-Space Identifiability (1)
The algebraic variety of rank-constrained matrices is defined as P(k) := { M ∈ ℝ^{n×n} : rank(M) ≤ k }. For any matrix M, the tangent space T(M) with respect to P(rank(M)) at M is the span of all matrices with either the same row-space as M or the same column-space as M. Let M = UΣVᵀ be the compact singular value decomposition, with rank(M) = k. Then:

T(M) = { UXᵀ + YVᵀ : X, Y ∈ ℝ^{n×k} }.
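
The orthogonal projection onto T(M) has a standard closed form built from the projectors onto the column and row spaces of M; a small sketch consistent with the definition above (the helper name is ours):

```python
import numpy as np

def project_T(N, M, tol=1e-10):
    """Orthogonal projection of N onto T(M) = { U X^T + Y V^T }."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = int(np.sum(s > tol))                   # rank of M
    PU = U[:, :k] @ U[:, :k].T                 # projector onto column space of M
    PV = Vt[:k, :].T @ Vt[:k, :]               # projector onto row space of M
    return PU @ N + N @ PV - PU @ N @ PV
```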

Tangent-Space Identifiability (2)
The variety of support-constrained matrices is defined as S(m) := { M ∈ ℝ^{n×n} : |support(M)| ≤ m }. For any matrix M, the tangent space Ω(M) with respect to S(|support(M)|) at M is given by

Ω(M) = { N ∈ ℝ^{n×n} : support(N) ⊆ support(M) }.

Tangent-Space Identifiability (3)
A necessary and sufficient condition for unique recovery is:

Ω(A*) ∩ T(B*) = {0}.

That is, the subspaces Ω(A*) and T(B*) have a trivial intersection. (Proof is omitted here.)
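
Numerically, the intersection is trivial exactly when the largest eigenvalue of the self-adjoint operator P_{Ω(A*)} P_{T(B*)} restricted to Ω(A*) is strictly below 1 (an eigenvalue of 1 corresponds to a matrix lying in both subspaces). A sketch of this check (our own code; project_T is the helper sketched above):

```python
import numpy as np

def tangent_spaces_transversal(A_star, B_star, tol=1e-10):
    """True iff Omega(A*) and T(B*) intersect only at 0."""
    support = np.argwhere(np.abs(A_star) > tol)
    d = len(support)
    G = np.zeros((d, d))                       # matrix of P_Omega P_T on Omega(A*)
    for c, (i, j) in enumerate(support):
        E = np.zeros(A_star.shape)
        E[i, j] = 1.0                          # basis element of Omega(A*)
        PTE = project_T(E, B_star)             # project onto T(B*)
        G[:, c] = [PTE[p, q] for p, q in support]  # restrict back to Omega(A*)
    lam_max = np.linalg.eigvalsh((G + G.T) / 2).max()
    return lam_max < 1.0 - 1e-8                # trivial intersection iff < 1
```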

Rank-Sparsity Uncertainty Principle
For any matrix M ≠ 0, ξ(M) and μ(M) cannot both be small simultaneously; Theorem 1 states that ξ(M) μ(M) ≥ 1. Note that Proposition 1 (which guarantees Ω(A*) ∩ T(B*) = {0} when μ(A*) ξ(B*) < 1) involves two different matrices, while Theorem 1 concerns a single matrix. (Proof is omitted here.)

Optimality Condition (1)
Notation:
- The orthogonal projection onto the space Ω(A*) is denoted P_{Ω(A*)}; it simply sets to zero those entries with support not inside support(A*).
- The subspace orthogonal to Ω(A*) is denoted Ω(A*)^⊥, and the projection onto it is denoted P_{Ω(A*)^⊥}.
- The orthogonal projection onto the space T(B*) is denoted P_{T(B*)}.
- The space orthogonal to T(B*) is denoted T(B*)^⊥, and the corresponding projection is denoted P_{T(B*)^⊥}.

Optimality Condition (2)
Proposition 2 gives the standard subgradient optimality condition: (A*, B*) is the unique optimum of (1.3) if (1) Ω(A*) ∩ T(B*) = {0}, and (2) there exists a dual matrix Q lying in the subdifferentials of both norm terms (roughly, P_{T(B*)}(Q) = UVᵀ with ‖P_{T(B*)^⊥}(Q)‖ < 1, and P_{Ω(A*)}(Q) = γ sign(A*) with ‖P_{Ω(A*)^⊥}(Q)‖_∞ < γ). (Proof is omitted here.)

Sufficient Conditions Based on μ(A*) and ξ(B*)
Given matrices A* and B* with μ(A*) ξ(B*) < 1, condition (1) of Proposition 2 is satisfied by Proposition 1. If a slightly stronger condition holds (μ(A*) ξ(B*) < 1/6), there exists a dual Q that satisfies the requirements of condition (2) of Proposition 2 for a range of the trade-off parameter γ. (Proof is omitted here.)

Sparse and Low-Rank Matrices with μ(A*) ξ(B*) < 1/6 (1)
The main deterministic result (Theorem 2): given C = A* + B* with μ(A*) ξ(B*) < 1/6, the semidefinite program (1.3) has (A*, B*) as its unique optimum for a range of values of γ. (Proof is omitted here.)

Sparse and Low-Rank Matrices with μ(A*) ξ(B*) < 1/6 (2)
(Proof is omitted here.) This is a result with deterministic sufficient conditions for exact decomposability; Corollary 3 restates the condition in terms of more easily interpreted quantities, namely the maximum number of non-zero entries per row/column of A* and the incoherence of the row/column spaces of B*.

Decomposing Random Sparse and Low-Rank Matrices (1)
Sparse and low-rank matrices drawn from certain natural random ensembles satisfy the sufficient conditions of Corollary 3 with high probability. Random sparsity model: the support of the sparse matrix A* is chosen uniformly at random; no assumption is made on the magnitudes of the non-zero entries. Sparse matrices drawn from this model have their support spread out, and hence small μ, with high probability. (Proof is omitted here.)

Decomposing Random Sparse and Low-Rank Matrices (2)
Random orthogonal model: consider low-rank matrices B* = UΣVᵀ in which the singular vectors (the columns of U and V) are chosen uniformly at random from the set of all partial isometries. Low-rank matrices drawn from such a model have incoherent row/column spaces, and hence small ξ, with high probability. (Proof is omitted here.)
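
A sketch of sampling from the two ensembles (sizes, seeds, and the magnitudes placed on the random support and singular values are our own choices, since the models constrain only the support and the singular vectors):

```python
import numpy as np

def random_sparse(n, m, seed=0):
    """Random sparsity model: support of size m chosen uniformly at random."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    idx = rng.choice(n * n, size=m, replace=False)
    A.flat[idx] = rng.standard_normal(m)       # arbitrary nonzero magnitudes
    return A

def random_low_rank(n, k, seed=0):
    """Random orthogonal model: singular vectors are random partial isometries."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, k)))  # random n x k isometry
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    sigma = 1.0 + np.abs(rng.standard_normal(k))      # arbitrary singular values
    return U @ np.diag(sigma) @ V.T
```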

Decomposing Random Sparse and Low-Rank Matrices (3)
Applying these two results in conjunction with Corollary 3, sparse and low-rank matrices drawn from the random sparsity model and the random orthogonal model can be uniquely decomposed with high probability. (Proof is omitted here.)

Simulation Results (1)
[Figure omitted in transcript.]

Simulation Results (2)
Define the recovery error

tol = ‖Â − A*‖_F / ‖A*‖_F + ‖B̂ − B*‖_F / ‖B*‖_F,

and declare success in recovering (A*, B*) if tol ≤ 10⁻³. Exact recovery is possible for a range of γ. Reparametrize (1.3) with γ = t/(1 − t), t ∈ (0, 1):

(Â_t, B̂_t) = argmin over (A, B) of t‖A‖₁ + (1 − t)‖B‖_*  subject to  A + B = C.   (5.2)
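
In code, the success test reads as follows (a sketch; the helper name is ours, and the threshold follows the slide):

```python
import numpy as np

def recovery_tol(A_hat, B_hat, A_star, B_star):
    """tol = ||A_hat - A*||_F / ||A*||_F + ||B_hat - B*||_F / ||B*||_F."""
    return (np.linalg.norm(A_hat - A_star) / np.linalg.norm(A_star)
            + np.linalg.norm(B_hat - B_star) / np.linalg.norm(B_star))

# Declare success when recovery_tol(A_hat, B_hat, A_star, B_star) <= 1e-3.
```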

Simulation Results (3)
Compute the difference between the solutions obtained at consecutive values of t, diff_t = ‖Â_t − Â_{t−ε}‖ for a small step ε. Generate a random A* that is 25-sparse and a random B* with rank 2. If a reasonable guess for t (or γ) is not available, one could solve (5.2) for a range of t and choose a solution corresponding to the "middle" range in which diff_t is stable and near zero.
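
A sketch of this sweep heuristic, reusing the helpers from the earlier sketches (grid spacing and problem sizes are our own choices):

```python
import numpy as np

# Reuses random_sparse, random_low_rank, decompose from the sketches above.
A_star = random_sparse(25, 25)                 # 25-sparse
B_star = random_low_rank(25, 2)                # rank 2
C = A_star + B_star

prev_A, diffs = None, []
for t in np.linspace(0.05, 0.95, 19):
    # (5.2) with weights (t, 1 - t) equals (1.3) with gamma = t / (1 - t)
    A_hat, _ = decompose(C, gamma=t / (1.0 - t))
    if prev_A is not None:
        diffs.append(np.linalg.norm(A_hat - prev_A))  # diff_t at consecutive t's
    prev_A = A_hat
# Choose t from the "middle" range where diff_t stays stable and near zero.
```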

Simulation Results (4)
[Figure omitted in transcript.]

One-Sentence Summary
Sufficient conditions on the sparse matrix A* and the low-rank matrix B* are provided under which the semidefinite program (1.3) exactly recovers them from C = A* + B*.

Thank you!