Exact Matrix Completion via Convex Optimization
Emmanuel J. Candès and Benjamin Recht, May 2008
Presenter: Shujie Hou, January 28th, 2011
Department of Electrical and Computer Engineering, Cognitive Radio Institute, Tennessee Technological University
Some Available Codes
http://perception.csl.illinois.edu/matrix-rank/sample_code.html#RPCA
http://svt.caltech.edu/code.html
http://lmafit.blogs.rice.edu/
http://www.stanford.edu/~raghuram/optspace/code.html
http://people.ee.duke.edu/~lcarin/BCS.html
Outline
The problem statement
Examples of impossible recovery
Algorithms
Main theorems
Proof
Experimental results
Discussion
Problem Considered
The problem of low-rank matrix completion: recovering a data matrix from a sampling of its entries.
Consider a matrix $M$ with $n_1$ rows and $n_2$ columns, of which we observe only a number $m$ of entries, much smaller than $n_1 n_2$.
Can the partially observed matrix be recovered, and under what conditions can it be recovered exactly? (Note that a rank-$r$ matrix has only $r(n_1 + n_2 - r)$ degrees of freedom, far fewer than $n_1 n_2$ when $r$ is small, so recovery is not hopeless.)
Examples of Impossible Recovery
Consider the rank-one matrix equal to 1 in its top-right entry and 0 everywhere else ($M = e_1 e_n^*$, the example used in the paper).
This matrix cannot be recovered unless essentially all of its entries are observed. Reason: for most sampling sets, one observes only zeros.
Not all matrices can be completed from a sample of their entries.
Examples of Impossible Recovery
If the sampling set contains no entry from the first row, the first row can never be inferred, no matter how the remaining entries are filled in.
Not every sampling set can be used to complete the matrix.
Short Conclusion
One cannot recover all low-rank matrices from every set of sampled entries.
Can one recover most matrices from almost all sampling sets of cardinality $m$?
The two theorems given later show that this is possible for most low-rank matrices under some specific conditions.
Algorithm
Intuitively, one would solve the rank-minimization problem
minimize $\operatorname{rank}(X)$ subject to $X_{ij} = M_{ij}$, $(i,j) \in \Omega$,
where $\Omega$ is the set of locations of observed entries; this problem is NP-hard.
Alternatively, consider the heuristic optimization
minimize $\|X\|_*$ subject to $X_{ij} = M_{ij}$, $(i,j) \in \Omega$,   (1.5)
in which $\|X\|_*$ is the nuclear norm of $X$, the sum of its singular values.
Is (1.5) reasonable, and to what extent is it equivalent to rank minimization? A numerical sketch follows.
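As a concrete illustration of the heuristic (1.5), here is a minimal sketch using the cvxpy modeling package (not from the slides; the sizes, rank, and number of samples are arbitrary illustrative choices):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n1, n2, r = 40, 40, 2

# A random rank-r matrix and a sampling set Omega of m entry locations.
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
m = 800
rows, cols = np.unravel_index(
    rng.choice(n1 * n2, size=m, replace=False), (n1, n2))

# (1.5): minimize ||X||_* subject to X_ij = M_ij for (i, j) in Omega.
X = cp.Variable((n1, n2))
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                  [X[rows, cols] == M[rows, cols]])
prob.solve()

print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```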
The First Theorem
Theorem 1.1 (roughly): if $M$ is drawn from the random orthogonal model and $m \ge C\, n^{5/4} r \log n$ entries are observed at locations sampled uniformly at random, where $n = \max(n_1, n_2)$, then with high probability there is a unique low-rank matrix consistent with the observed entries, and the heuristic model (1.5) recovers it exactly.
The heuristic model (1.5) is thus equivalent to the NP-hard formulation above. (Discussed in detail later.)
Random Orthogonal Model
Write the SVD (singular value decomposition) of a matrix as $M = \sum_{k=1}^{r} \sigma_k u_k v_k^*$.
In the random orthogonal model, the families $\{u_k\}$ and $\{v_k\}$ of singular vectors are selected uniformly at random among all families of $r$ orthonormal vectors, modeling a generic low-rank matrix. A small sampling sketch follows.
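A sketch of drawing a matrix from the random orthogonal model (not from the slides; QR factorization of a Gaussian matrix is one standard way to generate random orthonormal columns):

```python
import numpy as np

def random_orthogonal_model(n1, n2, r, singular_values, rng):
    # Random orthonormal singular vectors via QR of Gaussian matrices.
    U, _ = np.linalg.qr(rng.standard_normal((n1, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n2, r)))
    # M = sum_k sigma_k u_k v_k^*
    return U @ np.diag(singular_values) @ V.T

rng = np.random.default_rng(0)
M = random_orthogonal_model(40, 40, 2, [5.0, 3.0], rng)
```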
A Definition
The coherence of a subspace $U \subseteq \mathbb{R}^n$ of dimension $r$ is $\mu(U) = \frac{n}{r} \max_{1 \le i \le n} \|P_U e_i\|^2$, where $P_U$ is the orthogonal projection onto $U$.
Subspaces with low coherence are of special interest in this paper: singular vectors with low coherence are "spread out" (not sparse), which guarantees that the sampled entries cannot be essentially all zeros. A numpy sketch of this definition follows.
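A one-function numpy sketch of the coherence definition, assuming the subspace is given as an $n \times r$ matrix U with orthonormal columns:

```python
import numpy as np

def coherence(U):
    # mu(U) = (n / r) * max_i ||P_U e_i||^2.
    # When the columns of U are orthonormal, ||P_U e_i||^2 equals the
    # squared Euclidean norm of the i-th row of U.
    n, r = U.shape
    return (n / r) * np.max(np.sum(U**2, axis=1))
```

For instance, the subspace spanned by $r$ standard basis vectors has the maximal coherence $n/r$, while "spread out" subspaces have coherence close to 1.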
Two Assumptions
A0: $\max(\mu(U), \mu(V)) \le \mu_0$ for some $\mu_0 > 0$, where $U$ and $V$ denote the column and row spaces of $M$.
A1: the matrix $E = \sum_k u_k v_k^*$ has maximum entry bounded by $\mu_1 \sqrt{r/(n_1 n_2)}$ in absolute value, for some $\mu_1 > 0$.
Main Results
Theorem 1.3 (roughly): if $M$ obeys A0 and A1 and $m \ge C \max(\mu_1^2, \mu_0^{1/2}\mu_1, \mu_0 n^{1/4})\, n r \beta \log n$ entries are observed at locations sampled uniformly at random, then the minimizer of (1.5) is unique and equal to $M$ with probability at least $1 - c n^{-\beta}$.
Theorem 1.1 is a special case of Theorem 1.3.
If only a few matrices satisfied assumptions A0 and A1, Theorem 1.3 would be of little practical use; the next slide addresses this.
The Conditions of Theorem 1.3
Matrices drawn from the random orthogonal model obey the two assumptions A0 and A1 with large probability, so Theorem 1.3 applies to most low-rank matrices.
The Proof of the Lemmas
Review of Theorem 1.3
The Proof
The authors employ subgradients and duality from convex optimization, together with tools from asymptotic geometric analysis, to prove the existence and uniqueness claims of Theorem 1.3.
The proof spans pages 15-42 of the paper; the details won't be discussed here.
Duality (1)
Duality is a concept in constrained optimization. In optimization theory, the duality principle states that optimization problems may be viewed from either of two perspectives: the primal problem or the dual problem (Wikipedia).
Duality (2)
The dual norm of the nuclear norm is the spectral norm $\|X\| = \sigma_1(X)$, the largest singular value of $X$.
Duality (3)
Let $T$ be the linear space spanned by matrices of the form $u_k x^*$ and $y v_k^*$, with $x$ and $y$ arbitrary, and let $T^\perp$ be its orthogonal complement, in which the projection is given by $P_{T^\perp}(Z) = (I - P_U) Z (I - P_V)$.
Duality (4)
Two properties certify that $X_0 = M$ is the unique minimizer of (1.5):
1. There exists a dual certificate $Y$ vanishing outside $\Omega$ with $P_T(Y) = E$ and $\|P_{T^\perp}(Y)\| < 1$; this ensures $Y$ is a subgradient of the nuclear norm at $X_0$.
2. The sampling operator $P_\Omega$ restricted to $T$ has the property of injectivity (it is one-to-one).
A numerical check of these conditions is sketched below.
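The two conditions can be checked numerically. A numpy sketch (illustrative, not from the slides), assuming U and V have orthonormal columns and Y is a candidate supported on Omega:

```python
import numpy as np

def check_certificate(U, V, Y, tol=1e-8):
    E = U @ V.T                              # E = sum_k u_k v_k^*
    PU, PV = U @ U.T, V @ V.T
    PT_Y = PU @ Y + Y @ PV - PU @ Y @ PV     # P_T(Y)
    PTperp_Y = Y - PT_Y                      # P_{T^perp}(Y) = (I-PU) Y (I-PV)
    cond1 = np.linalg.norm(PT_Y - E) < tol   # P_T(Y) = E
    cond2 = np.linalg.norm(PTperp_Y, 2) < 1  # spectral norm strictly below 1
    return cond1, cond2
```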
Architecture of the Proof (1)
The candidate $Y$, which vanishes on the complement of $\Omega$, is constructed as the solution to the least-squares model
minimize $\|Y\|_F$ subject to $Y$ supported on $\Omega$ and $P_T(Y) = E$.
Architecture of the Proof (2)
By construction this candidate satisfies the first part of condition 1, $P_T(Y) = E$.
Hopefully, a small Frobenius norm of $P_{T^\perp}(Y)$ will indicate a small spectral norm as well, which proves the first statement, $\|P_{T^\perp}(Y)\| < 1$.
Architecture of the Proof (3)
Ready to prove the second property: injectivity of $P_\Omega$ restricted to $T$.
This rests on a property of the orthogonal projection: if $P_T P_\Omega P_T$ is well conditioned (close to $\frac{m}{n_1 n_2} P_T$), then $P_\Omega$ restricted to $T$ is a one-to-one linear mapping, and the solution of the least-squares model is $Y = P_\Omega P_T (P_T P_\Omega P_T)^{-1} E$.
Architecture of the Proof (4)
The analysis uses the Bernoulli model of the sampling set, in which each entry is observed independently with probability $p = m/(n_1 n_2)$; this is essentially equivalent to sampling a set of $m$ entries uniformly at random. A sketch contrasting the two models follows.
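A short numpy sketch of the two sampling models (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, m = 100, 100, 2000

# Uniform model: a sampling set of exactly m entries.
mask_uniform = np.zeros((n1, n2), dtype=bool)
mask_uniform.flat[rng.choice(n1 * n2, size=m, replace=False)] = True

# Bernoulli model: each entry observed independently with p = m / (n1 * n2).
p = m / (n1 * n2)
mask_bernoulli = rng.random((n1, n2)) < p   # contains about m True entries
```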
Architecture of the Proof (5)
Architecture of the Proof (6)
Connections to Trace Heuristic
When the matrix variable is symmetric and positive semidefinite, the nuclear norm equals the trace: $\|X\|_* = \operatorname{trace}(X)$.
Problem (1.5) is then equivalent to the trace heuristic:
minimize $\operatorname{trace}(X)$ subject to $X_{ij} = M_{ij}$, $(i,j) \in \Omega$, $X \succeq 0$. A cvxpy sketch follows.
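A minimal cvxpy sketch of the trace heuristic (illustrative; the data here are generated on the spot, with the matrix assumed symmetric positive semidefinite):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 40, 2

# Random symmetric PSD matrix of rank r, and a sampling set of entry locations.
A = rng.standard_normal((n, r))
M = A @ A.T
rows, cols = np.unravel_index(
    rng.choice(n * n, size=500, replace=False), (n, n))

X = cp.Variable((n, n), PSD=True)   # X constrained symmetric PSD
prob = cp.Problem(cp.Minimize(cp.trace(X)),
                  [X[rows, cols] == M[rows, cols]])
prob.solve()
```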
Extensions
Matrix completion can be extended to multitask and multiclass learning problems in machine learning.
Numerical Experiments
Numerical Results
Discussions
Under suitable conditions, a low-rank matrix can be completed from a small number of sampled entries.
The required number of sampled entries is on the order of $n^{5/4} r \log n$ in general, and $n^{6/5} r \log n$ when the rank is not too large.
Questions? Thank you!