Sparse and low-rank recovery problems in signal processing and machine learning
Jeremy Watt and Aggelos Katsaggelos
Northwestern University, Department of EECS
Part 2: Quick and dirty optimization techniques
Big picture – a story of 2's
2 excellent greedy algorithms: narrow in problem type, broad in scale
- Sparse least squares: OMP
- Dictionary learning: K-SVD
2 common smooth reformulations: broad in problem type, narrow in scale
- The positive/negative split
- The epigraph trick
The greedy approaches provide large-scale solutions to specific problems; the reformulations provide small-to-medium-scale solutions for a wider array of sparse and low-rank problems. Knowing how to rewrite/reformulate problems is half the battle in optimization.
Roadmap: greedy methods first, then smooth reformulations.
Greedy approaches to sparse least squares problems: Orthogonal Matching Pursuit
Models for sparse recovery: min_x ||Ax − b||_2^2 s.t. ||x||_0 ≤ k, where ||x||_0 counts the nonzero entries of x. Combinatorially difficult: an exact solution requires searching over all n-choose-k candidate supports.
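To make the combinatorial difficulty concrete, here is a minimal brute-force sketch (the helper name l0_brute_force is hypothetical, for illustration only): it enumerates every size-k support and solves a least squares problem on each, which is hopeless beyond tiny n.

    % Brute-force L0-constrained least squares: try every size-k support.
    % Illustration only: the loop runs nchoosek(n, k) times.
    function [x_best, err_best] = l0_brute_force(A, b, k)
        n = size(A, 2);
        supports = nchoosek(1:n, k);       % all candidate supports
        x_best = zeros(n, 1); err_best = inf;
        for i = 1:size(supports, 1)
            S = supports(i, :);
            xS = A(:, S) \ b;              % least squares on this support
            err = norm(A(:, S) * xS - b);
            if err < err_best
                err_best = err;
                x_best = zeros(n, 1); x_best(S) = xS;
            end
        end
    end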
Orthogonal Matching Pursuit: a greedy method for approximately solving min_x ||Ax − b||_2^2 s.t. ||x||_0 ≤ k, or min_x ||x||_0 s.t. ||Ax − b||_2 ≤ ε.
Orthogonal Matching Pursuit (OMP)
- Intuitive algorithm [1]
- Effective in applications
- Extremely efficient
- Good theoretical recovery guarantees [1,2]
CoSaMP is another excellent greedy algorithm for the problem [2].
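A minimal MATLAB sketch of OMP, assuming the columns of A are unit-normalized (an illustrative implementation, not the reference code of [1]):

    % Orthogonal Matching Pursuit: greedily build a k-sparse solution.
    function x = omp(A, b, k)
        [~, n] = size(A);
        r = b;                             % current residual
        S = [];                            % current support
        for iter = 1:k
            [~, j] = max(abs(A' * r));     % column most correlated with residual
            S = union(S, j);
            xS = A(:, S) \ b;              % least squares over current support
            r = b - A(:, S) * xS;          % update residual
        end
        x = zeros(n, 1); x(S) = xS;
    end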
K-SVD: a greedy algorithm for dictionary learning, min_{D,X} ||Y − DX||_F^2 s.t. each column of X is k-sparse. K-SVD is very effective and uses OMP in the X (sparse coding) step; it has been applied to many image processing tasks [3].
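A condensed sketch of one K-SVD pass, assuming the omp sketch above and unit-norm dictionary atoms (the atom update is the rank-one SVD of the restricted residual, in the spirit of [3]):

    % One K-SVD pass: sparse coding with OMP, then atom-by-atom updates.
    function [D, X] = ksvd_iteration(Y, D, k)
        [~, K] = size(D);
        N = size(Y, 2);
        X = zeros(K, N);
        for i = 1:N                          % sparse coding (X) step
            X(:, i) = omp(D, Y(:, i), k);
        end
        for j = 1:K                          % dictionary (D) step
            omega = find(X(j, :));           % signals that use atom j
            if isempty(omega), continue; end
            % residual with atom j removed, restricted to its users
            E = Y(:, omega) - D * X(:, omega) + D(:, j) * X(j, omega);
            [u, s, v] = svds(E, 1);          % best rank-one fit
            D(:, j) = u;
            X(j, omega) = s * v';
        end
    end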
Smooth reformulation tricks for sparse problems: the positive/negative split [4,5]
Basis pursuit (e.g. compressive sensing): min_x ||x||_1 s.t. Ax = b.
Pos/neg decomposition: write x = x⁺ − x⁻ with x⁺, x⁻ ≥ 0. At the optimum at most one of x⁺_i, x⁻_i is nonzero for each i, so ||x||_1 = 1ᵀ(x⁺ + x⁻).
Basis pursuit after the pos/neg split: min_{x⁺,x⁻} 1ᵀ(x⁺ + x⁻) s.t. A(x⁺ − x⁻) = b, x⁺, x⁻ ≥ 0. Note that the dimension of the space has doubled, but now we solve a standard LP for which there are tons of solvers available online.
Basis pursuit is now a standard linear program: linear objective, linear constraints. If you don't know how to solve LPs, there are tons of standard toolboxes available, CVX being the easiest to use.
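For instance, a direct CVX [11] formulation of basis pursuit (a sketch assuming A, b, and n are already in the workspace; CVX handles the ℓ1 objective internally, so no manual split is needed):

    % Basis pursuit in CVX: minimize ||x||_1 subject to Ax = b.
    cvx_begin
        variable x(n)
        minimize( norm(x, 1) )
        subject to
            A * x == b;
    cvx_end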
Basis pursuit phrased as an LP: with z = [x⁺; x⁻], c = 1_{2n}, and Â = [A, −A], solve min_z cᵀz s.t. Âz = b, z ≥ 0. The dimension has doubled, but this is a standard LP.
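The same LP solved with MATLAB's linprog (a sketch assuming the Optimization Toolbox; z stacks x⁺ above x⁻):

    % Basis pursuit as a standard LP via the pos/neg split.
    n   = size(A, 2);
    f   = ones(2 * n, 1);          % objective: 1'(x+ + x-)
    Aeq = [A, -A];                 % A(x+ - x-) = b
    lb  = zeros(2 * n, 1);         % x+, x- >= 0
    z   = linprog(f, [], [], Aeq, b, lb, []);
    x   = z(1:n) - z(n+1:end);     % recover the signed solution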
The Lasso: min_x ||Ax − b||_2^2 + λ||x||_1. After the pos/neg split: min_{x⁺,x⁻ ≥ 0} ||A(x⁺ − x⁻) − b||_2^2 + λ1ᵀ(x⁺ + x⁻). Again the dimension of the space has doubled, but now we solve a standard quadratic program (QP, since the objective is quadratic) for which there are tons of solvers available online.
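A sketch of the split Lasso as a standard QP using quadprog (assuming the Optimization Toolbox and a user-chosen lambda; quadprog minimizes (1/2) z'Hz + f'z, so expanding ||Âz − b||² + λ1ᵀz gives H = 2Â'Â and f = λ1 − 2Â'b):

    % Lasso as a standard QP via the pos/neg split (z = [x+; x-]).
    n    = size(A, 2);
    Ahat = [A, -A];
    H    = 2 * (Ahat' * Ahat);                      % quadratic term
    f    = lambda * ones(2 * n, 1) - 2 * Ahat' * b; % linear term
    lb   = zeros(2 * n, 1);                         % x+, x- >= 0
    z    = quadprog(H, f, [], [], [], [], lb, []);
    x    = z(1:n) - z(n+1:end);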
Smooth reformulation tricks for sparse problems: the epigraph trick [6]
Least absolute deviations: min_x ||Ax − b||_1. The objective is nonsmooth, but the epigraph trick removes the absolute values.
Epigraph trick: introduce t ∈ R^m and solve min_{x,t} 1ᵀt s.t. −t ≤ Ax − b ≤ t. Note that the dimension of the space has grown by m, but now we solve a standard LP for which there are tons of solvers available online.
Absolute deviations via the epigraph trick: another standard LP.
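A sketch of this epigraph LP with linprog (variables stacked as [x; t], so −t ≤ Ax − b ≤ t becomes two blocks of inequalities):

    % Least absolute deviations as an LP via the epigraph trick.
    [m, n] = size(A);
    f   = [zeros(n, 1); ones(m, 1)];      % minimize 1't
    Ain = [ A, -eye(m);                   %   Ax - b <= t
           -A, -eye(m)];                  % -(Ax - b) <= t
    bin = [b; -b];
    zt  = linprog(f, Ain, bin);
    x   = zt(1:n);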
"Medium-sized problems"
- Many solvers are available online for these reformulations; CVX [11] is highly recommended for MATLAB users.
- Most reformulations are solved via interior-point methods [9], which are practically limited to tens of thousands of variables.
- Some nice extensions for basic sparse recovery problems have been developed [6].
Final thought: nuclear norm reformulations work analogously, e.g. via semidefinite programming [8], and many related problems can be reformulated as second-order cone programs (SOCPs) [7,10].
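A hedged CVX sketch of a nuclear norm problem in the matrix completion style (norm_nuc is CVX's built-in nuclear norm; m, n, the data matrix M, and the observed-entry indices idx are assumptions for illustration):

    % Nuclear norm minimization subject to observed entries (CVX).
    cvx_begin
        variable X(m, n)
        minimize( norm_nuc(X) )
        subject to
            X(idx) == M(idx);     % match observations at indices idx
    cvx_end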
References
[1] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, 53(12):4655-4666, 2007.
[2] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, 26(3):301-321, 2009.
[3] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, 54(11):4311-4322, 2006.
[4] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," IEEE Journal of Selected Topics in Signal Processing, 1(4):586-597, 2007.
[5] R. Tibshirani et al., "Sparsity and smoothness via the fused lasso," Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91-108, 2005.
[6] E. J. Candès, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted ℓ1 minimization," Journal of Fourier Analysis and Applications, 14(5-6):877-905, 2008.
[7] S.-J. Kim et al., "An interior-point method for large-scale ℓ1-regularized least squares," IEEE Journal of Selected Topics in Signal Processing, 1(4):606-617, 2007.
[8] Z. Liu and L. Vandenberghe, "Interior-point method for nuclear norm approximation with application to system identification," SIAM Journal on Matrix Analysis and Applications, 31(3):1235-1256, 2009.
[9] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[10] M. S. Lobo et al., "Applications of second-order cone programming," Linear Algebra and its Applications, 284(1):193-228, 1998.
[11] M. Grant, S. Boyd, and Y. Ye, "CVX: MATLAB software for disciplined convex programming," 2008.