Game Theory Meets Compressed Sensing

Game Theory Meets Compressed Sensing
Sina Jafarpour, Princeton University
Based on joint work with: Volkan Cevher, Robert Calderbank, Rob Schapire

Compressed Sensing
Main tasks:
- Design a sensing matrix $\Phi$
- Design a reconstruction algorithm
Objectives:
- Reconstruct a $k$-sparse signal $x$ from the measurements $y = \Phi x$ efficiently
- The algorithm should recover $x$ without knowing its support
- Successful reconstruction from as few measurements as possible!
- Robustness against noise
- Model selection and sparse approximation

Model Selection and Sparse Approximation
Measurement vector: $y = \Phi(x + \nu) + e$
- $\nu$: data-domain noise
- $e$: measurement noise
Model Selection Goal: Given $y$, successfully recover the support of $x$.
Sparse Approximation Goal: Given $y$, find a vector $\widehat{x}$ whose error $\|x - \widehat{x}\|$ is comparable to that of the best $k$-sparse approximation of $x$.
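Below is a minimal numerical sketch of this measurement model, assuming a dense Gaussian $\Phi$ for concreteness; the variable names, sizes, and noise levels are illustrative choices, not taken from the talk.

```python
import numpy as np

def sparse_signal(n, k, rng):
    """Return an n-dimensional signal with k nonzero entries and its support."""
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    return x, support

rng = np.random.default_rng(0)
n, m, k = 1000, 200, 10                          # ambient dimension, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # dense Gaussian sensing matrix
x, support = sparse_signal(n, k, rng)
nu = 1e-3 * rng.standard_normal(n)               # data-domain noise
e = 1e-3 * rng.standard_normal(m)                # measurement noise
y = Phi @ (x + nu) + e                           # measurement vector
```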

Applications
- Data streaming: packet routing from source to destination; sparsity in the canonical basis
- Biomedical imaging: MRI; sparsity in the Fourier basis
- Digital communication: multiuser detection

Restricted Isometry Property
Definition: A matrix $\Phi$ satisfies RIP$(k, \delta)$ if for every $k$-sparse vector $x$:
$(1 - \delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta)\|x\|_2^2$
Tractable Compressed Sensing [CRT'06/Don'06]: If $\Phi$ satisfies RIP (of order $2k$, with $\delta$ small enough), then the solution $\widehat{x}$ of the Basis Pursuit optimization
$\widehat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \Phi x = y$
satisfies the $\ell_2/\ell_1$ guarantee $\|\widehat{x} - x\|_2 \le \tfrac{C}{\sqrt{k}} \|x - x_k\|_1$, where $x_k$ is the best $k$-sparse approximation of $x$.
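Since Basis Pursuit is a linear program, a hedged way to experiment with it is the standard split $x = u - v$ with $u, v \ge 0$. This is only a sketch using SciPy's generic LP solver, not the solver discussed in the talk.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1  s.t.  Phi x = y, via the LP split x = u - v with u, v >= 0."""
    _, n = Phi.shape
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])       # equality constraint: Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# x_hat = basis_pursuit(Phi, y)   # using Phi, y from the sketch above
```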

Problems with RIP Matrices
i.i.d. Gaussian/Bernoulli matrices:
- No efficient algorithm for verifying RIP
- $\Theta(mn)$ memory requirement (dense matrix)
- Inefficient computation (dense matrix-vector multiplication)
Partial Fourier/Hadamard ensembles:
- Sub-optimal number of measurements (extra logarithmic factors)
- Rigid algebraic structure

Deterministic Compressed Sensing via Expander Graphs

Expander Graphs
A bipartite graph with $n$ left nodes, $m$ right nodes, and left degree $d$ is a $(k, \epsilon)$-expander if every set $S$ of at most $k$ left nodes has at least $(1 - \epsilon)\,d\,|S|$ distinct neighbors.
Existence: A random bipartite graph with $d = O(\log(n/k)/\epsilon)$ and $m = O(k \log(n/k)/\epsilon^2)$ is a $(k, \epsilon)$-expander with high probability.
Explicit constructions of expander graphs also exist [GUV'08].
Sensing matrix: the normalized adjacency matrix of an expander graph.
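A hedged sketch of building such a sensing matrix as the normalized adjacency matrix of a random left-$d$-regular bipartite graph; the specific sizes, the $1/d$ normalization, and the function name are my own illustrative choices.

```python
import numpy as np

def expander_sensing_matrix(n, m, d, rng):
    """Normalized adjacency matrix of a random left-d-regular bipartite graph.

    Each of the n left nodes (signal coordinates) is connected to d right
    nodes (measurements) chosen uniformly without replacement; entries are
    scaled by 1/d so that every column sums to 1.
    """
    Phi = np.zeros((m, n))
    for j in range(n):
        neighbors = rng.choice(m, size=d, replace=False)
        Phi[neighbors, j] = 1.0 / d
    return Phi

rng = np.random.default_rng(1)
Phi = expander_sensing_matrix(n=1000, m=200, d=8, rng=rng)
```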

Expander-Based Compressed Sensing: Message Passing Algorithms
Theorem [JXHC'09]: If $\Phi$ is the normalized adjacency matrix of an expander, then given $y = \Phi x$ a message passing algorithm recovers the $k$-sparse $x$ after a small number of iterations.
Decoding in the error-free case: based on expander codes [Sipser and Spielman'96].
Decoding when error exists: [BIR'09] (SSMP), a similar message passing algorithm with near-linear running time and an $\ell_1/\ell_1$ recovery guarantee.
Pros:
- Decoding time is almost linear in $n$
- Message-passing methods are easy to implement
Cons:
- Poor practical performance

Basis Pursuit Algorithm
[BGIKS'08]: Let $\Phi$ be the normalized adjacency matrix of a $(2k, \epsilon)$-expander graph; then the solution $\widehat{x}$ of the basis pursuit optimization
$\widehat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \Phi x = y$
satisfies the $\ell_1/\ell_1$ guarantee $\|\widehat{x} - x\|_1 \le C\,\|x - x_k\|_1$.
Pros:
- Better practical performance
Cons:
- Decoding time is cubic in $n$
- Sub-gradient methods are hard to implement

Game Theory Meets Compressed Sensing
Goal: an efficient approximation algorithm for BP.
Recall the BP optimization: $\min_x \|x\|_1$ subject to $\Phi x = y$.
To solve BP, it is sufficient to be able to efficiently solve the constrained problem $\min_{\|x\|_1 \le \tau} \|\Phi x - y\|$ for a fixed radius $\tau$.
How? By doing binary search over $\tau$.
Plan: approximately solve the constrained problem efficiently
- by reformulating it as a zero-sum game
- and approximately solving for the game value.
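A hedged sketch of the outer loop only: binary search over the $\ell_1$ radius $\tau$, with `constrained_solver` standing in for any routine (for instance a game-theoretic solver like the one sketched further below) that approximately solves the constrained problem. The function name, residual norm, and tolerance are my own illustrative choices.

```python
import numpy as np

def bp_by_binary_search(Phi, y, constrained_solver, tau_max, tol=1e-6, iters=30):
    """Approximate  min ||x||_1 s.t. Phi x = y  by binary search over the radius tau.

    constrained_solver(Phi, y, tau) should return an approximate solution of
    min_{||x||_1 <= tau} ||Phi x - y||; we look for the smallest tau whose
    residual falls below `tol`.
    """
    lo, hi, best = 0.0, tau_max, None
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        x = constrained_solver(Phi, y, tau)
        if np.linalg.norm(Phi @ x - y, ord=np.inf) <= tol:
            best, hi = x, tau          # feasible: try a smaller l1 ball
        else:
            lo = tau                   # infeasible: need a larger l1 ball
    return best
```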

Game-Theoretic Reformulation of the Problem
1: Consider the following 1-dimensional example: for a scalar $z$, the absolute value $|z|$ is the maximum of $qz$ over the interval $q \in [-1, 1]$, and this maximum is always attained on the discrete set $\{-1, +1\}$ (2 elements).
2: Fix $z$ (say $z = \Phi x - y$ in one coordinate) and observe that $|z| = \max_{q \in [-1,1]} qz = \max_{q \in \{-1,+1\}} qz$.
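Spelling this out (my own rendering of the lost formulas, using the $\ell_\infty$ residual norm as one natural choice; the talk's residual norm may differ): the norm of the residual is itself an inner maximization, so the constrained problem becomes a min-max, i.e. a zero-sum game.

```latex
\[
  |z| \;=\; \max_{q \in [-1,1]} q\,z \;=\; \max_{q \in \{-1,+1\}} q\,z ,
  \qquad
  \|\Phi x - y\|_{\infty} \;=\; \max_{\|q\|_{1} \le 1} q^{\top}(\Phi x - y),
\]
\[
  \min_{\|x\|_{1} \le \tau} \|\Phi x - y\|_{\infty}
  \;=\;
  \min_{\|x\|_{1} \le \tau} \;\max_{\|q\|_{1} \le 1}\; q^{\top}(\Phi x - y).
\]
```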

Bregman Divergence
Definition: for a strictly convex function $R$, $\mathcal{D}_R(p \,\|\, q) = R(p) - R(q) - \langle \nabla R(q),\, p - q\rangle$.
Examples:
- Euclidean distance: $\mathcal{D}_R(p \,\|\, q) = \tfrac12\|p - q\|_2^2$ with $R(p) = \tfrac12\|p\|_2^2$
- Relative entropy (on the probability simplex): $\mathcal{D}_R(p \,\|\, q) = \sum_i p_i \log\tfrac{p_i}{q_i}$ with $R(p) = \sum_i p_i \log p_i$
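A small sketch (illustrative, not from the talk) of the two Bregman divergences used by mirror-descent-style updates:

```python
import numpy as np

def bregman_euclidean(p, q):
    """Bregman divergence of R(p) = 0.5 * ||p||_2^2, i.e. squared Euclidean distance."""
    return 0.5 * float(np.sum((np.asarray(p) - np.asarray(q)) ** 2))

def bregman_kl(p, q, eps=1e-12):
    """Bregman divergence of R(p) = sum_i p_i log p_i on the simplex, i.e. relative entropy.

    A small eps is added for numerical stability when entries are zero.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

# Example: divergence between two probability vectors
p = np.array([0.7, 0.2, 0.1])
q = np.array([1 / 3, 1 / 3, 1 / 3])
print(bregman_euclidean(p, q), bregman_kl(p, q))
```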

GAME Algorithm
Player variants:
- Conservative Alice (gradient projection): updates $x$ by a (Bregman) gradient-projection step.
- Greedy Alice: finds an $x$ with $\|x\|_1 \le \tau$ that best responds to Bob's current play.
- Bob: updates $q$, finding a $q$ that (approximately) maximizes the current payoff.
Lemma [JCS'11]: Greedy Alice's best response is 1-sparse and can be computed with a single adjoint operation.
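The following is a minimal sketch, under my own assumptions, of a multiplicative-weights / greedy-best-response scheme of this flavor; it is not the authors' exact GAME code. Bob keeps a distribution over the $2m$ signed measurements and updates it multiplicatively; Alice's best response over the $\ell_1$ ball is a 1-sparse vector obtained from one adjoint product $\Phi^{\top} w$; the average of Alice's plays after $T$ rounds is at most $T$-sparse.

```python
import numpy as np

def game_solver(Phi, y, tau, T=200, eta=0.1):
    """Approximately solve  min_{||x||_1 <= tau} ||Phi x - y||_inf  as a zero-sum game.

    Bob (max player) keeps weights over the 2m pure strategies +/- e_i; his
    mixed strategy certifies the l_inf norm of the residual.  Alice (min
    player) best-responds with a 1-sparse vertex of the l1 ball.  The average
    of Alice's T plays is at most T-sparse.
    """
    m, n = Phi.shape
    weights = np.ones(2 * m)               # Bob's weights over (+e_1..+e_m, -e_1..-e_m)
    x_avg = np.zeros(n)

    for _ in range(T):
        p = weights / weights.sum()        # Bob's mixed strategy
        w = p[:m] - p[m:]                  # effective signed combination of rows
        g = Phi.T @ w                      # single adjoint operation
        # Alice: 1-sparse best response over {||x||_1 <= tau} to the linear loss <g, x>
        j = int(np.argmax(np.abs(g)))
        x = np.zeros(n)
        x[j] = -tau * np.sign(g[j])
        x_avg += x / T
        # Bob: multiplicative update; payoff of strategy (i, s) is s * (Phi x - y)_i
        r = Phi @ x - y
        payoff = np.concatenate([r, -r])
        weights *= np.exp(eta * payoff)

    return x_avg                           # at most T-sparse

# x_hat = game_solver(Phi, y, tau=np.abs(x).sum())   # illustrative usage
```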

Game-value Approximation Guarantee
GAME can be viewed as a repurposing of the Multiplicative Update Algorithm [Freund & Schapire'99].
After $T$ iterations it finds a $T$-sparse vector whose objective value is within an additive error of the optimal game value, and this error vanishes as $T$ grows.
Tradeoff between sparsity and approximation error: as $T \to \infty$, the approximation error goes to 0 while the iterate becomes less sparse.
In expander-based CS we are only interested in the approximation error, since the Basis Pursuit theorem does not require the output to be exactly sparse.
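For reference, the standard averaged-iterate bound for the Freund-Schapire multiplicative-update scheme (stated here from the generic analysis, with payoffs assumed to lie in $[-1, 1]$, so the constants need not match the talk's slide): if Bob plays the multiplicative update over $N = 2m$ pure strategies while Alice best-responds, then after $T$ rounds

```latex
\[
  \max_{\|q\|_{1} \le 1} q^{\top}\!\left(\Phi \bar{x}_{T} - y\right)
  \;\le\;
  \min_{\|x\|_{1} \le \tau} \;\max_{\|q\|_{1} \le 1}\; q^{\top}(\Phi x - y)
  \;+\; O\!\left(\sqrt{\frac{\log (2m)}{T}}\right),
  \qquad
  \bar{x}_{T} = \frac{1}{T}\sum_{t=1}^{T} x_{t}.
\]
```

so the $T$-sparse average iterate $\bar{x}_{T}$ is an approximate minimizer, matching the sparsity/approximation-error tradeoff stated above.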

Empirical Performance [figure: empirical recovery performance of BP vs. MP]

Empirical Convergence Rate [figure: log-log plot of approximation error vs. iteration count; slope ≈ -1]

Comparison with Nesterov Optimization
Nesterov: a general iterative approximation algorithm for solving non-smooth optimization problems.
- Nesterov requires a number of iterations similar to GAME.
- Each Nesterov iteration requires solving 3 smooth optimization problems, which is much more complicated than the single adjoint operation per iteration in GAME.
- Nesterov's approximation guarantee vs. GAME's theoretical guarantee.

Experimental Comparison [figure: GAME vs. Nesterov]

Random vs. Expander-Based Compressed Sensing
[table comparing Gaussian, Fourier, Expander (message passing), and Expander (GAME, our contribution) matrices by number of measurements, encoding time, recovery guarantee (rated Fair/Good/Best), and explicit construction (Gaussian: No; Expander: Yes)]

Take-Home Message: Random vs. Deterministic
Gerhard Richter: the value of a deterministic matrix: US $27,191,525
Piet Mondrian: the value of a random matrix: US $3,703,500

Thank You!