A Fast Algorithm for Structured Low-Rank Matrix Completion with Applications to Compressed Sensing MRI. Greg Ongie*, Mathews Jacob. Computational Biomedical Imaging Group (CBIG), University of Iowa.

Presentation transcript:

A Fast Algorithm for Structured Low-Rank Matrix Completion with Applications to Compressed Sensing MRI. Greg Ongie*, Mathews Jacob. Computational Biomedical Imaging Group (CBIG), University of Iowa, Iowa City, Iowa. SIAM Conference on Imaging Science, 2016, Albuquerque, NM. So the title is "Super-resolution MRI Using Finite Rate of Innovation Curves," but maybe a better title would be "Super-resolution Imaging," since the theory and algorithms have potentially wide applicability.

Motivation: MRI Reconstruction. Main problem: reconstruct an image from Fourier-domain ("k-space") samples: the world's most expensive Fourier transform. Related modalities: computed tomography, fluorescence microscopy. Motivation: finite rate of innovation (FRI). Pan et al. (2014), "Sampling Curves with FRI"; Tang et al. (2013), "CS Off the Grid"; Candes (2014).

Compressed Sensing MRI Reconstruction: recovery is posed in the discrete image domain with a smoothness/sparsity regularization penalty. Example: TV-minimization. Rel. error = 5% from 25% of k-space.

Drawbacks to TV Minimization. Discretization effects: sparsity is lost when discretizing to a grid. Unable to exploit structured sparsity: images have smooth, connected edges. Sensitive to the k-space sampling pattern [Krahmer & Ward, 2014].

Off-the-Grid alternative to TV [O. & Jacob, ISBI 2015], [O. & Jacob, SampTA 2015]. Continuous-domain piecewise constant image model: model the image edge set as the zero set of a 2-D band-limited function, a "finite-rate-of-innovation curve" [Pan et al., IEEE TIP 2014].

2-D PWC functions satisfy an annihilation relation: multiplication in the spatial domain corresponds to convolution with an annihilating filter in the Fourier domain, and the annihilation relation holds for all frequencies. This is in line with the Mumford-Shah functional used to recover PWC functions.
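The annihilation relation can be checked numerically in a 1-D analogue (a toy sketch, not the 2-D construction of the talk): the uniform Fourier samples of a stream of Diracs are annihilated by convolution with a short filter whose polynomial has roots determined by the Dirac locations. Locations and amplitudes below are arbitrary illustrative choices.

```python
import numpy as np

# 1-D sketch of the annihilation relation (toy example).
locs = np.array([0.15, 0.6])    # Dirac locations in [0, 1), assumed
amps = np.array([1.0, -0.5])    # Dirac amplitudes, assumed
n = np.arange(-10, 11)          # uniform Fourier samples
xhat = (amps * np.exp(-2j * np.pi * np.outer(n, locs))).sum(axis=1)

# Annihilating filter: coefficients of the polynomial with roots
# e^{-2*pi*i*t_k}; convolving it with xhat cancels every exponential term.
h = np.poly(np.exp(-2j * np.pi * locs))

resid = np.convolve(xhat, h, mode='valid')
print(np.max(np.abs(resid)))    # numerically zero (machine precision)
```

The same cancellation is what the 2-D relation expresses for every frequency, with the band-limited level-set function playing the role of the polynomial.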

Matrix representation of annihilation: a 2-D convolution matrix built from Cartesian k-space samples, applied (with weights) to the vector of filter coefficients. The matrix has size 2(#shifts) x (filter size). Thm: the rank of this matrix is bounded above; choosing more samples gives the exact condition.
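The rank bound can be illustrated with a 1-D stand-in for the 2-D convolution matrix (an assumed toy example): the sliding-window matrix built from the Fourier samples of K Diracs has rank at most K, far below its ambient dimensions.

```python
import numpy as np

# Hypothetical 1-D example: K Diracs, 33 uniform Fourier samples.
K = 3
locs = np.array([0.1, 0.4, 0.75])
amps = np.array([1.0, -2.0, 1.5])
n = np.arange(-16, 17)
xhat = (amps * np.exp(-2j * np.pi * np.outer(n, locs))).sum(axis=1)

# Convolution matrix: each row is a sliding window of k-space samples.
filt_size = 8   # assumed filter support, larger than K + 1
H = np.lib.stride_tricks.sliding_window_view(xhat, filt_size)

print(H.shape, np.linalg.matrix_rank(H))   # (26, 8), rank 3
```

Every filter in the (filt_size - K)-dimensional null space of H is a valid annihilating filter, which is why a generous assumed filter support still works.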

Basis of algorithms: the annihilation matrix is low-rank. Prop: if the level-set function is bandlimited and the assumed filter support is large enough, then the annihilation matrix is low-rank (Fourier domain / spatial domain).

Basis of algorithms: the annihilation matrix is low-rank. Example (Shepp-Logan): assumed filter 33x25, samples 65x49, rank 300.

Pose recovery from missing k-space data as a structured low-rank matrix completion problem. The resulting k-space penalty (a nuclear norm) is convex, extends easily to higher derivatives, and works in any dimension; it can also be applied to other inverse problems as a generic prior, an alternative to TV or HDTV. The naive implementation is slow, and we have a fast way of implementing it. Some current theoretical results are in the SampTA 2015 paper available on our website.

Pose recovery from missing k-space data as a structured low-rank matrix completion problem. 1-D example (Toeplitz): lift the k-space samples into a Toeplitz matrix; missing data appear as missing matrix entries.

1-D example (Toeplitz), continued: complete the matrix by minimizing rank.

1-D example (Toeplitz), continued: complete the matrix by minimizing rank, then project back to k-space.

Results: fully sampled vs. TV (SNR = 17.8 dB) vs. proposed (SNR = 19.0 dB), from 50% k-space samples (random uniform), with error images. (Retrospectively undersampled 4-coil data compressed to a single virtual coil.)

Emerging Trend: Fourier domain low-rank priors for MRI reconstruction Exploit linear relationships in Fourier domain to predict missing k-space samples. Patch-based, low-rank penalty imposed in k-space. Closely tied to off-the-grid image models Structured Low-rank Matrix Recovery undersampled k-space recovered k-space

Emerging Trend: Fourier domain low-rank priors for MRI reconstruction SAKE [Shin et al., MRM 2014] Image model: Smooth coil sensitivity maps (parallel imaging) LORAKS [Haldar, TMI 2014] Image model: Support limited & smooth phase ALOHA [Jin et al., ISBI 2015] Image model: Transform sparse/Finite-rate-of-innovation Off-the-grid models [O. & Jacob, ISBI 2015], [O. & Jacob, SampTA 2015] Structured Low-rank Matrix Recovery undersampled k-space recovered k-space

Main challenge: computational complexity. 2-D: image 256x256, filter 32x32, lifted matrix ~10^6 x 1000. 3-D: image 256x256x32, filter 32x32x10, lifted matrix ~10^8 x 10^5. Cannot hold in memory!

Outline Prior Art Proposed Algorithm Applications

Cadzow methods/alternating projections ["SAKE," Shin et al., 2014], ["LORAKS," Haldar, 2014]: no convex relaxations; use a rank estimate.

Cadzow methods/alternating projections ["SAKE," Shin et al., 2014], ["LORAKS," Haldar, 2014]. Alternating projection algorithm (Cadzow): (1) project onto the space of rank-r matrices by computing a truncated SVD; (2) project onto the space of structured matrices by averaging along the "diagonals."
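The two projections can be sketched for a 1-D Hankel lifting (an illustrative generic implementation, not the SAKE/LORAKS code; function name and parameters are assumptions):

```python
import numpy as np

def cadzow(y, r, filt_size, iters=30):
    """Cadzow / alternating-projections sketch: alternately project the
    Hankel lifting of y onto rank-r matrices (truncated SVD) and back
    onto structured matrices (average the anti-diagonals)."""
    y = np.asarray(y, dtype=complex).copy()
    N = len(y)
    rows = N - filt_size + 1
    for _ in range(iters):
        # lift: H[i, j] = y[i + j]  (Hankel structure)
        H = np.array([y[i:i + filt_size] for i in range(rows)])
        # project onto rank-r matrices via truncated SVD
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :r] * s[:r]) @ Vh[:r]
        # project back onto Hankel matrices: average entries with equal i + j
        acc = np.zeros(N, dtype=complex)
        cnt = np.zeros(N)
        for i in range(rows):
            acc[i:i + filt_size] += H[i]
            cnt[i:i + filt_size] += 1
        y = acc / cnt
    return y
```

For example, applied to a noisy sum of two complex exponentials with r = 2, the iteration reduces the error relative to the clean signal. Note the per-iteration SVD of the full lifted matrix, which is the cost the later slides attack.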

Cadzow methods/alternating projections: drawbacks. Highly non-convex; needs an estimate of the rank bound r; complexity grows with r; the naive approach must store a large matrix.

Nuclear norm minimization. ADMM = singular value thresholding (SVT): (1) singular value thresholding step, which requires a full SVD of X! (2) solve a linear least-squares problem (analytic solution or CG solve).
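The SVT step is the proximal operator of the nuclear norm; a direct numpy sketch makes the full-SVD cost flagged on the slide explicit:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * (nuclear norm).
    Soft-thresholds the singular values of X by tau. Requires a full
    SVD of X on every call, which is the bottleneck for large liftings."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh
```

For example, `svt(np.diag([3.0, 1.0, 0.2]), 0.5)` shrinks the singular values to (2.5, 0.5, 0), zeroing the smallest one.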

Nuclear norm minimization “U,V factorization trick” Low-rank factors
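The trick rests on the variational identity ||X||_* = min over factorizations X = U V^H of (||U||_F^2 + ||V||_F^2) / 2, attained by splitting the SVD. A quick numerical check on a random matrix (illustrative only):

```python
import numpy as np

# Check that the SVD split U*sqrt(S), sqrt(S)*V^H attains the nuclear norm
# in the U,V factorization identity (random stand-in matrix).
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))
U, s, Vh = np.linalg.svd(X, full_matrices=False)
A = U * np.sqrt(s)                 # "U" factor
B = np.sqrt(s)[:, None] * Vh       # "V^H" factor
nuc = s.sum()                      # nuclear norm of X

print(np.allclose(A @ B, X))                                                 # True
print(np.isclose((np.linalg.norm(A)**2 + np.linalg.norm(B)**2) / 2, nuc))    # True
```

Optimizing over the small factors A, B instead of X is what makes the approach SVD-free, at the price of non-convexity.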

Nuclear norm minimization with U,V factorization [O. & Jacob, SampTA 2015], ["ALOHA," Jin et al., ISBI 2015]. The UV factorization approach is SVD-free: the singular value thresholding step (full SVD of X) is replaced by fast matrix inversion steps, and the linear least-squares problem is solved analytically or with CG.

Nuclear norm minimization with U,V factorization: drawbacks. Big memory footprint (not feasible for 3-D); the U,V trick is non-convex.

None of the current approaches exploit the structure of the matrix liftings (e.g., Hankel/Toeplitz). Can we exploit this structure to give a more efficient algorithm?

Outline Prior Art Proposed Algorithm Applications

Proposed Approach: adapt the IRLS algorithm for nuclear norm minimization. IRLS = iteratively reweighted least squares, proposed for low-rank matrix completion in [Fornasier, Rauhut, & Ward, 2011], [Mohan & Fazel, 2012]. Idea: alternate between a weight update and a weighted least-squares solve.

Original IRLS: To recover low-rank matrix X, iterate
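The original (unstructured) IRLS iteration can be sketched for plain matrix completion in the Fornasier-Rauhut-Ward / Mohan-Fazel spirit; the function name and the epsilon schedule below are illustrative assumptions, not the papers' exact choices:

```python
import numpy as np

def irls_complete(M, mask, iters=100, eps=1.0):
    """IRLS sketch for low-rank matrix completion: alternate the weight
    update W = (X X^T + eps I)^(-1/2) with a weighted least-squares
    update of X that holds the observed entries of M fixed."""
    X = np.where(mask, M, 0.0)
    n = M.shape[0]
    for _ in range(iters):
        # weight update via eigen-decomposition of the (regularized) Gram matrix
        evals, V = np.linalg.eigh(X @ X.T + eps * np.eye(n))
        W = (V / np.sqrt(evals)) @ V.T
        # weighted least squares, column by column: minimize x^T W x over
        # the missing entries with the observed entries held fixed
        for j in range(M.shape[1]):
            u, o = ~mask[:, j], mask[:, j]
            if u.any():
                X[u, j] = -np.linalg.solve(W[np.ix_(u, u)],
                                           W[np.ix_(u, o)] @ M[o, j])
        eps = max(0.7 * eps, 1e-9)   # gradually sharpen the weights
    return X
```

On an easy rank-1 example with a few hidden entries this recovers the matrix to high accuracy; the point of the slide is that applying this recipe directly to the huge structured lifting is still too slow.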

We adapt the original IRLS iteration to the structured (lifted) case:

Adapting the original IRLS iteration to the structured case: without modification, this approach is still slow!

Idea 1: Embed the Toeplitz lifting in a circulant matrix. *Fast matrix-vector products via FFTs.
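The circulant-embedding idea is standard and easy to sketch in 1-D: a Toeplitz matrix sits inside a circulant matrix of roughly twice the size, and circulant matvecs are diagonalized by the FFT, so the Toeplitz matrix never needs to be formed.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the n x n Toeplitz matrix with first column c and first
    row r (r[0] must equal c[0]) by x, via a (2n-1)-point circulant
    embedding and FFTs: O(n log n) instead of O(n^2), matrix never formed."""
    n = len(c)
    # first column of the circulant embedding: c, then r reversed (minus r[0])
    col = np.concatenate([c, r[:0:-1]])
    xp = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n]
```

This matches the dense Toeplitz product, and the same trick in 2-D gives fast products with the convolution lifting.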

Idea 2: Approximate the matrix lifting by padding with extra rows. *Fast computation via FFTs.

Simplifications: weight matrix update. Explicit form: build the Gram matrix with two FFTs, with no large matrix product. Computational cost: one eigen-decomposition of the small Gram matrix, replacing svd( ) of the lifted matrix with eig( ) of the Gram matrix.
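The swap of svd( ) for eig( ) works because the weight update only needs the singular values and right singular vectors of the tall lifted matrix, and those can be read off the eigen-decomposition of its small Gram matrix. A numerical check with a random stand-in for the lifting (in GIRAF the Gram entries are data correlations computable with two FFTs):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((400, 16))   # stand-in for the tall lifted matrix
G = T.T @ T                          # small Gram matrix, filter-size x filter-size

evals, V = np.linalg.eigh(G)         # cheap eig( ) of the 16 x 16 Gram matrix
sing_from_eig = np.sqrt(np.maximum(evals[::-1], 0.0))
sing_from_svd = np.linalg.svd(T, compute_uv=False)   # expensive svd( ) of T

print(np.allclose(sing_from_eig, sing_from_svd))     # True
```

The columns of V are the right singular vectors, which are exactly the (weighted) annihilating filters the algorithm needs.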

Simplifications: least-squares subproblem. Reduces to convolution with a single filter (a sum-of-squares average), so it admits a fast solution by CG iterations.
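The least-squares step is solved matrix-free; a minimal conjugate gradient sketch (generic, with the system passed as a mat-vec closure the way an FFT-based implementation would supply it; names are illustrative):

```python
import numpy as np

def cg(Aop, b, iters=200, tol=1e-10):
    """Minimal conjugate gradient for A x = b, A symmetric positive
    definite, given only through a mat-vec Aop. In GIRAF, Aop applies
    the single-filter convolution with FFTs, so A is never formed."""
    x = np.zeros_like(b)
    r = b - Aop(x)
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(iters):
        Ap = Aop(p)
        alpha = rs / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Only a few matvecs per IRLS iteration are needed in practice, since the previous iterate warm-starts the solve.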

Proposed GIRAF algorithm (GIRAF = Generic Iterative Reweighted Annihilating Filter): adapt the IRLS algorithm with the simplifications based on structure. Each GIRAF iteration: (1) update the annihilating filter (small eigen-decomposition); (2) least-squares annihilation (solve with CG).

GIRAF complexity is similar to iteratively reweighted TV minimization. IRLS TV-minimization alternates edge weights and image, with a local weight update and a least-squares image update; GIRAF likewise alternates weights and image, with a global weight update (a least-squares problem with a small SVD).

Convergence speed of GIRAF; scaling with filter size. Table: iterations / CPU time to reach a convergence tolerance of NMSE < 10^-4. Experiment 1: CS recovery from 50% random k-space samples; image size 256x256; filter size 15x15 (C-LORAKS spatial sparsity penalty). Experiment 2: CS recovery from 50% random k-space samples; image size 256x256 (gradient sparsity penalty).

Outline Prior Art Proposed Algorithm Applications

GIRAF enables larger filter sizes, and hence improved compressed sensing recovery: +1 dB improvement.

GIRAF enables extensions to multi-dimensional imaging: Dynamic MRI Fully sampled GIRAF Fourier sparsity TV error images [Balachandrasekaran, O., & Jacob, Submitted to ICIP 2016]

GIRAF enables extensions to multi-dimensional imaging: diffusion-weighted imaging (DWI). Correction of ghosting artifacts in DWI using the annihilating filter framework and GIRAF [Mani et al., ISMRM 2016].

Summary. Emerging trend: powerful Fourier-domain low-rank penalties for MRI reconstruction; state-of-the-art, but computationally challenging, since current algorithms work directly with the big "lifted" matrices. New GIRAF algorithm for structured low-rank matrix formulations in MRI: solves the "lifted" problem in the "unlifted" domain, with no need to create and store large matrices. Improves recovery and enables new applications: larger filter sizes give improved CS recovery, and the method extends to multi-dimensional imaging (dynamic MRI, DWI, MRSI). More broadly, this is a new framework for off-the-grid image recovery (piecewise polynomial models, n-D, sampling guarantees) built on a novel convex Fourier-domain structured low-rank prior, with a fast algorithm, robust edge-mask estimation, and better performance than standard TV.

References
Krahmer, F., & Ward, R. (2014). Stable and robust sampling strategies for compressive imaging. IEEE Transactions on Image Processing, 23(2), 612-622.
Pan, H., Blu, T., & Dragotti, P. L. (2014). Sampling curves with finite rate of innovation. IEEE Transactions on Signal Processing, 62(2), 458-471.
Shin, P. J., Larson, P. E., Ohliger, M. A., Elad, M., Pauly, J. M., Vigneron, D. B., & Lustig, M. (2014). Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion. Magnetic Resonance in Medicine, 72(4), 959-970.
Haldar, J. P. (2014). Low-rank modeling of local k-space neighborhoods (LORAKS) for constrained MRI. IEEE Transactions on Medical Imaging, 33(3), 668-681.
Jin, K. H., Lee, D., & Ye, J. C. (2015). A novel k-space annihilating filter method for unification between compressed sensing and parallel MRI. Proceedings of ISBI 2015, pp. 327-330.
Ongie, G., & Jacob, M. (2015). Super-resolution MRI using finite rate of innovation curves. Proceedings of ISBI 2015, New York, NY.
Ongie, G., & Jacob, M. (2015). Recovery of piecewise smooth images from few Fourier samples. Proceedings of SampTA 2015, Washington, D.C.
Ongie, G., & Jacob, M. (2015). Off-the-grid recovery of piecewise constant images from few Fourier samples. arXiv preprint.
Fornasier, M., Rauhut, H., & Ward, R. (2011). Low-rank matrix recovery via iteratively reweighted least squares minimization. SIAM Journal on Optimization, 21(4), 1614-1640.
Mohan, K., & Fazel, M. (2012). Iterative reweighted algorithms for matrix rank minimization. Journal of Machine Learning Research, 13(1), 3441-3473.
Acknowledgements: supported by grants NSF CCF-0844812, NSF CCF-1116067, NIH 1R21HL109710-01A1, ACS RSG-11-267-01-CCE, and ONR-N000141310202.

Thank you! Questions? GIRAF: an algorithm for structured low-rank matrix recovery formulations in MRI. It exploits the convolution structure to simplify the IRLS algorithm, does not need to explicitly form the large lifted matrix, and solves the problem in the original domain.