Morteza Mardani, Gonzalo Mateos and Georgios Giannakis, ECE Department, University of Minnesota. Acknowledgment: AFOSR MURI grant no. FA9550-10-1-0567.


1 Imputation of Streaming Low-Rank Tensor Data
Morteza Mardani, Gonzalo Mateos and Georgios Giannakis, ECE Department, University of Minnesota
A Coruña, Spain, June 25, 2013
Acknowledgment: AFOSR MURI grant no. FA9550-10-1-0567

2 Learning from ``Big Data''
``Data are widely available, what is scarce is the ability to extract wisdom from them'' Hal Varian, Google's chief economist
Attributes of big data: big, fast, productive, revealing, ubiquitous, smart, messy
K. Cukier, ``Harnessing the data deluge,'' Nov.

3 Tensor model
Data cube Y and its PARAFAC decomposition into R rank-one outer products:
Y = Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r
with factor matrices A = [a_1, …, a_R], B = [b_1, …, b_R], C = [c_1, …, c_R].
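To make the decomposition concrete, here is a minimal NumPy sketch that rebuilds a data cube from its PARAFAC factor matrices; the function name and the tensor dimensions are illustrative assumptions, not from the slides:

```python
import numpy as np

def parafac_reconstruct(A, B, C):
    """Rebuild Y[m, n, t] = sum_r A[m, r] * B[n, r] * C[t, r]:
    the sum of R rank-one outer products a_r o b_r o c_r."""
    return np.einsum('mr,nr,tr->mnt', A, B, C)

# Illustrative sizes: a 5 x 4 x 6 cube of rank R = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
B = rng.standard_normal((4, 3))
C = rng.standard_normal((6, 3))
Y = parafac_reconstruct(A, B, C)
print(Y.shape)  # (5, 4, 6)
```

Each column triple (a_r, b_r, c_r) contributes one rank-one cube; the slide's factor matrices A, B, C simply stack these columns.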

4 Streaming tensor data
Slices Y_t of the data cube arrive sequentially, with only a subset of their entries observed.
Goal: given the streaming data, at time t learn the subspace matrices (A_t, B_t) and impute the missing entries of Y_t.
The tensor subspace comprises R rank-one matrices.

5 Prior art
Matrix/tensor subspace tracking
 Projection approximation (PAST) [Yang’95]
 Misses: rank regularization [Mardani et al’13], GROUSE [Balzano et al’10]
 Outliers: [Mateos et al’10], GRASTA [He et al’11]
 Adaptive LS tensor tracking [Nion et al’09] with full data; tensor slices treated as long vectors
Batch tensor completion [Juan et al’13], [Gandy et al’11]
Novelty: online rank regularization with misses
 Tensor decomposition/imputation
 Scalable and provably convergent iterates

6 Batch tensor completion
Rank-regularized formulation [Juan et al’13]:
(P1) min_{A,B,C} ||P_Ω(Y − Σ_{r=1}^{R} a_r ∘ b_r ∘ c_r)||_F² + λ(||A||_F² + ||B||_F² + ||C||_F²)
The Tikhonov (Frobenius-norm) regularizer on the factors promotes low rank.
Proposition 1 [Juan et al’13]: the separable Frobenius-norm penalty acts as a surrogate for tensor rank, paralleling the matrix identity ||X||_* = min_{X=AB'} (1/2)(||A||_F² + ||B||_F²).
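A common way to attack a formulation like (P1) is an alternating sweep in which each factor is refit by ridge least squares while the other two are held fixed. The NumPy sketch below illustrates that idea under simplifying assumptions (row-wise ridge updates, fixed λ, random initialization); it is not the solver of [Juan et al’13]:

```python
import numpy as np

def khatri_rao(U, V):
    # Columnwise Kronecker product: row (i, j) holds U[i, :] * V[j, :]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def ridge_rows(F, Y_unf, mask_unf, design, lam):
    # Refit every row of factor F by ridge LS over its observed entries
    R = F.shape[1]
    for i in range(F.shape[0]):
        obs = mask_unf[i]
        X, y = design[obs], Y_unf[i, obs]
        F[i] = np.linalg.solve(X.T @ X + lam * np.eye(R), X.T @ y)

def batch_tensor_complete(Y, mask, R, lam=0.1, n_iter=30, seed=0):
    """Alternating-ridge sketch of the rank-regularized batch problem:
    minimize the observed-entry LS fit plus lam * (||A||_F^2 + ||B||_F^2 + ||C||_F^2)."""
    M, N, T = Y.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((M, R))
    B = rng.standard_normal((N, R))
    C = rng.standard_normal((T, R))
    for _ in range(n_iter):
        # One sweep: refit A, then B, then C against the matching unfoldings
        ridge_rows(A, Y.reshape(M, -1), mask.reshape(M, -1), khatri_rao(B, C), lam)
        ridge_rows(B, Y.transpose(1, 0, 2).reshape(N, -1),
                   mask.transpose(1, 0, 2).reshape(N, -1), khatri_rao(A, C), lam)
        ridge_rows(C, Y.transpose(2, 0, 1).reshape(T, -1),
                   mask.transpose(2, 0, 1).reshape(T, -1), khatri_rao(A, B), lam)
    return np.einsum('mr,nr,tr->mnt', A, B, C)   # imputed cube
```

The Tikhonov term keeps every row update well posed even when a row has few (or no) observed entries, which is exactly the role the slide assigns to the regularizer.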

7 Tensor subspace tracking
Exponentially-weighted LS estimator (forgetting factor β):
(P2) min_{A,B} f_t(A, B), where f_t accumulates the β-weighted LS fit of the observed entries of slices Y_1, …, Y_t, plus Tikhonov regularization on the factors
Alternating minimization with stochastic-gradient iterations (at time t)
 Step 1: projection-coefficient updates (fit c_t over the observed entries, given the current subspace)
 Step 2: subspace update (stochastic-gradient step on f_t(A, B))
``On-the-fly’’ imputation; O(|Ω_t|R²) operations per iteration
M. Mardani, G. Mateos, and G. B. Giannakis, ``Subspace learning and imputation for streaming Big Data matrices and tensors,’’ IEEE Trans. Signal Process., Apr. (submitted).
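The two steps can be sketched in NumPy for a single incomplete slice Y_t ≈ A diag(c_t) B'. The fixed step size mu, the ridge weight lam, and the plain gradient step are simplifying assumptions of this sketch, not the paper's exact iterates:

```python
import numpy as np

def track_slice(Y_t, O_t, A, B, lam=0.01, mu=0.01):
    """One tracker iteration on an incomplete slice.
    O_t is the boolean mask of observed entries (the set Omega_t)."""
    R = A.shape[1]
    m_idx, n_idx = np.nonzero(O_t)
    X = A[m_idx] * B[n_idx]            # |Omega_t| x R design: rows A[m,:] * B[n,:]
    y = Y_t[m_idx, n_idx]
    # Step 1: ridge LS for the projection coefficients c_t (R x R system)
    c = np.linalg.solve(X.T @ X + lam * np.eye(R), X.T @ y)
    # Step 2: stochastic-gradient step on the subspace (A, B)
    E = O_t * (Y_t - (A * c) @ B.T)    # masked residual P_Omega(Y_t - A diag(c) B')
    A = A + mu * (E @ (B * c) - lam * A)
    B = B + mu * (E.T @ (A * c) - lam * B)
    Y_hat = (A * c) @ B.T              # on-the-fly imputation of the full slice
    return A, B, c, Y_hat
```

Because only the |Ω_t| observed entries enter the R × R normal equations of Step 1, the per-slice cost stays at the O(|Ω_t|R²) order quoted on the slide.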

8 Convergence
Assumptions: As1) invariant subspace; As2) infinite memory, β = 1.
Proposition 2: If the sampling sets and slices {Ω_t, Y_t} are i.i.d., and c1) the iterates are uniformly bounded, c2) lie in a compact set, and c3) the cost is strongly convex w.r.t. the subspace variables, then the subspace sequence asymptotically converges to a stationary point of the batch problem (P1) almost surely (a.s.).

9 Cardiac MRI
FOURDIX dataset: 263 images of 512 × 512; Y: 32 × 32 × 67, % misses
R = 10: e_x = 0.14; R = 50: e_x = 0.046
Figure: (a) ground truth; (b) acquired image; reconstructed for R = 10 (c) and R = 50 (d).

10 Tracking traffic anomalies
Internet-2 backbone network; link load measurements
Y_t: weighted adjacency matrix; available data Y: 11 × 11 × 6,048 with 75% misses, R = 18

11 Conclusions
Real-time subspace trackers for decomposition/imputation
 Streaming big and incomplete tensor data
 Provably convergent, scalable algorithms
Ongoing research
 Incorporating spatiotemporal correlation information via kernels
 Accelerated stochastic gradient for subspace update
Applications
 Reducing the MRI acquisition time
 Unveiling network traffic anomalies for Internet backbone networks