NATHAN HALKO, PER-GUNNAR MARTINSSON, YOEL SHKOLNISKY, AND MARK TYGERT


AN ALGORITHM FOR THE PRINCIPAL COMPONENT ANALYSIS OF LARGE DATA SETS
NATHAN HALKO, PER-GUNNAR MARTINSSON, YOEL SHKOLNISKY, AND MARK TYGERT
PRESENTER: ZHAOCHONG LIU

OUTLINE: Introduction, Algorithm, Out-of-core computation, Computational costs, Examples, Conclusion

Introduction The algorithm computes principal components (via a low-rank SVD) of data sets that are too large to be stored in the random-access memory (RAM) of a typical computer system. It is designed to minimize the total number of times that the algorithm has to access each entry of the matrix A being approximated.

Algorithm G: a real n × l matrix whose entries are independent and identically distributed Gaussian random variables of zero mean and unit variance, where l is slightly greater than the rank k of the desired approximation.

Algorithm 1. Compute H^(0) = A G and H^(j) = A (A^T H^(j-1)) for j = 1, 2, ..., i, and form the m × ((i+1)l) matrix H = ( H^(0) | H^(1) | ... | H^(i) ).

Algorithm 2. Using a pivoted QR decomposition, form H = Q R, where Q is a real m × ((i+1)l) matrix whose columns are orthonormal and R is a real ((i+1)l) × ((i+1)l) matrix.
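Step 2 calls for a pivoted QR factorization. NumPy's np.linalg.qr does not pivot, but scipy.linalg.qr does; the small sketch below is illustrative only, and the matrix H in it is a random stand-in for the H built in step 1.

import numpy as np
from scipy.linalg import qr

H = np.random.default_rng(0).standard_normal((1000, 24))   # stand-in for the m x ((i+1)l) matrix H
Q, R, piv = qr(H, mode="economic", pivoting=True)
# Q: 1000 x 24 with orthonormal columns, R: 24 x 24 upper triangular,
# piv: the column permutation applied to H, so H[:, piv] equals Q @ R up to rounding.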

Algorithm 3. Compute the n × ((i+1)l) product matrix T = A^T Q. 4. Form an SVD of T: T = Ṽ Σ̃ W^T, where Ṽ is a real n × ((i+1)l) matrix whose columns are orthonormal, W is a real ((i+1)l) × ((i+1)l) matrix whose columns are orthonormal, and Σ̃ is a real diagonal ((i+1)l) × ((i+1)l) matrix whose entries are nonnegative.

Algorithm 5. Compute the m × ((i+1)l) product matrix Ũ = Q W. 6. Retrieve the leftmost m × k block U of Ũ, the leftmost n × k block V of Ṽ, and the leftmost uppermost k × k block Σ of Σ̃. The product U Σ V^T then approximates A.
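Putting steps 1 through 6 together, here is a minimal NumPy sketch of the procedure for a matrix that fits in memory. The function name, the default choices l = k + 2 and i = 1, and the use of an unpivoted QR (step 2 above calls for a pivoted one) are illustrative assumptions, not prescriptions from the paper.

import numpy as np

def randomized_pca(A, k, l=None, n_iter=1, seed=None):
    """Rank-k approximation A ~ U @ diag(s) @ Vt following steps 1-6 above."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    l = k + 2 if l is None else l               # assumed oversampling, l slightly above k

    # Step 1: H(0) = A G, H(j) = A (A^T H(j-1)); stack the blocks into H.
    G = rng.standard_normal((n, l))
    blocks = [A @ G]
    for _ in range(n_iter):
        blocks.append(A @ (A.T @ blocks[-1]))
    H = np.hstack(blocks)                       # m x ((i+1) l)

    # Step 2: QR of H (unpivoted here for brevity; scipy.linalg.qr offers pivoting).
    Q, _ = np.linalg.qr(H)                      # Q: m x ((i+1) l), orthonormal columns

    # Step 3: T = A^T Q.
    T = A.T @ Q                                 # n x ((i+1) l)

    # Step 4: SVD of the small matrix T = V_tilde @ diag(sigma) @ Wt.
    V_tilde, sigma, Wt = np.linalg.svd(T, full_matrices=False)

    # Step 5: U_tilde = Q W.
    U_tilde = Q @ Wt.T                          # m x ((i+1) l)

    # Step 6: keep the leading k columns / singular values.
    return U_tilde[:, :k], sigma[:k], V_tilde[:, :k].T

# Example use on a test matrix with rapidly decaying singular values,
# in the spirit of the first numerical example below (A = E S F).
rng = np.random.default_rng(0)
E, _ = np.linalg.qr(rng.standard_normal((500, 20)))
F, _ = np.linalg.qr(rng.standard_normal((200, 20)))
A = E @ np.diag(2.0 ** -np.arange(20)) @ F.T    # 500 x 200, singular values 1, 1/2, ..., 2^-19
U, s, Vt = randomized_pca(A, k=10)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt, 2))   # roughly 2^-10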

Out-of-core Computation Computation with on-the-fly evaluation of matrix entries: the matrix does not fit in memory, but we have access to a computational routine that can evaluate each entry of the matrix individually.
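To make the on-the-fly setting concrete, the sketch below forms the product A G without ever storing A, evaluating one row of A at a time. The entry rule entry(j, k) is a hypothetical placeholder standing in for whatever routine produces the matrix entries.

import numpy as np

def entry(j, k):
    # Hypothetical entry rule (Hilbert-like), standing in for the user's routine.
    return 1.0 / (1.0 + j + k)

def apply_on_the_fly(G, m, n):
    """Compute A @ G where A[j, k] = entry(j, k), one row of A at a time."""
    out = np.empty((m, G.shape[1]))
    cols = np.arange(n)
    for j in range(m):
        row = entry(j, cols)        # evaluate row j of A on the fly
        out[j] = row @ G            # accumulate its contribution to A @ G
    return out

# Example: l = 4 probe vectors against a 5000 x 3000 matrix never held in memory.
G = np.random.default_rng(0).standard_normal((3000, 4))
H0 = apply_on_the_fly(G, m=5000, n=3000)
print(H0.shape)                     # (5000, 4)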

Out-of-core Computation Computation with storage on disk: retrieve as many rows of the matrix from disk as will fit in memory, calculate the products of those rows with the columns of G, then repeat with the next block of rows.
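A sketch of the storage-on-disk variant of the same product, assuming A is stored row-major on disk as a NumPy memmap; the file path, dtype, and block size are illustrative assumptions.

import numpy as np

def blocked_product_from_disk(path, m, n, G, block_rows=10_000, dtype=np.float64):
    """Compute A @ G, reading A from disk in row blocks that fit in memory."""
    A = np.memmap(path, mode="r", dtype=dtype, shape=(m, n))
    H0 = np.empty((m, G.shape[1]), dtype=dtype)
    for start in range(0, m, block_rows):
        stop = min(start + block_rows, m)
        H0[start:stop] = A[start:stop] @ G   # only this block of rows is read
    return H0

# The products with A.T needed in steps 1 and 3 can be accumulated over the same
# row blocks: A.T @ X = sum over blocks of A[start:stop].T @ X[start:stop].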

Computational costs Costs with on-the-fly evaluation of matrix entries.

Computational cost The slide lists the flop count of each of the six steps and their sum.

Computational cost Cost with storage on disk: each equation in step one requires at most a fixed number of flops, and summing over the equations gives the cost of step one. Steps two, four, five, and six do not use the matrix A, so their costs are the same as in the on-the-fly case. Adding the cost of step three gives the total.
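As a rough sanity check, here is a back-of-the-envelope flop count read off the step dimensions listed above, using the convention that a dense p × q times q × r product costs about 2pqr flops; this is an independent accounting, not necessarily the exact expressions from the original slides.

\begin{align*}
\text{Step 1: } &\approx 2(2i+1)\,lmn \text{ flops (the } (2i+1) \text{ applications of } A \text{ or } A^{T} \text{ to } l \text{ vectors)},\\
\text{Step 2: } &O\!\left(m\,[(i+1)l]^{2}\right) \text{ for the pivoted QR of } H,\\
\text{Step 3: } &\approx 2(i+1)\,lmn \text{ for } T = A^{T}Q,\\
\text{Step 4: } &O\!\left(n\,[(i+1)l]^{2}\right) \text{ for the SVD of } T,\\
\text{Step 5: } &\approx 2m\,[(i+1)l]^{2} \text{ for } \tilde{U} = QW,\\
\text{Step 6: } &O\!\left(k(m+n)\right) \text{ for extracting the blocks},\\
\text{Total: } &O\!\left(i\,lmn + [(i+1)l]^{2}(m+n)\right).
\end{align*}

By the same naive count, the storage-on-disk variant reads the entries of A roughly 2i + 2 times (once per application of A or A^T, counting step 3), although passes over the file can be combined in practice.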

Numerical Example First example: Apply the algorithm to the m × n matrix A = E S F

Numerical Example

Numerical Example Second example: Same as the first one, but the singular values are different.

Numerical Example

Numerical Example Third example: Apply the algorithm with k = 3 to an m × 1000 matrix whose rows are independent and identically distributed realizations of a random vector specified in the paper.

Numerical Example

Numerical Example

Measured Data Perform the algorithm for the PCA of images of faces, with k = 50, on a 393,216 × 102,042 matrix whose columns consist of the images.

Measured Data

Measured Data

Last Example This data set consists of 10,000 two-dimensional images, each 129 pixels wide and 129 pixels high, with k = 250.

Last Example

Conclusion This paper describes techniques for the principal component analysis of data sets that are too large to be stored in random-access memory, and illustrates the performance of the methods on data from various sources, including standard test sets and numerical simulations. The core steps of the procedures parallelize easily, an advantage given the advent of widespread multicore and distributed processing.

Pros and Cons Pros: the algorithm is described very clearly, and the numerical examples are comprehensive. Cons: the example evaluation does not include a comparison with other methods.

Thank you!