EE 290A Generalized Principal Component Analysis


EE 290A Generalized Principal Component Analysis
René Vidal, Center for Imaging Science, Institute for Computational Medicine, Johns Hopkins University

Title: Segmentation of Dynamic Scenes and Textures

Abstract: Dynamic scenes are video sequences containing multiple objects moving in front of dynamic backgrounds, e.g. a bird floating on water. One can model such scenes as the output of a collection of dynamical models exhibiting discontinuous behavior both in space, due to the presence of multiple moving objects, and in time, due to the appearance and disappearance of objects. Segmentation of dynamic scenes is then equivalent to the identification of this mixture of dynamical models from the image data. Unfortunately, although the identification of a single dynamical model is a well understood problem, the identification of multiple hybrid dynamical models is not. Even in the case of static data, e.g. a set of points living in multiple subspaces, data segmentation is usually thought of as a "chicken-and-egg" problem. This is because in order to estimate a mixture of models one needs to first segment the data and in order to segment the data one needs to know the model parameters. Therefore, static data segmentation is usually solved by alternating data clustering and model fitting using, e.g., the Expectation Maximization (EM) algorithm.

Our recent work on Generalized Principal Component Analysis (GPCA) has shown that the "chicken-and-egg" dilemma can be tackled using algebraic geometric techniques. In the case of data living in a collection of (static) subspaces, one can segment the data by fitting a set of polynomials to all data points (without first clustering the data) and then differentiating these polynomials to obtain the model parameters for each group. In this talk, we will present ongoing work addressing the extension of GPCA to time-series data living in a collection of multiple moving subspaces. The approach combines classical GPCA with newly developed recursive hybrid system identification algorithms. We will also present applications of DGPCA in image/video segmentation, 3-D motion segmentation, dynamic texture segmentation, and heart motion analysis.

Principal Component Analysis (PCA). Given a set of points x1, x2, …, xN. Geometric PCA: find a subspace S passing through them. Statistical PCA: find projection directions that maximize the variance. Solution (Beltrami 1873, Jordan 1874, Hotelling 1933, Eckart-Householder-Young 1936). Applications: data compression, regression, computer vision (eigenfaces), pattern recognition, genomics.

As we all know, the Principal Component Analysis problem refers to the problem of estimating a SINGLE subspace from sample data points. Although there are various ways of solving PCA, a simple solution consists of building a matrix with all the data points, computing its SVD, and then extracting a basis for the subspace from the columns of the U matrix and the dimension of the subspace from the rank of the data matrix. There is no question that PCA is one of the most popular techniques for dimensionality reduction in various engineering disciplines. In computer vision, in particular, a successful application has been found in face recognition under the name of eigenfaces.
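To make the SVD recipe in the notes concrete, here is a minimal sketch (assuming numpy and synthetic data; the helper name pca_subspace is hypothetical): stack the points as columns of a matrix, compute its SVD, read a basis for S off the leading columns of U, and read the subspace dimension off the numerical rank.

```python
import numpy as np

def pca_subspace(X, tol=1e-8):
    """Estimate an orthonormal basis for the subspace spanned by the columns of X (D x N)."""
    Xc = X - X.mean(axis=1, keepdims=True)           # center the data (statistical PCA)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    d = int(np.sum(s > tol * s[0]))                  # numerical rank = subspace dimension
    return U[:, :d]                                  # leading columns of U form a basis for S

# Synthetic example: 200 points on a 2-D subspace of R^5
rng = np.random.default_rng(0)
B = np.linalg.qr(rng.standard_normal((5, 2)))[0]     # ground-truth basis
X = B @ rng.standard_normal((2, 200))
print(pca_subspace(X).shape)                         # -> (5, 2)
```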

Extensions of PCA. Higher order SVD (Tucker '66, Davis '02). Independent Component Analysis (Comon '94). Probabilistic PCA (Tipping-Bishop '99): identify a subspace from noisy data; Gaussian noise gives standard PCA; noise in the exponential family (Collins et al. '01). Nonlinear dimensionality reduction: multidimensional scaling (Torgerson '58), locally linear embedding (Roweis-Saul '00), Isomap (Tenenbaum '00). Nonlinear PCA (Scholkopf-Smola-Muller '98): identify a nonlinear manifold by applying PCA to data embedded in a high-dimensional space. Principal Curves and Principal Geodesic Analysis (Hastie-Stuetzle '89, Tibshirani '92, Fletcher '04).

There have been various attempts to generalize PCA in different directions. For example, Probabilistic PCA considers the case of noisy data and tries to estimate THE subspace in a maximum likelihood sense. For the case of noise in the exponential family, Collins has shown that this can be done using convex optimization techniques. Other extensions consider the case of data lying on a manifold, the so-called Nonlinear PCA or Kernel PCA. This problem is usually solved by embedding the data into a higher dimensional space and then assuming that the embedded data DOES live in a linear subspace. Of course the correct embedding to use depends on the problem at hand, and learning the embedding is currently a topic of research in the machine learning community. A third extension considers the case of identifying multiple subspaces at the same time. It is this last extension that I will talk about in this talk under the name of Generalized PCA.

Generalized Principal Component Analysis. Given a set of points lying in multiple subspaces, identify: the number of subspaces and their dimensions, a basis for each subspace, and the segmentation of the data points. This is a "chicken-and-egg" problem: given the segmentation, we can estimate the subspaces; given the subspaces, we can segment the data.

Given a set of data points lying on a collection of linear subspaces, without knowing which point corresponds to which subspace and without even knowing the number of subspaces, estimate the number of subspaces, a basis for each subspace and the segmentation of the data. As stated, the GPCA problem is one of those challenging CHICKEN-AND-EGG problems in the following sense. If we knew the segmentation of the data points, and hence the number of subspaces, then we could solve the problem by applying standard PCA to each group. On the other hand, if we had a basis for each subspace, we could easily assign each point to each cluster. The main challenge is that we neither know the segmentation of the data points nor a basis for each subspace, and the question is how to estimate everything from the data only. Previous geometric approaches to GPCA have been proposed in the context of motion segmentation, under the assumption that the subspaces do NOT intersect. Such approaches are based on first clustering the data, using for example K-means or spectral clustering techniques, and then applying standard PCA to each group. Statistical methods, on the other hand, model the data as a mixture model whose parameters are the mixing proportions and the basis for each subspace. The estimation of the mixture is a (usually non-convex) optimization problem that can be solved with various techniques. One of the most popular ones is the expectation maximization (EM) algorithm, which alternates between the clustering of the data and the estimation of the subspaces. Unfortunately, the convergence of such iterative algorithms depends on good initialization. At present, initialization is mostly done either at random, using another iterative algorithm such as K-means, or else in an ad-hoc fashion depending on the particular application. It is here where our main motivation comes into the picture. We would like to know if we can find a way of initializing iterative algorithms in an algebraic fashion. For example, if you look at the two lines in this picture, there has to be a way of estimating those two lines directly from data in a one-shot solution, at least in the ABSENCE of noise.

Prior work on subspace clustering. Iterative algorithms: K-subspaces (Ho et al. '03), RANSAC, subspace selection and growing (Leonardis et al. '02). Probabilistic approaches: learn the parameters of a mixture model using e.g. EM; mixtures of PPCA (Tipping-Bishop '99); Multi-Stage Learning (Kanatani '04). Initialization: geometric approaches, e.g. 2 planes in R3 (Shizawa-Mase '91); factorization approaches for independent subspaces of equal dimension (Boult-Brown '91, Costeira-Kanade '98, Kanatani '01); spectral clustering based approaches (Yan-Pollefeys '06).

Basic ideas behind GPCA. Towards an analytic solution to subspace clustering: can we estimate ALL models simultaneously using ALL data? When can we do so analytically? In closed form? Is there a formula for the number of models? We will consider the most general case: subspaces of unknown and possibly different dimensions, and subspaces that may intersect arbitrarily (not only at the origin). GPCA is an algebraic geometric approach to data segmentation: the number of subspaces is the degree of a polynomial, and the subspace bases are obtained from the derivatives of a polynomial. Subspace clustering is algebraically equivalent to polynomial fitting and polynomial differentiation.

We are therefore interested in finding an analytic solution to the above segmentation problem. In particular we would like to know if we can estimate all the subspaces simultaneously from all the data without having to iterate between data clustering and subspace estimation. Furthermore we would like to know if we can do so in an ANALYTIC fashion, and even more, we would like to understand under what conditions we can do it in closed form and whether there is a formula for the number of subspaces. It turns out that all these questions can be answered using tools from algebraic geometry. Since at first data segmentation and algebraic geometry may seem to be totally disconnected, let me first tell you what the connection is. We will model the number of subspaces as the degree of a polynomial and the subspace parameters (basis) as the roots of a polynomial, or more precisely as the factors of the polynomial. With this algebraic geometric interpretation in mind, we will be able to prove the following facts about the GPCA problem. First, there are some geometric constraints that do not depend on the segmentation of the data; therefore, the chicken-and-egg dilemma can be resolved, since the segmentation of the data can be algebraically eliminated. Second, one can prove that the problem has a unique solution, which can be obtained in closed form if and only if the number of subspaces is less than or equal to four. And third, the solution can be computed using linear algebraic techniques. In the presence of noise, one can use the algebraic solution to GPCA to initialize any iterative algorithm, for example EM. In the case of zero-mean Gaussian noise, we will show that the E-step can be algebraically eliminated, since we will derive an objective function that depends on the model parameters only.

Applications of GPCA in computer vision. Geometry: vanishing points. Image compression. Segmentation: intensity (black-white), texture, motion (2-D, 3-D), video (host-guest). Recognition: faces (Eigenfaces), man vs. woman, human gaits, dynamic textures (water vs. bird). Biomedical imaging. Hybrid systems identification.

One of the reasons we are interested in GPCA is that there are various problems in computer vision that have to do with the simultaneous estimation of multiple models from visual data. Consider for example segmenting an image into different regions based on intensity, texture, or motion information. Consider also the recognition of various static and dynamic processes such as human faces or human gaits from visual data. Although in this talk I will only consider the first class of problems, it turns out that, at least from a mathematical perspective, all the above problems can be converted into the following generalization of principal component analysis, which we conveniently refer to as GPCA.

Introductory example: algebraic clustering in 1D. Given a set of points on the real line drawn from a few cluster centers, what is the number of groups?

Introductory example: algebraic clustering in 1D. Every point x drawn from one of n clusters with centers c1, …, cn satisfies the polynomial p_n(x) = (x − c1)(x − c2)⋯(x − cn) = x^n + b_{n−1} x^{n−1} + ⋯ + b_0 = 0. How to compute n, the c's and the b's? The number of clusters n is the smallest degree for which such a polynomial fits all the data; the coefficients b are obtained from a linear system in the monomials of the data; and the cluster centers c are the roots of p_n. The solution is unique if the data contain points from all n clusters, and it can be obtained in closed form if n ≤ 4 (roots of a polynomial of degree at most four).
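A numerical sketch of this 1-D procedure (hypothetical data and helper name; numpy assumed): the coefficient vector b lies in the null space of the matrix of monomials [x^n, …, x, 1] built from the samples, n is the smallest degree at which that matrix drops rank, and the cluster centers are the roots of the fitted polynomial.

```python
import numpy as np

def cluster_1d(x, n_max=10, tol=1e-6):
    """Algebraic 1-D clustering: return (number of clusters n, cluster centers)."""
    x = np.asarray(x, dtype=float)
    for n in range(1, n_max + 1):
        V = np.vander(x, n + 1)                  # rows are the monomials [x^n, ..., x, 1]
        s = np.linalg.svd(V, compute_uv=False)
        if s[-1] < tol * s[0]:                   # rank drop: a degree-n polynomial fits every sample
            b = np.linalg.svd(V)[2][-1]          # null-space vector = polynomial coefficients
            return n, np.sort(np.roots(b))       # the cluster centers are the roots of p_n
    raise ValueError("no polynomial of degree <= n_max annihilates the data")

# Hypothetical noiseless data drawn from three cluster centers -2, 1 and 5
x = np.concatenate([np.full(20, -2.0), np.full(30, 1.0), np.full(25, 5.0)])
print(cluster_1d(x))                             # -> (3, array([-2., 1., 5.]))
```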

Introductory example: algebraic clustering in 2D. What about dimension 2? In the plane one can identify a point (x, y) with the complex number x + iy and again recover the cluster centers as roots of a polynomial. What about higher dimensions? What plays the role of complex numbers there, and how would one find the roots of a polynomial over the quaternions? Instead: project the data onto a one- or two-dimensional space and apply the same algorithm to the projected data.

Representing one subspace: one plane, one line. One subspace can be represented with a set of linear equations, i.e., a set of polynomials of degree 1.
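In symbols (a restatement of the bullet above), a subspace S of dimension k in R^D is the common zero set of D − k linear forms:

\[
S \;=\; \{\, x \in \mathbb{R}^{D} \;:\; b_1^{\top}x = b_2^{\top}x = \cdots = b_{D-k}^{\top}x = 0 \,\},
\]

so a plane in R^3 is cut out by a single equation b^T x = 0, while a line requires two such equations.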

Representing n subspaces: two planes, or one plane and one line (plane: one linear equation; line: two linear equations). By De Morgan's rule, a union of n subspaces can be represented with a set of homogeneous polynomials of degree n, each a product of one linear form per subspace.
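For example (a sketch of the De Morgan argument for two planes), a point belongs to the union of two planes with normals b_1 and b_2 exactly when at least one of the two linear forms vanishes, i.e. when their product vanishes:

\[
x \in S_1 \cup S_2 \;\Longleftrightarrow\; (b_1^{\top}x)\,(b_2^{\top}x) = 0,
\]

a single homogeneous polynomial of degree n = 2. A union of n subspaces is likewise the common zero set of a collection of degree-n products of linear forms, one form per subspace in each product.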

Fitting polynomials to data points. Polynomials can be written linearly in terms of their vector of coefficients by using a polynomial embedding of the data (the Veronese map). The coefficients of the polynomials can therefore be computed from the null space of the embedded data matrix, solved using least squares, where N = number of data points.
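A minimal sketch of the fitting step (numpy assumed; the helper names veronese and fit_polynomials are hypothetical): embed each K-dimensional point with the degree-n Veronese map (all monomials of degree n), stack the embedded points, and take the null space of the resulting matrix via the SVD, which gives the least-squares solution when the data are noisy.

```python
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, n):
    """Degree-n Veronese embedding of the columns of X (K x N): all monomials of degree n."""
    K, N = X.shape
    exps = list(combinations_with_replacement(range(K), n))   # one multiset of indices per monomial
    V = np.empty((len(exps), N))
    for row, idx in enumerate(exps):
        V[row] = np.prod(X[list(idx), :], axis=0)             # product of the selected coordinates
    return V                                                  # shape: M_n(K) x N

def fit_polynomials(X, n, tol=1e-6):
    """Coefficient vectors (columns) of the degree-n polynomials that vanish on the data."""
    Vn = veronese(X, n)
    U, s, _ = np.linalg.svd(Vn @ Vn.T)                        # least-squares null space of Vn^T c = 0
    null_dim = int(np.sum(s < tol * s[0]))
    return U[:, U.shape[1] - null_dim:]
```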

Finding a basis for each subspace. Case of hyperplanes: there is only one polynomial; its degree gives the number of subspaces and its factors give the normal vectors that form the bases. Problems: computing roots may be sensitive to noise; the estimated polynomial may not perfectly factor with noisy data; the approach cannot be applied to subspaces of different dimensions; and the polynomials are estimated only up to a change of basis, hence they may not factor even with perfect data. Polynomial Factorization (GPCA-PFA) [CVPR 2003]: find the roots of a polynomial of degree n in one variable, then solve linear systems in the remaining variables; the solution is obtained in closed form for n ≤ 4.

Finding a basis for each subspace. To learn a mixture of subspaces we just need one positive example per class: Polynomial Differentiation (GPCA-PDA) [CVPR'04].

Choosing one point per subspace. With noise and outliers, the fitted polynomials may not define a perfect union of subspaces. The normals can still be estimated correctly by choosing the points optimally. How to measure the distance to the closest subspace without knowing the segmentation? A first-order approximation is |p_n(x)| / ||∇p_n(x)||, the value of the fitted polynomial at x normalized by its gradient.

GPCA for hyperplane segmentation. The coefficients of the polynomial can be computed from the null space of the embedded data matrix, solved using least squares (N = number of data points). The number of subspaces can be computed from the rank of the embedded data matrix. The normals to the subspaces can be computed from the derivatives of the polynomial.
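Putting the pieces together for the hyperplane case, here is a minimal sketch (it reuses the hypothetical veronese helper from above; gradients are taken numerically, and the n representative normals are picked greedily, which is a simplification of the point-selection step in the papers): fit the single degree-n polynomial, evaluate its gradient at every point to obtain a normal estimate, and assign each point to the hyperplane whose normal it best matches.

```python
import numpy as np

def grad(p, x, eps=1e-5):
    """Central-difference gradient of a scalar function p at the point x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (p(x + e) - p(x - e)) / (2 * eps)
    return g

def segment_hyperplanes(X, n):
    """Segment the columns of X (K x N) into n hyperplanes through the origin."""
    Vn = veronese(X, n)
    c = np.linalg.svd(Vn @ Vn.T)[0][:, -1]                  # least-squares vanishing polynomial
    p = lambda x: c @ veronese(x.reshape(-1, 1), n)[:, 0]
    # The gradient at a point is (up to scale) the normal of the hyperplane it lies on.
    G = np.stack([grad(p, X[:, j]) for j in range(X.shape[1])])
    G /= np.linalg.norm(G, axis=1, keepdims=True) + 1e-12
    # Greedily pick n well-separated normals, then assign each point to the closest hyperplane.
    normals = [G[0]]
    while len(normals) < n:
        sims = np.max(np.abs(G @ np.stack(normals).T), axis=1)
        normals.append(G[int(np.argmin(sims))])
    B = np.stack(normals)                                   # n x K matrix of unit normals
    labels = np.argmin(np.abs(B @ X), axis=0)               # smallest |b_i^T x_j| wins
    return B, labels
```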

GPCA for subspaces of different dimensions. There are now multiple polynomials fitting the data, and the derivative of each polynomial at a point gives a different normal vector. One can obtain a basis for the subspace through that point by applying PCA to these normal vectors: the leading directions span the normal space, and its orthogonal complement is the subspace itself.
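A short sketch of that last step (numpy assumed; the function name is hypothetical): stack the gradients of all fitted polynomials at one point y lying on a subspace and take their SVD, i.e. a PCA without centering; the leading right-singular vectors span the normal space at y and the remaining ones give a basis for the subspace, whose dimension is read off the rank.

```python
import numpy as np

def subspace_basis_at(grads, tol=1e-6):
    """grads: m x K matrix whose rows are the gradients of the m fitted polynomials at a point y."""
    _, s, Vt = np.linalg.svd(grads)
    c = int(np.sum(s > tol * s[0]))      # rank of the gradient matrix = codimension of the subspace
    return Vt[c:].T, Vt[:c].T            # (basis of the subspace, basis of its normal space)
```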

GPCA for subspaces of different dimensions (algorithm). Apply the polynomial embedding to the projected data; obtain the multiple-subspace model by polynomial fitting, solving the linear system for the coefficients (the number of subspaces needs to be known); obtain the bases and dimensions by polynomial differentiation; and optimally choose one point per subspace using the distance approximation above.

An example. Given data lying in the union of two subspaces, we can write the union as the set of points on which certain products of linear forms vanish; therefore, the union can be represented with two polynomials of degree two.
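The equations on this slide did not survive the transcript; as an illustration of the same idea, take the line-plus-plane configuration used throughout the GPCA papers: the line S1 = {x : x1 = x2 = 0} (the x3-axis) and the plane S2 = {x : x3 = 0} in R^3. De Morgan's rule then gives

\[
S_1 \cup S_2
= \{\, x : (x_1 = 0 \wedge x_2 = 0) \vee x_3 = 0 \,\}
= \{\, x : x_1 x_3 = 0 \;\wedge\; x_2 x_3 = 0 \,\},
\]

so the union is exactly the zero set of the two degree-2 polynomials p_1(x) = x_1 x_3 and p_2(x) = x_2 x_3.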

An example (continued). The polynomials can be computed from the null space of the embedded data matrix, and the normals can be computed from the derivatives of these polynomials evaluated at one point per subspace.
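Continuing the illustrative line-plus-plane example, differentiating the two polynomials recovers the normals:

\[
\nabla p_1(x) = (x_3,\, 0,\, x_1)^{\top},
\qquad
\nabla p_2(x) = (0,\, x_3,\, x_2)^{\top}.
\]

At a point y on the plane S2 (so y3 = 0, with y1 and y2 generic) both gradients are proportional to (0, 0, 1)^T, the normal of the plane; at a point y on the line S1 (y1 = y2 = 0) they equal (y3, 0, 0)^T and (0, y3, 0)^T, which span the two-dimensional normal space of the line.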

Dealing with high-dimensional data. The minimum number of points needed for the fit grows with the dimension of the Veronese embedding, C(n+K−1, n), where K = dimension of the ambient space and n = number of subspaces. In practice the dimension ki of each subspace is much smaller than K. The number and dimensions of the subspaces are preserved by a generic linear projection onto a subspace of dimension max_i ki + 1. Outliers can be removed by robustly fitting the subspace. Open problem: how to choose the projection? PCA?
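A minimal sketch of the projection step (numpy assumed; using a random orthonormal projection, rather than PCA, is just one way to realize the "generic" projection the slide refers to): map the D-dimensional data down to dimension d_max + 1, where d_max is the largest subspace dimension, before doing the polynomial fitting.

```python
import numpy as np

def project_data(X, d_max, seed=0):
    """Project the columns of X (D x N) onto a generic subspace of dimension d_max + 1."""
    rng = np.random.default_rng(seed)
    # A random orthonormal map is generic with probability one, so the number and
    # dimensions of the subspaces are preserved under the projection.
    Q, _ = np.linalg.qr(rng.standard_normal((X.shape[0], d_max + 1)))
    return Q.T @ X, Q            # projected data ((d_max+1) x N) and the projection basis
```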

GPCA with spectral clustering. Build a similarity matrix between pairs of points and use its eigenvectors to cluster the data. How to define a similarity for subspaces? We want points in the same subspace to be close and points in different subspaces to be far. Use GPCA to get a basis at each point, and use the subspace angles between bases as the distance.
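A sketch of how such a similarity can be built (scipy and scikit-learn assumed; the Gaussian kernel width sigma is a hypothetical choice): estimate a local basis B_j for the subspace through every point, for instance from the GPCA gradients above, compute the principal (subspace) angles between each pair of bases, and feed exp(−sum of squared angles) to spectral clustering as a precomputed affinity.

```python
import numpy as np
from scipy.linalg import subspace_angles
from sklearn.cluster import SpectralClustering

def subspace_affinity(bases, sigma=1.0):
    """bases: list of K x d_j orthonormal matrices, one estimated subspace per data point."""
    N = len(bases)
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(i, N):
            theta = subspace_angles(bases[i], bases[j])     # principal angles between the subspaces
            A[i, j] = A[j, i] = np.exp(-np.sum(theta ** 2) / sigma ** 2)
    return A

def spectral_subspace_clustering(bases, n_clusters, sigma=1.0):
    """Cluster points by the angles between their locally estimated subspaces."""
    A = subspace_affinity(bases, sigma)
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed").fit_predict(A)
```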

Comparison of PFA, PDA, K-sub, EM

Summary. GPCA is an algorithm for clustering subspaces that deals with unknown and possibly different dimensions and with arbitrary intersections among the subspaces. Our approach is based on projecting the data onto a low-dimensional subspace, fitting polynomials to the projected data, and differentiating those polynomials to obtain a basis for each subspace. Applications in image processing and computer vision include image segmentation (intensity and texture), image compression, and face recognition under varying illumination.

In this paper, we have presented a new way of looking at segmentation problems that is based on algebraic geometric techniques. In essence we showed that segmentation is equivalent to polynomial factorization and presented an algebraic solution for the case of mixtures of subspaces, which is a natural generalization of the well known Principal Component Analysis (PCA). Of course, as this theory is fairly new, there are still a lot of open problems, for example how to do polynomial factorization in the presence of noise, or how to deal with outliers. We are currently working on such problems and expect to report our results in the near future. We are also working on generalizing the ideas in this paper to the cases of subspaces of different dimensions and to other types of data. In fact, I cannot resist doing some advertising for my talk tomorrow, where I will talk about the segmentation of multiple rigid motions, which corresponds to the case of bilinear data. Finally, before taking your questions, I would like to say that I think it is time we start putting segmentation problems on a theoretical footing. I think this can be done by building a mathematical theory of segmentation that incorporates not only well known statistical modeling techniques, but also the geometric constraints inherent in various applications in computer vision. By combining the algebraic geometric approach to segmentation presented in this paper with the more standard statistical approaches, we should be able to build such a theory. It should find applications in various disciplines, including of course many segmentation and recognition problems in computer vision.

Thank You! For more information, visit the Vision, Dynamics and Learning Lab @ Johns Hopkins University.