René Vidal. Time/Place: T-Th 4:30pm-6pm, Hodson 301

Learning Theory II: Modeling and Segmentation of Multivariate Mixed Data (BME 580.692, CS 600.462)
René Vidal
Time/Place: T-Th 4:30pm-6pm, Hodson 301
Office Hours: Mondays 5-6, 308B Clark Hall

Title: Segmentation of Dynamic Scenes and Textures

Abstract: Dynamic scenes are video sequences containing multiple objects moving in front of dynamic backgrounds, e.g., a bird floating on water. One can model such scenes as the output of a collection of dynamical models exhibiting discontinuous behavior both in space, due to the presence of multiple moving objects, and in time, due to the appearance and disappearance of objects. Segmenting a dynamic scene is then equivalent to identifying this mixture of dynamical models from the image data. Unfortunately, although the identification of a single dynamical model is a well-understood problem, the identification of multiple hybrid dynamical models is not. Even in the case of static data, e.g., a set of points living in multiple subspaces, data segmentation is usually thought of as a "chicken-and-egg" problem: in order to estimate a mixture of models one needs to first segment the data, and in order to segment the data one needs to know the model parameters. Therefore, static data segmentation is usually solved by alternating data clustering and model fitting using, e.g., the Expectation Maximization (EM) algorithm. Our recent work on Generalized Principal Component Analysis (GPCA) has shown that this "chicken-and-egg" dilemma can be tackled using algebraic-geometric techniques. In the case of data living in a collection of (static) subspaces, one can segment the data by fitting a set of polynomials to all data points (without first clustering the data) and then differentiating these polynomials to obtain the model parameters for each group. In this talk, we will present ongoing work extending GPCA to time-series data living in a collection of multiple moving subspaces. The approach combines classical GPCA with newly developed recursive hybrid system identification algorithms. We will also present applications of DGPCA in image/video segmentation, 3-D motion segmentation, dynamic texture segmentation, and heart motion analysis.

Course overview
- Linear and Nonlinear Dimensionality Reduction
  - Principal component analysis
- Unsupervised Learning
  - Iterative methods for central and subspace clustering
  - Algebraic methods for central and subspace clustering
- Applications in Computer Vision
  - 3-D motion segmentation
  - Spatial and temporal video segmentation
- Estimation and Segmentation of Hybrid Dynamical Models
  - Identification of hybrid systems

Linear dimensionality reduction
Principal Component Analysis (PCA)
Applications: data compression, regression, image analysis (eigenfaces), pattern recognition
As we all know, the Principal Component Analysis problem refers to the problem of estimating a SINGLE subspace from sample data points. Although there are various ways of solving PCA, a simple solution consists of building a matrix with all the data points, computing its SVD, and then extracting a basis for the subspace from the columns of the U matrix and the dimension of the subspace from the rank of the data matrix. There is no question that PCA is one of the most popular techniques for dimensionality reduction in various engineering disciplines. In computer vision, in particular, it has found a successful application in face recognition under the name of eigenfaces.
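The SVD-based solution described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the course's code; the synthetic data, noise level, and rank threshold are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points near a 2-D subspace of R^5, plus small noise.
basis = rng.standard_normal((5, 2))
coords = rng.standard_normal((2, 200))
X = basis @ coords + 0.01 * rng.standard_normal((5, 200))

# Center the data and take the SVD of the data matrix.
Xc = X - X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Subspace dimension: number of singular values above a noise threshold
# (a crude relative threshold; in practice it depends on the noise level).
d = int(np.sum(S > 1e-1 * S[0]))
B = U[:, :d]    # orthonormal basis for the estimated subspace
```

Here `B @ B.T @ Xc` projects the data onto the estimated subspace, and the residual measures how well a single subspace explains the data.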

Generalized PCA (GPCA)
Extensions of PCA:
- Probabilistic PCA (Tipping-Bishop '99): identify a subspace from noisy data; Gaussian noise recovers standard PCA; noise in the exponential family (Collins et al. '01)
- Nonlinear PCA (Scholkopf-Smola-Muller '98): identify a nonlinear manifold from sample points; embed the data in a higher-dimensional space and apply standard PCA. What embedding should be used?
- Mixtures of PCA (Tipping-Bishop '99): identify a collection of subspaces from sample points, leading to Generalized PCA (GPCA)
There have been various attempts to generalize PCA in different directions. For example, Probabilistic PCA considers the case of noisy data and tries to estimate THE subspace in a maximum-likelihood sense. For the case of noise in the exponential family, Collins has shown that this can be done using convex optimization techniques. Other extensions consider the case of data lying on a manifold, the so-called Nonlinear PCA or Kernel PCA. This problem is usually solved by embedding the data into a higher-dimensional space and then assuming that the embedded data DOES live in a linear subspace. Of course, the correct embedding to use depends on the problem at hand, and learning the embedding is currently a topic of research in the machine learning community. A third extension considers the case of identifying multiple subspaces at the same time. It is this last extension that I will talk about in this talk, under the name of Generalized PCA.
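For the nonlinear/kernel extension mentioned above, a minimal kernel PCA sketch looks like the following. The RBF kernel, its bandwidth, and the circle-shaped data are illustrative choices (not from the slides); the key steps are the kernel matrix, centering in feature space, and an eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(1)

# Points on a circle: a nonlinear 1-D manifold in R^2, plus noise.
t = rng.uniform(0, 2 * np.pi, 150)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((150, 2))

# RBF (Gaussian) kernel matrix -- the implicit nonlinear embedding.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * 0.5 ** 2))

# Center the kernel in feature space, then diagonalize (standard kernel PCA).
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J
w, V = np.linalg.eigh(Kc)                   # eigenvalues in ascending order
Z = V[:, -2:] * np.sqrt(np.abs(w[-2:]))     # top-2 kernel principal components
```

The choice of kernel plays exactly the role of the "what embedding should be used?" question above: it fixes the feature space in which the data are assumed to be linear.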

Nonlinear dimensionality reduction Often data lie on a manifold Unfolding the manifold: LLE, Isomap Example on hand gestures
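As a concrete illustration of manifold unfolding, here is a minimal Isomap-style sketch in plain NumPy: build a k-nearest-neighbor graph, approximate geodesic distances by shortest paths, and apply classical MDS to unfold a curved arc into a line. The data, the neighborhood size, and all names are assumptions for the example; real implementations (LLE, Isomap) are more elaborate.

```python
import numpy as np

# A curved arc in R^2: intrinsically 1-D, but not a linear subspace.
t = np.linspace(0.0, np.pi, 100)
X = np.c_[np.cos(t), np.sin(t)]

# 1) Build a k-nearest-neighbor graph weighted by Euclidean distances.
n, k = len(X), 6
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
G = np.full((n, n), np.inf)
np.fill_diagonal(G, 0.0)
nbrs = np.argsort(D, axis=1)[:, 1:k + 1]
for i in range(n):
    G[i, nbrs[i]] = D[i, nbrs[i]]
G = np.minimum(G, G.T)                      # symmetrize the graph

# 2) Geodesic distances = shortest paths in the graph (Floyd-Warshall).
for m in range(n):
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])

# 3) Classical MDS on the geodesic distances "unfolds" the arc into a line.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (G ** 2) @ J
w, V = np.linalg.eigh(B)
Y = V[:, -1] * np.sqrt(w[-1])               # 1-D unfolded coordinates
```

Because geodesic (not straight-line) distances are preserved, the 1-D coordinates `Y` vary monotonically along the arc.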

Applications of NLDR
Faces under different expressions
Lips under different expressions

Data segmentation and clustering
Given a set of points, separate them into multiple groups
Discriminative methods: learn the boundary between groups
Generative methods: learn a mixture model using, e.g., Expectation Maximization (EM)
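A generative-method example: EM for a two-component 1-D Gaussian mixture, written out in NumPy. The data, the initialization, and the iteration count are illustrative assumptions; the E-step/M-step structure is the standard EM alternation referred to above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two 1-D Gaussian clusters; the labels are unknown to the algorithm.
x = np.r_[rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 0.8, 300)]

# EM for a 2-component Gaussian mixture (generative clustering).
mu = np.array([-1.0, 1.0])          # rough initial means
sigma = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])            # mixing weights
for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
             / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: refit each Gaussian with responsibility-weighted data.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(x)
```

The alternation makes the "chicken-and-egg" structure explicit: the E-step segments the data given the current models, and the M-step refits the models given the current segmentation.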

Generalized Principal Component Analysis
Polynomials can be expressed linearly in terms of a set of coefficients by using a polynomial embedding called the Veronese map
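To make the Veronese map concrete, here is a small sketch of the GPCA idea for two lines through the origin in the plane: embed the data with the degree-2 Veronese map, recover the coefficients of the vanishing polynomial from the null space of the embedded data matrix, and differentiate the polynomial at one point per line to obtain each line's normal. The specific lines and names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Points on two lines through the origin in R^2.
t = rng.standard_normal(100)
L1 = np.outer(t[:50], [1.0, 2.0])     # line with direction (1, 2)
L2 = np.outer(t[50:], [3.0, -1.0])    # line with direction (3, -1)
X = np.vstack([L1, L2])

# Degree-2 Veronese embedding: x -> (x1^2, x1*x2, x2^2).
V = np.c_[X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2]

# The polynomial p(x) = c . nu(x) vanishing on BOTH lines comes from the
# null space of the embedded data matrix (no prior clustering needed).
_, _, Vt = np.linalg.svd(V)
c = Vt[-1]

# Differentiating p at a point on line i gives that line's normal vector.
def grad(x):
    return np.array([2 * c[0] * x[0] + c[1] * x[1],
                     c[1] * x[0] + 2 * c[2] * x[1]])

p1 = L1[np.argmax(np.linalg.norm(L1, axis=1))]   # points well away from 0,
p2 = L2[np.argmax(np.linalg.norm(L2, axis=1))]   # where the gradient is stable
n1 = grad(p1); n1 /= np.linalg.norm(n1)
n2 = grad(p2); n2 /= np.linalg.norm(n2)
```

Each recovered normal is orthogonal to its line's direction (up to sign), which is exactly the "fit polynomials, then differentiate" recipe from the abstract.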

Clustering data on non-Euclidean spaces
- Mixtures of linear spaces
- Mixtures of algebraic varieties
- Mixtures of Lie groups
"Chicken-and-egg" problems:
- Given the segmentation, estimate the models
- Given the models, segment the data
- Initialization?
Need to combine algebra/geometry, dynamics, and statistics

Applications of GPCA in vision and control
- Geometry: vanishing points, image compression
- Segmentation: intensity (black-white), texture, motion (2-D, 3-D), scene (host-guest)
- Recognition: faces (eigenfaces), man vs. woman, human gaits
- Dynamic textures: water-steam
- Biomedical imaging
- Hybrid systems identification
One of the reasons we are interested in GPCA is that there are various problems in computer vision that have to do with the simultaneous estimation of multiple models from visual data. Consider, for example, segmenting an image into different regions based on intensity, texture, or motion information. Consider also the recognition of various static and dynamic processes, such as human faces or human gaits, from visual data. Although in this talk I will only consider the first class of problems, it turns out that, at least from a mathematical perspective, all of the above problems can be converted into the following generalization of principal component analysis, which we conveniently refer to as GPCA.

Segmentation problems in dynamic vision Segmentation of video and dynamic textures Segmentation of rigid-body motions

What are hybrid systems?
Previous work on hybrid systems:
- Modeling, analysis, stability
- Control: reachability analysis, optimal control
- Verification: safety
In applications, one also needs to worry about observability and identifiability
Modeling of a UAV, dynamic textures, human gaits

Identification of hybrid systems
Given input/output data, identify:
- The number of discrete states
- The model parameters of the linear systems
- The hybrid state (continuous and discrete)
- The switching parameters (partition of the state space)
A challenging "chicken-and-egg" problem:
- Given the switching times, one can estimate the model parameters
- Given the model parameters, one can estimate the hybrid state
- Given all of the above, one can estimate the switching parameters
- Iterate
Difficulties:
- Very sensitive to initialization
- Needs a minimum dwell time
- Does not use all the data
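The iterative scheme above can be illustrated with a toy switched-regression example: two linear models y = a_i * x with a hidden mode, identified by alternating between mode assignment and per-mode least squares. The data, initialization, and model class are illustrative assumptions; real switched-ARX identification handles dynamics, dwell times, and unknown model counts, and this sketch also shows the initialization sensitivity noted above.

```python
import numpy as np

rng = np.random.default_rng(5)

# Data from two linear models y = a_i * x; the mode switches are hidden.
x = rng.uniform(-1, 1, 400)
mode = rng.integers(0, 2, 400)
a_true = np.array([2.0, -1.0])
y = a_true[mode] * x + 0.01 * rng.standard_normal(400)

# Alternating identification: assign each sample to its best-fitting model,
# then refit each model by least squares on its assigned samples.
a = np.array([1.0, -0.5])                      # rough initialization
for _ in range(20):
    err = (y[:, None] - np.outer(x, a)) ** 2   # residual under each model
    z = err.argmin(axis=1)                     # discrete-mode assignment
    for i in range(2):
        m = z == i
        a[i] = (x[m] @ y[m]) / (x[m] @ x[m])   # 1-D least-squares refit
```

Samples near x = 0 are ambiguous (any model fits them), which is a small-scale version of why such alternating schemes need good initialization and enough excitation.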

For more information: Vision Lab @ Johns Hopkins University
Thank You!