Gaussian Process Dynamical Models
J. M. Wang, D. J. Fleet, A. Hertzmann
Dan Grollman, RLAB, 3/21/2007

Foci
Want low-dimensional representations of high-dimensional data
– Find the latent space
– Dimensionality reduction
Want to model the dynamics of the system
– How it evolves over time
– Not just distance in input space

Domain
Human motion (gait)
– 62 dimensions, video
– Regular DR treats each frame (pose) separately
– Want to include how poses relate over time

Non-linear Dynamical Model
– State evolution in latent space
– Observation (latent -> observed)
Model:
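The equations on this slide were images and did not survive the transcript; a reconstruction from the GPDM paper, up to notation, is:

x_t = f(x_{t-1}; A) + n_{x,t},   with   f(x; A) = \sum_i a_i \phi_i(x)
y_t = g(x_t; B) + n_{y,t},       with   g(x; B) = \sum_j b_j \psi_j(x)

Here x_t is the latent state, y_t the observed pose, \phi_i and \psi_j are basis functions with weights A and B, and n_{x,t}, n_{y,t} are zero-mean Gaussian noise.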

Issues
Need parameters A and B, and basis functions!
– and enough data to constrain them
Why not marginalize them out?
– Can do it in closed form

Latent -> observed mapping
Place an isotropic Gaussian prior on B and marginalize over g:
Use an RBF kernel
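The resulting marginal likelihood, reconstructed from the GPDM paper (a sketch, assuming mean-subtracted observations; the exact parameterization on the slide may differ):

p(Y | X, \bar\beta) = \frac{1}{\sqrt{(2\pi)^{ND} |K_Y|^{D}}} \exp\left( -\frac{1}{2} \mathrm{tr}\left( K_Y^{-1} Y Y^\top \right) \right)

with an RBF kernel plus a noise term:

(K_Y)_{ij} = k_Y(x_i, x_j) = \beta_1 \exp\left( -\frac{\beta_2}{2} \| x_i - x_j \|^2 \right) + \beta_3^{-1} \delta_{ij}

where Y is the N x D matrix of observations and X the N x d matrix of latent coordinates.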

Latent dynamics
Use 1st-order Markov dynamics; marginalize over A with an isotropic Gaussian prior:
Use a linear + RBF kernel
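Again reconstructed from the paper (a sketch, up to the exact parameterization):

p(X | \bar\alpha) = \frac{p(x_1)}{\sqrt{(2\pi)^{(N-1)d} |K_X|^{d}}} \exp\left( -\frac{1}{2} \mathrm{tr}\left( K_X^{-1} X_{2:N} X_{2:N}^\top \right) \right)

with the linear + RBF dynamics kernel

k_X(x, x') = \alpha_1 \exp\left( -\frac{\alpha_2}{2} \| x - x' \|^2 \right) + \alpha_3\, x^\top x' + \alpha_4^{-1} \delta_{x,x'}

where K_X is built from x_1, ..., x_{N-1} and X_{2:N} stacks x_2, ..., x_N.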

Final Model
All coordinates are jointly correlated, as are poses.
Simple, no?
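Putting the two marginals together gives the joint model (a sketch, writing the paper's simple priors on the kernel hyperparameters generically as p(\bar\alpha) and p(\bar\beta)):

p(X, Y, \bar\alpha, \bar\beta) = p(Y | X, \bar\beta)\; p(X | \bar\alpha)\; p(\bar\alpha)\; p(\bar\beta)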

Learning
All you have is Y, the observed variables (62-D human pose data)
– Minimize the negative log-posterior numerically
– Initialize the latent coordinates with PCA
– Use a 3D latent space, because 2D is unstable
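A minimal sketch, in Python, of the learning step as described above (my own illustration, not the authors' implementation; the helper names rbf_kernel, lin_rbf_kernel, learn_gpdm and the simple hyperparameter priors are assumptions):

```python
# Sketch of GPDM-style learning: initialize the latent coordinates with PCA,
# then numerically minimize a negative log-posterior over the latent
# coordinates and the kernel hyperparameters.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, log_beta):
    # k_Y(x, x') = b1 * exp(-b2/2 * ||x - x'||^2) + delta / b3
    b1, b2, b3 = np.exp(log_beta)  # log-parameterized to keep values positive
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return b1 * np.exp(-0.5 * b2 * sq) + np.eye(len(X)) / b3

def lin_rbf_kernel(X, log_alpha):
    # k_X(x, x') = a1 * exp(-a2/2 * ||x - x'||^2) + a3 * x.x' + delta / a4
    a1, a2, a3, a4 = np.exp(log_alpha)
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return a1 * np.exp(-0.5 * a2 * sq) + a3 * (X @ X.T) + np.eye(len(X)) / a4

def neg_log_posterior(params, Y, d):
    N, D = Y.shape
    X = params[:N * d].reshape(N, d)
    log_beta = params[N * d:N * d + 3]
    log_alpha = params[N * d + 3:]
    KY = rbf_kernel(X, log_beta)            # latent -> observed GP
    KX = lin_rbf_kernel(X[:-1], log_alpha)  # 1st-order Markov dynamics GP
    Xout = X[1:]
    nlp = 0.5 * (D * np.linalg.slogdet(KY)[1]
                 + np.trace(np.linalg.solve(KY, Y @ Y.T)))
    nlp += 0.5 * (d * np.linalg.slogdet(KX)[1]
                  + np.trace(np.linalg.solve(KX, Xout @ Xout.T)))
    # priors p(param) proportional to 1/param become sums of log-hyperparameters
    return nlp + np.sum(log_beta) + np.sum(log_alpha)

def learn_gpdm(Y, d=3):
    # PCA initialization of the latent coordinates (3D, as in the slides)
    Yc = Y - Y.mean(0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    X0 = Yc @ Vt[:d].T
    params0 = np.concatenate([X0.ravel(), np.zeros(3), np.zeros(4)])
    # gradient-free here for brevity; analytic gradients are used in practice
    res = minimize(neg_log_posterior, params0, args=(Y, d), method="L-BFGS-B")
    return res.x[:Y.shape[0] * d].reshape(-1, d), res

# Usage: X_latent, info = learn_gpdm(Y)   # Y is an N x 62 matrix of poses
```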

Results
[Figure: PCA vs. GPDM latent-space results]

New motion generation
[Figure: motion generated with the linear + RBF dynamics kernel vs. the linear kernel; original vs. new sequences]

Other uses
Forecasting
– Use mean-prediction (motion generation) to look ahead
Missing data
– Use mean-prediction to fill in gaps
– Must increase uncertainty in the training data via downsampling
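For reference, the mean-prediction used here is the standard GP predictive mean of the dynamics model (a sketch in the notation introduced earlier):

\mu_X(x) = X_{2:N}^\top K_X^{-1} k_X(x)

where k_X(x) is the vector of kernel values between x and the training latent points; iterating x_{t+1} = \mu_X(x_t) from a starting state rolls the motion forward.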

Points / Issues
Dimensionality of the latent space is fixed by the PCA initialization.
– What if it is unknown?
Scalability
– Because all points are jointly correlated, it is difficult to make the model sparse.

Want more?