
Presentation transcript:

Multifactor Gaussian Process Models for Style-Content Separation
Jack M. Wang, David J. Fleet, Aaron Hertzmann
Department of Computer Science, University of Toronto, Canada
{jmwang, fleet,

Multifactor GPs
Suppose now we wish to model different mappings for different styles. We add a latent style vector s along with x, and define the following mapping, in which the output depends linearly on style for fixed x:

y = \sum_i s_i \, g_i(x) + \epsilon

where each g_i(x) is a mapping with weight vector w_i, and \epsilon represents additive i.i.d. Gaussian noise with zero mean and variance \beta^{-1}. Fixing just the input s specializes the mapping to a specific style. If we hold x and s fixed, then under Gaussian priors on the weights the output is again zero-mean Gaussian, with covariance:

k((x, s), (x', s')) = (s^\top s') \, k_x(x, x') + \beta^{-1} \delta

The covariance function is a product of kernel functions, one for each set of factors. This generalizes to M factors: suppose we wish to model the effects of \chi = \{x^{(1)}, \ldots, x^{(M)}\} on the output. Under the Gaussian priors on the model parameters, the covariance function is given by

k(\chi, \chi') = \prod_{m=1}^{M} k^{(m)}(x^{(m)}, x'^{(m)}) + \beta^{-1} \delta

Gaussian processes (GP)
Suppose we have a 1D output y given by a linear combination of basis functions of the input x:

y = \sum_j w_j \, \phi_j(x) + \epsilon

If we assume an isotropic Gaussian prior on the model parameters, then y | x is zero-mean Gaussian with covariance:

k(x, x') = \alpha^{-1} \phi(x)^\top \phi(x') + \beta^{-1} \delta_{x,x'}

The covariance is a kernel function, which fully specifies the regression model. Popular choices of kernel function:
– Bayesian linear regression: k(x, x') = \alpha^{-1} x^\top x' + \beta^{-1} \delta_{x,x'}
– RBF regression: k(x, x') = \exp(-\frac{\gamma}{2} \|x - x'\|^2) + \beta^{-1} \delta_{x,x'}

Motion synthesis
Given new style parameters, a Gaussian process prediction distribution is defined w.r.t. content. Running the dynamics forward in content space generates motions in the new style.

Latent factors
We use 6 training sequences, 314 frames of data, performed by 3 subjects in 3 gaits with some combinations missing. Poses from the same sequence share the same style factors. In particular, poses from the same row share the same gait vector (g), and poses from the same column share the same subject vector (s). Since the motions are periodic, we constrain the content factor (x) to lie on a 2D circle. We do not assume poses are time-warped to match in the content space; instead, we parameterize each sequence by \theta and \Delta\theta, which are learned from data.

Approach
We introduce a multifactor model for learning distributions of styles of human motion. We parameterize the space of human motion styles by a small number of low-dimensional factors, such as identity and gait, where the dependence on each individual factor may be nonlinear. Our multifactor Gaussian process model can be viewed as a special class of the Gaussian process latent variable model (GP-LVM) (Lawrence, 2005), as well as a Bayesian generalization of multilinear models (Vasilescu & Terzopoulos, 2002).

[Figure: factors 1 … N (e.g., gait, phase, identity, gender) jointly generate the data (pose).]

Multilinear analysis (Vasilescu & Terzopoulos, 2002): the output is a multilinear function of the factors,

y = W \times_1 x^{(1)} \times_2 x^{(2)} \cdots \times_M x^{(M)}

where W is a tensor of model parameters and \times_m denotes the mode-m product.

Time-series prediction
Given n frames of motion and a learned dynamical model, predict the next k frames. Without a model of style, the test data must be in the same style as the training data (i.e., the same person moving in the same gait). We tested single-factor and multifactor versions of:
– GPDM (Wang et al., 2006)
– B-GPDM (Urtasun et al., 2006)
Conclusion: multifactor models improve prediction results.
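To make the product-kernel construction concrete, here is a minimal NumPy sketch of the covariance described above: linear kernels for the style factors, an RBF kernel for the content, multiplied together, plus additive noise. This is an illustrative sketch, not the authors' code; the function names and the hyperparameter values gamma and beta are placeholders.

```python
import numpy as np

def k_rbf(X, Xp, gamma=1.0):
    """RBF (content) kernel: k(x, x') = exp(-gamma/2 * ||x - x'||^2)."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Xp**2, axis=1)[None, :] - 2.0 * X @ Xp.T
    return np.exp(-0.5 * gamma * sq)

def k_linear(S, Sp):
    """Linear (style) kernel: k(s, s') = s^T s'."""
    return S @ Sp.T

def multifactor_kernel(X, S, G, Xp, Sp, Gp, gamma=1.0, beta=100.0):
    """Product of per-factor kernels, plus i.i.d. noise with variance 1/beta."""
    K = k_linear(S, Sp) * k_linear(G, Gp) * k_rbf(X, Xp, gamma)
    if X is Xp:  # noise enters only the covariance of a set with itself
        K = K + np.eye(len(X)) / beta
    return K

# Toy usage: 10 poses with 2D content x, 3D subject vector s, 3D gait vector g.
rng = np.random.default_rng(0)
X, S, G = rng.normal(size=(10, 2)), rng.normal(size=(10, 3)), rng.normal(size=(10, 3))
K = multifactor_kernel(X, S, G, X, S, G)
y = rng.multivariate_normal(np.zeros(10), K)  # one sample of a single output DOF
```

Because the elementwise product of positive semidefinite matrices is itself positive semidefinite, the product kernel is valid whenever each per-factor kernel is.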
Application: A locomotion model
We focus on periodic human locomotion, and model each pose in a motion sequence as arising from the interaction of three factors:
– s: identity of the subject performing the motion
– g: gait of the motion (walk, run, stride)
– x: current state of the motion (evolves w.r.t. time)
We draw on experience from motion interpolation: we apply linear kernels to the style parameters (s, g) and an RBF kernel to the content (x):

k_d((x, s, g), (x', s', g')) = \theta_d \, (s^\top s')(g^\top g') \exp(-\frac{\gamma}{2} \|x - x'\|^2) + \beta^{-1} \delta

In addition, \theta_d models the different variance in each of the output degrees of freedom. This defines a Gaussian process for each pose DOF, and the DOFs are assumed independent conditioned on the inputs. The inputs are learned by maximizing the GP-LVM likelihood function (Lawrence, 2005).

Introduction
Using prior models of human motion to constrain the inference of 3D pose sequences is a popular approach to improving monocular people tracking, as well as to simplifying the process of character animation. The availability of motion capture devices in recent years enables such models to be learned from data, and learning models that generalize well to novel motions has become a major challenge. One of the main difficulties in this domain is that the training data and test data typically come from related but distinct distributions. For example, we would often like to learn a prior model of locomotion from the motion capture data of a few individuals performing a few gaits (e.g., walking and running). Such a prior model would then be used to track a new individual, or to generate plausible animations of a related but new gait not included in the training database. Due to the natural variations in how different individuals perform different gaits (variations we broadly refer to as style), learning a model that can represent and generalize over the space of human motions is not straightforward. Nonetheless, it has long been observed that interpolating motion capture data yields plausible new motions, and we attempt to build motion models that can generalize in style.

[Figure: training data as a gait-by-subject grid (stride, run, walk vs. subject 1, subject 2, subject 3); "?" marks the held-out gait–subject combinations the model must fill in.]

Summary
Proposed a Bayesian multifactor model for style-content separation by generalizing multilinear models using Gaussian processes. The model is specified by a product kernel, where each factor is kernelized separately. We learned a locomotion model from very little data, capturing stylistic variations. Explicit models of stylistic variations improve dynamical prediction results on the GPDM.
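The synthesis step described under "Motion synthesis" (fix the style factors, move the content forward, read out the GP prediction) amounts to a standard GP posterior mean. Below is a hedged sketch of that computation: it reuses the hypothetical multifactor_kernel and the toy X, S, G, rng from the earlier block, and Y is random stand-in data rather than real motion capture.

```python
import numpy as np  # reuses multifactor_kernel, X, S, G, rng from the sketch above

def gp_predict_pose(Y, X, S, G, Xq, Sq, Gq, gamma=1.0, beta=100.0):
    """Posterior mean at query inputs: K(query, train) @ K(train, train)^-1 @ Y."""
    K = multifactor_kernel(X, S, G, X, S, G, gamma, beta)      # training covariance
    Kq = multifactor_kernel(Xq, Sq, Gq, X, S, G, gamma, beta)  # query-vs-training
    return Kq @ np.linalg.solve(K, Y)                          # (n_query, n_dofs)

# Synthesis in a chosen style: fix a subject vector and a gait vector, then
# sweep the content x around its 2D circle (the periodic-motion constraint).
theta = np.linspace(0.0, 2.0 * np.pi, 50)
Xq = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # content on a 2D circle
Sq = np.tile(S[0], (50, 1))                             # subject vector s*
Gq = np.tile(G[1], (50, 1))                             # gait vector g*
Y = rng.normal(size=(10, 5))   # stand-in for 10 training poses with 5 DOFs
poses = gp_predict_pose(Y, X, S, G, Xq, Sq, Gq)         # 50 synthesized poses
```

In the actual system the content trajectory comes from running a learned dynamical model (e.g., a GPDM) forward in content space; the circular sweep of x here is only a stand-in for those dynamics.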