1
Learning Human Pose and Motion Models for Animation Aaron Hertzmann University of Toronto
2
Animation is maturing … but it's still hard to create
3
Keyframe animation
4
[Figure: keyframe poses q_1, q_2, q_3 interpolated by a continuous curve q(t); image from http://www.cadtutor.net/dd/bryce/anim/anim.html]
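The in-betweening of keyframes into a continuous curve q(t) can be sketched with a cubic spline; the keyframe times and pose values below are made up for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Keyframe times and poses (each row: a small joint-angle vector at that key).
key_times = np.array([0.0, 1.0, 2.0, 3.0])
key_poses = np.array([[0.0, 0.2],
                      [0.5, 0.1],
                      [0.3, 0.4],
                      [0.0, 0.0]])

# Fit one smooth curve q(t) through all keyframes.
q = CubicSpline(key_times, key_poses, axis=0)

# Sample the in-between poses at 24 fps.
t = np.arange(0.0, 3.0, 1.0 / 24.0)
frames = q(t)
```

The spline passes exactly through each keyframe and fills in smooth motion between them, which is the core of keyframe animation tooling.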
5
Characters are very complex. Woody:
- 200 facial controls
- 700 controls in his body
http://www.pbs.org/wgbh/nova/specialfx2/mcqueen.html
6
Motion capture [Images from NYU and UW]
7
Motion capture
8
Mocap is not a panacea
9
Problem: animation is very time-consuming. Fine for big studios; a problem for:
10
Goal: model human motion. Which motions are likely? Applications: computer animation, computer vision.
11
Related work: physical models
- Accurate, in principle
- Too complex to work with (but see [Liu, Hertzmann, Popović 2005])
- Computationally expensive
12
Related work: motion graphs. Input: raw motion capture, assembled into a "motion graph" (slide from J. Lee).
13
Approach: statistical models of motion. Learn a PDF over motions, and synthesize from this PDF [Brand and Hertzmann 1999]. What PDF do we use?
14
Style-Based Inverse Kinematics with: Keith Grochow, Steve Martin, Zoran Popović
15
Motivation
16
Body parameterization
- Pose at time t: q_t
  - Root position/orientation (6 DOFs)
  - Joint angles (29 DOFs)
- Motion: X = [q_1, …, q_T]
17
Forward kinematics: the FK map takes a pose q_t to 3D joint positions [x_i, y_i, z_i]_t.
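A minimal sketch of an FK map. Real characters use 3D joint hierarchies with the 6-DOF root described above; this planar two-link chain is only illustrative:

```python
import numpy as np

def forward_kinematics(angles, lengths):
    """Map joint angles q_t to 3D joint positions (planar chain, z = 0)."""
    positions = [np.zeros(3)]       # root at the origin
    total = 0.0
    for theta, length in zip(angles, lengths):
        total += theta              # accumulate rotation along the chain
        step = length * np.array([np.cos(total), np.sin(total), 0.0])
        positions.append(positions[-1] + step)
    return np.array(positions)

# Two links of length 1, each bent 90 degrees.
pts = forward_kinematics([np.pi / 2, np.pi / 2], [1.0, 1.0])
```

Each joint's world position depends on all angles earlier in the chain, which is why constraints on end-effector positions couple many DOFs at once.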
18
Problem statement: generate a character pose in a chosen style, subject to constraints. [Figure: constraints acting on the degrees of freedom (DOFs) q]
19
Approach [Diagram: off-line learning maps motion data to a style model; real-time pose synthesis combines the style with constraints to produce a pose]
20
Style representation
- Objective function
  - given a pose, evaluate how well it matches a style
  - allow any pose
- Probability distribution function (PDF)
  - principled way of automatically learning the style
21
Features: y(q) = [q, orientation(q), velocity(q)] = [q_0 q_1 q_2 … r_0 r_1 r_2 … v_0 v_1 v_2 …]
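A feature vector of this form could be assembled as below; the DOF layout (root position in the first three entries, root orientation in the next three) and the finite-difference velocity are assumptions for illustration:

```python
import numpy as np

def pose_features(q, q_prev, dt=1.0 / 120.0):
    """Stack joint angles with root orientation and a finite-difference velocity.

    Assumed layout: root position q[0:3], root orientation q[3:6],
    joint angles afterwards (35 DOFs total, matching 6 + 29).
    """
    orientation = q[3:6]
    velocity = (q - q_prev) / dt
    return np.concatenate([q, orientation, velocity])

q = np.linspace(0.0, 1.0, 35)        # 6 root + 29 joint DOFs
feat = pose_features(q, q - 0.012)   # previous frame differs by 0.012 per DOF
```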
22
Goals for the PDF
- Learn the PDF from any data
- Smooth and descriptive
- Minimal parameter tuning
- Real-time synthesis
23
Mixtures-of-Gaussians
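One candidate PDF from the slides is a mixture of Gaussians. A minimal sketch of evaluating such a density at a pose-feature vector, using isotropic components (all numbers illustrative):

```python
import numpy as np

def mog_pdf(y, weights, means, variances):
    """Density of a mixture of isotropic Gaussians at feature vector y."""
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        d = len(mu)
        diff = y - mu
        norm = (2 * np.pi * var) ** (-d / 2)        # isotropic normalizer
        total += w * norm * np.exp(-0.5 * np.dot(diff, diff) / var)
    return total
```

In practice mixture models need many components and careful tuning to cover pose space, which motivates the GPLVM on the next slide.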
24
Gaussian Process Latent Variable Model (GPLVM) [Lawrence 2004]
[Figure: latent space (x_1, x_2) mapped to feature space (y_1, y_2, y_3) by a GP]
x ~ N(0, I), y ~ GP(x; θ) (θ: kernel hyperparameters)
Learning: arg max p(X, θ | Y) = arg max p(Y | X, θ) p(X)
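The GPLVM learning objective arg max p(Y | X, θ) p(X) can be sketched as a negative log-likelihood minimized over the latent positions X; the RBF kernel form and hyperparameter values below are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0, noise=1e-3):
    """RBF kernel matrix over latent points, with additive jitter."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * gamma * sq) + noise * np.eye(len(X))

def gplvm_neg_log_lik(X, Y, gamma=1.0):
    """-log p(Y | X): the data term minimized over latent positions X."""
    N, D = Y.shape
    K = rbf_kernel(X, gamma)
    _, logdet = np.linalg.slogdet(K)
    return (0.5 * D * logdet
            + 0.5 * np.trace(np.linalg.solve(K, Y @ Y.T))
            + 0.5 * N * D * np.log(2 * np.pi))

rng = np.random.default_rng(0)
Y = rng.normal(size=(10, 5))    # 10 poses, 5 features
X = rng.normal(size=(10, 2))    # initial latent positions (e.g. from PCA)
nll = gplvm_neg_log_lik(X, Y)
```

In a real implementation this objective (plus the prior on X) would be handed to a gradient-based optimizer.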
25
Scaled outputs: different DOFs have different "importances". Solution: scale the RBF kernel function k(x,x') per output dimension, k_i(x,x') = k(x,x') / w_i^2. Equivalently: learn x → W y, where W = diag(w_1, w_2, …, w_D).
26
Style learning [Figure: feature space (y_1, y_2, y_3) and learned latent space (x_1, x_2)]
27
Precision in latent space [Plot: predictive variance σ²(x) over the latent space]
28
Pose synthesis: arg min_{x,q} -ln p(y(q), x | X, Y, θ) s.t. C(q) = 0 [Figure: latent space (x_1, x_2) and feature space (y_1, y_2, y_3)]
29
Pose synthesis: arg min_{x,q} -ln p(y(q), x | X, Y, θ) s.t. C(q) = 0. [Figure: constraints acting on the degrees of freedom (DOFs) q]
30
SGPLVM objective function [Figure: latent space (x_1, x_2) and feature space (y_1, y_2, y_3)]
31
Baseball Pitch
32
Track Start
33
Jump Shot
34
The active set [Figure: all training data vs. the active-set subset of the training data]
35
Annealing [Figures: original style; high variance; medium variance; original style]
36
Style interpolation: given two styles 1 and 2, can we "interpolate" them? Approach: interpolate in the log-domain.
37
Style interpolation [Figure: the two styles blended with weights (1-s) and s]
38
Style interpolation in log space [Figure: negative log-densities blended with weights (1-s) and s]
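For Gaussian styles, interpolating negative log-densities with weights (1-s) and s gives another quadratic, i.e. another Gaussian with blended precision and mean. A minimal 1-D sketch (the style means and variances are illustrative):

```python
import numpy as np

def blend_gaussians(mu1, var1, mu2, var2, s):
    """Log-domain blend of two 1-D Gaussian styles with weight s."""
    prec = (1 - s) / var1 + s / var2                     # blended precision
    mu = ((1 - s) * mu1 / var1 + s * mu2 / var2) / prec  # blended mean
    return mu, 1.0 / prec

# Halfway between style 1 (mean 0) and style 2 (mean 4), equal variances.
mu, var = blend_gaussians(0.0, 1.0, 4.0, 1.0, 0.5)
```

Blending in the log-domain keeps the result a proper density, whereas averaging the densities themselves would produce a bimodal mixture rather than an intermediate style.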
39
Applications
40
Interactive Posing
43
Multiple motion styles
44
Realtime Motion Capture
45
Style Interpolation
46
Trajectory Keyframing
47
Posing from an Image
48
Modeling motion
- The GPLVM doesn't model motions
- Velocity features are a hack
- How do we model and learn dynamics?
49
Gaussian Process Dynamical Models with: David Fleet, Jack Wang
50
Dynamical models [Diagram: x_t → x_{t+1}]
51
Dynamical models
- Hidden Markov Model (HMM)
- Linear Dynamical Systems (LDS) [van Overschee et al. '94; Doretto et al. '01]
- Switching LDS [Ghahramani and Hinton '98; Pavlovic et al. '00; Li et al. '02]
- Nonlinear Dynamical Systems [e.g., Ghahramani and Roweis '00]
52
Gaussian Process Dynamical Model (GPDM)
Latent dynamical model:
- x_t = f(x_{t-1}; A) + n_{x,t} (latent dynamics)
- y_t = g(x_t; B) + n_{y,t} (pose reconstruction)
Assume IID Gaussian noise, with Gaussian priors on the mapping weights A and B. Marginalize out A and B, and then optimize the latent positions to simultaneously minimize pose reconstruction error and (dynamic) prediction error on the training data.
53
Reconstruction. The data likelihood for the reconstruction mapping, given centered inputs Y, has the form:
p(Y | X, β, W) = |W|^N / sqrt((2π)^{ND} |K_Y|^D) exp(-(1/2) tr(K_Y^{-1} Y W^2 Y^T))
where the j-th column of Y contains the j-th dimension of each training pose, K_Y is a kernel matrix with entries (K_Y)_{ij} = k_Y(x_i, x_j) for kernel function k_Y (with hyperparameters β), and W = diag(w_1, …, w_D) scales the different pose dimensions.
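The reconstruction log-likelihood can be sketched in code; the RBF kernel and its hyperparameter values (gamma, noise) are illustrative assumptions:

```python
import numpy as np

def recon_log_lik(X, Y, w, gamma=1.0, noise=1e-3):
    """log p(Y | X, beta, W) with an assumed RBF kernel k_Y."""
    N, D = Y.shape
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * gamma * sq) + noise * np.eye(N)    # K_Y
    YW = Y * w                                           # applies W = diag(w)
    _, logdet = np.linalg.slogdet(K)
    return (N * np.sum(np.log(w))                        # N log|W|
            - 0.5 * D * logdet                           # -(D/2) log|K_Y|
            - 0.5 * np.trace(np.linalg.solve(K, YW @ YW.T))
            - 0.5 * N * D * np.log(2 * np.pi))

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))     # latent trajectory
Y = rng.normal(size=(8, 5))     # centered pose features
ll = recon_log_lik(X, Y, w=np.ones(5))
```

Note that YW @ YW.T equals Y W² Yᵀ, matching the trace term in the likelihood.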
55
Dynamics. The latent dynamical process on X has a similar form:
p(X | α) = p(x_1) / sqrt((2π)^{(N-1)d} |K_X|^d) exp(-(1/2) tr(K_X^{-1} X_out X_out^T)), with X_out = [x_2, …, x_N]^T
where K_X is a kernel matrix defined on [x_1, …, x_{N-1}] by kernel function k_X with hyperparameters α.
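The dynamics term over the latent trajectory can be sketched the same way as the reconstruction term; the RBF kernel and hyperparameter values are again illustrative, and the p(x_1) prior factor is dropped for brevity:

```python
import numpy as np

def dyn_log_lik(X, gamma=1.0, noise=1e-3):
    """log p(x_2..x_N | x_1), up to the p(x_1) prior term."""
    Xin, Xout = X[:-1], X[1:]          # (input, output) pairs along the trajectory
    M, d = Xout.shape                  # M = N - 1 prediction pairs
    sq = np.sum((Xin[:, None, :] - Xin[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * gamma * sq) + noise * np.eye(M)    # K_X
    _, logdet = np.linalg.slogdet(K)
    return (-0.5 * d * logdet
            - 0.5 * np.trace(np.linalg.solve(K, Xout @ Xout.T))
            - 0.5 * M * d * np.log(2 * np.pi))

# A smooth toy latent trajectory (one loop in 2-D latent space).
t = np.linspace(0.0, 2 * np.pi, 20)
X = np.column_stack([np.cos(t), np.sin(t)])
ll = dyn_log_lik(X)
```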
56
Markov property. Remark: conditioned on the dynamical mapping, the model is 1st-order Markov, but marginalizing the mapping out introduces longer temporal dependence.
57
Learning. GPDM posterior:
p(X, α, β | Y) ∝ p(Y | X, β) p(X | α) p(α) p(β)
(reconstruction likelihood × dynamics likelihood × priors, with training motions Y, latent trajectories X, and hyperparameters α, β). To estimate the latent coordinates and kernel parameters, we minimize -ln p(X, α, β | Y) with respect to X, α, and β.
58
Motion capture data: ~2.5 gait cycles (157 frames); 56 joint angles + 3 global translational velocities + 3 global orientations, from the CMU motion capture database. [Figure: learned latent coordinates (1st-order prediction, RBF kernel)]
59
3D GPLVM latent coordinates [Figure: large "jumps" in latent space]
60
Reconstruction variance: volume visualization of the predictive variance (1st-order prediction, RBF kernel)
61
Motion simulation [Figures: animation of the mean motion (200-step sequence) from an initial state; random trajectories from MCMC (~1 gait cycle, 60 steps)]
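Mean-motion simulation iterates the GP posterior mean in latent space. The sketch below uses toy (input, output) training pairs from an identity map, so the rollout should simply stay where it starts; kernel hyperparameters are illustrative:

```python
import numpy as np

def gp_mean_rollout(Xin, Xout, x0, steps, gamma=10.0, noise=1e-3):
    """Iterate the GP mean prediction x_{t+1} = k(x_t, Xin)^T K^{-1} Xout."""
    sq = np.sum((Xin[:, None, :] - Xin[None, :, :]) ** 2, axis=-1)
    K = np.exp(-0.5 * gamma * sq) + noise * np.eye(len(Xin))
    alpha = np.linalg.solve(K, Xout)            # K^{-1} Xout, computed once
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        k = np.exp(-0.5 * gamma * np.sum((Xin - traj[-1]) ** 2, axis=-1))
        traj.append(k @ alpha)
    return np.array(traj)

# Toy "dynamics": training pairs sampled from the identity map on [0, 1].
Xin = np.linspace(0.0, 1.0, 21)[:, None]
Xout = Xin.copy()
traj = gp_mean_rollout(Xin, Xout, x0=[0.5], steps=10)
```

With real GPDM training pairs (consecutive latent points of a learned gait), the same rollout produces the mean walking motion shown on the slide.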
62
Simulation: 1st-order mean prediction [Animation; red: 200 steps of mean prediction; green: 60-step MCMC mean]
63
Linear kernel dynamics [Animation: 200 steps of mean prediction]
64
Missing data: 50 of 147 frames dropped (almost a full gait cycle). [Figure: spline interpolation]
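The spline-interpolation baseline for the dropped frames can be sketched as follows; the smooth toy channels stand in for real mocap data, and the frame counts mirror the slide's setup:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# 147 frames with a 50-frame gap (frames 50..99 dropped).
T = 147
t = np.arange(T, dtype=float)
observed = np.ones(T, dtype=bool)
observed[50:100] = False

# Two smooth, slowly varying "joint angle" channels.
poses = np.column_stack([np.sin(0.02 * t), np.cos(0.02 * t)])

# Fill the gap by spline interpolation through the observed frames,
# e.g. as an initialization before optimizing the latent trajectory.
spline = CubicSpline(t[observed], poses[observed], axis=0)
filled = poses.copy()
filled[~observed] = spline(t[~observed])
```

On real data a spline blurs out the gait dynamics across such a long gap, which is exactly where the learned RBF dynamics model does better (next slides).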
65
Missing Data: RBF Dynamics
66
Missing Data: Linear Dynamics
67
Determining hyperparameters [Figures: GPDM; Neil's parameters; MCEM]. Data: six distinct walkers.
68
Where do we go from here? Let's look at some limitations of the model. [Figures: 60 Hz vs. 120 Hz]
69
What do we want? [Figure: a walk cycle in latent coordinates (x_1, x_2), with phase and variation directions]
70
Branching motions [Figure: walk vs. run]
71
Stylistic variation
72
Current work: manifold GPs [Figure: latent space (x) mapped to data space (y)]
73
Summary
- GPLVM and GPDM provide priors from small data sets
- Dependence on initialization, hyperpriors, latent dimensionality
- Open problems: modeling data topology and stylistic variation