Versatile Human Behavior Generation via Dynamic, Data-Driven Control
Tao Yu, COMP 768

Motivation  Motion of virtual character is prevalent in:  Game  Movie (visual effect)  Virtual reality  And more… FIFA 2006 (EA) NaturalMotion endorphin

Motivation
What virtual characters should be able to do:
1. Perform many behaviors: leaping, grasping, moving, looking, attacking
2. Exhibit personality: move "sneakily" or "aggressively"
3. Be aware of the environment: balance/posture adjustments
4. React to physical forces (jumping, falling, swinging)

Outline  Motion generation techniques Motion capture and key-framing Data-driven synthesis Physical-based animation Hybrid approach  Dynamic motion controllers Quick Ragdoll Introduction Controllers  Transitioning between simulation and motion data Motion search – When and where Simulation-driven transition - How

Mocap and Key-framing
(+) Captures style and subtle nuances
(+) Absolute control: "what you capture is what you get"
(-) Difficult to adapt, edit, and reuse
(-) Not physically reactive, especially for highly dynamic motion

Data-driven synthesis
Generate motion from examples:
- Blending, displacement maps
- Kinematic controllers built upon existing data
- Optimization / learned statistical models
(+) Creators retain control: creators define all the rules for movement
(-) Violates the "checks and balances" of motion: motion control abuses its power over physics
(-) Limits emergent behavior

Physics-based animation
- Ragdoll simulation
- Dynamic controllers
(+) Interacts well with the environment
(-) Pure "ragdoll" movement is lifeless
(-) Difficult to develop complex behaviors

Hybrid approaches
- Mocap: stylistic realism
- Physical simulation: physical realism
- Hybrid approaches combine the best of both:
  - Activate whichever is most appropriate at the moment
  - Add life to ragdolls using control systems (only simulate behaviors that are manageable)

A high-level example

Outline  Motion generation techniques Motion capture and key-framing Data-driven synthesis Physical-based animation Hybrid approach  Dynamic motion controllers Quick Ragdoll Introduction Controllers  Transitioning between simulation and motion data Motion search – When and Where Simulation-driven transition - How

Overview of a dynamic controller
- Decision making: objectives and current state x[t] → desired motion x_d[t], i.e. x_d[t] = Goal(x[t])
- Motion control: desired motion x_d[t] and current state x[t] → motor forces u[t], i.e. u[t] = MC(x_d[t] - x[t])
- Physics: current state x[t] and forces u[t] → next state x[t+1], i.e. x[t+1] = P(x[t], u[t])
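A minimal sketch of this loop in code, with Goal, MC, and P passed in as placeholder functions (the names and the vector-valued state are assumptions, not from the slides):

```python
import numpy as np

def simulate(x0, goal, motion_control, physics, num_steps):
    """Run the decision-making -> motion-control -> physics loop."""
    x = x0
    trajectory = [x0]
    for _ in range(num_steps):
        x_d = goal(x)                    # decision making: desired motion x_d[t]
        u = motion_control(x_d - x)      # motion control: motor forces u[t]
        x = physics(x, u)                # physics: next state x[t+1]
        trajectory.append(x)
    return np.array(trajectory)
```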

Physics: setting up ragdolls
- Given a dynamics engine:
  - Set a primitive for each body part
  - Assign mass and inertial properties
  - Create 1-, 2-, or 3-DOF joints between parts
  - Set joint limit constraints for each joint
  - Apply external forces (gravity, impacts, etc.)
- The dynamics engine supplies:
  - Updated positions/orientations
  - Collision resolution with the world
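As an illustration only, a ragdoll description might look like the sketch below; the data structures, body parts, and joint limits are hypothetical and not tied to any particular engine:

```python
from dataclasses import dataclass, field

@dataclass
class BodyPart:
    name: str
    primitive: str          # "capsule", "box", or "sphere"
    mass: float             # kg; inertia is typically derived from the primitive

@dataclass
class Joint:
    parent: str
    child: str
    dof: int                                     # 1, 2, or 3 rotational DOF
    limits: list = field(default_factory=list)   # (low, high) per DOF, in radians

# Hypothetical partial ragdoll: torso, upper arm, forearm
parts = [BodyPart("torso", "box", 30.0),
         BodyPart("upper_arm", "capsule", 2.5),
         BodyPart("forearm", "capsule", 1.5)]
joints = [Joint("torso", "upper_arm", dof=3, limits=[(-2.0, 2.0)] * 3),  # shoulder
          Joint("upper_arm", "forearm", dof=1, limits=[(0.0, 2.6)])]     # elbow
```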

Controller types
- Basic joint-torque controller (low-level control)
  - Sparse pose control (may be specified by an artist)
  - Continuous control (e.g., tracking mocap data)
- Hierarchical controller (layered controllers)
  - A higher-level controller determines the correct desired values for the low level
  - Derived from sensor or state info: support polygon, center of mass, body contacts, etc.

Joint-torque controller
Proportional-Derivative (PD servo) controller
- Actuates each joint towards its desired target: τ = k_s (θ_des − θ) − k_d θ̇
- Acts like a damped spring attached to the joint (rest position at the desired angle)
- θ_des is the desired joint angle and θ is the current angle
- k_s and k_d are the spring and damper gains
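A minimal PD-servo sketch under the assumptions above (the desired joint velocity is taken as zero, and the gain values are placeholders):

```python
def pd_torque(theta_des, theta, theta_dot, k_s=300.0, k_d=30.0):
    """PD servo: a damped spring pulling the joint toward its desired angle."""
    return k_s * (theta_des - theta) - k_d * theta_dot

# Example: joint at 0.1 rad moving at 0.5 rad/s, target angle 0.8 rad
tau = pd_torque(theta_des=0.8, theta=0.1, theta_dot=0.5)
```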

Live demo

Outline  Motion generation techniques Motion capture and key-framing Data-driven synthesis Physical-based animation Hybrid approach  Dynamic motion controllers Quick Ragdoll Introduction Controllers  Transitioning between simulation and motion data Motion search – When and where Simulation-driven transition - How

Simulating falling and recovering behavior [Mandel 2004]

Transitioning between techniques
- Motion data → simulation
  - When: significant external forces are applied to the virtual character
  - How: initialize the simulation with the pose and velocities extracted from the motion data
- Simulation → motion data
  - When and where: when some appropriate pose is reached (hard to decide); the motion frame closest to the simulated pose
  - How: drive the simulation toward the matched motion data using a PD controller

Motion state spaces
- State space of the data-driven technique: any pose present in the motion database
- State space of the dynamics-based technique: the set of poses allowed by physical constraints
- The latter is larger because it:
  - can produce motion that is difficult to animate or capture
  - includes a large set of unnatural poses
- A correspondence must be made to allow transitions between the two

Motion searching
Problem: find the nearest matches in the motion database to the current simulated motion.
Approach (a minimal sketch follows):
1. Data representation: joint positions
2. Process into a spatial data structure: kd-tree / bd-tree (box decomposition)
3. Search the structure at runtime: the query pose comes from the simulation; approximate nearest neighbor search (ANN)
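A minimal sketch of steps 2-3 using SciPy's kd-tree as a stand-in for the ANN library used in the work; the feature layout and database size are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

# Each database frame is a feature vector of stacked joint positions,
# e.g. J joints x 3 coordinates after root alignment.
database = np.random.rand(10000, 17 * 3)   # placeholder motion database
tree = cKDTree(database)

query_pose = np.random.rand(17 * 3)        # pose taken from the simulation
# eps > 0 gives an approximate search: the returned neighbor is within
# (1 + eps) of the true nearest distance, similar to the ANN guarantee.
dist, frame_index = tree.query(query_pose, k=1, eps=0.1)
```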

Data representation: joint positions
- Need a representation that allows numerical comparison of body postures
- Joint angles are not as discriminating as joint positions
  - Ignore root translation and align about the vertical axis
  - May also want to include joint velocities
- Joint velocity is accounted for by including surrounding frames in the distance computation
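An illustrative sketch of the alignment step (horizontal root translation removed, facing rotation removed); the axis conventions and function name are assumptions:

```python
import numpy as np

def aligned_joint_positions(joints, root, facing_yaw):
    """joints: (J, 3) world-space joint positions; root: (3,) root position;
    facing_yaw: character facing angle about the vertical (y) axis."""
    p = joints - np.array([root[0], 0.0, root[2]])        # drop horizontal root translation
    c, s = np.cos(-facing_yaw), np.sin(-facing_yaw)
    rot_y = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])  # undo the facing direction
    return (rot_y @ p.T).T.reshape(-1)                    # flatten to a feature vector
```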

Distance metric
- J: number of joints
- w_j: joint weight
- p: global position of a joint
- T: transformation that aligns the first frames of the two windows
[Figure: original poses, joint positions, aligned positions]
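The equation itself did not survive the transcript; a plausible weighted joint-position metric consistent with the symbols above is

D(f^s, f^m) = \sum_{j=1}^{J} w_j \, \lVert p_j^{s} - T\, p_j^{m} \rVert^2

where f^s is the simulated query frame and f^m a frame from the motion database (the superscripts and the squared norm are assumptions).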

Searching process
Approximate Nearest Neighbor (ANN) search:
- First finds the cell containing the query point in the spatial data structure built over the input points
- A randomized search then visits surrounding cells containing points within the given ε threshold of the actual nearest neighbors
- Results are guaranteed to be within a factor of (1 + ε) of the distance to the actual nearest neighbors
- Expected run time of O(log³ n) and space requirement of O(n log n)
- Scales much better in practice than exact nearest-neighbor search as the dimensionality of the points increases

Speeding up search
- Curse of dimensionality
- Search each joint position separately
- Pair more joints together to increase accuracy
- n separate 3-DOF searches are faster than one 3n-DOF search

Simulating behavior
- Model the reaction to impacts that cause loss of balance
- Two controllers handle the before-contact and after-contact phases respectively
- Ensure a transition to a balanced posture in the motion data

Fall controller  Aim: produce biomechanically inspired, protective behaviors in response to the many different ways a human may fall to the ground.

Fall controller
- Continuous control strategy
- Four controller states according to the falling direction: backward, forward, right, left
- During each state, one or both arms are controlled to track the predicted landing positions of the shoulders
- The goal of a controlled arm is to have the wrist intersect the line between the shoulder and its predicted landing position
- A small natural bend is added to the elbow, and the desired angles for the rest of the body are set to the initial angles at the time the fall controller is activated

Fall controller
Determine the controller state:
- θ is the facing direction of the character
- V is the average velocity of the limbs
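An illustrative sketch of picking one of the four fall states from θ and V; the quadrant thresholds and axis conventions are assumptions, not from the slides:

```python
import numpy as np

def fall_state(facing_yaw, limb_velocities):
    """facing_yaw: facing direction theta (radians about the vertical axis);
    limb_velocities: (N, 3) world-space limb velocities."""
    v = limb_velocities.mean(axis=0)                   # average limb velocity V
    # Express the horizontal fall direction in the character's local frame.
    heading = np.arctan2(v[0], v[2]) - facing_yaw
    heading = (heading + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    if -np.pi / 4 <= heading < np.pi / 4:
        return "forward"
    if np.pi / 4 <= heading < 3 * np.pi / 4:
        return "right"
    if -3 * np.pi / 4 <= heading < -np.pi / 4:
        return "left"
    return "backward"
```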

Fall controller
Determine the target shoulder joint angles:
- They can change as the simulation steps forward
- The k_s and k_d gains are tuned appropriately

Settle controller
- Aim: drive the character toward a similar motion clip at an appropriate time
- Begins when the hands impact the ground
- Two states:
  - Absorb impact:
    - Gains are adjusted to reduce hip and upper-body velocity
    - Lasts about half a second before the next state
  - ANN search:
    - Find a frame in the motion database that is close to the currently simulated posture
    - Use the found frame as the target while continuing to absorb the impact
    - The simulated motion is smoothly blended into the motion data
Final results demo

An alternative approach to response motion synthesis [Zordan 2005]
Problem: generating dynamic response motion to an external impact
Insight:
- Dynamics is often needed only for a short time (a burst)
- After that, the utility of the dynamics decreases due to the lack of good behavior control
- Return to mocap once the character becomes "conscious" again

Generating dynamic response motion
1. Transition to simulation when the impact takes place
2. Search the motion data for a transition-to sequence similar to the simulated response motion
3. Run a second simulation with a joint-torque controller actuating the character toward the matching motion
4. Blend at the end to eliminate the discontinuity between the simulated and transition-to motions

Motion selection
- Aim: find a transition-to motion
- Frame windows are compared between the simulation and the motion data
  - Frames are aligned so that the root position and orientation of the start frame in each window coincide
  - The distance between windows is computed per frame and per body part from:
    - p_b, θ_b: body part position and orientation
    - w_i: window weight, a quadratic function with its highest value at the start frame, decreasing for subsequent frames
    - w_pb, w_θb: linear and angular distance scales for each body part
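The metric itself did not survive the transcript; a plausible form consistent with the weights listed above is

D = \sum_{i \in \text{window}} w_i \sum_{b} \left( w_{pb} \,\lVert p_b^{sim} - p_b^{data} \rVert^2 + w_{\theta b} \, d\!\left(\theta_b^{sim}, \theta_b^{data}\right)^2 \right)

where d(·, ·) is some angular distance between body-part orientations (the exact form is an assumption).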

Transition motion synthesis
- Aim: generate the motion that fills the gap between the beginning of the interaction and the found motion data
- Realized in two steps:
  1. Run a second simulation to track the intermediate sequence
  2. Blend the physically generated motion into the transition-to motion data

Transition motion synthesis
Simulation 2:
- An inertia-scaled PD servo is used to compute the torque at each joint
- The tracked sequence is generated by blending the start and end frames using SLERP with an ease-in/ease-out
- A deliberate delay in tracking is introduced to make the reaction look realistic
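A minimal sketch of the SLERP blend with an ease-in/ease-out weight; the smoothstep curve is an assumption (any ease curve would do):

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                        # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                     # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    omega = np.arccos(dot)
    return (np.sin((1 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

def ease_in_out(t):
    return 3 * t**2 - 2 * t**3           # smoothstep ease-in/ease-out

def blend_frame(q_start, q_end, t):
    """Blend one joint orientation from the start frame toward the end frame."""
    return slerp(q_start, q_end, ease_in_out(t))
```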

Conclusion
- Hybrid approaches
  - Complex dynamic behaviors are hard to model physically
  - A viable option for synthesizing character motion under a wider range of situations
  - Able to incorporate unpredictable interactions, especially in games
- Making them more practical
  - Automatic computation of motion-controller parameters [Allen 2007]
  - Speeding up the search via a pre-learned model [Zordan 2007]

References
- MANDEL, M. 2004. Versatile and Interactive Virtual Humans: Hybrid Use of Data-Driven and Dynamics-Based Motion Synthesis. Master's thesis, Carnegie Mellon University.
- ZORDAN, V. B., MAJKOWSKA, A., CHIU, B., FAST, M. 2005. Dynamic response for motion capture animation. ACM Transactions on Graphics 24, 3.
- ALLEN, B., CHU, D., SHAPIRO, A., FALOUTSOS, P. 2007. On the Beat! Timing and tension for dynamic characters. ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
- ZORDAN, V. B., MACCHIETTO, A., MEDINA, J., SORIANO, M., WU, C.-C. 2007. Interactive dynamic response for games. ACM SIGGRAPH Sandbox Symposium.