Vision-based Control of 3D Facial Animation
Jin-xiang Chai, Jing Xiao, Jessica Hodgins
Carnegie Mellon University

Our Goal
Interactive avatar control:
- Designing a rich set of realistic facial actions for a virtual character
- Providing intuitive and interactive control over these actions

Control Interface vs. Quality
- Online motion capture: + high quality; - expensive; - intrusive
- Vision-based animation: + inexpensive; + non-intrusive; - noisy; - low resolution

Our Idea
Vision-based interface + motion capture database = interactive avatar control

Related Work
- Motion capture: Making Faces [Guenter et al. 98]; Expression Cloning [Noh and Neumann 01]
- Vision-based tracking for direct animation: physical markers [Williams 90]; edges [Terzopoulos and Waters 93; Lanitis et al. 97]; dense optical flow with 3D models [Essa et al. 96; Pighin et al. 99; DeCarlo et al. 00]; data-driven feature tracking [Gokturk et al. 01]
- Vision-based animation with blendshapes: hand-drawn expressions [Buck et al. 00]; 3D avatar models [FaceStation]

System Overview
Act out expressions → Video Analysis → Expression Control and Animation (using preprocessed motion capture data) → Expression Retargeting → Avatar animation

Video Analysis
Vision-based tracking:
- 3D head poses [Xiao et al. 2002]
- 2D facial features

Expression Control Parameters
Extracting 15 expression control parameters from 2D tracking points:
- Distance between two feature points
- Distance between a point and a line
- Orientation and center of the mouth
(Plot: expression control signal over time t)
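The slide does not give formulas for these parameters; a minimal sketch of the two distance-based measurements in Python (function names and the lip coordinates are illustrative, not from the paper):

```python
import numpy as np

def point_distance(p, q):
    """Euclidean distance between two 2D feature points."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    d = b - a
    # |cross product| / line length gives the perpendicular distance in 2D.
    return float(abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / np.linalg.norm(d))

# Example: a mouth-opening parameter as the distance between lip feature points
upper_lip, lower_lip = (50.0, 60.0), (50.0, 72.0)
mouth_open = point_distance(upper_lip, lower_lip)  # 12.0
```

Evaluating each such measurement per frame yields the 15-dimensional control signal over time.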

System Overview
Act out expressions → Video Analysis → Expression Control and Animation (using preprocessed motion capture data) → Expression Retargeting → Avatar animation

Motion Capture Data Preprocessing
- 3D poses
- Expression separation
- Expression control parameter extraction
Database: frames (10 minutes) including:
- 6 basic facial expressions
- typical everyday facial expressions
- speech data

System Overview
Act out expressions → Video Analysis → Expression Control and Animation (using preprocessed motion capture data) → Expression Retargeting → Avatar animation

Expression Control
- Vision-based interface: 2D tracking data (19*2 dofs) → expression control parameters (15 dofs)
- Motion capture database: 3D motion data (76*3 dofs)

Challenges
- Visual expression control signals are very noisy
- One-to-many mapping from expression control parameter space (15 dofs) to 3D motion space (76*3 dofs)
- Temporal coherence

Data-driven Dynamic Filtering
Noisy control signal → nearest neighbor search (K = 120 closest examples, window W = 0.33s) → online PCA (7 largest eigen-curves, 99.5% energy) → filter by eigen-curves → filtered control signal
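The core of this filtering step is projecting a noisy signal window onto the principal components (eigen-curves) of its nearest database examples and reconstructing from them. A simplified sketch, assuming the K nearest windows have already been retrieved (the online/incremental aspect of the PCA is omitted):

```python
import numpy as np

def filter_by_eigencurves(noisy_window, neighbor_windows, n_components=7):
    """Denoise a control-signal window by projecting it onto the top principal
    components (eigen-curves) of similar windows from the mocap database."""
    X = np.asarray(neighbor_windows, float)      # K examples x window length
    mean = X.mean(axis=0)
    # PCA via SVD of the centered neighbor set
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                    # top eigen-curves
    coeffs = basis @ (np.asarray(noisy_window, float) - mean)
    return mean + basis.T @ coeffs               # reconstruction = filtered window
```

Signal components outside the span of the eigen-curves (i.e., unlike anything in the database neighborhood) are discarded as noise.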

Expression Mapping
Nearest neighbor search from expression control parameter space to 3D motion data space:
Filtered control signal → K nearest examples at distances d_1, d_2, ..., d_K → weighted blend with weights w(d_1), w(d_2), ..., w(d_K) → synthesized motion
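A sketch of this nearest-neighbor blend in Python; the slide does not specify the weight function w(d), so inverse-distance weighting is used here as one plausible choice:

```python
import numpy as np

def knn_blend(query, control_db, motion_db, k=3, eps=1e-8):
    """Map a filtered control vector to 3D motion by blending the motions of
    the K nearest examples, weighted by control-space distance."""
    control_db = np.asarray(control_db, float)   # examples x control dofs
    motion_db = np.asarray(motion_db, float)     # examples x motion dofs
    d = np.linalg.norm(control_db - np.asarray(query, float), axis=1)
    idx = np.argsort(d)[:k]                      # K closest examples
    w = 1.0 / (d[idx] + eps)                     # assumed w(d): inverse distance
    w /= w.sum()
    return w @ motion_db[idx]                    # weighted blend of 3D motions
```

Because several 3D motions can share similar control parameters, blending the neighborhood (rather than picking one example) smooths over the one-to-many ambiguity noted on the Challenges slide.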

System Overview
Act out expressions → Video Analysis → Expression Control and Animation (using preprocessed motion capture data) → Expression Retargeting → Avatar animation

Expression Retargeting
Synthesized expression → avatar expression

Expression Retarget xsxs xtxt Learn the surface mapping function using Radial Basis Functions such that x t =f(x s ) Transfer the motion vector by local Jacobian matrix Jf(x s ) by  x t =Jf(x s )  x s xsxs xtxt ? Run time computational cost depends on the number of vertices

Precompute Deformation Basis
- PCA gives 25 source motion bases S_0, S_1, ..., S_N (99.5% energy)
- Each source basis S_i is retargeted offline to a precomputed avatar motion basis T_i (25 bases)

Target Motion Synthesis
- Express the synthesized expression in the source bases: synthesized expression = Σ_i c_i S_i, i = 0, ..., N
- Apply the same coefficients to the avatar bases: avatar expression = Σ_i c_i T_i
- Run-time computational cost is O(N), where N is the number of bases
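The project-then-recombine step can be sketched as follows, assuming (as an illustration) that the source bases are stored as orthonormal rows so projection is a single matrix product:

```python
import numpy as np

def retarget_expression(synth, src_bases, tgt_bases):
    """Project a synthesized expression onto the source deformation bases and
    apply the same coefficients to the precomputed avatar bases, giving an
    O(N)-per-frame retargeting in the number of bases N."""
    S = np.asarray(src_bases, float)         # N x source dofs (orthonormal rows)
    T = np.asarray(tgt_bases, float)         # N x target dofs
    coeffs = S @ np.asarray(synth, float)    # c_i: projection onto each S_i
    return coeffs @ T                        # avatar expression = sum_i c_i T_i
```

All of the expensive per-vertex RBF/Jacobian work is baked into the T_i offline, so the run-time cost no longer depends on the vertex count.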

System Overview
Act out expressions → Video Analysis → Expression Control and Animation (using preprocessed motion capture data) → Expression Retargeting → Avatar animation

Results

Conclusions
Developed a performance-based facial animation system for interactive expression control:
- Tracking real-time facial movements in video
- Preprocessing the motion capture database
- Transforming a low-quality 2D visual control signal into high-quality 3D facial expressions
- An efficient online expression retargeting method

Future Work
- Formal user study on the quality of the synthesized motion
- Controlling and animating 3D photorealistic facial expressions
- Size of the database