Data-Driven Approach to Synthesizing Facial Animation Using Motion Capture. Ioannis Fermanis (6511325), Liu Zhaopeng (6516637)



Traditional process
Animation production typically goes through a number of creation stages:
- Acting: the animator acts out the expressions in front of a mirror
- Blocking: key body and face poses are created
- Refining pass: overlapping actions are added, movements are exaggerated, and the timing and spacing of motions are refined
- Final polish: animation curves are cleaned and small details are added
This paper proposes a method that produces animation at the "refining" stage.

Purpose
- Hand animation: costly (in both time and money), stylized, high quality
- Motion capture: realistic and fast, but not stylized and with too many keyframes
Advantages of the paper's method:
- Stylized, high-quality animation on a limited budget and schedule
- Reduces the number of keyframes in the motion-capture data

Motion Curve
A motion curve plots the x/y/z translation or rotation of a facial feature or body part (such as the head or eyes) along the time axis.

Pattern Match Approach
Input: captured motion (realistic, one character), represented as motion curves.
Database: hand-animated motion (highly stylized), serving as example curves.
Three steps produce the output: 1. Segmentation, 2. Pattern searching, 3. Curve warping.

Pattern Match Approach: 1. Segmentation
Example curves are divided top-down, via recursive partition, into chunks of progressively smaller sizes; these chunks serve as the patterns to be found in the motion-capture curve. Each keyframe stores:
struct Keyframe {
    time;
    translation/rotation value;
    type of tangent;
    tangent's in/out angles;
}
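The top-down recursive partition can be sketched as follows. This is an illustrative reading of the slide, not the authors' code; the minimum chunk size is an assumed parameter.

```python
def partition(curve, min_size=3):
    """Recursively split an example curve (here a plain list of keyframe
    values) into progressively smaller chunks; every chunk, including the
    full curve, is kept as a candidate pattern."""
    chunks = [curve]
    if len(curve) > min_size:
        mid = len(curve) // 2
        chunks += partition(curve[:mid + 1], min_size)  # halves share the split key
        chunks += partition(curve[mid:], min_size)
    return chunks
```

Keeping the larger chunks as well as the small ones lets the matcher prefer long stylized patterns and fall back to short ones.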

Pattern Match Approach: 2. Pattern searching
Measure the difference between example-curve segments and the captured motion curves. Either find the best match or use a threshold:
- Large threshold: more stylized result
- Small threshold: more realistic result
The example curves are sorted by their length.

Pattern Match Approach: Distance Measurement
Three distance measures are compared: SSE, SAX, and the HMM-Viterbi algorithm.
SSE (Sum of Squared Errors): summed over all keyframes in the example-curve segment, comparing each value against the captured motion curve starting at time r. The simplest and most common approach.
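A minimal sketch of the SSE measure, with a brute-force search for the best starting offset r; the sliding-window search and all names are illustrative assumptions, not the paper's implementation.

```python
def sse(example_seg, motion_curve, r):
    """Sum of squared errors between an example-curve segment and the
    captured motion curve starting at time index r."""
    return sum((e - motion_curve[r + i]) ** 2
               for i, e in enumerate(example_seg))

def best_match(example_seg, motion_curve):
    """Slide the example segment along the motion curve and return the
    offset with the smallest SSE."""
    n = len(motion_curve) - len(example_seg)
    return min(range(n + 1), key=lambda r: sse(example_seg, motion_curve, r))
```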

Pattern Match Approach: Distance Measurement (continued)
SAX (Symbolic Aggregate Approximation): converts the curve into a string of symbols, reducing dimensionality and allowing faster comparison.
HMM-Viterbi algorithm: compares two curves based on both the quality of individual feature matches and the smoothness of the overall mapping across all points.
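A minimal SAX sketch, assuming a 4-letter alphabet: z-normalize the curve, reduce it to a few segment means (piecewise aggregate approximation), and map each mean to a symbol using Gaussian breakpoints. Segment count and alphabet size are assumed parameters, not values from the paper.

```python
import math

def sax(curve, n_segments=8, alphabet="abcd"):
    """Convert a motion curve into a short symbol string for fast
    comparison.  Breakpoints are the N(0,1) quartiles, matching a
    4-letter alphabet."""
    mean = sum(curve) / len(curve)
    std = math.sqrt(sum((x - mean) ** 2 for x in curve) / len(curve)) or 1.0
    z = [(x - mean) / std for x in curve]
    # piecewise aggregate approximation (PAA): one mean per segment
    step = len(z) / n_segments
    paa = [sum(z[int(i * step):int((i + 1) * step)]) /
           max(1, int((i + 1) * step) - int(i * step))
           for i in range(n_segments)]
    breakpoints = [-0.67, 0.0, 0.67]
    return "".join(alphabet[sum(v > b for b in breakpoints)] for v in paa)
```

Two curves can then be compared by their symbol strings instead of their raw samples.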

Pattern Match Approach: 3. Curve warping
Once the best match is found, the motion curve is warped by a warp function with a given parameter θ. The warp function satisfies:
- The warped motion curve matches the example curve as closely as possible
- A continuity constraint
- A smoothness constraint

Pattern Match Approach: 3. Curve warping (simplified)
Simplified approach:
- Compute the difference in y-axis values at each point; this deviation defines how much the motion curve has to be warped
- Delete all points that are not part of a matching pattern
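The simplified approach can be sketched as follows; this is not the paper's constrained warp function, and the blending `strength` parameter is an assumption added for illustration.

```python
def warp(motion_seg, example_seg, strength=1.0):
    """Shift each motion value toward the example by the per-point
    y-axis deviation; strength < 1 keeps the result closer to the
    captured motion."""
    return [m + strength * (e - m) for m, e in zip(motion_seg, example_seg)]

def drop_unmatched(key_times, matched_spans):
    """Delete all keyframes (given as times) outside every matched
    (start, end) time span."""
    return [t for t in key_times
            if any(s <= t <= e for s, e in matched_spans)]
```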

The warped motion curve may include artifacts.

Evaluation
For testing, the authors hired a 3D animator to create a high-quality animation database (which cost €6000 for 1 minute of animation). Motion capture was used to create a dataset covering 2 emotions (happiness and anger). Using the 3 methods described before (distance measure, SAX, and HMM), the animation database was searched for matches. The number of keyframes was then drastically reduced, producing the synthesized animations used in the evaluation experiments.

Evaluation: Perceptual Tests (2 experiments)
Experiment 1 consists of 3 blocks:
- Block 1: 10 animations without sound (5 animation types of anger x 2 examples)
- Block 2: 10 animations without sound (5 animation types of happiness x 2 examples)
- Block 3: 20 animations with sound (5 animation types x 2 examples x 2 emotions)
The 5 animation types are: artist-created, motion capture, distance measure, SAX, and HMM.

Evaluation
After the first 2 blocks, participants were asked whether the animations were: expressive, appealing, cartoony. After the 3rd block, participants were asked to rate how well the facial expression conveyed the message (ignoring the lip movement).

Results of experiment 1
- Artist-created animation was still rated best across all tests
- HMM was the best synthesis method, occasionally rated on par with artist-created animation
- Anger was rated as more expressive than happiness
- Synthesized happy animations were rated the lowest

Evaluation: Experiment 2
The same as experiment 1, but this time with refined animations:
- 1 angry and 1 happy animation synthesized using the distance measure
- 2 angry and 1 happy animations synthesized using HMM

Results of experiment 2
Refined animations were rated as more expressive and more cartoony than unrefined ones.

https://youtu.be/PQP9965jcCk

Evaluation: Overall results
- No significant difference between the pattern-matching methods
- Good results in keyframe reduction
- The unrefined experiment showed good results for angry animations
- The refined experiment showed good results for happy animations

Future Work
- The method could be extended to handle curves jointly, thus removing artifacts
- A more sophisticated curve-segment dictionary could make the matching process faster without increasing memory requirements
- The work could be extended with machine learning on more data and a larger range of emotions to test generalizability
- With more publicly available data (for building the database), the methods and results should improve

Evaluation of the paper
Positives:
- The methodology was well structured
- Overall a good idea: combining motion capture and hand animation, with relatively good results
Negatives:
- The structure and explanation of the evaluation and its results were weaker
- Not all graphs and information are shown in the paper
- The paper repeats itself unnecessarily
- A larger sample size (only 18 participants) would provide more reliable data

Questions?

Discussion Was adding sound in the experiments a good idea?