Easy Generation of Facial Animation Using Motion Graphs

Presentation transcript:

Easy Generation of Facial Animation Using Motion Graphs J. Serra, O. Cetinaslan, S. Ravikumar, V. Orvalho and D. Cosker Presented by Andreas Kelemen and Costas Mavridis

Introduction Useful for secondary characters. Most research has focused on body animation; facial animation is seldom studied. This method uses motion graphs to achieve unique, on-the-fly motion synthesis, controlled via a small number of parameters. The user specifies thresholds that control the compression ratio.

Related Work Procedural animation vs. performance-driven and key-frame techniques. Procedural animation is divided into three classes: constraint- or rule-based, statistical or knowledge-based, and behaviour-based. Motion graphs fit within statistical procedural animation and are widely used for body animation; this work is inspired by the motion graphs of Kovar et al.

This Paper - Method Two steps: Create the region graphs by analyzing the DB. Traverse these structures to synthesize the motion, using Dijkstra’s algorithm to find the path of least dissimilarity between the source and target nodes.
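The traversal step can be sketched as a standard Dijkstra search, treating each edge's dissimilarity value as its weight. The graph layout, node names and weights below are purely illustrative, not from the paper:

```python
import heapq

def dijkstra_path(graph, source, target):
    """Find the path of least accumulated dissimilarity between two nodes.

    `graph` maps node -> list of (neighbour, dissimilarity) pairs.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if target not in dist:
        return None
    # Reconstruct the path by walking back from the target.
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy graph: nodes stand in for merged poses, weights for pose dissimilarities.
g = {
    "neutral": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 1.0), ("peak", 5.0)],
    "b": [("peak", 1.0)],
}
print(dijkstra_path(g, "neutral", "peak"))  # ['neutral', 'a', 'b', 'peak']
```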

This Paper - Method As a result: Compact DB representation. Faster generation of animations. Authoring of animations with a small number of input parameters. A motion-graph approach specifically tuned for facial animation. A novel approach that eases the choice of the thresholds. A new method for introducing coherent noise into animation.

This Paper - Method

Motion Graph Generation Information contained in the graph: Each node contains the average displacement of its merged poses, which mitigates alignment errors and the effects of differing facial proportions. Each edge has a similarity value. The graph also stores the average number of consecutive merged nodes, and the number of neutral-to-peak and/or peak-to-peak expressions with their respective standard deviations.
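A minimal sketch of what such a node and edge might store; the field names are our own assumptions, not the paper's:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GraphNode:
    # Average landmark displacement of all poses merged into this node,
    # which averages out alignment errors and facial-proportion effects.
    avg_displacement: np.ndarray          # shape (n_landmarks, 2)
    # How many consecutive original poses were merged here, on average.
    merged_count: float = 1.0
    # Counts of neutral-to-peak / peak-to-peak sequences passing through,
    # and the standard deviations of their durations.
    seq_counts: dict = field(default_factory=dict)
    seq_std: dict = field(default_factory=dict)

@dataclass
class GraphEdge:
    src: int
    dst: int
    similarity: float   # cost used during traversal

node = GraphNode(avg_displacement=np.zeros((23, 2)))
print(node.avg_displacement.shape)  # (23, 2)
```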

Structure of Motion Training Data

Structure of Motion Training Data Dynamic 2D/3D sparse DB, whose samples need to be aligned. Any sample can then be incorporated into a graph as long as it has: Sparse landmarks. The same equivalent pose (Peq) as all other samples. Transitions between a neutral pose and a peak pose and/or between peak poses.

Structure of Motion Training Data Pose alignment revolves around two steps: All poses are rigidly aligned with their equivalent pose. The first pose is aligned with the average of all samples (Procrustes analysis) to reduce the effect of differing proportions.
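The rigid alignment of two landmark sets can be sketched with ordinary Procrustes analysis, solving for the optimal rotation via SVD (a generic sketch; the function name and the choice of a rotation-plus-translation model are ours):

```python
import numpy as np

def procrustes_align(X, Y):
    """Rigidly align landmark set X (n x 2) onto Y, returning the aligned copy."""
    muX, muY = X.mean(axis=0), Y.mean(axis=0)
    X0, Y0 = X - muX, Y - muY
    # Optimal rotation minimising ||X0 R - Y0||_F (Kabsch/SVD solution).
    U, S, Vt = np.linalg.svd(X0.T @ Y0)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt
    return X0 @ R + muY

# Sanity check: a rotated and translated shape aligns back onto the original.
rng = np.random.default_rng(0)
shape = rng.standard_normal((23, 2))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
moved = shape @ rot + np.array([2.0, -1.0])
aligned = procrustes_align(moved, shape)
print(np.allclose(aligned, shape, atol=1e-8))  # True
```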

Structure of Motion Training Data Cohn–Kanade (CK and CK+) 68 landmarks reduced to 23 Poses: happiness, sadness, disgust, surprise, anger, fear, contempt & neutral

Structure of Motion Training Data Pre-processing to reduce error/jitter: a sigmoid fitting procedure, with least squares used to find the optimal parameters.
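A neutral-to-peak landmark trajectory can be denoised by fitting a sigmoid with least squares, e.g. via `scipy.optimize.curve_fit`. The parameterisation below is a generic logistic curve, not necessarily the one the paper uses:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, low, high, t0, k):
    """Logistic curve rising from `low` to `high` around time t0 with slope k."""
    return low + (high - low) / (1.0 + np.exp(-k * (t - t0)))

# Synthetic noisy neutral-to-peak displacement samples.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
clean = sigmoid(t, 0.0, 1.0, 0.5, 12.0)
noisy = clean + 0.03 * rng.standard_normal(t.size)

# Least-squares fit; p0 gives a rough initial guess for the parameters.
params, _ = curve_fit(sigmoid, t, noisy, p0=(0.0, 1.0, 0.5, 10.0))
fitted = sigmoid(t, *params)
print(np.abs(fitted - clean).max() < 0.1)  # True
```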

Graph Creation

Similarity metric Based on: the spatial location of the landmarks, and their instantaneous velocity.
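A sketch of such a metric: a weighted sum of the landmark-position distance and the instantaneous-velocity distance between two poses. The weights and the use of mean Euclidean distances are assumptions:

```python
import numpy as np

def pose_dissimilarity(p_a, v_a, p_b, v_b, w_pos=1.0, w_vel=0.5):
    """Dissimilarity between two poses given landmark positions (n x 2)
    and their instantaneous velocities (n x 2). Lower = more similar."""
    pos_term = np.linalg.norm(p_a - p_b, axis=1).mean()
    vel_term = np.linalg.norm(v_a - v_b, axis=1).mean()
    return w_pos * pos_term + w_vel * vel_term

p = np.zeros((23, 2))
v = np.zeros((23, 2))
print(pose_dissimilarity(p, v, p, v))            # 0.0
print(pose_dissimilarity(p, v, p + 1.0, v) > 0)  # True
```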

Optimizing for compression By specifying the desired compressions and tolerances, the author controls the trade-off between motion quality and flexibility of the graph. Compression is calculated differently for each stage, and this method locally optimizes the thresholds for each stage/sequence.

Motion Synthesis

Motion Synthesis Choosing the nodes relevant to the desired facial behaviour: the user can directly choose the sources and destinations in all graphs, or specify a label (emotion). The actual landmark movements still need to be extracted.

Motion Synthesis - Reconstructing motion from path Because of compression, information is lost. Each node allows an approximation of the information lost when the poses were merged: using the average of consecutive merged nodes, we reconstruct the average landmark displacement and velocity. The average duration of the peak nodes is used as the sequence length; using the peak durations allows more realistic synthesis.

Motion Synthesis - Reconstructing motion from path Normalize the displacements of each facial region, so that the motion as a whole does not become linear and keeps its core motion properties. Smooth the sequences using the Savitzky–Golay window-based filter and sigmoid fitting: this can approximate linear motions, such as blinking, removes drastic motions, and removes any remaining zigzag.
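Savitzky–Golay smoothing is available as `scipy.signal.savgol_filter`; a brief sketch on a synthetic noisy trajectory (window length and polynomial order are chosen arbitrarily here):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 101)
clean = 1.0 / (1.0 + np.exp(-12 * (t - 0.5)))   # idealised neutral-to-peak curve
noisy = clean + 0.05 * rng.standard_normal(t.size)

# Window of 11 frames, cubic polynomial: removes jitter while keeping the ramp.
smooth = savgol_filter(noisy, window_length=11, polyorder=3)

err_noisy = np.abs(noisy - clean).mean()
err_smooth = np.abs(smooth - clean).mean()
print(err_smooth < err_noisy)  # True
```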

Motion Synthesis - Introducing variation Noise is inherently introduced, since each facial region has its own motion graph and respective path, giving a greater variety of emotions for crowds. Additional variation in the sequence length and the chosen path relies on the stored standard deviations. The path variations build on top of Dijkstra’s path; noise is controlled by the percentage of the original path that can be changed.

Results

Results Motion graphs were created and tested with 70 sequences from ∼20 subjects of the CK/CK+ DB, plus a smaller DB with 6 x < Pneutral . . . Ppeak expression . . . Pneutral > and 3 x < Pneutral . . . Ppeak expression 1 . . . Ppeak expression n . . . Pneutral > sequences. Lower data compression -> better representation of motion, but more time.

Results - Bad Local minima occur when creating/using the motion graphs, giving stiff, less expressive motion. Limited by the number of poses in the DB: the DB was not made for facial animation, so the poses are subtle and the animations are hard to recognize. Relies on well-defined peak expressions. More complex motions would require an extremely large graph. High compression merges different expressions. No gaze or head movements.

Results - Good Reduces DB size with minimal loss of information: low error in landmark positions and velocity, and synthesized motion similar to the original even with high compression. The data compression ratio typically stays between the boundaries defined by the user input. A structure with a single region would grow exponentially with the number of nodes in the graph. Smooth motion. Less input, no equipment.

Future Works Gaze with complex emotions. More complex animations, e.g. talking. Better results with artistic help (better models, better results). A better database (the current one contains only neutral-to-peak but no peak-to-peak transitions).

Questions

Discussion

Discussion As is, can this method be used in movies and games, and are the results adequate for secondary characters? It offers a good alternative, but the method is not ready yet: no speech movement, no gaze change, too generic.

Discussion Referring to the future work on more complex animation: which technology could be used to improve the results, and how can this be made cost-efficient and usable for better secondary characters? Speech movement, although it would be difficult to find a decent DB for that. Gaze and head movement are important, as is a bigger range of emotions.