Presentation transcript:

Capturing Facial Details by Space-time Shape-from-shading
Yung-Sheng Lo *, I-Chen Lin *, Wen-Xing Zhang *, Wen-Chih Tai †, Shian-Jun Chiou †
* CAIG Lab, Dept. of CS, National Chiao Tung University
† Chunghwa Picture Tubes, LTD. 1

Outline: Introduction · Acquisition of facial motion · Space-time shape-from-shading · Experiment and results · Conclusion 2

Introduction Performance-driven methods are among the most straightforward approaches to facial animation. Expression details, e.g., wrinkles and dimples, are key factors, but they are difficult to acquire with motion capture. Original captured images / Deformation without details 3

Introduction (cont.) Physics-based simulation and blend-shape methods try to mimic these details, but the synthesized details are not the exact expressions. Muscle-based [E. Sifakis et al. 2005] / Blend shape [Z. Deng et al. 2006] 4

Introduction (cont.) Our goal is to enhance existing motion capture techniques and capture facial details. With the captured images and directional lighting, our optimization-based shape-from-shading (SFS) can estimate details from the shading in video. With facial details / Captured video 5

The proposed method Combines the benefits of motion capture and shape-from-shading.
Motion capture and stereo reconstruction: accurate on feature points and general geometry, but correspondence matching is unreliable at textureless regions.
Shape-from-shading: doesn't need detailed point correspondence for textureless regions and estimates relative undulation, but is sensitive to noise.
Our approach: motion capture + space-time shape-from-shading. 6

The proposed method 7

Approximate geometry by Mocap Tracking by block matching and stereo reconstruction. Deforming a generic face model by radial-basis functions (RBF). 8
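As a concrete illustration of this step, the following is a minimal sketch (not the authors' code) of deforming a generic model with radial basis functions: marker displacements obtained from tracking and stereo reconstruction are interpolated over all model vertices. The Gaussian kernel, its width, and the helper names are assumptions made for illustration.

```python
import numpy as np

def rbf_deform(vertices, markers_src, markers_dst, sigma=30.0):
    """Deform generic face vertices so that markers_src map to markers_dst.

    vertices    : (V, 3) generic model vertices
    markers_src : (M, 3) marker positions on the generic model
    markers_dst : (M, 3) tracked / stereo-reconstructed marker positions
    sigma       : Gaussian kernel width (assumed; not stated in the paper)
    """
    def kernel(a, b):
        # Gaussian RBF on pairwise squared distances between point sets a and b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve for RBF weights that reproduce the marker displacements exactly
    K = kernel(markers_src, markers_src)                  # (M, M)
    disp = markers_dst - markers_src                      # (M, 3)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(K)), disp)

    # Interpolate the displacement field at every model vertex
    return vertices + kernel(vertices, markers_src) @ w
```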

Facial details by SFS Estimating time-varying details by iteratively approximating shape V and reflectance R. Input image 9
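The slide only names the alternation between shape V and reflectance R. The sketch below shows one plausible reading of the shading constraint under a simplified Lambertian model with a single directional light (the authors use a Phong model, so this is an assumption for illustration): normals are derived from the depth map, and the residual between the observed image and the predicted shading is what the optimization drives toward zero.

```python
import numpy as np

def normals_from_depth(Z):
    """Per-pixel surface normals from a depth map via finite differences."""
    dzdx = np.gradient(Z, axis=1)
    dzdy = np.gradient(Z, axis=0)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(Z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def shading_residual(Z, R, image, light_dir):
    """Shading constraint (Lambertian simplification): image ~ R * max(n . l, 0)."""
    shade = np.clip(normals_from_depth(Z) @ np.asarray(light_dir, float), 0.0, None)
    return image - R * shade
```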

Space-time constraints SFS alone is not enough. For more reliable detailed motions, we propose using space-time constraints. 10 Highly sensitive to noise / After applying our spatial constraints

Spatial constraints A face is a mostly continuous surface with high spatial coherence. 11
E_spatial(p) = K_cs · Σ_{j ∈ Neighbor(p)} w_j · (Z_p^t − Z_j^t)²
∆ Neighbor(p) denotes the 8-neighbor pixel set
∆ w_j is an adaptive weight
∆ K_cs is the weight for spatial constraints
This term reduces the noise in the recovered depths Z_p^t, Z_j^t.
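A short sketch of how such a spatial term can be evaluated: every pixel's depth is compared with its 8-neighbors, weighted by w_j and K_cs. The intensity-based adaptive weight below is an assumption; the slide does not spell out its exact form.

```python
import numpy as np

def spatial_energy(Z, image, K_cs=1.0, sigma_i=0.1):
    """Weighted squared depth differences over the 8-neighborhood of each pixel."""
    E = 0.0
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        Zj = np.roll(np.roll(Z, dy, axis=0), dx, axis=1)      # neighbor depths
        Ij = np.roll(np.roll(image, dy, axis=0), dx, axis=1)  # neighbor intensities
        # Adaptive weight (assumed): trust neighbors with similar intensity more.
        w = np.exp(-((image - Ij) ** 2) / (2.0 * sigma_i ** 2))
        E += np.sum(w * (Z - Zj) ** 2)  # borders handled naively by wrap-around
    return K_cs * E
```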

Temporal constraints Frame-by-frame SFS still flickers. According to biomechanical properties, a human facial surface should transition gradually between expressions. 12 A video image sequence (frames T0, T1, T2) showing flicker
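In the same spirit, the temporal constraint can be sketched as a penalty on abrupt per-pixel depth changes between consecutive frames; the quadratic form and the weight K_ct are assumptions for illustration.

```python
import numpy as np

def temporal_energy(Z_t, Z_prev, K_ct=1.0):
    """Penalize abrupt depth changes between consecutive frames to suppress flicker."""
    return K_ct * np.sum((Z_t - Z_prev) ** 2)
```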

Space-time shape-from-shading Finally, our objective function becomes: shading constraints + spatial constraints + temporal constraints = space-time shape-from-shading. 13
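Putting the pieces together, here is a minimal sketch of minimizing such a combined objective over the depth values of one window with an off-the-shelf optimizer. It reuses the hypothetical helpers from the sketches above; the relative weights and the choice of L-BFGS-B are assumptions, not the paper's solver.

```python
import numpy as np
from scipy.optimize import minimize

def spacetime_objective(z_flat, shape, image, R, light_dir, Z_prev):
    """Shading + spatial + temporal constraints evaluated on one window."""
    Z = z_flat.reshape(shape)
    E_shading  = np.sum(shading_residual(Z, R, image, light_dir) ** 2)
    E_spatial  = spatial_energy(Z, image, K_cs=1.0)
    E_temporal = temporal_energy(Z, Z_prev, K_ct=1.0)
    return E_shading + E_spatial + E_temporal

# Usage (illustrative): refine one window's depth, starting from the mocap surface.
# res = minimize(spacetime_objective, Z_init.ravel(),
#                args=(Z_init.shape, image, R, light_dir, Z_prev),
#                method='L-BFGS-B')
# Z_refined = res.x.reshape(Z_init.shape)
```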

Performance issue The number of degrees of freedom is too large if our optimization is applied to the whole face: D.O.F. = N×M (pixels) × i (frames). Instead, we assign some small windows, preferring areas with more wrinkles and creases. 14 Video image sequence F1, F2, F3, …, Fi with an N×M window
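The slide does not say how the windows are chosen beyond preferring wrinkled areas; one simple automatic heuristic (purely an assumption) is to rank candidate windows by local gradient activity, which tends to be high around wrinkles and creases.

```python
import numpy as np

def rank_windows_by_wrinkles(image, win=32):
    """Rank non-overlapping win x win windows by mean gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    H, W = image.shape
    scores = {}
    for y in range(0, H - win + 1, win):
        for x in range(0, W - win + 1, win):
            scores[(y, x)] = mag[y:y + win, x:x + win].mean()
    # Highest-scoring windows first; keys are top-left window corners.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```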

Experiment Illumination-controlled (single light source). Two video streams {C1}, {C2} (HDV, 1280×720, 30 fps). We pasted 25 to 30 markers on the subject's face. 15

Facial detail results and comparison 16

Result of synthesis Generic model: 6078 vertices, 6315 polygons. Pipeline: deformation (RBF) → subdivision → per-pixel normal mapping. 17
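For the per-pixel normal mapping stage, the refined detail depth can be baked into a standard tangent-space normal map applied on the subdivided mesh. This sketch assumes the usual [-1, 1] to [0, 255] RGB encoding; it is not taken from the paper.

```python
import numpy as np

def depth_to_normal_map(Z):
    """Convert a detail depth map into an 8-bit RGB tangent-space normal map."""
    dzdx = np.gradient(Z, axis=1)
    dzdy = np.gradient(Z, axis=0)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(Z)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return ((n * 0.5 + 0.5) * 255.0).astype(np.uint8)  # [-1, 1] -> [0, 255]
```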

Result of animation 18

Conclusion & Future work We propose capturing detailed motion by conventional mocap and advanced shape-from-shading. It requires no additional devices or painted pigments, and it does not restrict the wrinkle shape. With spatial and temporal constraints, our optimization-based shape-from-shading is more reliable. Reflectance parameters are also estimated. 19

Conclusion & Future work In addition to the Phong model, we will extend the concept to other reflectance models (e.g., the Cook-Torrance BRDF model, BSSRDF, etc.). Currently, SFS is applied only to designated segments; a more efficient SFS for the whole face would make our animation more realistic. 20

Thank you for your attention! 21 Forehead details / between eyebrows / smile