Gaze Driven Animation of Eyes
Debanga Raj Neog, Anurag Ranjan, João L. Cardoso, Dinesh K. Pai
Sensorimotor Systems Lab, Department of Computer Science, The University of British Columbia


Goal. To generate realistic and interactive computer-generated animations of eyes and the surrounding soft tissues.

Contributions.
1. A pipeline for measurement and motion estimation of the soft tissues surrounding the eyes using high-speed monocular capture.
2. Construction of a data-driven model of eye movement that includes movement of the globes, periorbital soft tissues, and eyelids.
3. A system for interactive animation of all the soft tissues of the eye, driven by gaze and additional parameters.

Eye movement without any skin deformation looks unrealistic (a). Our model interactively computes skin deformation, introducing realism into eye movements (b). Our generative lid and skin model is trained on one actor (c) and can then be used to transfer expressions to other actors (d). Wrinkles can also be added using our wrinkle model to produce realistic skin deformation (e). We also developed a real-time WebGL application that generates skin deformation at an interactive rate of 60 fps for user-controlled gaze (f).

1. Measurement and Motion Estimation

Measurement. We use a single Grasshopper3 [1] camera to capture at up to 120 fps with a resolution of 1960x1200 pixels. A subject-specific head mesh is acquired with FaceShift [2] technology using a Kinect RGB/D camera.

Gaze estimation. We estimate 3D gaze as the globe configuration from the video using the method described in [3]. The pupil and iris are segmented with an active contour algorithm [4]. Eyelid margins are detected as the boundaries of the color-segmented sclera region.

(Figure: overview of our system.)
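The pupil segmentation step can be illustrated with a much simpler stand-in than the active contour model of [4]: locating the dark pupil blob by intensity thresholding and reading off its centroid and equivalent-area radius. Everything below (function name, threshold, synthetic image) is an illustrative assumption, not the poster's implementation.

```python
import numpy as np

def detect_pupil(gray, thresh=0.2):
    """Locate the pupil as the centroid of dark pixels.

    The poster uses an active-contour (snake) model [4]; this is a
    simplified stand-in that finds the dark blob's center and the
    radius of the disk with the same area.
    """
    mask = gray < thresh                   # dark pixels = pupil candidates
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()          # centroid of the dark region
    r = np.sqrt(mask.sum() / np.pi)        # equal-area disk radius
    return cx, cy, r

# Synthetic 100x100 "eye": bright sclera with a dark pupil at (60, 40), r = 10.
img = np.ones((100, 100))
yy, xx = np.mgrid[0:100, 0:100]
img[(xx - 60) ** 2 + (yy - 40) ** 2 <= 10 ** 2] = 0.0

cx, cy, r = detect_pupil(img)
print(round(cx), round(cy), round(r))   # prints: 60 40 10
```

A real pipeline would refine this coarse localization, e.g. by initializing the snake on the blob boundary.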

Skin motion estimation. The 3D motion of the skin is represented with the reduced-coordinate representation of skin introduced in [5]. An illumination-invariant texture component is tracked using a dense optical flow technique while simultaneously generating high-resolution textures.

(Figures: pupil and eyelid detection; 3D skin motion tracking.)

2. Generative Model Construction

We factor the generative model into two parts: an eyelid model and a skin motion model. Skin motion is estimated from the eyelid shape, which is recovered from the gaze parameters by our lid model. Other affect parameters can also be included in the model to generate different facial expressions. The neural-network-based model and the linear model give similar error performance, with the neural network being slightly better.
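The optical-flow tracking above can be illustrated at its smallest scale: the classic Lucas-Kanade least-squares estimate of a single translational displacement for an image window. This is a generic sketch, not the poster's dense tracker; the synthetic frames and window size are assumptions.

```python
import numpy as np

def lucas_kanade_window(I0, I1):
    """Estimate one translational flow vector (u, v) between two frames
    by solving the Lucas-Kanade least-squares system over the window:
    Ix*u + Iy*v = -It, stacked over all pixels."""
    Ix = np.gradient(I0, axis=1)           # horizontal image gradient
    Iy = np.gradient(I0, axis=0)           # vertical image gradient
    It = I1 - I0                           # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Toy frames: a smooth Gaussian blob shifted one pixel to the right.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx: np.exp(-((xx - cx) ** 2 + (yy - 32) ** 2) / 50.0)
I0, I1 = blob(31.0), blob(32.0)

u, v = lucas_kanade_window(I0, I1)
print(u, v)   # u close to 1.0, v close to 0.0
```

Dense flow methods solve a regularized version of this per pixel; the poster additionally tracks an illumination-invariant texture component so that shading changes are not mistaken for motion.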

3. Interactive Rendering

The PCA+MLR (principal component analysis with multivariate linear regression) model is easier to implement on GPUs. We used this model for our WebGL application with real-time interactivity (60 fps).

Reconstruction Errors and Training Times

  Model      | Training time (s)
  PCA + MLR  |
  PCA + NN   |

Model transfer. Using our generative model trained on one character, we produced realistic skin deformation on other characters.

Wrinkle modeling. Realistic wrinkles are produced in the animations using a shape-from-shading approach; details can be observed around the eye.

Static scene observation. We used gaze data from a subject observing a painting, obtained with an eye tracker, to drive our system, producing realistic eyelid and skin motion.

(Figures: training of the gaze-parameterized skin motion model; model trained on subject A and expression transferred to subject B; skin without and with wrinkles; static scene observation, with the red dot showing the gaze point on the image.)
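The PCA+MLR model reduces, at run time, to two small matrix products (gaze parameters to PCA coefficients, coefficients to skin displacements), which is what makes it easy to evaluate in a GPU shader. Below is a toy sketch of training and evaluation with plain linear algebra; the dimensions and synthetic data are assumptions, not the paper's actual gaze/skin parameterization.

```python
import numpy as np

def fit_pca_mlr(gaze, skin, k=8):
    """PCA-compress skin displacement samples, then fit a linear map
    (multivariate least squares) from gaze parameters to the
    k-dimensional PCA coefficients."""
    mean = skin.mean(axis=0)
    X = skin - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                              # (k, d) PCA basis
    coeffs = X @ basis.T                        # (n, k) training coefficients
    G = np.hstack([gaze, np.ones((len(gaze), 1))])  # gaze + bias term
    W, *_ = np.linalg.lstsq(G, coeffs, rcond=None)
    return mean, basis, W

def predict(mean, basis, W, g):
    """Per-frame evaluation: two small matrix products."""
    c = np.append(g, 1.0) @ W
    return mean + c @ basis

# Toy data: 2-D gaze (yaw, pitch) driving a 30-D "skin" vector linearly.
rng = np.random.default_rng(0)
gaze = rng.standard_normal((200, 2))
A = rng.standard_normal((2, 30))
skin = gaze @ A + 0.5

mean, basis, W = fit_pca_mlr(gaze, skin, k=2)
err = np.abs(predict(mean, basis, W, gaze[0]) - skin[0]).max()
print(err < 1e-6)   # prints: True (the toy data is exactly linear)
```

Because `predict` is just `(k+1)`-by-`k` and `k`-by-`d` multiplies, it maps directly onto a vertex-shader evaluation in WebGL, consistent with the reported 60 fps interactivity.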

Results

Web-based interactive application. Skin motion during looking around and raising the eyebrows is generated with our fast web-based interactive implementation.

Saliency-map controlled movement. Using saliency maps [6], we computed gaze signals from a hockey video and fed them into our model to produce skin movements.

Vestibulo-ocular reflex. Although the head is relatively immobile during training, we can generate movement of the skin around the eye regions in novel scenarios, such as the vestibulo-ocular reflex, in which the head moves.

(Figures: looking around; raising eyebrows; saliency-map controlled movement; vestibulo-ocular reflex.)

References:
1. Point Grey Research, Canada.
2. Weise, T., Bouaziz, S., Li, H., & Pauly, M. (2011). Realtime performance-based facial animation. ACM Transactions on Graphics (TOG), 30(4), 77.
3. Moore, S. T., Haslwanter, T., Curthoys, I. S., & Smith, S. T. (1996). A geometric basis for measurement of three-dimensional eye position using image processing. Vision Research, 36(3).
4. Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International Journal of Computer Vision, 1(4).
5. Li, D., Sueda, S., Neog, D. R., & Pai, D. K. (2013). Thin skin elastodynamics. ACM Transactions on Graphics (TOG), 32(4).
6. Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, (11).
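The saliency-driven gaze signal used in the results can be sketched as picking the most salient pixel per frame and temporally smoothing the resulting point track. This is a generic illustration of driving gaze from Itti-style maps [6]; the smoothing scheme and constant are assumptions, not the poster's method.

```python
import numpy as np

def gaze_from_saliency(saliency_maps, alpha=0.5):
    """Turn a sequence of 2-D saliency maps into a smoothed gaze signal:
    take the argmax (most salient pixel) of each frame, then apply
    exponential smoothing so the synthetic gaze does not jitter."""
    gaze, prev = [], None
    for s in saliency_maps:
        y, x = np.unravel_index(np.argmax(s), s.shape)
        p = np.array([x, y], dtype=float)
        prev = p if prev is None else alpha * p + (1 - alpha) * prev
        gaze.append(prev.copy())
    return np.array(gaze)               # (n_frames, 2) gaze points

# Toy sequence: a single bright (salient) spot drifting to the right.
frames = []
for t in range(5):
    s = np.zeros((32, 32))
    s[16, 10 + 2 * t] = 1.0
    frames.append(s)

track = gaze_from_saliency(frames)
print(track[0], track[-1])
```

A real system would convert the smoothed image-space point into globe rotation angles before feeding it to the gaze-driven model.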