Gaze Driven Animation of Eyes

Presentation transcript:

1. Gaze Driven Animation of Eyes
Debanga Raj Neog, Anurag Ranjan, João L. Cardoso, Dinesh K. Pai
Sensorimotor Systems Lab, Department of Computer Science, The University of British Columbia

Goal: to generate realistic and interactive computer-generated animations of eyes and the surrounding soft tissues.

Contributions:
1. A pipeline for measurement and motion estimation of the soft tissues surrounding the eyes using high-speed monocular capture.
2. Construction of a data-driven model of eye movement that includes movement of the globes, periorbital soft tissues, and eyelids.
3. A system for interactive animation of all the soft tissues of the eye, driven by gaze and additional parameters.

Figure: Eye movement without any skin deformation looks unrealistic (a). Our model interactively computes skin deformation, introducing realism into eye movements (b). Our generative lid and skin model is trained on one actor (c) and can then be used to transfer expressions to other actors (d). Wrinkles can also be added using our wrinkle model to produce realistic skin deformation (e). We also developed a real-time WebGL application that generates skin deformation at an interactive rate of 60 fps for user-controlled gaze (f).

2. System Overview

Measurement. We use a single Grasshopper3 [1] camera to capture video at up to 120 fps with a resolution of 1960x1200 pixels. A subject-specific head mesh is acquired with FaceShift [2] using a Kinect RGB-D camera.

Gaze estimation. We estimate 3D gaze as the globe configuration from the video using the method described in [3]. The pupil and iris are segmented with an active contour algorithm [4]. Eyelid margins are detected as the boundaries of the color-segmented sclera region.

Figure: Overview of our system (1: measurement and motion estimation; 2: generative model construction; 3: interactive rendering).
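The eyelid-margin step above can be sketched with a plain intensity threshold standing in for the paper's full color segmentation of the sclera. The function name, the threshold value, and the synthetic image are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lid_margins(eye_img, sclera_thresh=0.6):
    """Detect upper/lower eyelid margins as the vertical extent of the
    bright (sclera-like) region in each image column.

    eye_img: 2D float array in [0, 1]. sclera_thresh is an assumed
    brightness cutoff standing in for full color segmentation.
    Returns (upper, lower): per-column row indices, or -1 where no
    sclera pixel was found in that column.
    """
    mask = eye_img > sclera_thresh                 # crude "sclera" mask
    upper = np.full(eye_img.shape[1], -1)
    lower = np.full(eye_img.shape[1], -1)
    for c in range(eye_img.shape[1]):
        rows = np.flatnonzero(mask[:, c])
        if rows.size:
            upper[c], lower[c] = rows[0], rows[-1]  # first/last sclera row
    return upper, lower

# Synthetic 10x5 "eye strip": sclera occupies rows 3..6 of every column.
img = np.zeros((10, 5))
img[3:7, :] = 0.9
up, lo = lid_margins(img)
```

A real pipeline would run this on the color-segmented sclera mask rather than a brightness threshold, and smooth the per-column margins into a lid curve.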

3. Measurement and Motion Estimation (continued)

Skin motion estimation. The 3D motion of the skin is represented using the reduced-coordinate representation of skin introduced in [5]. An illumination-invariant texture component is tracked with a dense optical flow technique while simultaneously generating high-resolution textures.

Generative Model Construction. We factor the generative model into two parts: an eyelid model and a skin motion model. Skin motion is estimated from the eyelid shape, which is recovered from the gaze parameters using our lid model. Other affect parameters can also be included in the model to generate different facial expressions. Both the neural-network-based model and the linear model give similar error performance, with the neural network being slightly better.

Figures: Pupil and eyelid detection; 3D skin motion tracking.
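The two-stage factoring above (gaze parameters to eyelid shape to skin motion) can be sketched with the linear variant of the model. All data, dimensions, and names here are synthetic assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: gaze params -> lid shape -> skin motion,
# generated by hidden linear maps so an exact fit is recoverable.
G = rng.normal(size=(200, 3))       # gaze parameters (e.g. yaw, pitch, blink)
A_true = rng.normal(size=(3, 8))    # hidden lid mapping
B_true = rng.normal(size=(8, 30))   # hidden skin mapping
L = G @ A_true                      # eyelid shape coefficients
S = L @ B_true                      # skin motion coefficients

# Stage 1: lid model, least-squares fit of gaze -> lid shape.
A, *_ = np.linalg.lstsq(G, L, rcond=None)
# Stage 2: skin model, least-squares fit of lid shape -> skin motion.
B, *_ = np.linalg.lstsq(L, S, rcond=None)

def animate(gaze):
    """Predict skin motion coefficients from gaze parameters."""
    return (gaze @ A) @ B

err = np.abs(animate(G) - S).max()
```

The same factoring holds if either stage is replaced by a small neural network, which is the slightly-better-performing variant mentioned above.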

4. Training of the gaze-parameterized skin motion model

Model transfer. Using our generative model trained on one character, we produce realistic skin deformation on other characters. (Figure: model trained on subject A; expression transferred to subject B.)

Wrinkle modeling. Realistic wrinkles are produced in the animations using a shape-from-shading approach; details can be observed around the eye. (Figure: without wrinkles vs. with wrinkles.)

Static scene observation. We used gaze data from a subject observing a painting, obtained with an eye tracker, to drive our system, producing realistic eyelid and skin motion. (Figure: the red dot shows the gaze point on the image.)

Interactive Rendering. The PCA+MLR model (principal component analysis with multivariate linear regression) is easier to implement on GPUs. We used this model for our WebGL application with real-time interactivity (60 fps).

Reconstruction errors and training times:

Model      | Reconstruction error | Training time (s)
PCA + MLR  | 1.156                | 19.55
PCA + NN   | 0.265                | 8.75
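The reason PCA+MLR maps so directly onto a GPU is that the entire runtime path collapses into two small matrix products plus an add, which a WebGL shader can evaluate per vertex. A numpy sketch of that path, on synthetic data with assumed dimensions (the actual application is implemented in WebGL, not Python):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: rows are flattened skin-motion vectors,
# paired with gaze parameters for the same frames.
X = rng.normal(size=(100, 60))
G = rng.normal(size=(100, 3))

# PCA: mean plus truncated basis from the SVD of the centered data.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 8
basis = Vt[:k]                        # (k, 60) principal directions
coeffs = (X - mean) @ basis.T         # per-frame PCA coefficients

# MLR: linear regression from gaze parameters to PCA coefficients.
W, *_ = np.linalg.lstsq(G, coeffs, rcond=None)

def skin_from_gaze(gaze):
    """Runtime evaluation: gaze -> coefficients -> skin motion.
    Two matrix products and one add; trivially shader-friendly."""
    return mean + (gaze @ W) @ basis

out = skin_from_gaze(G[0])
```

Because `W` and `basis` are fixed after training, they can be uploaded once as shader uniforms or textures, leaving only the tiny per-frame products to evaluate at 60 fps.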

5. Results

Web-based interactive application. Skin motion while looking around and raising the eyebrows is generated by our fast web-based interactive implementation.

Saliency-map-controlled movement. Using saliency maps [6], we computed gaze signals from a hockey video and fed them into our model to produce skin movements.

Vestibulo-ocular reflex. Although the head is relatively immobile during training, we can generate movement of the skin around the eye region in novel scenarios, such as the vestibulo-ocular reflex, in which the head moves.

References:
1. Point Grey Research, Canada.
2. Weise, T., Bouaziz, S., Li, H., & Pauly, M. (2011). Realtime performance-based facial animation. ACM Transactions on Graphics, 30(4), 77.
3. Moore, S. T., Haslwanter, T., Curthoys, I. S., & Smith, S. T. (1996). A geometric basis for measurement of three-dimensional eye position using image processing. Vision Research, 36(3), 445-459.
4. Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International Journal of Computer Vision, 1(4), 321-331.
5. Li, D., Sueda, S., Neog, D. R., & Pai, D. K. (2013). Thin skin elastodynamics. ACM Transactions on Graphics, 32(4), 49.
6. Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1254-1259.
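As a closing sketch, the saliency-map-controlled gaze signal from slide 5 reduces, in its simplest form, to picking the most salient image location each frame. The function name and toy map below are assumptions; a real driver in the spirit of [6] would add temporal smoothing and inhibition of return:

```python
import numpy as np

def gaze_from_saliency(saliency):
    """Per-frame gaze target: the location of maximum saliency.
    saliency: 2D non-negative map (e.g. computed from a video frame).
    Returns (row, col) of the winning location."""
    r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
    return r, c

# Toy saliency map with a single hot spot at (10, 20).
sal = np.zeros((48, 64))
sal[10, 20] = 1.0
gaze = gaze_from_saliency(sal)
```

The resulting (row, col) sequence, converted to gaze angles, is what would be fed into the generative model to drive eyelid and skin motion.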

