eNTERFACE ’10, Amsterdam, July-August 2010
Hamdi Dibeklioğlu, Ilkka Kosunen, Marcos Ortega, Albert Ali Salah, Petr Zuzánek
Goal of the Project
Responsive photograph frame
◦ User interaction leads to different responses
Modules of the project
◦ Video segmentation module: dictionary of responses
◦ Behaviour understanding
  Offline: labelling the dictionary
  Online: clustering user actions
◦ System logic: linking user actions to responses
Module 1: Offline Segmentation
5 video recordings (~1.5-2 min.)
◦ Same individual
◦ Different actions and expressions
Manual annotation of videos
◦ ANVIL tool
◦ Annotated by different individuals
Automatic segmentation
◦ Segmentation based on actions
◦ Optical flow: amount of activity over time
Activity calculation based on feature tracking over the sequence
Feature detection
◦ Shi-Tomasi corner detection algorithm
Feature tracking
◦ Lucas-Kanade feature tracking algorithm
◦ Pyramidal implementation (Bouguet)
Optical Flow Computation
Movement analysis
To find a calm segment, search for a long run of frames whose calculated optical flow stays below a threshold (we used 40% of the average optical flow computed over all frames). To find an active segment, search for frames with a lot of optical flow, then search forward and backward for the surrounding calm segments.
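The calm-segment search can be sketched as a run-length scan over the per-frame activity values; the `min_len` parameter is an assumption (the slides do not state the minimum segment length).

```python
def find_calm_segments(activity, ratio=0.4, min_len=30):
    """Return (start, end) frame-index pairs where activity stays below
    ratio * mean(activity) for at least min_len consecutive frames."""
    threshold = ratio * sum(activity) / len(activity)
    segments, start = [], None
    for i, a in enumerate(activity):
        if a < threshold:
            if start is None:
                start = i           # a calm run begins
        else:
            if start is not None and i - start >= min_len:
                segments.append((start, i))
            start = None            # the run is broken
    if start is not None and len(activity) - start >= min_len:
        segments.append((start, len(activity)))
    return segments
```

Active segments are then the spans between consecutive calm segments.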
Manual vs. Automatic Segmentation
Module 2: Real-time Feature Analysis
Face detection activates the system
◦ Viola-Jones face detector
User’s behaviour can be monitored via
◦ Face detection
◦ Eye detection: isophote-curve based eye detection (Valenti et al.)
◦ Optical flow energy: OpenCV Lucas-Kanade algorithm
◦ Colour features
◦ Facial feature analysis: the eMotion system
User Tracking
Face and eye detection: EyeAPI
Face model: 16 surface patches embedded in Bézier volumes
A Piecewise Bézier Volume Deformation (PBVD) tracker is used to trace the motion of the facial features.
* R. Valenti, N. Sebe, and T. Gevers. Facial expression recognition: A fully integrated approach. In ICIAPW, pages 125–130, 2007.
12 motion units
Naive Bayes (NB) classifier for categorizing expressions
◦ NB advantage: the posterior probabilities allow a soft output of the system
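The soft-output property can be illustrated with a small Gaussian Naive Bayes over motion-unit feature vectors. This is a sketch, not the authors' classifier; the Gaussian class-conditional model is an assumption made here to show how posteriors yield soft scores rather than hard labels.

```python
import numpy as np

class SoftNaiveBayes:
    """Gaussian NB whose predict_proba returns per-class posteriors."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mean = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict_proba(self, X):
        # log p(x|c) under independent Gaussians, plus log prior
        log_lik = -0.5 * (np.log(2 * np.pi * self.var)[None]
                          + (X[:, None, :] - self.mean[None]) ** 2
                          / self.var[None]).sum(-1)
        log_post = log_lik + np.log(self.prior)
        log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(log_post)
        return p / p.sum(axis=1, keepdims=True)
```

The posterior vector (one probability per expression) can drive the system response gradually instead of switching on a single hard decision.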
Happiness Surprise Anger Disgust Fear Sadness
Module 3: System Response
Linking user actions and system responses
An action queue is maintained
◦ Different user inputs (transitions) lead to different responses (states)
◦ The responses (segments) are ‘unlocked’ one by one
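The queue-driven state logic can be sketched as follows; the class and state names are hypothetical, chosen only to illustrate actions acting as transitions and responses as states that get unlocked.

```python
from collections import deque

class ResponseLogic:
    """Queued user actions drive transitions; each reached state maps to
    a response segment, unlocked the first time it is visited."""

    def __init__(self, transitions, start="idle"):
        self.transitions = transitions   # {(state, action): next_state}
        self.state = start
        self.queue = deque()
        self.unlocked = {start}

    def push(self, action):
        self.queue.append(action)        # e.g. a clustered user behaviour

    def step(self):
        """Consume one queued action; return the response state to play."""
        if not self.queue:
            return self.state            # no input: keep current response
        action = self.queue.popleft()
        self.state = self.transitions.get((self.state, action), self.state)
        self.unlocked.add(self.state)
        return self.state
```

An unknown (state, action) pair leaves the state unchanged, so only recognized behaviours unlock new segments.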
Before learning vs. after learning
Currently two external programs are employed:
◦ SplitCam
◦ eMotion
Glyphs are used to provide feedback to the user
◦ Glyph brightness is related to distance to activation
◦ Once a glyph is activated, the same user activity will elicit the same response
◦ Each user can have different behaviours activating glyphs
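The brightness-to-distance mapping can be sketched as below; the linear falloff and the `radius` parameter are assumptions, since the slides only state that brightness relates to distance from activation.

```python
def glyph_brightness(feature_vec, target_vec, radius=1.0):
    """Brightness in [0, 1]: 1.0 at activation, fading to 0.0 with
    Euclidean distance between the user's behaviour features and the
    glyph's target behaviour."""
    dist = sum((a - b) ** 2 for a, b in zip(feature_vec, target_vec)) ** 0.5
    return max(0.0, 1.0 - dist / radius)
```

Rendering each glyph at this brightness gives the user continuous feedback on how close their current behaviour is to triggering a response.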
◦ Work on the learning module
◦ Testing the segmentation parameters
◦ The dual frame mode
◦ Speeding up the system
◦ Wizard of Oz study
◦ Usability studies
◦ SEMAINE integration?