Project #2: Multimodal Caricatural Mirror (Intermediate Report)


Project Summary
Create a multimodal caricatural mirror:
- Multimodal = facial + vocal
- Caricatural = amplify emotions
- Mirror = face your avatar!
2/24/2019

Outline
Project Architecture: 2 versions
Visual Modality:
- Face Tracking
- Facial Features Tracking
- Facial Expression Recognition
- Facial Animation
Audio Modality:
- Vocal Features Extraction
- Emotion Detection in Speech
- Prosody Amplification

Project Architecture #1
[Block diagram: the user's face is processed by face tracking, facial features tracking and movements amplification, while the speech signal goes through vocal features extraction, emotion detection and prosody amplification; the two streams are fused and drive a facial animation shown on a wide screen.]

Project Architecture #2 (the 'Mamama' option)
[Block diagram with the same components: face tracking, facial features tracking, vocal features extraction, emotion recognition, prosody amplification, fusion, facial animation, displayed on a wide screen.]

Face Tracking
We chose to use open-source software: the OpenCV face tracker.
- Provides real-time face tracking using C/C++
- Based on the open-source Intel Computer Vision Library (OpenCV)
Example using the OpenCV face tracker, with OUR face tracked!
[Picture/video to be inserted]

Facial Features Tracking
Step 1: Facial Features Detection (1st frame)
- Compute the image's trace transform (luminance on M vertical lines)
- From sets of local minima, infer the positions of the facial features (eyebrows, eyes and mouth)
- Build a binary image from the N darkest pixels per line
- Facial feature positions are detected among the above-mentioned dark pixels
→ Automatic initialization of the Candide grid (1st frame)
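The dark-pixel step of the detection above can be sketched in numpy; this is a simplified illustration only (the trace transform and the Candide grid fitting are omitted, and `n_darkest` and the row-count threshold are assumed values):

```python
import numpy as np

def dark_pixel_map(luminance, n_darkest=10):
    """Build a binary image keeping the N darkest pixels of each column.

    Facial features (eyebrows, eyes, mouth) tend to be local luminance
    minima along vertical lines, so they survive this thresholding.
    """
    h, w = luminance.shape
    mask = np.zeros((h, w), dtype=bool)
    for x in range(w):
        darkest = np.argsort(luminance[:, x])[:n_darkest]
        mask[darkest, x] = True
    return mask

def feature_rows(mask):
    """Candidate feature rows: rows dark across more than half the columns."""
    counts = mask.sum(axis=1)
    return np.where(counts > mask.shape[1] // 2)[0]
```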

Facial Features Tracking

Grid Initialization

Facial Features Tracking
Step 2: Facial Features Tracking (all frames > 1)
[Video missing here …]

Emotions Modeling
Four keyframe positions to interpolate for each emotion: closed mouth, 'mama', intermediate, final emotion.
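The interpolation between keyframe positions can be sketched as linear blending of grid vertex coordinates. A toy numpy version (the keyframe coordinates below are made up for illustration, not real Candide grid data):

```python
import numpy as np

# Hypothetical keyframes: each is an (n_vertices, 2) array of grid points.
closed_mouth = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, -0.5]])
final_emotion = np.array([[0.0, 0.2], [1.0, 0.2], [0.5, -0.9]])  # e.g. happiness

def interpolate(start, end, t):
    """Linearly blend two keyframe grids; t in [0, 1]."""
    return (1.0 - t) * start + t * end

def sequence(keyframes, t):
    """Piecewise-linear interpolation across an ordered keyframe list,
    e.g. closed mouth -> 'mama' -> intermediate -> final emotion."""
    n_seg = len(keyframes) - 1
    s = min(int(t * n_seg), n_seg - 1)  # which segment t falls into
    local_t = t * n_seg - s             # position within that segment
    return interpolate(keyframes[s], keyframes[s + 1], local_t)
```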

Emotions Modeling
[Illustrations: Happiness, Sadness]

Facial Animation
Among 3D face models, we chose Candide-3 for the animation:
- It includes animation units and MPEG-4 FAPs
- The animation software is written in C++ using the OpenGL and SDL APIs, which are open source and run on many platforms
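Candide-style animation units work by adding weighted per-vertex displacement vectors to a base mesh. A minimal numpy sketch of that idea (the mesh and animation unit below are toy values, not real Candide-3 data, which has 113 vertices):

```python
import numpy as np

# Toy base mesh: (n_vertices, 3) coordinates.
base_mesh = np.array([[0.0, 0.0, 0.0],
                      [0.1, -0.2, 0.0],
                      [-0.1, -0.2, 0.0]])

# Toy animation unit: per-vertex displacement, e.g. a "mouth corner raiser".
au_smile = np.array([[0.0, 0.0, 0.0],
                     [0.02, 0.05, 0.0],
                     [-0.02, 0.05, 0.0]])

def deform(base, aus, weights):
    """Apply weighted animation units: v = base + sum_i w_i * AU_i."""
    out = base.copy()
    for au, w in zip(aus, weights):
        out += w * au
    return out
```

Caricaturing an emotion then amounts to pushing the animation-unit weights past their natural values before rendering.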

Vocal Features Extraction
For the moment: pitch only.
- Pitch is extracted by means of the autocorrelation method and modified by means of PSOLA
- Example: the downtrend of the pitch is removed, the pitch movements are amplified, and the downtrend is set back
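The autocorrelation pitch estimate and the downtrend-preserving amplification described above can be sketched as follows; this is a simplified illustration (the F0 search range and the amplification factor are assumptions, and the PSOLA resynthesis step is omitted):

```python
import numpy as np

def pitch_autocorr(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate F0 of a voiced frame via the autocorrelation method."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])  # strongest periodicity in the F0 range
    return sr / lag

def amplify_contour(f0, factor=1.5):
    """Remove the linear downtrend, scale the pitch movements, restore it."""
    t = np.arange(len(f0))
    slope, intercept = np.polyfit(t, f0, 1)  # linear trend of the contour
    trend = slope * t + intercept
    return trend + factor * (f0 - trend)
```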

Emotion Detection & Prosody Amplification
For vocal features, our aim is to classify:
- Emotions inducing small pitch variations
- Emotions inducing high pitch variations
This can be done based on pitch or on other features, such as spectral ones.
[Audio examples: Original / Pitch-powered]
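The two-way split above can be sketched as thresholding a pitch-variation statistic after detrending; the threshold value and the emotion groupings in the comments are illustrative assumptions:

```python
import numpy as np

def pitch_variation_class(f0, threshold=15.0):
    """Classify a pitch contour by the spread of its movements (in Hz).

    Returns 'high-variation' (e.g. joy, surprise) or 'small-variation'
    (e.g. sadness, boredom); these groupings are only illustrative.
    """
    t = np.arange(len(f0))
    slope, intercept = np.polyfit(t, f0, 1)   # remove the downtrend
    residual = f0 - (slope * t + intercept)
    spread = residual.std()
    return "high-variation" if spread > threshold else "small-variation"
```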