UNIVERSITY OF MURCIA (SPAIN) ARTIFICIAL PERCEPTION AND PATTERN RECOGNITION GROUP - REFINING FACE TRACKING WITH INTEGRAL PROJECTIONS - Ginés García Mateos, Dept. de Informática y Sistemas, University of Murcia (Spain)
UNIVERSITY OF MURCIA (SPAIN) ARTIFICIAL PERCEPTION AND PATTERN RECOGNITION GROUP REFINING FACE TRACKING WITH INTEGRAL PROJECTIONS Ginés García Mateos Dept. de Informática y Sistemas University of Murcia - SPAIN

REFINING FACE TRACKING WITH INTEGRAL PROJECTIONS - Ginés García Mateos - AVBPA'2003, Guildford, June 2003

Introduction
Main objective: develop a new technique to track human faces and facial features:
– Working in real time: fast processing
– Under realistic conditions: robust to facial expressions, lighting conditions, 3D head pose and movements
– With high location accuracy: location of facial features (eyes, nose, mouth)

Introduction
Index of the presentation:
– Face integral projections
– Integral projection models
– Alignment of projections
– The tracking process
– Experimental results
– Conclusions

Face integral projections
Definition. Let i(x,y) be an image and R(i) a region in it:
– Vertical integral projection: P_VR : {y_min, ..., y_max} → ℝ, with
  P_VR(y) = Σ_x i(x,y), ∀(x,y) ∈ R(i)
– Horizontal integral projection: P_HR : {x_min, ..., x_max} → ℝ, with
  P_HR(x) = Σ_y i(x,y), ∀(x,y) ∈ R(i)
[Figure: vertical projection P_VFACE(y) of the face region and horizontal projection P_HEYES(x) of the eyes' region]
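The definitions above can be sketched in a few lines of NumPy. Here the projections are computed as row and column means rather than raw sums (an assumption; a mean only rescales the sum, and makes projections from regions of different sizes comparable):

```python
import numpy as np

def integral_projections(region):
    """Vertical and horizontal integral projections of a grey-scale
    image region (2-D array). Means are used instead of raw sums so
    the projection scale is independent of the region size."""
    region = np.asarray(region, dtype=float)
    p_v = region.mean(axis=1)  # P_V(y): one value per row y
    p_h = region.mean(axis=0)  # P_H(x): one value per column x
    return p_v, p_h

# Tiny example on a 2x3 "image"
pv, ph = integral_projections([[0, 10, 20], [30, 40, 50]])
```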

Face integral projections
Dimensionality reduction: 3D world → 2D images → 1D integral projections
Advantages:
– Fast to compute and to process
Disadvantages:
– Loss of information. Is it relevant for the problem?
What happens when applied to human faces?

Face integral projections
Different individuals

Face integral projections
Different facial expressions

Face integral projections
Different segmented regions

Integral projection models
The face integral projection is an interesting and robust feature for tracking.
It has previously been applied using heuristic analysis: max-min search, fuzzy logic, thresholding of projections.
Proposal: define and work with adaptable projection models.
How can a variety of projection patterns be modelled?

Integral projection models
A projection model is a pair of functions:
– M : {m_min, ..., m_max} → ℝ (mean)
– V : {m_min, ..., m_max} → ℝ (variance)
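A (mean, variance) model of this kind can be learnt from a set of example projections, all resampled to the same length: M is the pointwise mean and V the pointwise variance. This is a hypothetical helper, not code from the paper:

```python
import numpy as np

def learn_projection_model(projections):
    """Learn a projection model (M, V) from example projections of
    equal length: pointwise mean and pointwise variance. A small
    floor keeps V strictly positive for later weighting."""
    p = np.asarray(projections, dtype=float)
    m = p.mean(axis=0)
    v = np.maximum(p.var(axis=0), 1e-6)
    return m, v

# Example with two toy projections
m, v = learn_projection_model([[1, 2], [3, 4]])
```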

Integral projection models
Advantages of working with explicit projection models:
– The model is learnt from examples; during tracking, it is adapted to the tracked face
– A signal-to-model distance can be defined, e.g. the squared deviation from the mean M, weighted by the variance V
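The slide's exact distance formula is not reproduced in this transcript; one plausible form for a (mean, variance) model is a Mahalanobis-style distance, sketched here as an assumption:

```python
import numpy as np

def model_distance(s, m, v):
    """Mean squared deviation of signal S from model mean M, weighted
    by the inverse variance V (Mahalanobis-style; an assumption, the
    slide's exact formula is not shown in this transcript)."""
    s, m, v = (np.asarray(a, dtype=float) for a in (s, m, v))
    return float(np.mean((s - m) ** 2 / v))

# A signal equal to the mean is at distance 0
d0 = model_distance([2, 3], [2, 3], [1, 1])
d1 = model_distance([3, 3], [2, 3], [1, 1])
```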

Integral projection models
Advantages of working with explicit projection models:
– The model can be reprojected
[Figure: reprojection (by outer product) using 1 vertical and 2 horizontal integral projections]
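The outer-product reprojection mentioned above can be sketched as follows. The normalization by the projection mean is one common convention, not something the slide specifies:

```python
import numpy as np

def reproject(p_v, p_h):
    """Reconstruct a 2-D image approximation from a vertical and a
    horizontal projection via their outer product, normalized by the
    vertical projection's mean (one convention; an assumption)."""
    p_v = np.asarray(p_v, dtype=float)
    p_h = np.asarray(p_h, dtype=float)
    mean = p_v.mean() if p_v.mean() != 0 else 1.0
    return np.outer(p_v, p_h) / mean

# A flat vertical projection reproduces the horizontal one in each row
img = reproject([2, 2], [3, 5])
```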

Alignment of projections
Corresponding facial features should be projected onto the same locations.
[Figure: projections of several faces, before and after alignment]

Alignment of projections
Alignment with respect to a model. Problem formulation:
– Let S : {s_min, ..., s_max} → ℝ be a signal
– Let M, V : {m_min, ..., m_max} → ℝ be a model
– Let S' be a family of scalings and translations of S, in both domain and value, with parameters (a, b, c, d, e)
– Find the parameters (a, b, c, d, e) that minimize the distance between the aligned signal and the model
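The alignment idea can be illustrated with a deliberately simplified brute-force search: try a small grid of domain scalings and translations, resample S onto the model's support, and keep the candidate with the lowest variance-weighted distance. The paper fits five parameters; this sketch searches only two (scale and shift of the domain) and is an assumption about one workable approximation:

```python
import numpy as np

def align_to_model(s, m, v, scales=(0.9, 1.0, 1.1), shifts=(-2, -1, 0, 1, 2)):
    """Brute-force alignment sketch: for each (scale a, shift b),
    sample S at positions a*i + b over the model's domain and score
    the candidate with the variance-weighted squared distance to M.
    Returns (best distance, best scale, best shift)."""
    s = np.asarray(s, dtype=float)
    m = np.asarray(m, dtype=float)
    v = np.asarray(v, dtype=float)
    idx = np.arange(len(m), dtype=float)
    best = None
    for a in scales:
        for b in shifts:
            pos = a * idx + b                      # a*i + b over the model domain
            cand = np.interp(pos, np.arange(len(s)), s)  # resample S (clamped ends)
            d = np.mean((cand - m) ** 2 / v)
            if best is None or d < best[0]:
                best = (d, a, b)
    return best

# A signal that is the model shifted by +1 aligns with shift 1, distance 0
best = align_to_model([0, 0, 0, 1, 0], [0, 0, 1, 0, 0], [1, 1, 1, 1, 1])
```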

The tracking process
Tracking is based on the alignment of integral projections. Main steps:
1. Prediction and segmentation
2. Vertical alignment
3. Horizontal alignment
4. Orientation estimation

The tracking process
Features to track:
– Bounding ellipse
– Facial features: eyes and mouth
Input to the tracker:
– Face model
– State of tracking in frame t-1
– Frame t

The tracking process
1. Prediction and segmentation
Null predictor: the locations in frame t-1 are used to extract the face in frame t.
[Figure: predicted location, warped and segmented face]

The tracking process
2. Vertical alignment
Using the vertical projection of the face and the model, align the face vertically: align P_VFACE to M_VFACE, then transform the face with the obtained parameters (a, b, c, d, e).
[Figure: segmented face, model, and the aligned vertical projections]

The tracking process
3. Horizontal alignment
Using the horizontal projection of the eyes' region, align the face horizontally: align P_HEYES to M_HEYES, then transform the face with the obtained parameters (a, b, c, d, e).
[Figure: face segmented after step 2, model, and the aligned horizontal projections]

The tracking process
4. Orientation estimation
Using the vertical projections of each eye, estimate the orientation of the face: align P_VEYE1 and P_VEYE2 to M_VEYE1 and M_VEYE2, respectively.
[Figure: face segmented after step 3 and the two eye models]
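Once the vertical alignment of each eye yields the two eye centres, the in-plane orientation is the angle of the line joining them. The slide does not give the formula, so this standard geometric step is an assumption:

```python
import math

def face_orientation(eye1, eye2):
    """In-plane face orientation, in degrees, from the two eye
    centres (x, y): the angle of the line joining them."""
    (x1, y1), (x2, y2) = eye1, eye2
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

level = face_orientation((0, 0), (10, 0))    # eyes at the same height
tilted = face_orientation((0, 0), (10, 10))  # one eye 10 px lower
```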

The tracking process
Global structure of the tracker:
1. Prediction and segmentation
2. Vertical alignment
3. Horizontal alignment
4. Orientation estimation
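The global structure above amounts to a four-stage pipeline per frame. The skeleton below shows only the control flow; the step functions are passed in as callables, since the slides do not specify their signatures (a hypothetical interface):

```python
def track_frame(prev_state, frame, model,
                segment, align_vertical, align_horizontal,
                estimate_orientation):
    """Per-frame tracking pipeline as listed on the slide: the output
    of each stage feeds the next, and the final orientation estimate
    becomes the new tracker state."""
    region = segment(frame, prev_state)          # 1. prediction and segmentation
    region = align_vertical(region, model)       # 2. vertical alignment
    region = align_horizontal(region, model)     # 3. horizontal alignment
    return estimate_orientation(region, model)   # 4. orientation estimation

# Dummy stages illustrating how data flows through the pipeline
result = track_frame(0, "frame", None,
                     lambda f, s: 1,
                     lambda r, m: r + 1,
                     lambda r, m: r + 1,
                     lambda r, m: r * 10)
```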

Experimental results
Experiments:
– Location accuracy
– Execution time per frame
– Robustness to facial expressions, 3D pose and lighting conditions
Different sources: TV, video-conference camera and DVD.
Compared with the CamShift algorithm.

Experimental results
FILE NAME: tl5-02.avi        SOURCE: TV
FORMAT: 640x480 (25 fps)     LENGTH: 280 frames
MODEL SIZE (pixels): 97x123
AVG/MAX ERROR (mm): 3.41 / 12.9
TIME/FRAME (ms): 4.02

Experimental results
FILE NAME: a3-05.avi         SOURCE: TV
FORMAT: 640x480 (25 fps)     LENGTH: 541 frames
MODEL SIZE (pixels): 101x136
AVG/MAX ERROR (mm): 1.95 / 9.76
TIME/FRAME (ms): 4.62

Experimental results
FILE NAME: a3-2.avi          SOURCE: TV
FORMAT: 320x240 (25 fps)     LENGTH: 440 frames
MODEL SIZE (pixels): 75x95
AVG/MAX ERROR (mm): -
TIME/FRAME (ms): 3.74

Experimental results
FILE NAME: ggm2.avi          SOURCE: QuickCam
FORMAT: 320x240 (25 fps)     LENGTH: 655 frames
MODEL SIZE (pixels): 70x91
AVG/MAX ERROR (mm): 1.83 / 9.29
TIME/FRAME (ms): 3.69

Experimental results
FILE NAME: sw2-1.avi         SOURCE: DVD
FORMAT: 320x240 (30 fps)     LENGTH: 427 frames
MODEL SIZE (pixels): 94x115
AVG/MAX ERROR (mm): -
TIME/FRAME (ms): 3.57

Experimental results
Location accuracy:
– Errors measured in mm (in the face plane) against ground-truth locations of the facial features
– Average error below 4 mm; maximum error 14 mm
– With CamShift: average error over 10 mm; maximum error 30 mm

Experimental results
Execution time per frame:
– Off-the-shelf PC: AMD Athlon at 1.2 GHz
– Average time below 5 ms at 640x480 resolution, with a face size of 100x120 pixels
– With CamShift: average time about 10 ms, and unable to work on one video sequence

Conclusions
The tracking problem is decomposed into three main independent steps:
– Vertical alignment
– Horizontal alignment
– Orientation estimation
The process is fast, accurate and robust in the tested conditions.
It is based exclusively on integral projections.

Conclusions
The tracker is not affected by background distractors.
It can be applied to both color and grey-scale images.
Main limitation: maximum allowed movement (approx. 1 m/s at 25 fps).
Future work: improve the prediction step, e.g. with Kalman filters.

This work has been supported by Spanish CICYT project DPI C03-01.
Demo videos: Grupo PARP web page.
Thank you very much.