The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. Authors: Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar. Conference on Computer Vision and Pattern Recognition 2010. Speaker: Liu, Yi-Hsien

Outline: Introduction, The CK+ Dataset, Emotion Labels, Baseline System, Experiments, Conclusion

Introduction In 2000, the Cohn-Kanade (CK) database was released. Automatically detecting facial expressions has become an increasingly important research area. 1. The goal was to promote research on automatically detecting individual facial expressions; since its release, the CK database has become one of the most widely used databases for developing and evaluating algorithms. 2. Automatic facial-expression detection involves computer vision, machine learning, and the behavioral sciences, and can be applied to areas such as security, human-computer interaction, driver safety, and health care.

Introduction (Cont.) The CK database contains 486 sequences across 97 subjects. Each sequence contains images from onset (neutral frame) to peak expression (last frame). The peak frame was reliably coded with the Facial Action Coding System (FACS) for facial action units (AUs).

Introduction (Cont.) The Facial Action Coding System (FACS) is a system that taxonomizes human facial movements by their appearance on the face; that is, facial actions are classified according to the appearance changes they produce.

Introduction (Cont.) In the decade of heavy use since CK's release, three problems have emerged: while AU codes are well validated, emotion labels are not (labels were often taken to indicate some intended emotion rather than the one the subject actually displayed); there is no common performance metric against which to evaluate new algorithms; and standard protocols for common databases have not emerged.

The CK+ Dataset Participants were 18 to 50 years of age; 69% female, 81% Euro-American, 13% Afro-American, and 6% other groups. Image sequences for frontal views and 30-degree views were digitized into either 640x490 or 640x480 pixel arrays with 8-bit gray-scale or 24-bit color values.

The CK+ Dataset (Cont.) For the CK+ distribution, they augmented the dataset further to include 593 sequences from 123 subjects (an additional 107 (22%) sequences and 26 (27%) subjects). For all 593 posed sequences, full FACS coding of the peak frames is provided.

Emotion Labels They included all image data from the pool of 593 sequences that had a nominal emotion label based on the subject's impression of each of the 7 basic emotion categories: Anger, Contempt, Disgust, Fear, Happy, Sadness and Surprise. Simply treating these labels as ground truth would be unreliable; training directly on them could introduce errors.

Emotion Labels (Cont.) First, the FACS codes were compared with the Emotion Prediction Table from the FACS manual. After the first pass, a looser comparison was performed. The third step involved perceptual judgment of whether or not the expression resembled the target emotion category. They thus labeled the emotions according to FACS in these three steps: 1. The Emotion Prediction Table lists the prototypical and major-variant facial configurations (AU combinations) for every emotion except Contempt; if a sequence satisfies a prototype or major variant of an emotion, it is provisionally assigned to that emotion. 2. If a sequence contains an AU that belongs to neither the prototype nor its variants, a judgment is made as to whether the sequence still belongs to that emotion, using criteria like those in the table; the figure below gives examples. 3. The third step is needed because the FACS codes describe only the final frame's expression, without accounting for the facial changes needed to form it; deciding whether an expression displays an emotion requires watching the sequence from beginning to end.
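
As a rough Python sketch of the first-pass prototype check (step 1): the AU combinations below are illustrative placeholders, not the actual Emotion Prediction Table from the FACS manual.

```python
# Minimal sketch of the first-pass prototype comparison (step 1).
# The AU combinations below are illustrative examples only, NOT the
# paper's Emotion Prediction Table from the FACS manual.
PROTOTYPES = {
    "Happy":    {6, 12},        # e.g. cheek raiser + lip corner puller
    "Surprise": {1, 2, 5, 26},  # e.g. brow raisers + upper lid raiser + jaw drop
}

def first_pass_label(sequence_aus: set[int]) -> str | None:
    """Provisionally assign an emotion if the peak frame's AU codes
    contain one of the prototype AU combinations."""
    for emotion, proto in PROTOTYPES.items():
        if proto.issubset(sequence_aus):
            return emotion
    return None  # falls through to the looser second-pass comparison

print(first_pass_label({1, 2, 5, 26}))  # -> "Surprise"
```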

Emotion Labels (Cont.) [Figure slide: example AU combinations judged against the Emotion Prediction Table criteria]

Emotion Labels (Cont.) As a result of this multistep selection process, 327 of the 593 sequences were found to meet the criteria for one of the seven discrete emotions.

Baseline System The paper adopts a system based on Active Appearance Models (AAMs) to extract features, and then uses Support Vector Machines (SVMs) to classify expressions and emotions; the figure on the slide shows the system's flowchart.

Baseline System (Cont.) Active Appearance Models (AAMs): the shape s of an AAM is described by a 2D triangulated mesh. In particular, the coordinates of the mesh vertices define the shape s = [x1, y1, x2, y2, ..., xn, yn].
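
As a small illustration of this data structure, the sketch below (assuming the 68-vertex mesh described on the next slide) stacks the mesh vertex coordinates into the shape vector s:

```python
import numpy as np

# The AAM shape is a 2D triangulated mesh; stacking its vertex
# coordinates gives s = [x1, y1, x2, y2, ..., xn, yn].
n = 68                               # vertex count assumed from the slides
vertices = np.random.rand(n, 2)      # placeholder (x, y) mesh vertices
s = vertices.reshape(-1)             # interleaved shape vector of length 2n
print(s.shape)                       # (136,)
```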

Baseline System (Cont.) SPTS: the similarity-normalized shape, referring to the 68 vertex points in both the x- and y-coordinates, which gives a raw 136-dimensional feature vector. CAPP: the canonical normalized appearance, referring to the face image after all shape variation has been normalized with respect to the base shape. In the slide's figure, the top row shows SPTS and the bottom row CAPP.
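
A minimal sketch of the similarity normalization behind SPTS, using an ordinary Procrustes fit of each shape onto the base shape; this shows the general technique, not necessarily the paper's exact procedure:

```python
import numpy as np

def spts_feature(shape: np.ndarray, base: np.ndarray) -> np.ndarray:
    """Remove translation, rotation and scale by fitting `shape`
    (n x 2 landmarks) onto `base` with a similarity transform
    (plain Procrustes), then flatten to a 2n-dim vector."""
    a = shape - shape.mean(axis=0)             # center the input shape
    b = base - base.mean(axis=0)               # center the base shape
    u, _, vt = np.linalg.svd(a.T @ b)
    rot = u @ vt                               # optimal 2x2 rotation
    if np.linalg.det(rot) < 0:                 # guard against reflection
        u[:, -1] *= -1
        rot = u @ vt
    scale = np.trace((a @ rot).T @ b) / (a ** 2).sum()
    aligned = scale * (a @ rot) + base.mean(axis=0)
    return aligned.reshape(-1)                 # 136-dim for 68 landmarks

rng = np.random.default_rng(0)
base = rng.random((68, 2))
print(spts_feature(base * 2.0 + 0.3, base).shape)  # (136,)
```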

Baseline System (Cont.) SVMs (Support Vector Machines) attempt to find the hyperplane that maximizes the margin between positive and negative observations for a specified class. The solid lines in the figure are the support hyperplanes. The paper uses binary classification, e.g., Angry vs. not-Angry, or Happy vs. not-Happy.
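
A minimal sketch of this one-vs-all SVM setup using scikit-learn; the random features and the linear kernel are assumptions here, not the paper's exact configuration:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# One binary max-margin classifier per class (e.g. Angry vs. not-Angry).
X = np.random.rand(100, 136)             # placeholder SPTS-like features
y = np.random.randint(0, 7, size=100)    # 7 emotion labels
clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)
print(clf.predict(X[:5]))
```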

Experiments Emotion detection. To maximize the amount of training and testing data, they argue that a leave-one-subject-out cross-validation configuration should be used.
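
A sketch of leave-one-subject-out evaluation via scikit-learn's LeaveOneGroupOut, with subject IDs as the groups; the data here is random filler:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Each fold holds out every sequence of one subject, so a subject
# never appears in both the training and test sets.
X = np.random.rand(60, 136)              # placeholder features
y = np.random.randint(0, 7, size=60)     # emotion labels
subjects = np.repeat(np.arange(12), 5)   # 12 subjects, 5 sequences each

accs = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))
print("mean accuracy over", len(accs), "folds:", np.mean(accs))
```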

Experiments (Cont.) SPTS: detection rates for Happy and Surprise are much higher than for the other emotions, because these two emotions produce large deformations of the face.

Experiments (Cont.) CAPP: the detection rate for Disgust rises substantially, because Disgust produces large texture changes around the nose.

Experiments (Cont.) SPTS+CAPP: detection of Contempt improves dramatically, from around 20% to over 80%. A possible explanation is that Contempt is a very subtle emotion and is therefore easily confused with other emotions; using both shape and appearance features together makes such confusion much less likely.
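
One plausible way to combine the two feature types is late fusion of the per-class SVM scores, sketched below; whether CK+ fused scores exactly this way is an assumption:

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Late-fusion sketch: train one SVM per feature type, then add the
# per-class decision scores before picking the winning emotion.
X_spts = np.random.rand(80, 136)   # placeholder shape features
X_capp = np.random.rand(80, 500)   # placeholder appearance features
y = np.random.randint(0, 7, size=80)

svm_s = OneVsRestClassifier(SVC(kernel="linear")).fit(X_spts, y)
svm_c = OneVsRestClassifier(SVC(kernel="linear")).fit(X_capp, y)
scores = svm_s.decision_function(X_spts) + svm_c.decision_function(X_capp)
pred = scores.argmax(axis=1)       # fused emotion prediction per sequence
print(pred[:10])
```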

Conclusion In this paper, they address the three issues above by presenting the Extended Cohn-Kanade (CK+) database: another 107 sequences and another 26 subjects were added, the peak expression of each sequence is fully FACS coded, and the emotion labels have been revised and validated.

Conclusion (Cont.) They propose a leave-one-subject-out cross-validation strategy for evaluating performance, and present baseline results using their Active Appearance Model (AAM) / Support Vector Machine (SVM) system.