Session 4B: Friday Afternoon, December 6th

Presentation transcript:

Session 4B: Friday Afternoon, December 6th (Poster Spotlights)

Capturing Global Semantic Relationships for Facial Action Unit Recognition
Ziheng Wang, Yongqiang Li, Shangfei Wang, Qiang Ji

P4B-09: Capturing Global Semantic Relationships for Facial Action Unit Recognition

Motivation
Facial action units (AUs) are NOT independent, yet current methods can only model pair-wise AU relations. We propose to use a restricted Boltzmann machine (RBM) to capture global AU relations for AU recognition.
[Slide figure: example expression-to-AU relations, linking happiness, surprise, anger, and fear to AUs such as stretch mouth, raise eyelid, raise brow, raise cheek, and pull lip corner.]

Approach
Top down: global semantic relations among the facial action units. Bottom up: AU recognition from image measurements.

Result
Evaluations on the CK+ and SEMAINE databases.

Abstract
In this paper we tackle the problem of facial action unit recognition by exploiting the complex semantic relationships among the action units. The relationships among AUs contain crucial information yet have not been thoroughly exploited. We build a hierarchical model that combines the bottom-level image features and the top-level AU relationships to jointly recognize AUs in a principled manner. The proposed model has two major advantages over existing methods. 1) Unlike methods that can only capture local pair-wise AU dependencies, our model is developed upon the restricted Boltzmann machine and can therefore exploit the global relationships among AUs. 2) Although AU relationships are influenced by many related factors such as facial expressions, these factors are generally ignored by current methods. Our model, however, can successfully capture them to characterize the AU relationships more accurately. Experimental results on benchmark databases demonstrate the effectiveness of the proposed approach in modelling complex AU relationships as well as its superior AU recognition performance over existing approaches.
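To make the top-down/bottom-up combination concrete, here is a minimal sketch (not the authors' code) of the core idea: an RBM is trained over binary AU label vectors so that its hidden units capture global co-occurrence structure among all AUs at once, and the learned model is then used as a prior to refine independent per-AU classifier scores with a few mean-field updates. The names (LabelRBM, refine_scores) and the CD-1/mean-field details are illustrative assumptions, not the paper's exact formulation.

    # Minimal sketch, assuming binary AU labels and per-AU classifier
    # probabilities as bottom-up evidence. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class LabelRBM:
        """RBM whose visible units are AU labels; the hidden units model
        global (not merely pair-wise) relations among the AUs."""
        def __init__(self, n_aus, n_hidden, lr=0.05):
            self.W = 0.01 * rng.standard_normal((n_aus, n_hidden))
            self.b = np.zeros(n_aus)      # visible (AU) biases
            self.c = np.zeros(n_hidden)   # hidden biases
            self.lr = lr

        def hidden_probs(self, v):
            return sigmoid(v @ self.W + self.c)

        def visible_probs(self, h):
            return sigmoid(h @ self.W.T + self.b)

        def cd1_step(self, v0):
            """One contrastive-divergence (CD-1) update on a batch of
            ground-truth AU label vectors."""
            h0 = self.hidden_probs(v0)
            h_sample = (rng.random(h0.shape) < h0).astype(float)
            v1 = self.visible_probs(h_sample)
            h1 = self.hidden_probs(v1)
            n = v0.shape[0]
            self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
            self.b += self.lr * (v0 - v1).mean(axis=0)
            self.c += self.lr * (h0 - h1).mean(axis=0)

        def refine_scores(self, probs, n_iters=10):
            """Mean-field refinement: combine per-AU classifier probabilities
            (bottom-up evidence, as log-odds) with the learned AU prior
            (top-down field) and iterate to a joint estimate."""
            evidence = np.log(probs / (1.0 - probs + 1e-8) + 1e-8)
            v = probs.copy()
            for _ in range(n_iters):
                h = self.hidden_probs(v)
                v = sigmoid(evidence + h @ self.W.T + self.b)
            return v

    # Toy usage: 8 AUs, synthetic training labels, noisy classifier scores.
    labels = (rng.random((500, 8)) < 0.3).astype(float)
    rbm = LabelRBM(n_aus=8, n_hidden=16)
    for epoch in range(50):
        rbm.cd1_step(labels)
    noisy = np.clip(labels[0] + 0.3 * rng.standard_normal(8), 0.05, 0.95)
    print(rbm.refine_scores(noisy[None, :]))

In the paper the bottom-up evidence comes from image features, and the model also accounts for related factors such as facial expression; the sketch omits those and only illustrates how an RBM can encode relations among more than two AUs at a time, which is what distinguishes it from pair-wise models.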