Feature Detection and Emotion Recognition Chris Matthews Advisor: Prof. Cotter.


Motivation #1: Attempt to Answer a Long-Standing Question
- Emotion recognition has been used to characterize definitively which expressions the Mona Lisa is displaying (F.Y.I., she is 83% happy, 9% disgusted, 6% fearful, and 2% angry, according to BBC News)

Motivation #2: Create “Life-Like” Robots
- Create convincing artificial intelligence

Motivation #3: Enhance Society!
- Currently being used to teach autistic children to pick out facial subtleties and their corresponding emotions

Methodology
FEATURE DETECTION
- Isolate and crop particular areas of the face
EMOTION RECOGNITION
Training:
- Train a neural network for each area
- Combine the results from each network to produce a definitive result
- Alter the variables of the networks by trial and error until the desired results are achieved
Testing:
- Input new photos into the trained network and check the results

Feature Detection: SUSAN Filtering for Edge Detection
- Because SUSAN uses no image derivatives, the algorithm excels on noisy images

Massive Problem: Boolean images don’t necessarily make computer vision problems easier!
- The mouth is not fully enclosed
- Only the pupil of the left eye is enclosed
- Even if everything were perfectly encapsulated, how would one make sense of the detected objects?

Lesson Learned: Complete Automation Is Difficult!
New methodology for isolating parts of the face: manual labor.
- Draw matrices over the approximate area of interest
- Apply filters to detect the actual object of interest
- Crop again based on those findings
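The rough-rectangle-then-refine procedure can be sketched in a few lines of NumPy. The brightness threshold used as the "filter" is an assumption for illustration; the slides do not say which filter is applied.

```python
import numpy as np

def refine_crop(region, thresh=0.5):
    """After the rough manual crop, detect the dark object inside it
    (assumed filter: a simple brightness threshold) and crop again
    to the bounding box of the detected pixels."""
    mask = region < thresh                # e.g. the dark pixels of an eye or mouth
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return region                     # nothing detected: keep the rough crop
    return region[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Toy image: a dark 3x3 "mouth" inside a bright face.
img = np.ones((10, 10))
img[4:7, 2:5] = 0.0
rough = img[2:9, 0:7]                     # hand-drawn rectangle over the area of interest
tight = refine_crop(rough)                # tight crop around the dark object
```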

Example: The Uncentered Eye
- The neural network will perform poorly if the eye’s position varies in either the x or y direction from photo to photo

Voila!

On to the Emotion Training…
- Once the areas have been defined and scaled, they can be used as inputs to neural networks

Introduction to Neural Networks: The Perceptron

Perceptron Implementation
- Initialize the weight matrix and bias array to small, random values
- Feed an image through the network
- Calculate the error
- Readjust the weight matrix and bias array based on the error
- Iteratively train the network using a dictionary of photos
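The training loop described on this slide can be sketched as follows. The toy two-pixel "images", learning rate, and epoch count are assumptions for illustration, not the project's actual values.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50, seed=0):
    """Perceptron training, following the slide's steps: initialize
    weights/bias to small random values, feed each image through,
    calculate the error, and readjust weights and bias accordingly."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])   # small random weights
    b = rng.normal(scale=0.01)                    # small random bias
    for _ in range(epochs):                       # iterate over the photo dictionary
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0     # hard-threshold output
            err = target - pred                   # error for this sample
            w += lr * err * xi                    # perceptron weight update
            b += lr * err                         # bias update
    return w, b

# Toy dictionary of 2-pixel "images": class 1 when the first pixel is brighter.
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]])
y = np.array([1, 1, 0, 0])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```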

Yet Another Problem!
- Each neuron has one weight value for each pixel
- The weight matrix is too large to train!

Solution: PCA
- Principal Component Analysis generates a set of eigenvectors
- Each picture can be reconstructed as a weighted sum of these eigenvectors
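A minimal PCA sketch, assuming images are flattened into vectors: the eigenvectors come from an SVD of the mean-centered image matrix, and each picture is represented by (and rebuilt from) its coefficients on those eigenvectors. The tiny random "faces" are placeholder data.

```python
import numpy as np

def pca_basis(images, k):
    """Return the mean image and the top-k eigenvectors (principal
    components) of a stack of flattened images, via SVD of the
    mean-centered data."""
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, Vt[:k]

def project(image, mean, basis):
    """Coefficients of one image on the eigenvectors -- these compact
    coefficients, not raw pixels, become the network inputs."""
    return basis @ (image - mean)

def reconstruct(coeffs, mean, basis):
    """Rebuild the image as the mean plus a weighted sum of eigenvectors."""
    return mean + coeffs @ basis

rng = np.random.default_rng(0)
faces = rng.random((5, 16))               # 5 tiny "images", 16 pixels each
mean, basis = pca_basis(faces, k=4)
coeffs = project(faces[0], mean, basis)   # 4 numbers instead of 16 pixels
rebuilt = reconstruct(coeffs, mean, basis)
```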

Final Architecture
- Use a set of adaptive backpropagation networks, training on PCA coefficients
- Use majority rules to determine the emotion
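The "majority rules" combination step might look like the sketch below; the example votes from per-feature networks (eyes, mouth, brow) are hypothetical.

```python
from collections import Counter

def majority_emotion(votes):
    """Combine each feature network's predicted emotion by majority rule."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical outputs of three per-feature networks for one photo.
votes = ["happy", "sad", "happy"]
winner = majority_emotion(votes)
```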

Results
- Training with 60 photos yielded 100% accuracy mapping to only two targets: happy and sad
- Training with 112 photos yielded 60% accuracy mapping to four targets: angry, fearful, happy, and sad

Future Work
- Find larger and more diverse image dictionaries
- Improve feature detection
- Read psychological journals and apply their findings to the algorithms

Questions?

A Gross Simplification of How SUSAN Works
- SUSAN: Smallest Univalue Segment Assimilating Nucleus
- n is the USAN area: the number of pixels in a circular mask of radius r whose brightness is similar to the center pixel (the nucleus)
- Edge if n ≈ (1/2)πr²
- Corner if n << (1/2)πr²
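The USAN count behind these rules can be sketched directly: only brightness comparisons, no derivatives, which is the slide's point about noise tolerance. The radius and similarity threshold are illustrative choices.

```python
import numpy as np

def usan_area(img, y, x, r=3, t=0.1):
    """Count pixels in a circular mask around the nucleus (y, x) whose
    brightness is within t of the nucleus -- the USAN area n. On a flat
    patch n fills the mask; on a straight edge n is roughly half of it."""
    h, w = img.shape
    nucleus = img[y, x]
    n = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy * dy + dx * dx <= r * r:            # inside the circular mask
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w and abs(img[yy, xx] - nucleus) <= t:
                    n += 1
    return n

flat = np.zeros((9, 9))                               # uniform patch
edge = np.zeros((9, 9))
edge[:, 4:] = 1.0                                     # vertical step edge
n_flat = usan_area(flat, 4, 4)                        # full mask area (29 pixels)
n_edge = usan_area(edge, 4, 4)                        # roughly half the mask
```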