Advanced Science and Technology Letters Vol.91 (Bioscience and Medical Research 2015), pp. 24-27

Towards Emotion Recognition of EEG Brain Signals Using Hjorth Parameters and SVM

Raja Majid Mehmood 1 and Hyo Jong Lee 1, 2
1 Division of Computer Science and Engineering
2 Center for Advanced Image and Information Technology
Chonbuk National University, Jeonju, South Korea

Abstract. Several techniques exist for collecting psychophysiological data from human subjects. In this paper we focus on detecting emotion from EEG brain signals using a Support Vector Machine (SVM) classifier. Emotions were elicited in the subjects by presenting emotional stimuli drawn from the International Affective Picture System (IAPS) database. Five types of emotional stimuli were used: happy, calm, neutral, sad and scared. The raw EEG data were preprocessed to remove artifacts, and a set of features was selected as input to the classifier. The results show that it is difficult to train a classifier to be accurate over a large number of emotions (five), but the SVM with the proposed features was reasonably accurate on a smaller emotion group (two), identifying the emotional states with an accuracy of up to 70%.

Keywords: EEG, Emotion Stimulus, Hjorth, Support Vector Machine.

1 Introduction

Human-computer interaction has become part of everyday life, and emotions are constantly present in that life. Emotion offers many possibilities for enhancing interaction with emotion-aware computers, e.g. affective interaction for autism or epilepsy patients. Emotion-related research helps computer scientists develop emotion-based HCI systems. Many researchers have contributed successful work on emotion recognition using speech, text, facial expressions or gestures as stimuli [1].

Emotion can also develop through an "inner" thinking process, referred to the brain from the human senses (e.g. visual, auditory, tactile, odor and taste). Many application areas (medical applications, EEG-based games and marketing studies) use algorithms that detect this "inner" emotion from EEG signals [2]. Ahmad et al. [3] presented an emotion recognition method using SVM with an accuracy of 56% over 15 subjects. Previous EEG work [8] generally investigated methods for classifying negative or positive valence/arousal emotions.

The main goal of our research is to introduce the Hjorth parameters [4] for feature selection; these features are then used as input to an SVM [5]. We focus on recognition of emotions from electroencephalogram (EEG) signals. EEG signal features were selected over five brain regions and five types of emotional stimulus. The EEG electrodes are divided into five brain regions: frontal (Fp1, Fp2, F3, F4, F7, F8 and Fz), central (C3, C4 and Cz), temporal (T7 and T8), parietal (P3, P4, P7 and P8), and occipital (O1 and O2). We also categorized the emotional stimuli into five emotions: happy, calm, neutral, sad and scared.

The remainder of the paper is structured as follows: Section 2 describes the materials and methods of our research, Section 3 presents the results and discussion, and Section 4 concludes.

Copyright © 2015 SERSC

2 Materials and Methods

Thirty healthy males aged 23 to 25 years were recruited. The subjects were given simple instructions about the research and the stages of the experiment. EEG signals were collected using the internationally recognized 10-20 placement system and recorded with the BrainVision system (Brain Products, Germany). Eighteen electrodes (Fp1, Fp2, F3, F4, Fz, F7, F8, C3, C4, Cz, T7, T8, P3, P4, P7, P8, O1 and O2) were placed on the scalp using an EasyCap.

We adopted a method commonly used to evoke different emotions in subjects by presenting emotional stimuli with corresponding content [6]. The experiment was designed to produce emotion within the valence-arousal space [7]. We defined five affective states: sad, scared, happy, calm and neutral. On the basis of these ratings, 35 pictures (7 pictures × 5 states) were selected from uniformly distributed clusters along the valence and arousal axes. The selected pictures were presented randomly for four seconds each, followed by another four seconds of a blurred image to reset the emotion.
A fixation mark (cross) was projected in the middle of the screen for eight seconds to attract the subject's gaze. Fig. 1 shows the timing diagram of the experiment; the total EEG recording time was 296 seconds per subject.

Fig. 1. Stimulus timing diagram

The raw EEG data were preprocessed by artifact rejection [8], filtering [9] and epoch selection, using the EEGLAB toolbox in MATLAB. The processed EEG signals were used to compute the Hjorth parameters in the time-frequency domain. Hjorth parameters are statistical measures that describe the characteristics of an EEG signal in the time domain. Also known as normalized slope descriptors (NSDs), they comprise activity, mobility and complexity [10].

The processed data were then used for feature selection. The three Hjorth parameters were computed for each selected frequency band and brain lobe. In total, 75 features per emotion class were fed to the classifier: three Hjorth parameters at five frequency bands and five brain lobes. The five frequency bands (n) were taken within the range 0.5-50 Hz, using the filters F_n = {[0.5~4], [4~8], [8~13], [13~30], [30~50]} Hz, each defined by a low and a high cut-off frequency. The second dimension of feature selection is the five brain lobes: frontal, central, temporal, parietal and occipital. The extraction window is the first 1000 milliseconds of every epoch. An EEG feature pattern is obtained at the i-th frequency band, j-th brain lobe and k-th epoch:

[F75]_k = Emo(Fr_i, B_j, E_k)   (1)

where i, j and k are indices for the frequency band, brain lobe and epoch, respectively, and the function Emo extracts the desired features at the i-th frequency band and j-th brain lobe for every k-th epoch. The feature dataset was then processed in WEKA [11] and classified with an SVM under ten-fold cross-validation.

3 Results and Discussion

The classification results for all 30 subjects are presented in Fig. 2. Classification accuracy is plotted on the y-axis; the highest accuracy obtained with the SVM was 70%. The x-axis shows the emotion groups E5, E4, E3 and E2, consisting of five emotions (happy, calm, neutral, sad and scared), four emotions (happy, calm, sad and scared), three emotions (happy, neutral and sad) and two emotions (happy and sad), respectively.

Fig. 2. SVM emotion groups vs. accuracy

The main purpose of our experiments was to evaluate the selected features through SVM classification. From our results we conclude that it is not trivial to process and classify the data accurately over a large number of emotions: the classification accuracy for all five emotions was about 30%, but accuracy improves as the emotion group shrinks, reaching 70% for two emotions.
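As an illustrative sketch of the feature definitions above (not the authors' code; the function name is our own), the three Hjorth parameters of a 1-D signal can be computed with NumPy from the signal's variance and the variances of its first and second differences:

```python
import numpy as np

def hjorth_parameters(x):
    """Compute Hjorth activity, mobility and complexity of a 1-D signal.

    activity   = var(x)
    mobility   = sqrt(var(x') / var(x))
    complexity = mobility(x') / mobility(x)
    where x' is the first derivative, approximated by a finite difference.
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)        # first difference (derivative approximation)
    ddx = np.diff(dx)      # second difference

    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Example: a pure sinusoid; its complexity is close to 1 by construction,
# since every difference of a sinusoid is again a sinusoid at the same frequency.
t = np.linspace(0, 1, 500, endpoint=False)
act, mob, comp = hjorth_parameters(np.sin(2 * np.pi * 10 * t))
```

Adding noise to the sinusoid raises complexity, which is what makes the parameter useful as a waveform-shape descriptor.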
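The 75-dimensional feature vector of Eq. (1) (3 Hjorth parameters × 5 bands × 5 lobes) could be assembled roughly as below. This is a hedged sketch, not the authors' pipeline: the sampling rate `fs` is an assumption (the paper does not state one), and averaging the channels within a lobe before filtering is our simplifying guess at the unstated channel-combination rule.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 50)]  # Hz, from Section 2
LOBES = {  # electrode grouping from Section 1
    "frontal":  ["Fp1", "Fp2", "F3", "F4", "F7", "F8", "Fz"],
    "central":  ["C3", "C4", "Cz"],
    "temporal": ["T7", "T8"],
    "parietal": ["P3", "P4", "P7", "P8"],
    "occipital": ["O1", "O2"],
}

def _hjorth(x):
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    act = np.var(x)
    mob = np.sqrt(np.var(dx) / act)
    comp = np.sqrt(np.var(ddx) / np.var(dx)) / mob
    return act, mob, comp

def _bandpass(x, lo, hi, fs, order=4):
    # Second-order sections are numerically safer than (b, a) at low cut-offs.
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

def epoch_features(epoch, channel_names, fs):
    """Eq. (1): 5 bands x 5 lobes x 3 Hjorth parameters = 75 features per epoch.

    `epoch` is a (channels, samples) array holding the first 1000 ms of an epoch.
    """
    features = []
    for lo, hi in BANDS:
        for chans in LOBES.values():
            idx = [channel_names.index(c) for c in chans if c in channel_names]
            lobe_signal = epoch[idx].mean(axis=0)  # assumed combination rule
            features.extend(_hjorth(_bandpass(lobe_signal, lo, hi, fs)))
    return np.array(features)
```

Stacking these vectors over all epochs and subjects would yield the dataset handed to the classifier.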
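The classification step (an SVM evaluated with ten-fold cross-validation, run in WEKA in the paper) can be reproduced in outline with scikit-learn as a stand-in. This sketch uses synthetic data in place of the real 75-dimensional Hjorth feature vectors; the class means, sample counts and RBF kernel are our assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 75-dimensional feature vectors:
# two emotion classes (e.g. happy vs. sad) with slightly shifted means.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (60, 75)),
               rng.normal(0.8, 1.0, (60, 75))])
y = np.array([0] * 60 + [1] * 60)

# Ten-fold cross-validated SVM, mirroring the evaluation protocol in Section 2.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy over 10 folds: {scores.mean():.2f}")
```

With more emotion classes the same call works unchanged (cross_val_score stratifies the folds by label), which is how the E5/E4/E3/E2 comparison could be scripted.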

4 Conclusion

We proposed a novel EEG feature extraction method for emotional stimuli (happy, calm, neutral, sad and scared), employing the Hjorth parameters. Band-pass filtering and the combination of several EEG channels into specific brain lobes extracted the significant features for the SVM. As previous research has discussed, feature selection is a key challenge in affective computing; this is why the accuracy in our experiments increased greatly on the small emotion groups. In future work we would like to explore more features, and different combinations of them, to see how they affect accuracy over many emotions, and to examine the results over a larger number of subjects.

Acknowledgments. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 2012R1A2A2A03).

References

1. A. Kappas and N. C. Krämer, Face-to-Face Communication over the Internet: Emotions in a Web of Culture, Language, and Technology. Cambridge University Press.
2. O. Sourina and Y. Liu, "A fractal-based algorithm of emotion recognition from EEG using arousal-valence model."
3. A. T. Sohaib, S. Qureshi, J. Hagelbäck et al., "Evaluating classifiers for emotion recognition using EEG," in Foundations of Augmented Cognition. Springer.
4. B. Hjorth, "The physical significance of time domain descriptors in EEG analysis," Electroencephalography and Clinical Neurophysiology, vol. 34, no. 3.
5. R.-E. Fan, P.-H. Chen and C.-J. Lin, "Working set selection using second order information for training SVM," Journal of Machine Learning Research, vol. 6, 2005.
6. L. Aftanas, A. Varlamov, S. Pavlov et al., "Event-related synchronization and desynchronization during affective processing: emergence of valence-related time-dependent hemispheric asymmetries in theta and upper alpha band," International Journal of Neuroscience, vol. 110, no. 3-4.
7. P. J. Lang, M. M. Bradley and B. N. Cuthbert, "International affective picture system (IAPS): instruction manual and affective ratings," Center for Research in Psychophysiology, University of Florida.
8. G. Gómez-Herrero, W. De Clercq, H. Anwar et al., "Automatic removal of ocular artifacts in the EEG without an EOG reference channel."
9. A. Widmann and E. Schröger, "Filter effects and filter artifacts in the analysis of electrophysiological data," Frontiers in Psychology, vol. 3.
10. B. Hjorth, "EEG analysis based on time domain properties," Electroencephalography and Clinical Neurophysiology, vol. 29, no. 3.
11. M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann and I. H. Witten, "The WEKA data mining software: an update," SIGKDD Explorations, vol. 11, no. 1, 2009.